Keywords: ant colony optimization, K-nearest neighbor, feature selection, heuristic, pheromone.

The k-nearest neighbor algorithm selects the k closest examples in order to classify new instances.

The main objective of the k-nearest neighbor classifier is to discover the set of k objects in the training set that are most similar to the objects in the test group.

The k-nearest neighbor algorithm is amongst the simplest of all machine learning algorithms: an object is classified by a majority vote of its neighbors, with the object being assigned to the class most common amongst its k nearest neighbors (k is a positive integer, typically small).
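The majority-vote rule described above can be sketched in a few lines of Python. This is an illustrative implementation only, not the code used in any of the works surveyed; the function and variable names are our own.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distances are
    Euclidean. A minimal sketch for illustration.
    """
    # Sort training pairs by distance from the query point.
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))
    # Count class labels among the k closest and return the most common.
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy usage: two clusters, query near the "a" cluster.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.8), "b")]
print(knn_classify(train, (0.2, 0.1), k=3))  # a
```

With k=3, the two nearby "a" points outvote the single closest "b" point, so the query is labeled "a".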

In this work, valence and arousal have been categorized, and the relationship between GSR signals, arousal, and valence has been studied using Decision Tree, Random Forest, k-Nearest Neighbor, and Support Vector Machine algorithms.

The authors first compared different classifiers, such as Decision Tree, k-Nearest Neighbor, Bayes Network, and Naive Bayes, and after this comparison used only the Decision Tree classifier for its ability to provide a good balance between accuracy and complexity.

The tests were carried out with K-nearest neighbor (K-NN), which yielded the highest classification accuracy.

The training samples are formed by means of the K-nearest neighbor approach (KNNA) based on PSRT.

In this study, three classifiers were utilized, namely support vector machine, Naive Bayes, and K-nearest neighbor.

A hybrid text classification approach with low parameter dependency has been proposed by integrating K-nearest neighbor and support vector machine.

K-Nearest Neighbor algorithms receive a similar fine treatment, with note taken of feature-scaling problems and explanations of each of the more commonly used distance metrics.
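The scaling problem mentioned here is worth making concrete: when features live on very different numeric ranges, the larger-ranged feature dominates any distance metric. The sketch below (illustrative only; function names are ours) shows Euclidean and Manhattan distances together with min-max rescaling as one standard remedy.

```python
import math

def euclidean(x, y):
    """Standard Euclidean (L2) distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    """Manhattan (L1) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(x, y))

def minmax_scale(rows):
    """Rescale each feature column to [0, 1] so no feature dominates distances."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        tuple((v - l) / (h - l) if h > l else 0.0
              for v, l, h in zip(row, lo, hi))
        for row in rows
    ]

# A feature in large units (e.g. income) swamps a small one (e.g. age).
rows = [(25, 30_000), (60, 32_000)]
print(euclidean(rows[0], rows[1]))      # ~2000.3: dominated by income
scaled = minmax_scale(rows)
print(euclidean(scaled[0], scaled[1]))  # ~1.414: both features now contribute
```

Before scaling, the 35-year age gap is invisible next to the 2000-unit income gap; after min-max scaling both features contribute equally to the distance.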

The most commonly employed prediction method for chaotic hydrological time series is the k-nearest neighbor (k-NN), which was used in this study (Elshorbagy et al.).
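For the time-series setting, k-NN prediction typically means finding the k past windows most similar to the most recent one and averaging the values that followed them. The sketch below illustrates that general idea under our own naming; the exact scheme in the cited study may differ.

```python
import math

def knn_forecast(series, window, k=3):
    """One-step-ahead k-NN forecast for a univariate series.

    Finds the k past windows most similar (Euclidean) to the latest
    `window` values and averages the values that followed them.
    Illustrative sketch only.
    """
    query = series[-window:]
    candidates = []
    for i in range(len(series) - window):
        past = series[i:i + window]
        # Pair each past window's distance with its successor value.
        candidates.append((math.dist(past, query), series[i + window]))
    candidates.sort(key=lambda t: t[0])
    nexts = [v for _, v in candidates[:k]]
    return sum(nexts) / len(nexts)

# A perfectly repeating series: the forecast recovers the next pattern value.
series = [1, 2, 3] * 4
print(knn_forecast(series, window=2, k=3))  # 1.0
```

Because the last window [2, 3] matches three earlier windows exactly, each followed by 1, the forecast is 1.0.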