In this paper we show how the same mechanisms can be used within YCSc to identify clusters within a given dataset; the set pressure encourages the evolution of rules which cover many data points (via θ_GA) and the fitness pressure acts as a limit upon the separation of such data points, i.e., the error (via υ).
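This interplay can be illustrated with a minimal sketch. Here a rule is taken, purely for illustration, to be an axis-aligned interval condition over the input space (the paper's actual rule representation and parameter settings are not reproduced): a wider rule covers more data points, so it participates in more match sets (the set pressure, governed by θ_GA), while the spread of the points it covers acts as its error, which the fitness pressure (υ) penalises.

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.normal(0.0, 1.0, size=(200, 2))  # toy 2-D dataset

def coverage_and_error(lower, upper, X):
    """Points matched by an interval rule [lower, upper]^d, and the
    mean squared distance of matched points from the rule's centre
    (an illustrative stand-in for the rule's error)."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    matched = X[np.all((X >= lower) & (X <= upper), axis=1)]
    if len(matched) == 0:
        return 0, np.inf
    centre = (lower + upper) / 2.0
    error = ((matched - centre) ** 2).sum(axis=1).mean()
    return len(matched), error

# A wide rule covers more points (stronger set pressure) but incurs
# a larger error; a narrow rule covers fewer points with lower error.
n_wide, e_wide = coverage_and_error([-2, -2], [2, 2], points)
n_narrow, e_narrow = coverage_and_error([-0.5, -0.5], [0.5, 0.5], points)
```

The two pressures thus pull in opposite directions, and their balance determines the granularity of the clusters the rules converge on.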
In this section we apply YCSc as described above on two datasets for the first experiment to test the performance of the system.
Figure 2 shows typical example solutions produced by YCSc on both datasets.
YCSc's identification of the clusters is now clear.
In the less-separated case there is no significant difference in performance between YCSc and k-means.
We have begun to compare the performance of YCSc with that of k-means over randomly generated datasets in several dimensions d with varying numbers of clusters k.
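Such a comparison setup can be sketched as follows. The data generator and the plain Lloyd's-algorithm baseline below are illustrative assumptions (the paper's actual benchmark generator and k-means implementation are not specified here); the sketch simply shows how synthetic d-dimensional data with k Gaussian clusters can be produced and clustered by the baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_blobs(k=3, d=2, n_per=100, sep=5.0):
    """Generate k Gaussian clusters in d dimensions (illustrative
    synthetic data, not the paper's actual benchmark)."""
    centers = rng.uniform(-sep, sep, size=(k, d))
    points = np.vstack([c + rng.normal(0.0, 1.0, size=(n_per, d))
                        for c in centers])
    return points, centers

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm used as the comparison baseline."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1),
                           axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

X, true_centers = make_blobs(k=3, d=2)
centroids, labels = kmeans(X, 3)
```

Quality can then be scored on the same data for both methods, e.g. by within-cluster sum of squared distances.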
Table 1 shows that YCSc always gives superior cluster quality and an equivalent or more accurate estimate of the number of clusters compared to k-means.
Thus far, YCSc has struggled with the less-separated data.
Figure 8 shows typical solutions produced by YCSc with local search using the fitness function in equation 6.