MACHINE LEARNING TECHNIQUES - LASA


Figure 3-3: Example of distance measurement in hierarchical clustering methods

It is clear that the number and type of clusters will strongly depend on the choice of the distance metric and on the method used to merge the clusters. A typical measure of distance between two N-dimensional data points x, y takes the general form:

d(x, y) = \left( \sum_{i=1}^{N} \left| x_i - y_i \right|^p \right)^{1/p}    (3.1)

The 1-norm distance (p = 1) is sometimes referred to as the Manhattan distance, because it is the distance a car would drive in a city laid out in square blocks (assuming there are no one-way streets). The 2-norm distance (p = 2) is the classical Euclidean distance.

Figure 3-4 shows examples of data sets on which such a nearest-neighbor technique would fail. Failure to converge to a correct solution might occur, for instance, when the data points within a cluster are farther apart from one another than the two clusters are from each other. An even worse situation occurs when one cluster contains the other, as shown in Figure 3-4, right. In other words, such a simple clustering technique works well only when the clusters are linearly separable. A solution in such a situation is to change the coordinate system, e.g., by using polar coordinates; concentric ring-shaped clusters then become separable along the radial axis alone. However, determining the appropriate coordinate system remains a challenge in itself.

Figure 3-4: Example of pairs of clusters, easy to see but awkward to extract for clustering algorithms
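As an illustration, here is a minimal Python/NumPy sketch of the distance of Eq. (3.1); the function name and the example points are ours, chosen only for illustration (the same computation is also available as scipy.spatial.distance.minkowski). The last lines sketch the polar-coordinate remedy mentioned above.

import numpy as np

def minkowski_distance(x, y, p=2):
    # Eq. (3.1): the p-norm distance between two N-dimensional points.
    # p=1 gives the Manhattan distance, p=2 the Euclidean distance.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

x, y = [0.0, 0.0], [3.0, 4.0]
print(minkowski_distance(x, y, p=1))   # 7.0: drive 3 blocks east, then 4 north
print(minkowski_distance(x, y, p=2))   # 5.0: straight-line (Euclidean) distance

# Polar-coordinate remedy for concentric clusters as in Figure 3-4, right:
# after mapping (x, y) -> (r, theta), the two rings separate along r alone.
pts = np.array([[1.0, 0.0], [0.0, 1.2],    # inner ring (hypothetical points)
                [5.0, 0.0], [0.0, 4.8]])   # outer ring
r = np.hypot(pts[:, 0], pts[:, 1])         # radii: [1.0, 1.2, 5.0, 4.8]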

3.1.1.1 The CURE Clustering Algorithm

The CURE algorithm starts with each input point as a separate cluster and, at each successive step, merges the closest pair of clusters. To compute the distance between a pair of clusters, c representative points are stored for each cluster. These are determined by first choosing c well-scattered points² within the cluster and then shrinking them toward the mean of the cluster by a fraction α. The distance between two clusters is then the distance between the closest pair of representative points, one belonging to each of the two clusters. Thus, only the representative points of a cluster are used to compute its distance from other clusters.

The c representative points attempt to capture the physical shape and geometry of the cluster. Furthermore, shrinking the scattered points toward the mean by the factor α removes surface abnormalities and mitigates the effects of outliers. The reason for this is that outliers typically lie farther from the cluster center, so the shrinking moves outliers more toward the center while the remaining representative points experience only minimal shifts. The larger movement of the outliers thus reduces their ability to cause the wrong clusters to be merged. The parameter α can also be used to control the shapes of clusters: with larger values of α, the scattered points get located closer to the mean and clusters tend to be more compact, while smaller values of α preserve the scattered shape and favor elongated clusters.

Figure 3-5: Shrink factor toward centroid

Figure 3-6: Representatives of clusters

² In the first iteration, the point farthest from the mean is chosen as the first scattered point. Then, at each iteration, the point that is farthest from the previously chosen scattered points is chosen.
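To make these steps concrete, the following is a minimal Python/NumPy sketch of the representative-point machinery and a naive agglomeration loop. It illustrates the definitions above and is not the actual CURE implementation, which uses a heap and a k-d tree for efficiency; the function names and the defaults c = 4 and α = 0.3 are our own choices.

import numpy as np

def scattered_points(cluster, c):
    # Footnote 2: start from the point farthest from the mean, then
    # repeatedly add the point farthest from those already chosen.
    mean = cluster.mean(axis=0)
    chosen = [cluster[np.argmax(np.linalg.norm(cluster - mean, axis=1))]]
    while len(chosen) < min(c, len(cluster)):
        d = np.min([np.linalg.norm(cluster - q, axis=1) for q in chosen], axis=0)
        chosen.append(cluster[np.argmax(d)])
    return np.array(chosen)

def representatives(cluster, c, alpha):
    # Shrink the scattered points toward the cluster mean by a fraction alpha.
    pts = scattered_points(cluster, c)
    return pts + alpha * (cluster.mean(axis=0) - pts)

def cluster_distance(ca, cb, c, alpha):
    # Inter-cluster distance: closest pair of representatives, one per cluster.
    ra, rb = representatives(ca, c, alpha), representatives(cb, c, alpha)
    return np.min(np.linalg.norm(ra[:, None, :] - rb[None, :, :], axis=2))

def cure(points, k, c=4, alpha=0.3):
    # Start with every point as its own cluster; merge the closest pair
    # (by representative distance) until only k clusters remain.
    clusters = [points[i:i + 1] for i in range(len(points))]
    while len(clusters) > k:
        _, i, j = min((cluster_distance(clusters[i], clusters[j], c, alpha), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        clusters[i] = np.vstack([clusters[i], clusters[j]])
        del clusters[j]
    return clusters

# Hypothetical usage: 60 two-dimensional points grouped into 2 clusters.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(4, 0.5, (30, 2))])
found = cure(data, k=2)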

