MACHINE LEARNING TECHNIQUES - LASA
Figure 3-3: Example of distance measurement in hierarchical clustering methods

It is clear that the number and type of clusters will depend strongly on the choice of the distance metric and on the method used to merge the clusters. A typical measure of distance between two N-dimensional data points x and y takes the general form:

$d(x, y) = \left( \sum_{i=1}^{N} |x_i - y_i|^p \right)^{1/p}$   (3.1)

The 1-norm distance (p = 1) is sometimes referred to as the Manhattan distance, because it is the distance a car would drive in a city laid out in square blocks (assuming there are no one-way streets). The 2-norm distance (p = 2) is the classical Euclidean distance.

Figure 3-4 shows examples of data sets on which such a nearest-neighbor technique would fail. Failure to converge to a correct solution can occur, for instance, when data points belonging to the same cluster lie further apart than the two clusters do from one another. An even worse situation occurs when one cluster contains the other, as shown in Figure 3-4, right. In other words, such a simple clustering technique works well only when the clusters are linearly separable. A solution in such a situation is to change the coordinate system, e.g. to polar coordinates; however, determining the appropriate coordinate system remains a challenge in itself.

Figure 3-4: Example of pairs of clusters, easy to see but awkward to extract for clustering algorithms
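To make the metric of Eq. (3.1) concrete, here is a minimal sketch in Python (NumPy assumed; the function name minkowski is ours, purely for illustration):

    import numpy as np

    def minkowski(x, y, p):
        # p-norm distance of Eq. (3.1) between two N-dimensional points.
        return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

    x, y = np.array([0.0, 0.0]), np.array([3.0, 4.0])
    print(minkowski(x, y, 1))   # 1-norm (Manhattan) distance: 7.0
    print(minkowski(x, y, 2))   # 2-norm (Euclidean) distance: 5.0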
3.1.1.1 The CURE Clustering Algorithm
The CURE algorithm starts with each input point as a separate cluster and, at each successive step, merges the closest pair of clusters. In order to compute the distance between a pair of clusters, c representative points are stored for each cluster. These are determined by first choosing c well-scattered points² within the cluster and then shrinking them toward the cluster mean by a fraction α. The distance between two clusters is then the distance between the closest pair of representative points, one belonging to each of the two clusters. Thus, only the representative points of a cluster are used to compute its distance from other clusters.
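As an illustrative sketch (not the original CURE implementation), this inter-cluster distance can be computed as follows, assuming the c representative points of each cluster are stored as the rows of a NumPy array:

    import numpy as np

    def cluster_distance(reps_a, reps_b):
        # reps_a, reps_b: (c, N) arrays of representative points, one per cluster.
        # Pairwise differences between every pair of representatives: (c, c, N).
        diffs = reps_a[:, None, :] - reps_b[None, :, :]
        # Distance between the clusters = minimum Euclidean distance over all pairs.
        return np.sqrt((diffs ** 2).sum(axis=2)).min()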
The c representative points attempt to capture the physical shape and geometry of the cluster. Furthermore, shrinking the scattered points toward the mean by a factor α removes surface abnormalities and mitigates the effect of outliers. The reason is that outliers typically lie further away from the cluster center; as a result, the shrinking moves the outliers proportionally more toward the center, while the remaining representative points experience minimal shifts. The larger movement of the outliers thus reduces their ability to cause the wrong clusters to be merged. The parameter α can also be used to control the shape of the clusters: for smaller values of α, the scattered points end up closer to the mean, and the clusters tend to be more compact.
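The selection and shrinking of the representative points can be sketched as follows (a simplified illustration under our own naming conventions, combining the farthest-point selection described in footnote 2 with the shrinking step above):

    import numpy as np

    def representatives(points, c, alpha):
        # points: (M, N) array holding the members of one cluster.
        mean = points.mean(axis=0)
        # First scattered point: the point farthest from the mean.
        idx = [int(np.argmax(np.linalg.norm(points - mean, axis=1)))]
        for _ in range(c - 1):
            # Next: the point farthest from all previously chosen points.
            chosen = points[idx]
            dists = np.linalg.norm(points[:, None, :] - chosen[None, :, :], axis=2)
            idx.append(int(np.argmax(dists.min(axis=1))))
        scattered = points[idx]
        # Shrink each scattered point toward the mean by a fraction alpha.
        return scattered + alpha * (mean - scattered)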
Figure 3-5: Shrink factor toward centroid

Figure 3-6: Representatives of clusters
² In the first iteration, the point farthest from the mean is chosen as the first scattered point. Then, at each iteration, the point farthest from the previously chosen scattered points is chosen.
© A.G.Billard 2004 – Last Update March 2011