MACHINE LEARNING TECHNIQUES - LASA
3.2 Linear Classifiers

In this section, we consider solely ways to provide a linear classification of data. Non-linear methods for classification, such as ANN with backpropagation and Support Vector Machines, will be covered later in these lecture notes. Linear Discriminant Analysis and the related Fisher's linear discriminant are methods to find the linear combination of features (projections of the data) that best separates two or more classes of objects. The resulting combination may be used as a linear classifier. We describe these next.

3.2.1 Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) combines concepts of PCA and clustering to determine projections along which a dataset can be best separated into distinct classes.

Consider a data matrix composed of M N-dimensional data points, i.e. $X \in \mathbb{R}^{M \times N}$. LDA aims to find a linear transformation $A \in \mathbb{R}^{q \times N}$ that maps each data point $x^i$ of X, for $i = 1, \dots, M$ (these are N-dimensional vectors), into a corresponding q-dimensional vector $y^i$. That is,

$A: \; x^i \in \mathbb{R}^N \;\rightarrow\; y^i = A x^i \in \mathbb{R}^q, \quad q \le N$   (3.22)

Let us further assume that the data in X are partitioned into K classes $\{C_k\}_{k=1}^{K}$, where each class $C_k$ contains $n_k$ data points and $\sum_{k=1}^{K} n_k = M$. LDA aims to find the optimal transformation A such that the class structure of the original high-dimensional space is preserved in the low-dimensional space. In general, if each class is tightly grouped but well separated from the other classes, the quality of the clustering is considered to be high. In discriminant analysis, two scatter matrices, called the within-class ($S_w$) and between-class ($S_b$) matrices, are defined to quantify the quality of the clusters as follows:

$S_w = \sum_{k=1}^{K} \sum_{x^i \in C_k} (x^i - \mu_k)(x^i - \mu_k)^T, \qquad S_b = \sum_{k=1}^{K} n_k (\mu_k - \mu)(\mu_k - \mu)^T$   (3.23)

where $\mu_k = \frac{1}{n_k} \sum_{x \in C_k} x$ and $\mu = \frac{1}{M} \sum_{i=1}^{M} x^i$ are, respectively, the mean of the k-th class and the global mean. An implicit assumption of LDA is that all classes have equal class covariance (otherwise, the elements of the within-class matrix should be normalized by the covariance of the set of data points of that class).
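As an illustration, the two scatter matrices of Eq. (3.23) can be computed directly from the data. The following NumPy sketch (the function and variable names are ours, not part of these notes) assumes one data point per row and integer class labels:

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (S_w) and between-class (S_b) scatter matrices of Eq. (3.23).

    X      : (M, N) array, one N-dimensional data point per row.
    labels : (M,) array of class indices.
    """
    N = X.shape[1]
    mu = X.mean(axis=0)                        # global mean
    S_w = np.zeros((N, N))
    S_b = np.zeros((N, N))
    for k in np.unique(labels):
        X_k = X[labels == k]                   # data points of class C_k
        n_k = X_k.shape[0]
        mu_k = X_k.mean(axis=0)                # class mean
        X_c = X_k - mu_k
        S_w += X_c.T @ X_c                     # sum of (x - mu_k)(x - mu_k)^T over C_k
        d = (mu_k - mu).reshape(-1, 1)
        S_b += n_k * (d @ d.T)                 # n_k (mu_k - mu)(mu_k - mu)^T
    return S_w, S_b
```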
When the transformation A is linear, as given by (2.40), solving the above problem consists of finding the optimum of

$J(A) = \frac{\left| A^T S_b A \right|}{\left| A^T S_w A \right|}$

where $\left| \cdot \right|$ represents the matrix determinant. Equivalently, this reduces to maximizing $\mathrm{trace}(S_b)$ and minimizing $\mathrm{trace}(S_w)$. It is easy to see that $\mathrm{trace}(S_w)$ measures the closeness of the vectors within the classes, while $\mathrm{trace}(S_b)$ measures the separation between classes.

Such an optimization problem is equivalent to a generalized eigenvalue problem of the form $S_b x = \lambda S_w x$, $\lambda \neq 0$. The solution can be obtained by performing an eigenvalue decomposition of the matrix $S_w^{-1} S_b$ if $S_w$ is non-singular, or of $S_b^{-1} S_w$ if $S_b$ is non-singular. There are at most K-1 eigenvectors corresponding to nonzero eigenvalues, since the rank of the matrix $S_b$ is bounded from above by K-1. In effect, LDA requires that at least one of the two matrices $S_b$, $S_w$ be non-singular. This is an intrinsic limitation of LDA, referred to as the singularity problem: LDA fails when all scatter matrices are singular. If both matrices are singular, one can use an extension of LDA based on a pseudo-inverse transformation. Another approach to deal with the singularity problem is to apply an intermediate dimensionality reduction stage, using Principal Component Analysis (PCA), before LDA.

3.2.2 Fisher Linear Discriminant

Fisher's linear discriminant is very similar to LDA in that it determines a criterion for optimizing classification by increasing the separation between classes and decreasing the spread within each class. It differs from LDA in that it assumes neither that the classes are normally distributed nor that they have equal class covariances. For two classes whose probability distribution functions have associated means $\mu_1, \mu_2$ and covariance matrices $\Sigma_1, \Sigma_2$, the between- and within-class matrices are given by:

$S_b = (\mu_1 - \mu_2)(\mu_1 - \mu_2)^T, \qquad S_w = \Sigma_1 + \Sigma_2$   (3.24)

In the case where there are more than two classes (see the example for LDA), the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability. The within- and between-class matrices are then given by:

$S_b = \sum_{k=1}^{K} n_k (\mu_k - \mu)(\mu_k - \mu)^T, \qquad S_w = \sum_{k=1}^{K} \Sigma_k$   (3.25)
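To make the two criteria concrete, here is a minimal sketch (the names and interface are ours) of the LDA projection obtained from the leading eigenvectors of $S_w^{-1} S_b$, together with the two-class Fisher direction $w \propto S_w^{-1}(\mu_1 - \mu_2)$ that maximizes the ratio built from Eq. (3.24); both assume $S_w$ is non-singular:

```python
import numpy as np

def lda_projection(S_w, S_b, q):
    """Projection matrix A (q x N) built from the leading eigenvectors of S_w^{-1} S_b.

    Assumes S_w is non-singular; q should not exceed K - 1, the rank bound on S_b.
    """
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(S_w) @ S_b)
    order = np.argsort(eigvals.real)[::-1]      # sort by decreasing eigenvalue
    return eigvecs[:, order[:q]].real.T         # rows of A are the projection directions

def fisher_direction(X1, X2):
    """Two-class Fisher direction w, proportional to S_w^{-1}(mu_1 - mu_2),
    with S_w = Sigma_1 + Sigma_2 as in Eq. (3.24)."""
    S_w = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)
    return np.linalg.solve(S_w, X1.mean(axis=0) - X2.mean(axis=0))
```

Projecting the data then amounts to $y^i = A x^i$ as in Eq. (3.22) (i.e. `Y = X @ A.T` for a data matrix with one point per row).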