MACHINE LEARNING TECHNIQUES - LASA

3.2 Linear Classifiers

In this section, we consider solely ways to provide a linear classification of data. Non-linear methods for classification, such as ANN with backpropagation and Support Vector Machines, will be covered later in these lecture notes. Linear Discriminant Analysis and the related Fisher linear discriminant are methods to find the linear combination of features (projections of the data) that best separates two or more classes of objects. The resulting combination may be used as a linear classifier. We describe these next.

3.2.1 Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) combines concepts of PCA and clustering to determine projections along which a dataset can be best separated into two distinct classes.

Consider a data matrix $X \in \mathbb{R}^{N \times M}$ composed of $M$ $N$-dimensional data points. LDA aims to find a linear transformation $A \in \mathbb{R}^{q \times N}$ that maps each column $x^i$ of $X$, for $i = 1, \dots, M$ (these are $N$-dimensional vectors), into a corresponding $q$-dimensional vector $y^i$. That is,

$$
A:\; x^i \in \mathbb{R}^N \;\mapsto\; y^i = A\,x^i \in \mathbb{R}^q, \qquad q \le N
\tag{3.22}
$$

Let us further assume that the data in $X$ is partitioned into $K$ classes $\{C_k\}_{k=1}^{K}$, where each class $C_k$ contains $n_k$ data points and $\sum_{k=1}^{K} n_k = M$. LDA aims to find the optimal transformation $A$ such that the class structure of the original high-dimensional space is preserved in the low-dimensional space. In general, if each class is tightly grouped but well separated from the other classes, the quality of the clustering is considered to be high. In discriminant analysis, two scatter matrices, called the within-class ($S_w$) and between-class ($S_b$) matrices, are defined to quantify the quality of the clusters as follows:

$$
S_w = \sum_{k=1}^{K} \sum_{x \in C_k} \left(x - \mu^k\right)\left(x - \mu^k\right)^T, \qquad
S_b = \sum_{k=1}^{K} n_k \left(\mu^k - \mu\right)\left(\mu^k - \mu\right)^T
\tag{3.23}
$$

where $\mu^k = \frac{1}{n_k} \sum_{x \in C_k} x$ and $\mu = \frac{1}{M} \sum_{i=1}^{M} x^i$ are, respectively, the mean of the $k$-th class and the global mean. An implicit assumption of LDA is that all classes have equal class covariance (otherwise, the elements of the within-class matrix should be normalized by the covariance of the data points of that class).
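To make these definitions concrete, the following is a minimal NumPy sketch of how the two scatter matrices of Eq. (3.23) could be computed from a labeled dataset. The data layout (one $N$-dimensional point per column of `X`) and the function name `scatter_matrices` are choices made for this illustration only.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within-class and between-class scatter matrices of Eq. (3.23).

    X      : (N, M) array, one N-dimensional data point per column.
    labels : (M,) array of class indices, one per column of X.
    """
    N = X.shape[0]
    mu = X.mean(axis=1)                   # global mean over all M points
    S_w = np.zeros((N, N))
    S_b = np.zeros((N, N))
    for k in np.unique(labels):
        X_k = X[:, labels == k]           # the n_k points of class C_k
        n_k = X_k.shape[1]
        mu_k = X_k.mean(axis=1)           # class mean mu^k
        D = X_k - mu_k[:, None]           # centred class data
        S_w += D @ D.T                    # sum of (x - mu^k)(x - mu^k)^T over C_k
        d = (mu_k - mu)[:, None]
        S_b += n_k * (d @ d.T)            # n_k (mu^k - mu)(mu^k - mu)^T
    return S_w, S_b
```

Both matrices are $N \times N$ and enter directly into the optimization criterion $J(A)$ discussed next.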

When the transformation $A$ is linear, as given by (3.22), solving the above problem (finding the optimal transformation $A$) consists of finding the optimum of

$$
J(A) = \frac{\left| A^T S_b A \right|}{\left| A^T S_w A \right|},
$$

where $|\cdot|$ represents the matrix determinant. Equivalently, this reduces to maximizing $\operatorname{trace}(S_b)$ and minimizing $\operatorname{trace}(S_w)$. It is easy to see that $\operatorname{trace}(S_w)$ measures the closeness of the vectors within the classes, while $\operatorname{trace}(S_b)$ measures the separation between classes.

Such an optimization problem is equivalent to a generalized eigenvalue problem of the form $S_b x = \lambda S_w x$, $\lambda \neq 0$. The solution can be obtained by performing an eigenvalue decomposition of the matrix $S_w^{-1} S_b$ if $S_w$ is non-singular, or of $S_b^{-1} S_w$ if $S_b$ is non-singular. There are at most $K-1$ eigenvectors corresponding to nonzero eigenvalues, since the rank of the matrix $S_b$ is bounded from above by $K-1$. In effect, LDA requires that at least one of the two matrices $S_b$, $S_w$ be non-singular. This is an intrinsic limitation of LDA, referred to as the singularity problem: LDA fails when all scatter matrices are singular. If both matrices are singular, one can use an extension of LDA based on a pseudo-inverse transformation. Another approach to deal with the singularity problem is to apply an intermediate dimensionality reduction stage, using Principal Component Analysis (PCA), before LDA.
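As an illustration of the eigenvalue formulation above, the following sketch extracts the projection matrix $A$ by decomposing $S_w^{-1} S_b$, assuming $S_w$ is non-singular; it reuses the hypothetical `scatter_matrices` helper from the previous sketch. The singular case (pseudo-inverse extension or PCA before LDA) is not handled here.

```python
import numpy as np

def lda_projection(X, labels, q=None):
    """Solve S_b v = lambda S_w v via an eigen-decomposition of S_w^{-1} S_b
    and stack the leading eigenvectors as the rows of A (so that y = A x).

    Assumes S_w is non-singular; see the text for the singular case.
    """
    S_w, S_b = scatter_matrices(X, labels)        # Eq. (3.23)
    K = np.unique(labels).size
    q = (K - 1) if q is None else min(q, K - 1)   # at most K-1 useful directions

    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
    order = np.argsort(eigvals.real)[::-1]        # largest eigenvalues first
    A = eigvecs[:, order[:q]].real.T              # q x N projection matrix
    return A

# Example use: Y = lda_projection(X, labels) @ X  gives the (K-1) x M projected data.
```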

3.2.2 Fisher Linear Discriminant

Fisher's linear discriminant is very similar to LDA in that it aims at determining a criterion for optimizing classification, by increasing the separation between classes while decreasing the scatter within each class. It differs from LDA, however, in that it does not assume that the classes are normally distributed or that they have equal class covariances. For two classes whose probability distribution functions have associated means $\mu_1$, $\mu_2$ and covariance matrices $\Sigma_1$, $\Sigma_2$, the between-class and within-class matrices are given by:

$$
S_b = \left(\mu_1 - \mu_2\right)\left(\mu_1 - \mu_2\right)^T, \qquad
S_w = \Sigma_1 + \Sigma_2
\tag{3.24}
$$

In the case where there are more than two classes (see the example for LDA), the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability. The within-class and between-class matrices are then given by:

$$
S_w = \sum_{k=1}^{K} \Sigma_k, \qquad
S_b = \sum_{k=1}^{K} n_k \left(\mu^k - \mu\right)\left(\mu^k - \mu\right)^T
\tag{3.25}
$$
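For the two-class case of Eq. (3.24), maximizing the Fisher criterion yields a single discriminant direction proportional to $S_w^{-1}(\mu_1 - \mu_2)$, onto which new points are projected and thresholded. The sketch below illustrates this; the threshold (the midpoint of the projected class means) and the function names are assumptions made for this example, not part of the notes.

```python
import numpy as np

def fisher_two_class(X1, X2):
    """Two-class Fisher discriminant, following Eq. (3.24).

    X1, X2 : (N, n1) and (N, n2) arrays, one data point per column.
    Returns the discriminant direction w and a decision threshold.
    """
    mu1, mu2 = X1.mean(axis=1), X2.mean(axis=1)
    S_w = np.cov(X1) + np.cov(X2)             # within-class matrix Sigma_1 + Sigma_2
    w = np.linalg.solve(S_w, mu1 - mu2)       # w proportional to S_w^{-1}(mu_1 - mu_2)
    threshold = 0.5 * (w @ mu1 + w @ mu2)     # midpoint of projected means (assumption)
    return w, threshold

def classify(x, w, threshold):
    """Assign x to class 1 if its projection w^T x exceeds the threshold."""
    return 1 if w @ x > threshold else 2
```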
