MACHINE LEARNING TECHNIQUES - LASA



Figure 2-2: An example of dimensionality reduction using PCA. The source images (left) are 32x32 color pixels; each image corresponds to a 3072-dimensional vector. (center) The principal components, shown in decreasing order of eigenvalue; notice how the first components capture the main features of the data (e.g. the color of the balls), while the components further down contain only fine details. (right) Projection of the source images onto the first two principal components.

We are faced here with contradictory goals: on one hand, we want to simplify the problem by reducing the dimension of the representation; on the other hand, we want to preserve as much of the original information content as possible. PCA offers a convenient way to control the trade-off between losing information and simplifying the problem at hand.

As will be noted later, it is also possible to create piecewise linear models by dividing the input data into smaller regions and fitting linear models locally to each region.

2.1.2 Solving PCA as a constrained optimization problem

The standard PCA procedure described in the previous paragraphs can also be rephrased as a constrained optimization problem in two different ways.

2.1.2.1 Variance Maximization through constrained optimization

Observe first that PCA projects the original dataset onto a set of directions that are required to form an orthonormal basis. In addition, because these directions are the eigenvectors of the covariance matrix, the first projection lies along the direction of maximal variance of the data.

This can be formulated as an optimization problem that maximizes the following objective function:

$$\arg\max_{e^j,\; j\in\{1,\dots,N\}} J\left(e^1,\dots,e^N\right) \;=\; \arg\max \frac{1}{M}\sum_{i=1}^{M}\left(\left(e^j\right)^T x^i\right)^2 \;=\; \left(e^j\right)^T C\, e^j \qquad (2.7)$$

with C the covariance matrix of the dataset (which is first made zero-mean).

Adding the constraint that the e^j should form an orthonormal basis, i.e.

$$\left\|e^j\right\| = 1,\ \forall j = 1,\dots,q \quad \text{and} \quad \left(e^k\right)^T e^j = 0,\ \forall k \neq j,$$

PCA becomes a constrained optimization problem which can be solved using Lagrange multipliers. PCA proceeds iteratively by first solving for the first eigenvector, using:

$$L\left(e^1\right) = \left(e^1\right)^T C\, e^1 - \lambda_1\left(\left(e^1\right)^T e^1 - 1\right) \qquad (2.8)$$

where λ_1 is the first Lagrange multiplier. Setting the derivative of L with respect to e^1 to zero yields C e^1 = λ_1 e^1, i.e. e^1 is the eigenvector of C associated with the largest eigenvalue λ_1. One then solves iteratively for all other eigenvectors, adding the orthogonality constraint at each step.
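In practice, the constrained maximization (2.7)-(2.8) is carried out by an eigendecomposition of the covariance matrix. The short NumPy sketch below illustrates this computation; it is an illustration added alongside these notes, not part of them, and the data sizes and variable names are arbitrary.

import numpy as np

# Illustrative data: M = 200 samples of dimension N = 8
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))

mu = X.mean(axis=0)
Xc = X - mu                                      # make the data zero-mean
C = (Xc.T @ Xc) / Xc.shape[0]                    # covariance matrix C
eigvals, eigvecs = np.linalg.eigh(C)             # eigh, since C is symmetric
order = np.argsort(eigvals)[::-1]                # decreasing order of eigenvalue
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

q = 2
E = eigvecs[:, :q]                               # first q eigenvectors e^1, ..., e^q
Y = Xc @ E                                       # projections (e^j)^T x^i, as in Figure 2-2 (right)

# The variance captured by e^1 equals (e^1)^T C e^1 = lambda_1, the largest eigenvalue.
print(eigvals[0], E[:, 0] @ C @ E[:, 0])

Because np.linalg.eigh returns orthonormal eigenvectors, the orthonormality constraints of (2.7) are satisfied by construction.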

2.1.2.2 Reconstruction error minimization through constrained optimization

Earlier on, we showed that PCA finds the optimal (in a mean-square sense) projections of the dataset. This, again, can be formalized as a constrained optimization problem, here minimizing the following objective function:

$$J\left(e^1,\dots,e^q,\lambda\right) = \frac{1}{M}\sum_{i=1}^{M}\left\| x^i - \mu - \sum_{j=1}^{q} \lambda_{ij}\, e^j \right\|^2 \qquad (2.9)$$

where λ_ij = (e^j)^T x^i are the projection coefficients and μ is the mean of the data.

One optimizes J under the constraint that the eigenvectors form an orthonormal basis, i.e. ||e^i|| = 1 and (e^i)^T e^j = 0 for all i ≠ j, with i, j = 1,...,q (a numerical illustration of this objective is given below).

2.1.3 PCA limitations

PCA is a simple, straightforward means of determining the major dimensions of a dataset. It suffers, however, from a number of drawbacks. The principal components found by projecting the dataset onto the perpendicular basis vectors (eigenvectors) are uncorrelated, and their directions orthogonal. The assumption that the reference frame is orthogonal is often too constraining; see Figure 2-3 for an illustration.

Figure 2-3: Assume a set of data points whose joint distribution forms a parallelogram. The first PC is the direction with the greatest spread, along the longest axis of the parallelogram. The second PC is orthogonal to the first one, by necessity. The independent component directions are, however, parallel to the sides of the parallelogram.

PCA ensures only uncorrelatedness. This is a less constraining condition than statistical independence, which makes standard PCA ill-suited for dealing with non-Gaussian data. ICA is a method that specifically ensures statistical independence.
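Returning to the reconstruction-error objective (2.9) of Section 2.1.2.2, the sketch below (again an added illustration with arbitrary data, not part of the original notes) reconstructs the data from its first q principal components and checks the standard identity that the mean squared reconstruction error equals the sum of the discarded eigenvalues.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))                   # M = 500 samples, N = 10 dimensions
mu = X.mean(axis=0)
Xc = X - mu
C = (Xc.T @ Xc) / Xc.shape[0]                    # covariance of the zero-mean data
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

q = 3
E = eigvecs[:, :q]                               # first q eigenvectors e^j
Lam = Xc @ E                                     # coefficients lambda_ij = (e^j)^T (x^i - mu)
X_hat = mu + Lam @ E.T                           # reconstruction mu + sum_j lambda_ij e^j
J = np.mean(np.sum((X - X_hat) ** 2, axis=1))    # objective (2.9)
print(J, eigvals[q:].sum())                      # the two numbers match closely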

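The limitation illustrated in Figure 2-3 can also be reproduced numerically. In the sketch below (an added illustration; the mixing matrix A and the sample count are arbitrary), two statistically independent uniform sources are mixed by a non-orthogonal matrix, so the data fill a parallelogram. PCA returns orthogonal directions, which are therefore not parallel to the sides of the parallelogram, i.e. not aligned with the independent component directions.

import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, size=(5000, 2))       # statistically independent sources
A = np.array([[3.0, 1.0],
              [0.0, 1.0]])                        # non-orthogonal mixing directions
X = S @ A.T                                       # data supported on a parallelogram

Xc = X - X.mean(axis=0)
C = (Xc.T @ Xc) / Xc.shape[0]
eigvals, E = np.linalg.eigh(C)                    # PCA directions (orthonormal columns of E)
print(E.T @ E)                                    # identity matrix: the PCs are orthogonal
print(E)                                          # compare with the independent directions below:
print(A / np.linalg.norm(A, axis=0))              # the PCs are not parallel to these columns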