MACHINE LEARNING TECHNIQUES - LASA
Figure 2-2: An example of dimensionality reduction using PCA. The source images (left) are 32x32 color pixels. Each image corresponds to a 3072-dimensional vector. (Center) The principal components, shown in decreasing order of eigenvalue; notice how the first components contain the main features of the data (e.g. the color of the balls), while the components further down contain only fine details. (Right) Projection of the source images onto the first two principal components.

We are here faced with contradictory goals: on one hand, we should simplify the problem by reducing the dimension of the representation; on the other hand, we want to preserve as much as possible of the original information content. PCA offers a convenient way to control the trade-off between losing information and simplifying the problem at hand.

As will be noted later, it may be possible to create piecewise linear models by dividing the input data into smaller regions and fitting linear models locally to the data.

2.1.2 Solving PCA as an optimization under constraint problem

The standard PCA procedure delineated in the previous paragraphs can also be rephrased as an optimization under constraint problem, in two different ways.

2.1.2.1 Variance maximization through constrained optimization

Observe first that, by projecting the original dataset onto the eigenvectors, PCA requires that the projections form an orthonormal basis. In addition, by projecting onto the eigenvectors of the covariance matrix, it ensures that the first projection is along the direction of maximal variance of the data. This can be formulated as an optimization problem that maximizes the following objective function:

    J(e^1, \dots, e^N) = \arg\max_{e^j,\; j \in \{1,\dots,N\}} \frac{1}{M} \sum_{i=1}^{M} \big( (e^j)^T x^i \big)^2 = \arg\max_{e^j} \, (e^j)^T C e^j        (2.7)

with C the covariance matrix of the dataset (which is first made zero-mean).

Adding the constraint that the e^j should form an orthonormal basis, i.e. \|e^j\| = 1, \forall j = 1,\dots,q and (e^k)^T e^j = 0, \forall k \neq j, PCA becomes an optimization under constraint problem, which can be solved using Lagrange multipliers. PCA proceeds iteratively by first solving for the first eigenvector, using the Lagrangian:

    L(e^1) = (e^1)^T C e^1 - \lambda_1 \big( (e^1)^T e^1 - 1 \big)        (2.8)

where \lambda_1 is the first Lagrange multiplier. Setting the derivative of L with respect to e^1 to zero yields C e^1 = \lambda_1 e^1, i.e. e^1 is an eigenvector of C with eigenvalue \lambda_1. One then solves iteratively for all other eigenvectors, adding the orthogonality constraint.
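As a minimal numerical sketch of this variance-maximization view (not part of the original text; the function name and data are illustrative), power iteration on the covariance matrix converges to the unit-norm direction e^1 of maximal variance, re-imposing the constraint \|e^1\| = 1 at each step:

```python
import numpy as np

def first_principal_component(X, n_iter=200):
    """Power iteration on the covariance matrix: converges to the
    unit-norm direction of maximal variance (the first eigenvector)."""
    Xc = X - X.mean(axis=0)           # zero-mean the data, as in the text
    C = Xc.T @ Xc / len(Xc)           # covariance matrix C
    e = np.random.default_rng(0).normal(size=C.shape[0])
    for _ in range(n_iter):
        e = C @ e                     # amplify the dominant eigen-direction
        e /= np.linalg.norm(e)        # re-impose the constraint ||e|| = 1
    return e

# Anisotropic 2-D data: most of the variance lies along the first axis
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2)) * np.array([5.0, 1.0])
e1 = first_principal_component(X)
print(np.abs(e1))  # close to [1, 0], the high-variance direction
```

Subsequent eigenvectors can be found the same way after deflating C (subtracting \lambda_1 e^1 (e^1)^T), which is one way to enforce the orthogonality constraint mentioned above.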
2.1.2.2 Reconstruction error minimization through constrained optimization

Earlier on, we showed that PCA finds the optimal (in a mean-square sense) projections of the dataset. This, again, can be formalized as an optimization under constraint problem, minimizing the following objective function:

    J(e^1, \dots, e^q, \lambda) = \frac{1}{M} \sum_{i=1}^{M} \Big\| x^i - \mu - \sum_{j=1}^{q} \lambda_{ij} e^j \Big\|^2        (2.9)

where \lambda_{ij} = (e^j)^T (x^i - \mu) are the projection coefficients and \mu the mean of the data.

One optimizes J under the constraints that the eigenvectors form an orthonormal basis, i.e.:

    \|e^j\| = 1 and (e^i)^T e^j = 0, \forall i \neq j, \; i, j = 1,\dots,q.

2.1.3 PCA limitations

PCA is a simple, straightforward means of determining the major dimensions of a dataset. It suffers, however, from a number of drawbacks. The principal components found by projecting the dataset onto the perpendicular basis vectors (eigenvectors) are uncorrelated, and their directions orthogonal. The assumption that the referential is orthogonal is often too constraining; see Figure 2-3 for an illustration.

Figure 2-3: Assume a set of data points whose joint distribution forms a parallelogram. The first PC is the direction with the greatest spread, along the longest axis of the parallelogram. The second PC is orthogonal to the first one, by necessity. The independent component directions are, however, parallel to the sides of the parallelogram.

PCA ensures only uncorrelatedness. This is a less constraining condition than statistical independence, which makes standard PCA ill-suited for dealing with non-Gaussian data. ICA is a method that specifically ensures statistical independence.

© A.G.Billard 2004 – Last Update March 2011
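The reconstruction-error view of Eq. (2.9) can be checked numerically. The sketch below (illustrative, not from the original text) keeps the first q eigenvectors of the covariance matrix, computes the projection coefficients \lambda_{ij}, and evaluates the mean squared reconstruction error, which decreases as q grows and vanishes when q reaches the full dimension:

```python
import numpy as np

def pca_reconstruction_error(X, q):
    """Mean squared reconstruction error of Eq. (2.9) when keeping
    the first q eigenvectors of the covariance matrix."""
    mu = X.mean(axis=0)
    Xc = X - mu
    C = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(C)        # eigh: ascending eigenvalues
    E = vecs[:, ::-1][:, :q]              # N x q basis, decreasing eigenvalue
    lam = Xc @ E                          # projection coefficients lambda_ij
    X_rec = mu + lam @ E.T                # reconstruction mu + sum_j lambda_ij e^j
    return np.mean(np.sum((X - X_rec) ** 2, axis=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) * np.array([4.0, 3.0, 2.0, 1.0, 0.5])
errors = [pca_reconstruction_error(X, q) for q in range(1, 6)]
print(errors)  # decreasing; (numerically) zero at q = 5, the full dimension
```

The error for a given q equals the sum of the discarded eigenvalues, which is why ordering the components by decreasing eigenvalue gives the mean-square-optimal projection claimed above.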