MACHINE LEARNING TECHNIQUES - LASA
2.3.6.1 Negentropy

To obtain a measure of non-Gaussianity that is zero for a Gaussian variable and always nonnegative, one often uses a slightly modified version of the definition of differential entropy, called the negentropy. The negentropy J is defined as follows:

    J(y) = H(y_{gauss}) - H(y)                                            (2.30)

where y_{gauss} is a Gaussian random variable with the same covariance matrix as y. Owing to the abovementioned properties of differential entropy, the negentropy is always non-negative, and it is zero if and only if y has a Gaussian distribution. Negentropy is invariant under invertible linear transformations.

The advantage of using negentropy, or, equivalently, differential entropy, as a measure of non-Gaussianity is that it is well justified by statistical theory. In fact, negentropy is in some sense the optimal estimator of non-Gaussianity, as far as statistical properties are concerned. The problem in using negentropy is, however, that it is computationally very difficult. Estimating negentropy using the above definition would require an estimate (possibly nonparametric) of the pdf of the independent component. Therefore, simpler approximations of negentropy are very useful. For instance, Hyvärinen et al. (2001) propose the following approximation. For a given non-quadratic function G:

    J(y) ∝ [ E{G(y)} - E{G(y_{Gauss})} ]^2
    J(w^T x) ∝ [ E{G(w^T x)} - E{G(y_{Gauss})} ]^2                        (2.31)

2.3.6.2 FastICA for one unit

To begin with, we shall show the one-unit version of FastICA. By a "unit" we refer to a computational unit. As we will see in Section 6.7.2, this can also be considered as the output of a neuron.

The FastICA learning rule determines a direction, i.e. a unit vector w, such that the projection w^T x maximizes non-Gaussianity. Non-Gaussianity is here measured by the approximation of the empirical measure of negentropy on the projection, i.e. J(w^T x).
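The negentropy approximation of Eq. (2.31) can be sketched numerically. Below is a minimal illustration, not part of the original notes: the function name, the choice G(y) = log cosh(y), and the sample sizes are assumptions made for the example. It shows that the approximation is near zero for a Gaussian sample and clearly positive for a non-Gaussian (here uniform, unit-variance) one.

```python
import numpy as np

def negentropy_approx(y, n_gauss=100_000, rng=None):
    """Approximate J(y) ∝ (E{G(y)} - E{G(y_gauss)})^2 with G(y) = log cosh(y).

    y is assumed standardized (zero mean, unit variance), so the Gaussian
    reference variable has the same variance, as Eq. (2.31) requires.
    """
    rng = np.random.default_rng(rng)
    G = lambda u: np.log(np.cosh(u))
    gauss = rng.standard_normal(n_gauss)      # reference Gaussian sample
    return (G(y).mean() - G(gauss).mean()) ** 2

rng = np.random.default_rng(0)
g_sample = rng.standard_normal(10_000)                    # Gaussian data
u_sample = rng.uniform(-np.sqrt(3), np.sqrt(3), 10_000)   # uniform, unit variance
print(negentropy_approx(g_sample, rng=1))   # close to zero
print(negentropy_approx(u_sample, rng=1))   # clearly larger
```

With fixed seeds both calls are deterministic; the uniform sample yields a visibly larger value, consistent with negentropy vanishing only for Gaussian variables.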
Recall that the variance of w^T x must here be constrained to unity; for whitened data this is equivalent to constraining the norm of w to be unity.

FastICA is based on a fixed-point iteration scheme for finding a maximum of the non-Gaussianity of w^T x. It can also be derived as an approximative Newton iteration on the derivative of the negentropy, dJ(y)/dw with y = w^T x. Hyvärinen et al. (2001) propose to use either of the two following non-quadratic functions:

    G_1(y) = (1/a) log(cosh(a·y)),   1 ≤ a ≤ 2
    G_2(y) = -exp(-y^2 / 2)

© A.G.Billard 2004 – Last Update March 2011
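The whitening precondition mentioned above can be made concrete. The following is a minimal sketch, not from the notes; it uses PCA whitening with plain NumPy, and the variable names are illustrative. After the transform, the data have zero mean and identity covariance, which is what lets the unit-variance constraint on w^T x reduce to ||w|| = 1.

```python
import numpy as np

def whiten(X):
    """PCA-whiten the data: zero mean, identity covariance.

    X has shape (n_samples, n_features); returns X_white with
    cov(X_white) ≈ I, the precondition FastICA relies on.
    """
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)                     # cov = V diag(d) V^T
    W = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T   # symmetric whitening matrix
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3)) @ rng.standard_normal((3, 3))  # correlated toy data
Xw = whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))   # ≈ identity matrix
```

The symmetric choice W = V diag(d^{-1/2}) V^T makes cov(Xw) exactly the identity up to floating-point error; any invertible square root of the inverse covariance would do.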
Denoting by g the derivative of each of the above two functions, we have:

    g_1(u) = tanh(a_1·u)
    g_2(u) = u·exp(-u^2 / 2)

where 1 ≤ a_1 ≤ 2 is some suitable constant, often taken as a_1 = 1. These functions are smooth and hence particularly well suited for performing gradient descent.

The basic form of the FastICA algorithm is as follows:

1. Choose an initial (e.g. random) weight vector w.
2. Compute w^+ = E{x·g(w^T x)} - E{g'(w^T x)}·w.
3. Normalize the weight vector: w = w^+ / ||w^+||.
4. If the weights have not converged, i.e. |w(t)^T w(t-1)| is not close to 1, go back to step 2.

Note that it is not necessary that the vector converge to a single point, since w and -w define the same direction. Recall also that it is here assumed that the data have been whitened.

2.3.6.3 FastICA for several units

The one-unit algorithm of the preceding subsection estimates just one of the independent components, or one projection pursuit direction. To estimate several independent components, we need to run the one-unit FastICA algorithm using several units (e.g. neurons) with weight vectors w_1, ..., w_q.

To prevent different vectors from converging to the same maxima, we must decorrelate the outputs w_1^T x, ..., w_q^T x at each iteration. We present here three methods for achieving this.

A simple way of achieving decorrelation is a deflation scheme based on a Gram-Schmidt-like decorrelation. This means that we estimate the independent components one by one. When we have estimated p independent components, i.e. p vectors w_1, ..., w_p, we run the one-unit fixed-point algorithm for w_{p+1}, and after every iteration step subtract from w_{p+1} the "projections" (w_{p+1}^T w_j)·w_j, j = 1, ..., p, onto the previously estimated vectors, and then renormalize w_{p+1}:

    1. w_{p+1} = w_{p+1} - sum_{j=1}^{p} (w_{p+1}^T w_j)·w_j
    2. w_{p+1} = w_{p+1} / sqrt(w_{p+1}^T w_{p+1})                        (2.32)
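Steps 1–4 together with the deflation of Eq. (2.32) can be sketched as follows. This is a minimal illustration under stated assumptions, not the notes' reference implementation: it fixes G_1 with a_1 = 1, so g(u) = tanh(u) and g'(u) = 1 - tanh^2(u), and it assumes the data have already been whitened; the function name and the toy demo are invented for the example.

```python
import numpy as np

def fastica_unit(X, w_prev, max_iter=200, tol=1e-6, rng=None):
    """One-unit FastICA with Gram-Schmidt deflation against w_prev.

    X: whitened data, shape (n_samples, n_features).
    w_prev: list of already-estimated unit vectors to decorrelate against.
    """
    rng = np.random.default_rng(rng)
    w = rng.standard_normal(X.shape[1])         # step 1: random initial vector
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = X @ w                              # projections w^T x
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        # step 2: w+ = E{x g(w^T x)} - E{g'(w^T x)} w
        w_new = (X * g[:, None]).mean(axis=0) - g_prime.mean() * w
        for wj in w_prev:                       # deflation, Eq. (2.32)
            w_new -= (w_new @ wj) * wj
        w_new /= np.linalg.norm(w_new)          # step 3: renormalize
        if abs(w_new @ w) > 1 - tol:            # step 4: |w(t)^T w(t-1)| ≈ 1
            return w_new
        w = w_new
    return w

# Toy demo: two whitened mixtures of two non-Gaussian (uniform) sources.
rng = np.random.default_rng(0)
S = rng.uniform(-np.sqrt(3), np.sqrt(3), (5000, 2))   # unit-variance sources
X = S @ rng.standard_normal((2, 2))                   # mixed observations
X -= X.mean(axis=0)
d, V = np.linalg.eigh(np.cov(X, rowvar=False))
X = X @ V @ np.diag(1.0 / np.sqrt(d)) @ V.T           # whiten
w1 = fastica_unit(X, [], rng=1)
w2 = fastica_unit(X, [w1], rng=2)
print(abs(w1 @ w2))   # ≈ 0: deflation keeps the directions decorrelated
```

Because each iteration deflates against the previously estimated vectors before renormalizing, the second direction comes out orthogonal to the first by construction, which is exactly what Eq. (2.32) enforces.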