
variance of the output, e.g. by increasing the weights. Note that, by doing this, you increase the unpredictability of the neuron's output, which can have disastrous consequences in some applications.
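To make this concrete, here is a minimal numerical sketch in Python. It assumes the output-noise model of the preceding discussion, i.e. y = Σ_j w_j x_j + ν with Gaussian input and Gaussian output noise; the covariance, noise level and weights below are illustrative values, not taken from the text. With noise added only at the output, I(x, y) = ½ log(σ_y²/σ_ν²), so scaling the weights up does increase the information.

```python
import numpy as np

# Sketch (assumed output-noise model y = w.x + nu, Gaussian input and noise):
# I(x, y) = 0.5 * log(sigma_y^2 / sigma_nu^2), with sigma_y^2 = w^T C w + sigma_nu^2,
# so scaling the weights increases the information, at the cost of a larger,
# less predictable output.
sigma_nu = 0.5                       # output-noise std (illustrative value)
C = np.array([[1.0, 0.3],            # input covariance (illustrative values)
              [0.3, 1.0]])

def info_output_noise(w):
    var_y = w @ C @ w + sigma_nu**2  # variance of the noisy output
    return 0.5 * np.log(var_y / sigma_nu**2)

w = np.array([0.8, -0.5])
for scale in (1.0, 2.0, 4.0):
    print(scale, info_output_noise(scale * w))
# The printed information grows with the weight scale.
```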

Noise on the Inputs

In most applications, you will have to face noise in the inputs, with possibly a different type of noise on each input. In such a scenario, the output of a neuron is described by the following:

$$ y = \sum_j w_j \left( x_j + \nu_j \right) \qquad (6.7) $$

One can show that the mutual information between input and output in this case becomes:

$$ I(x, y) = \frac{1}{2} \log \left( \frac{\sigma_y^2}{\sigma_\nu^2 \sum_j w_j^2} \right) \qquad (6.8) $$

In this case, it is not sufficient to simply increase the weights since, by doing so, one also increases the denominator. More sophisticated techniques must be used on a neuron-by-neuron basis.
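The following short Python sketch evaluates Eq. (6.8) for the noisy-input model of Eq. (6.7), with illustrative values for the input covariance and the noise level. Because both σ_y² and the noise term σ_ν² Σ_j w_j² scale with the squared weights, rescaling the weight vector leaves the information unchanged, which is why increasing the weights no longer helps.

```python
import numpy as np

# Sketch of Eq. (6.8): noisy-input model y = sum_j w_j (x_j + nu_j), with
# uncorrelated Gaussian noise of variance sigma_nu^2 on each input.
# Numerator and denominator both scale with the squared weights,
# so rescaling w leaves I(x, y) unchanged.
sigma_nu = 0.5                       # input-noise std (illustrative value)
C = np.array([[1.0, 0.3],            # input covariance (illustrative values)
              [0.3, 1.0]])

def info_input_noise(w):
    noise_var = sigma_nu**2 * np.sum(w**2)
    var_y = w @ C @ w + noise_var    # variance of the noisy output
    return 0.5 * np.log(var_y / noise_var)

w = np.array([0.8, -0.5])
for scale in (1.0, 2.0, 4.0):
    print(scale, info_input_noise(scale * w))
# All three values coincide: only the direction of w matters in this case.
```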

More than one output neuron

Imagine a scenario with two inputs and two outputs, in which the two outputs attempt to jointly convey as much information as possible about the two inputs. In this case, each output neuron's activation is given by:

$$ y_i = \sum_j w_{ij} x_j + \nu_i, \qquad i = 1, 2 \qquad (6.9) $$

As in the single-output case, we can assume that the noise terms are uncorrelated and Gaussian, and write:

$$ h(\nu) = h(\nu_1, \nu_2) = h(\nu_1) + h(\nu_2) = 1 + \log\left( 2\pi \sigma_\nu^2 \right) $$

Since the output neurons are both dependent on the same two inputs, they are correlated. One can calculate the correlation matrix R as:

$$ R = E\left( y y^T \right) = E\left[ \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \begin{pmatrix} y_1 & y_2 \end{pmatrix} \right] = \begin{pmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{pmatrix} \qquad (6.10) $$

One can show that the mutual information is then equal to:

$$ I(x, y) = h(y) - h(\nu) = \frac{1}{2} \log\left( \frac{\det R}{\sigma_\nu^4} \right) $$
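As an illustrative sketch of the two-output case, the snippet below builds R for the model of Eq. (6.9) under the Gaussian assumptions above, with R = W C Wᵀ + σ_ν² I, and evaluates ½ log(det R / σ_ν⁴). The weight matrix, input covariance and noise level are made up for the example.

```python
import numpy as np

# Sketch for the two-output case of Eq. (6.9): y = W x + nu, with nu_1, nu_2
# uncorrelated Gaussian noise of variance sigma_nu^2.
# Then R = E(y y^T) = W C W^T + sigma_nu^2 * I, and the Gaussian mutual
# information is I(x, y) = 0.5 * log(det(R) / sigma_nu^4).
sigma_nu = 0.5                       # output-noise std (illustrative value)
C = np.array([[1.0, 0.3],            # input covariance (illustrative values)
              [0.3, 1.0]])
W = np.array([[0.8, -0.5],           # weight matrix (illustrative values)
              [0.6,  0.4]])

R = W @ C @ W.T + sigma_nu**2 * np.eye(2)   # correlation matrix of the outputs
I_xy = 0.5 * np.log(np.linalg.det(R) / sigma_nu**4)
print(R)
print(I_xy)
```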
