

The two most classical probability distributions are:

• The Uniform distribution. Its simplest form is when the density is a constant, $p(x) = a, \forall x$. It can also be represented by a step function.

• The Gaussian or Normal distribution:

$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

To emphasize that the pdf depends on the definition of the mean and covariance, one sometimes writes $p(x \mid \mu, \sigma)$.

• The multi-dimensional Gaussian or Normal distribution has a pdf given by:

$$p(x \mid \mu, \Sigma) = \frac{1}{(2\pi)^{N/2}\,|\Sigma|^{1/2}}\, e^{-\frac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)}$$

If $x$ is $N$-dimensional, then $\mu$ is an $N$-dimensional mean vector and $\Sigma$ is an $N \times N$ covariance matrix (see the numerical sketch after this list).
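As a minimal sketch (not part of the original text), both densities can be evaluated directly from the formulas above, e.g. with NumPy; the function names `gaussian_pdf` and `multivariate_gaussian_pdf` are illustrative only:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian density p(x | mu, sigma)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def multivariate_gaussian_pdf(x, mu, cov):
    """N-dimensional Gaussian density p(x | mu, Sigma)."""
    x, mu, cov = np.asarray(x, float), np.asarray(mu, float), np.asarray(cov, float)
    n = mu.size
    diff = x - mu
    norm = (2.0 * np.pi) ** (n / 2.0) * np.sqrt(np.linalg.det(cov))
    exponent = -0.5 * diff @ np.linalg.solve(cov, diff)  # -(1/2)(x-mu)^T Sigma^-1 (x-mu)
    return np.exp(exponent) / norm

# Evaluate both densities at a point
print(gaussian_pdf(0.5, mu=0.0, sigma=1.0))
print(multivariate_gaussian_pdf([0.5, -0.2], mu=[0.0, 0.0],
                                cov=[[1.0, 0.3], [0.3, 2.0]]))
```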

Figure 9-1: A 2-dimensional Gaussian distribution. The 2D oval-shaped representation (left) is obtained by projecting the density onto the 2D axes of the variate X. The color code represents surfaces of 1 standard deviation (std) in red, 2 std in light blue, 3 std in darker blue, etc.
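As a sketch of how contours like those in Figure 9-1 can be drawn (this code is not from the original text; the mean and covariance values are arbitrary), the ellipses at 1, 2 and 3 standard deviations are level sets of the Mahalanobis distance $\sqrt{(x-\mu)^{T}\Sigma^{-1}(x-\mu)}$:

```python
import numpy as np
import matplotlib.pyplot as plt

mu = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.5], [0.5, 2.0]])

xs, ys = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))
diff = np.stack([xs - mu[0], ys - mu[1]], axis=-1)
inv = np.linalg.inv(cov)
# Mahalanobis distance of every grid point to the mean
maha = np.sqrt(np.einsum('...i,ij,...j->...', diff, inv, diff))

plt.contour(xs, ys, maha, levels=[1, 2, 3])  # 1, 2, 3 std ellipses
plt.gca().set_aspect('equal')
plt.show()
```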

9.2.4 Marginal Probability Distribution or Marginal Density<br />

Consider two random variables x and y with joint distribution $p(x, y)$; the marginal density of x is then given by:

$$p_x(x) = \int p(x, y)\, dy$$
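As a minimal sketch (not from the original text), the marginal can be approximated numerically by integrating the joint density over y on a grid; the joint density used here is an arbitrary product of two standard Gaussians, chosen only to make the example self-contained:

```python
import numpy as np

def p_xy(x, y):
    # Illustrative joint density: product of two standard 1-D Gaussians
    return (np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
            * np.exp(-y ** 2 / 2.0) / np.sqrt(2.0 * np.pi))

y_grid = np.linspace(-8.0, 8.0, 2001)

def p_x(x):
    # p_x(x) = integral of p(x, y) dy, approximated by the trapezoidal rule
    return np.trapz(p_xy(x, y_grid), y_grid)

print(p_x(0.0))  # ~0.3989, i.e. the standard Gaussian density at 0
```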

