
9.4.2 Conjugate Gradient Descent

The conjugate gradient method is an algorithm for finding the nearest local minimum of a function of n variables; it presupposes that the gradient of the function can be computed. It goes downhill along conjugate directions instead of the local gradient. If the vicinity of the minimum has the shape of a long, narrow valley, the minimum is reached in far fewer steps than would be the case using the method of steepest descent.
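To make the notion of conjugacy concrete (the definition and result below are standard and are not taken from the original notes): two nonzero directions $\mathbf{p}_i$ and $\mathbf{p}_j$ are conjugate with respect to a symmetric positive definite matrix $A$ if

$$
\mathbf{p}_i^{\top} A\, \mathbf{p}_j = 0 \qquad \text{for } i \neq j .
$$

For a quadratic objective $f(\mathbf{x}) = \tfrac{1}{2}\,\mathbf{x}^{\top} A\, \mathbf{x} - \mathbf{b}^{\top}\mathbf{x}$ in $n$ variables, successive exact line searches along $n$ mutually conjugate directions reach the minimum in at most $n$ steps, which is why the long, narrow valley poses no particular difficulty for this method.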

The conjugate gradient method is an effective method for symmetric positive definite systems, and it is the oldest and best-known non-stationary iterative method. It proceeds by generating sequences of iterates (i.e., successive approximations to the solution), of residuals corresponding to those iterates, and of search directions used in updating the iterates and residuals. Although these sequences can become long, only a small number of vectors need to be kept in memory. At each iteration, two inner products are computed in order to obtain update scalars chosen so that the sequences satisfy specific orthogonality conditions. On a symmetric positive definite linear system, these conditions imply that the distance to the true solution is minimized in the energy norm induced by the system matrix (the A-norm).
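As an illustration of the iteration just described, here is a minimal sketch of the method applied to a symmetric positive definite system Ax = b. It is not taken from the original notes; the function name, the NumPy dependency, and the tolerance value are assumptions made for this example.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A.

    At each step, two inner products (r.r and p.Ap) yield the update
    scalars alpha and beta, and only the current iterate, residual,
    and search direction need to be kept in memory.
    """
    n = b.shape[0]
    if max_iter is None:
        max_iter = n                # exact arithmetic converges in <= n steps
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                   # residual of the current iterate
    p = r.copy()                    # first search direction: steepest descent
    rs_old = r @ r                  # first inner product: ||r||^2
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:   # stop once the residual is small enough
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # second inner product: p^T A p
        x += alpha * p              # update the iterate
        r -= alpha * Ap             # update the residual
        rs_new = r @ r
        beta = rs_new / rs_old      # enforces conjugacy of the directions
        p = r + beta * p            # new search direction
        rs_old = rs_new
    return x

# Small illustrative SPD system (values chosen for this example only):
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x, np.allclose(A @ x, b))
```

Note that the matrix A appears only in the product A @ p, so the same sketch applies unchanged when A is available only through a matrix-vector routine, which is what makes the method attractive for large sparse systems.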

