
two vectors can be expressed in the following way:

d(y_i, y_j) = \| y_i - y_j \|^2 .    (2.2)

The desired class label for the probe image can be obtained by the minimum membership rule:

L(x_t) = \arg\min_c r_c .    (2.3)
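To make the matching rule concrete, the following is a minimal sketch, not taken from the thesis. It assumes that the gallery and probe images have already been projected into the subspace, and that r_c denotes the smallest distance between the probe projection and any gallery projection of class c; the names `classify_min_distance`, `probe`, `gallery`, and `gallery_labels` are illustrative.

```python
import numpy as np

def classify_min_distance(probe, gallery, gallery_labels):
    """Assign the probe the label of the nearest class in the subspace.

    probe          : (d,)   projection of the test face
    gallery        : (n, d) projections of the enrolled training faces
    gallery_labels : (n,)   class label of each gallery projection
    """
    # Squared Euclidean distance d(y_i, y_j) = ||y_i - y_j||^2  (Eq. 2.2)
    dists = np.sum((gallery - probe) ** 2, axis=1)

    # Assumed residual r_c: smallest distance to any sample of class c
    classes = np.unique(gallery_labels)
    r = np.array([dists[gallery_labels == c].min() for c in classes])

    # Minimum membership rule L(x_t) = argmin_c r_c  (Eq. 2.3)
    return classes[np.argmin(r)]
```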

• LDA (Linear Discriminant Analysis): The objective of LDA is to find the subspace that best discriminates different face classes by maximizing the between-class scatter while minimizing the within-class scatter. The eigenvectors chosen by LDA provide the best separation among the class distributions, while PCA selects eigenvectors which provide the best representation of the overall sample distribution. To illustrate the difference, Fig. 2.3 shows the first projection vector chosen by PCA and LDA for a two-class problem. The eigenvectors for LDA can be obtained by computing the eigenvectors of S_w^{-1} S_b. Here, S_b and S_w are the between-class and within-class scatter matrices of the training samples and are defined as:

S_w = \sum_{i=1}^{C} \sum_{x_k \in C_i} (x_k - m_i)(x_k - m_i)^T ,    (2.4)

S_b = \sum_{i=1}^{C} n_i (m_i - m)(m_i - m)^T .    (2.5)

where m_i is the mean face of the i-th class, n_i is the number of training samples in the i-th class, and m is the mean of all training samples. The LDA subspace is spanned by a set of vectors W which maximizes the criterion J, defined as:

J = \mathrm{tr}(S_b) / \mathrm{tr}(S_w) .    (2.6)

W can be constructed from the eigenvectors of S_w^{-1} S_b (a short numerical sketch of this construction is given below). In most image processing applications, the number of training samples is usually smaller than the dimension of the sample space. This leads to the so-called small-sample-size (SSS) problem due to the singularity of the within-class scatter matrix. To overcome the SSS problem, the following approaches have been attempted: a two-stage PCA+LDA approach [120], the Fisherface method [12], and discriminant component analysis [143]. In all cases the higher-dimensional face data is projected
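As a concrete illustration of Eqs. (2.4)–(2.6) and of the two-stage PCA+LDA remedy for the SSS problem, here is a minimal sketch. It is not taken from the thesis; the function name `pca_lda`, the parameters `n_pca` and `n_lda`, and the array names `X` and `y` are assumptions made for the example.

```python
import numpy as np

def pca_lda(X, y, n_pca, n_lda):
    """Two-stage PCA+LDA sketch (illustrative, not the thesis implementation).

    X : (n_samples, n_features) training faces, one flattened image per row
    y : (n_samples,) integer class labels
    """
    # --- PCA stage: project onto the top n_pca principal components so that
    # the within-class scatter of the reduced data is no longer singular.
    m_all = X.mean(axis=0)
    Xc = X - m_all
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    W_pca = Vt[:n_pca].T                               # (n_features, n_pca)
    Z = Xc @ W_pca                                     # PCA-reduced samples

    # --- LDA stage: scatter matrices of Eqs. (2.4) and (2.5) in PCA space.
    d = Z.shape[1]
    m = Z.mean(axis=0)                                 # overall mean
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Zc = Z[y == c]
        mi = Zc.mean(axis=0)                           # class mean m_i
        Sw += (Zc - mi).T @ (Zc - mi)                  # within-class scatter
        Sb += len(Zc) * np.outer(mi - m, mi - m)       # between-class scatter

    # Eigenvectors of Sw^{-1} Sb; keep the n_lda directions with the
    # largest eigenvalues (at most C - 1 are meaningful).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)
    W_lda = evecs[:, order[:n_lda]].real               # (n_pca, n_lda)

    # Total projection: flattened face image -> LDA subspace
    return W_pca @ W_lda, m_all
```

A new face image x would then be mapped to the LDA subspace as (x - m_all) @ W before applying the minimum membership rule of Eq. (2.3). Choosing n_pca no larger than the number of training samples minus the number of classes keeps S_w nonsingular, which is the usual choice in the Fisherface method [12].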

