
face for each subject, and compare their performances. The performance of the subband face representation with several linear subspace techniques, PCA, LDA, 2D-PCA, 2D-LDA and Discriminative Common Vectors (DCV), on the Yale, ORL and PIE face databases shows that the subband-face-based representation performs significantly better than the multiresolution face recognition approach proposed by Ekenel [36] for frontal face recognition in the presence of varying illumination, expression and pose. The Peak Recognition Accuracies (PRA) for the three databases are: 100% for Yale (using LDA), 95.32% for PIE (using DCV) and 91.67% for ORL (using 2D-LDA).
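Purely as a rough illustration of how such an evaluation can be set up, the sketch below builds a wavelet-subband feature from a face image and scores it with PCA followed by a nearest-neighbour classifier. The wavelet family ('haar'), the decomposition level, the number of PCA components and the 1-NN classifier are all assumptions for the sketch, not the configuration used in the thesis.

# Illustrative sketch only: subband feature via 2-D wavelet decomposition,
# classified with PCA + 1-NN.  Parameters below are assumptions.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def subband_face(img, wavelet="haar", level=2):
    """Decompose a face image and keep the coarsest approximation subband."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[0].ravel()          # approximation subband as a feature vector

def evaluate(X_train, y_train, X_test, y_test, n_components=40):
    # X_train / X_test: lists of grayscale face images, y_*: subject labels
    F_train = np.array([subband_face(x) for x in X_train])
    F_test = np.array([subband_face(x) for x in X_test])
    clf = make_pipeline(PCA(n_components=n_components),
                        KNeighborsClassifier(n_neighbors=1))
    clf.fit(F_train, y_train)
    return clf.score(F_test, y_test)  # recognition accuracy on the test set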

• In chapter 4, we try to enhance classification performance by combining the discriminative features from both the range space and the null space of the within-class scatter matrix Sw at (i) feature level and (ii) decision level.

(i) Feature Fusion Strategy: Feature-level information fusion allows us to utilize the whole set of discriminative directions present in the entire face space. As every face has a unique decomposition into null space and range space, we project all class means onto both spaces to obtain two sets of projected means. Each of these sets is then used separately to search for the directions that discriminate the classes in that space; this step is equivalent to applying PCA separately to each set of projected means. The two resulting eigenmodels are then combined using 1) Covariance Sum and 2) Gram-Schmidt Orthonormalization [42]. These methods construct a new set of directions integrating the information from both spaces. We then reorder and select the best combination among those directions using two techniques, 1) Forward Selection and 2) Backward Selection [39], on a validation set, based on a class separability criterion, to obtain the best discriminability across classes.
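As an illustration of this feature fusion idea (a minimal sketch, not the thesis implementation), the following numpy code splits the face space into the range and null spaces of Sw, projects the class means onto each, extracts an eigenmodel from each set of projected means, and merges the two by orthonormalization; QR decomposition is used here as a stand-in for Gram-Schmidt, the eigenvalue tolerance is an assumption, and the covariance-sum variant and the forward/backward selection step are omitted.

# Sketch of the feature fusion strategy described above (illustrative only).
import numpy as np

def feature_fusion_directions(X, y, tol=1e-10):
    classes = np.unique(y)
    mu = X.mean(axis=0)
    means = np.array([X[y == c].mean(axis=0) for c in classes])

    # Within-class scatter matrix Sw
    Sw = sum((X[y == c] - means[i]).T @ (X[y == c] - means[i])
             for i, c in enumerate(classes))

    # Split the face space into range space and null space of Sw
    evals, evecs = np.linalg.eigh(Sw)
    R = evecs[:, evals > tol]      # basis of the range space
    N = evecs[:, evals <= tol]     # basis of the null space

    # Project class means onto each space and extract an eigenmodel (PCA) there
    def pca_directions(basis):
        P = (means - mu) @ basis                   # projected class means
        _, _, Vt = np.linalg.svd(P, full_matrices=False)
        return basis @ Vt.T                        # directions in the input space

    D_range, D_null = pca_directions(R), pca_directions(N)

    # Combine the two eigenmodels; QR gives an orthonormal merged basis
    D, _ = np.linalg.qr(np.hstack([D_range, D_null]))
    return D   # candidate directions, before forward/backward selection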

(ii) Decision Fusion Strategy: For decision fusion, we extract two disjoint sets of optimal discriminatory bases separately from the null space and the range space of the within-class scatter matrix Sw, to obtain two different classifiers. We then combine the classifiers obtained on the null space and the range space using the sum rule and the product rule, the two classical decision fusion techniques developed by Kittler et al. [60], [59]. We also exploit each classifier space separately using
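A minimal sketch of the sum-rule and product-rule combination follows, assuming each of the two classifiers (built on the null space and the range space of Sw) outputs a matrix of class posteriors of shape (n_samples, n_classes); the variable names and the posterior representation are assumptions made for the sketch.

# Sum rule and product rule for combining two classifiers (illustrative only).
import numpy as np

def sum_rule(p_null, p_range):
    # p_null, p_range: posterior matrices of shape (n_samples, n_classes)
    return np.argmax(p_null + p_range, axis=1)    # class with highest summed posterior

def product_rule(p_null, p_range):
    return np.argmax(p_null * p_range, axis=1)    # class with highest product of posteriors

In such a sketch the posteriors could, for instance, be obtained by applying a softmax over negative distances to the projected class means in each space; that choice is not prescribed by the text above.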

