
We explored both of the spaces to capture and then combine discriminative directions for enhancing discriminability across classes. Our approach efficiently exploits both spaces by combining them at two different levels: (i) the feature level and (ii) the decision level.

For feature-level fusion, we develop a dual space by combining the discriminative features from both the range space and the null space of the within-class scatter matrix Sw. This allows us to utilize the whole set of discriminative directions present in the entire face space.

Since every face has a unique decomposition into the null space and the range space, we project all class means into both spaces to obtain two sets of projected means. Each of these sets is then used separately to search for the directions that discriminate the means in that space; this step is equivalent to applying PCA to each set of projected means separately.
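As a concrete illustration, the NumPy sketch below splits the face space via an eigendecomposition of Sw and then applies PCA to the class means projected into one of the two subspaces. The function names (split_sw_spaces, pca_on_projected_means) and the tolerance are illustrative choices, not part of the thesis.

```python
import numpy as np

def split_sw_spaces(X, labels, tol=1e-10):
    """Split the face space into the range space and null space of the
    within-class scatter matrix Sw, via its eigendecomposition."""
    classes = np.unique(labels)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    for c in classes:
        Xc = X[labels == c]
        diff = Xc - Xc.mean(axis=0)
        Sw += diff.T @ diff
    # Sw is symmetric PSD, so eigh yields real eigenvalues (ascending).
    eigvals, eigvecs = np.linalg.eigh(Sw)
    null_basis = eigvecs[:, eigvals <= tol]    # directions where Sw vanishes
    range_basis = eigvecs[:, eigvals > tol]    # directions where Sw acts
    return null_basis, range_basis

def pca_on_projected_means(X, labels, basis):
    """Project the class means onto one subspace, then apply PCA to the
    projected means to find the directions that discriminate them there."""
    classes = np.unique(labels)
    means = np.array([X[labels == c].mean(axis=0) for c in classes])
    proj = means @ basis                       # class means inside the subspace
    centered = proj - proj.mean(axis=0)
    cov = centered.T @ centered / len(classes)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]             # strongest directions first
    # Map the eigenvectors back to the original face space so that the two
    # eigenmodels can later be combined there.
    return basis @ vecs[:, order], vals[order]
```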

These two eigenmodels are then combined using: 1) the Covariance Sum method and 2) Gram-Schmidt orthonormalization [42]. These methods construct a new set of directions that integrates the information from both spaces.

We then reorder these directions and select the best combination among them to obtain the best discriminability across classes. The feature reordering and selection is performed using two techniques, 1) forward selection and 2) backward selection [39], applied on a validation set using a class-separability criterion.
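A greedy forward-selection pass over the combined directions might look like the sketch below, scored with a simple trace-ratio separability criterion on the validation set. The exact criterion in the thesis may differ, and backward selection is the mirror image: start with all directions and greedily remove the least useful one.

```python
import numpy as np

def separability(X, labels):
    """Trace(Sb) / Trace(Sw): larger means classes are better separated."""
    classes = np.unique(labels)
    mean = X.mean(axis=0)
    sb, sw = 0.0, 0.0
    for c in classes:
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        sb += len(Xc) * np.sum((mc - mean) ** 2)
        sw += np.sum((Xc - mc) ** 2)
    return sb / max(sw, 1e-12)

def forward_select(X_val, y_val, directions, max_dims):
    """Greedily add the direction that most improves separability on the
    validation set; stop when no remaining direction helps."""
    chosen, remaining = [], list(range(directions.shape[1]))
    best_score = -np.inf
    while remaining and len(chosen) < max_dims:
        scores = [(separability(X_val @ directions[:, chosen + [j]], y_val), j)
                  for j in remaining]
        score, j = max(scores)
        if score <= best_score:
            break
        best_score = score
        chosen.append(j)
        remaining.remove(j)
    return directions[:, chosen]
```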

For decision-level fusion, we extract two disjoint sets of optimal discriminatory bases, separately from the null space and the range space, to obtain two different classifiers. We then combine the classifiers obtained on the null space and the range space using the sum rule and the product rule, two classical decision-fusion techniques developed by Kittler [60], [59].
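Given per-class posterior estimates from the two classifiers (rows are samples, columns are classes), the two rules reduce to a few lines:

```python
import numpy as np

def sum_rule(p_null, p_range):
    """Average the two classifiers' class posteriors, then pick the maximum."""
    return np.argmax(p_null + p_range, axis=1)

def product_rule(p_null, p_range):
    """Multiply the posteriors; assumes (near-)independent classifiers."""
    return np.argmax(p_null * p_range, axis=1)
```

The product rule is more sensitive to a single confident but wrong classifier, which is why the sum rule is often the more robust default in practice.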

We also exploit each classifier separately, using LDA and nonparametric LDA to enhance class separability at the classifier-response level, and then combine the classifiers using the sum rule. We denote the class scores provided by a classifier on a sample as a response vector. In essence, we use response vectors as feature vectors at the decision level and employ LDA and nonparametric LDA to enhance class separability in the classifier output space. The response vectors on a validation set (disjoint from the training and testing sets of a database) are used as training data at the decision level. The response vectors on the testing set of the database are then recalculated in the eigenmodel to improve the combined classification accuracy.
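An end-to-end sketch of this decision-level pipeline could look as follows, assuming a plain Fisher LDA in place of the LDA/nonparametric-LDA step and a nearest-mean score in the learned eigenmodel; both of these choices, and all names below, are my assumptions rather than the thesis's exact procedure.

```python
import numpy as np

def fit_lda(R, labels, eps=1e-6):
    """Fisher LDA on response vectors R (n_samples x n_classes)."""
    classes = np.unique(labels)
    mean = R.mean(axis=0)
    d = R.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Rc = R[labels == c]
        mc = Rc.mean(axis=0)
        Sb += len(Rc) * np.outer(mc - mean, mc - mean)
        diff = Rc - mc
        Sw += diff.T @ diff
    # Regularize Sw so it is invertible in the small response space.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + eps * np.eye(d), Sb))
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order].real

def decision_level_fuse(R_val_a, R_val_b, y_val, R_test_a, R_test_b):
    """Learn one LDA eigenmodel per classifier on the validation responses,
    recalculate the test responses in those models, and fuse by sum rule.
    Labels are assumed to be encoded as 0..C-1."""
    Wa, Wb = fit_lda(R_val_a, y_val), fit_lda(R_val_b, y_val)
    Za, Zb = R_test_a @ Wa, R_test_b @ Wb      # test responses in the eigenmodels

    def scores(Z, R_val, W):
        # Score each class by (negative) distance to its mean validation response.
        means = np.array([(R_val[y_val == c] @ W).mean(axis=0)
                          for c in np.unique(y_val)])
        return -np.linalg.norm(Z[:, None, :] - means[None, :, :], axis=2)

    return np.argmax(scores(Za, R_val_a, Wa) + scores(Zb, R_val_b, Wb), axis=1)
```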

