Master Thesis - Department of Computer Science
4.3.3 Algorithm for Feature Fusion
In this section, we describe the overall algorithm of our proposed method for fusing features from both the range space and the null space of the within-class scatter matrix. Null space based methods try to extract the discriminative directions present in the null space and ignore the range space, on the view that it holds only the intra-class variations. The performance of null space based methods is highly sensitive to the size of the null space, which in turn depends on the original image size and the number of training samples. For large databases, the huge number of training samples has a negative effect on the performance of null space based methods. So, in cases where 1) the image size is small and 2) the total number of training samples is large, most of the discriminative information moves into the range space of the within-class scatter. We observe a performance trade-off between the range space and the null space: with the image size fixed, the total number of training samples controls the performance of both spaces. When the number of training samples is small, the null space performs far better than the range space, whereas increasing the number of training samples noticeably enhances the performance of the range space. One way of capturing and utilizing the discriminative information present in the entire face space is therefore to merge the discriminative information present in both spaces. We attempt this merging by means of feature fusion. The steps of our algorithm are given below:
1. Compute Sw from the training set X = [x^1_1, x^1_2, ..., x^1_N, x^2_1, ..., x^C_N], using Eqn. 4.1.
2. Perform eigen-analysis on Sw and select the first r eigenvectors to form the basis vector set Q for the range space V.
3. Project all class means onto the null space and the range space, using the basis vectors of the range space only (see Eqns. 4.12-4.13). The sets of class means projected onto the null space and the range space are denoted by XNull and XRange respectively.
4. Compute the scatter matrices SNull and SRange of XNull and XRange.
5. Perform eigen-analysis of SNull and SRange to obtain the discriminative directions W_opt^Null and W_opt^Range in the null space and the range space separately.
6. Perform feature fusion by applying either<br />
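Steps 1-5 above can be sketched in NumPy as follows. This is a minimal illustration under my own naming conventions, not the thesis's implementation; in particular, since the fusion operator of step 6 is cut off in this excerpt, plain concatenation of the two projected feature vectors is used purely as a placeholder assumption.

```python
import numpy as np

def nullspace_range_fusion(X, y, r):
    """Sketch of steps 1-5: split within-class scatter into range/null
    subspaces and learn discriminative directions in each.

    X : (n_samples, d) data matrix; y : class labels; r : number of
    leading eigenvectors kept as the range-space basis.
    """
    classes = np.unique(y)
    d = X.shape[1]

    # Step 1: within-class scatter Sw (Eqn. 4.1 in the text).
    Sw = np.zeros((d, d))
    means = []
    for c in classes:
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        means.append(mu)
        Sw += (Xc - mu).T @ (Xc - mu)

    # Step 2: eigen-analysis of Sw; the r leading eigenvectors span the
    # range space, the remaining ones span the null space.
    eigval, eigvec = np.linalg.eigh(Sw)
    order = np.argsort(eigval)[::-1]      # descending eigenvalues
    Q_range = eigvec[:, order[:r]]        # range-space basis
    Q_null = eigvec[:, order[r:]]         # null-space basis

    # Step 3: project the class means onto both subspaces
    # (Eqns. 4.12-4.13 in the text).
    M = np.stack(means)                   # (C, d) matrix of class means
    X_range = M @ Q_range
    X_null = M @ Q_null

    # Step 4: scatter of the projected class means in each subspace.
    def scatter(Z):
        Zc = Z - Z.mean(axis=0)
        return Zc.T @ Zc

    S_range = scatter(X_range)
    S_null = scatter(X_null)

    # Step 5: discriminative directions in each subspace.
    _, W_range = np.linalg.eigh(S_range)
    _, W_null = np.linalg.eigh(S_null)

    # Step 6 (placeholder assumption): the fusion operator is not shown
    # in this excerpt; concatenation is used here only for illustration.
    def extract(x):
        f_range = (x @ Q_range) @ W_range
        f_null = (x @ Q_null) @ W_null
        return np.concatenate([f_range, f_null])

    return extract
```

The returned `extract` function maps a face vector to its fused feature vector; the split point `r` would in practice be chosen as the rank of Sw.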