
the response vectors on a validation set to compare with DP(x) (the decision profile for a test sample x). These methods use prior knowledge about a classifier's behavior to improve the combined performance. They perform well when the classifier output preserves class-specific contextual information. If any classifier is highly sensitive to the features used and exhibits large intra-class variance in the classifier output space, these methods do not perform well and may even deteriorate below the performance of the individual base classifiers.
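As a concrete illustration of the decision profile used above, DP(x) can be assembled by stacking the L response vectors that the base classifiers produce for a test sample x. The sketch below assumes scikit-learn style classifiers exposing a predict_proba interface; this interface is an assumption for the example, not part of the thesis setup.

import numpy as np

def decision_profile(classifiers, x):
    """Stack the soft outputs (response vectors) of the L base
    classifiers for a single test sample x into an L x c decision
    profile DP(x). Row i is the response vector D_i(x): the support
    classifier i assigns to each of the c classes."""
    # predict_proba is assumed as the soft-output interface; any
    # per-class support measure would serve the same role.
    rows = [clf.predict_proba(x.reshape(1, -1))[0] for clf in classifiers]
    return np.vstack(rows)  # shape (L, c)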

In that case, the only way to overcome this problem is to conditionally improve each row Di(x) (the response vector of classifier Di) using the response vectors of the corresponding classifier on a validation set as training data at the fusion level, and then apply class-conscious techniques to enhance the performance of the classifier combination. The conditional row-wise operation here means using training data (generated on a validation set) as prior knowledge about the behavior of classifier Di to replace its response vector Di(x) with D̄i(x) in DP(x), thereby improving the combined performance. If a classifier's output does not provide consistent class-specific information, the training data does not help to improve the final decision. In such a case, it is wiser to keep Di(x) unchanged for that classifier. This observation is the essence of our approach to influencing the final classification performance.
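A minimal sketch of this conditional row-wise replacement is given below. The row_models and keep_original arguments are hypothetical names introduced for the example: row_models[i] stands for whatever validation-trained mapping produces D̄i(x), and keep_original[i] records the per-classifier decision, described above, to leave an inconsistent classifier's row untouched.

import numpy as np

def conditional_decision_profile(classifiers, row_models, keep_original, x):
    """Build DP(x) row by row, replacing D_i(x) with the improved row
    D-bar_i(x) only for classifiers whose validation-trained row model
    was judged helpful; otherwise the original row is kept.

    row_models[i]    -- hypothetical callable mapping a response vector
                        to D-bar_i(x) (e.g. a model learned from the
                        classifier's response vectors on a validation set)
    keep_original[i] -- True if classifier i's output was found too
                        inconsistent for the row model to help"""
    rows = []
    for clf, model, keep in zip(classifiers, row_models, keep_original):
        d_i = clf.predict_proba(x.reshape(1, -1))[0]   # D_i(x)
        rows.append(d_i if keep else model(d_i))       # D-bar_i(x) when it helps
    return np.vstack(rows)  # conditionally improved DP(x)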

In our approach, we use the training data (prior knowledge about a classifier) by taking the response vectors on a validation set as training data at the fusion level for building an LDA- or nonparametric-LDA-based eigenmodel. For a particular classifier, the ability of the eigenmodel to enhance class separability in the response-vector eigenspace dictates the usefulness of the training data at the fusion level. The improvement in class separability achieved by the eigenmodel strengthens the performance of its corresponding base classifier. This is evident from the fact that LDA performs well when the training data uniformly samples the underlying class distribution, or in other words, when the training data represents the class-specific information [86]. Empirically, we can validate this by observing that applying the eigenmodel to the classifier's output space gives better performance than the direct classifier output on a validation set of response vectors.
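The fragment below sketches this empirical check under stated assumptions: scikit-learn's standard LinearDiscriminantAnalysis is used as a stand-in (its nonparametric LDA variant is not available there), and a simple nearest-neighbour rule is assumed for classifying in the projected eigenspace; neither choice should be read as the thesis's exact procedure.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def fit_response_eigenmodel(R_val, y_val):
    """Fit an LDA eigenmodel on the response vectors R_val (one row per
    validation sample) produced by a single base classifier D_i, and a
    nearest-neighbour classifier in the resulting eigenspace (an
    assumption made for this sketch)."""
    lda = LinearDiscriminantAnalysis()
    Z_val = lda.fit(R_val, y_val).transform(R_val)
    knn = KNeighborsClassifier(n_neighbors=1).fit(Z_val, y_val)
    return lda, knn

def eigenmodel_helps(lda, knn, R_check, y_check):
    """Usefulness test: does classifying the projected response vectors
    beat taking the classifier's output directly (argmax of the response
    vector) on held-out response vectors?
    Assumes y labels are encoded as column indices 0..c-1."""
    direct_acc = np.mean(np.argmax(R_check, axis=1) == y_check)
    eigen_acc = knn.score(lda.transform(R_check), y_check)
    return eigen_acc > direct_acc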

To accomplish this task, we divide a database into four disjoint sets, namely train,
