
Master Thesis - Department of Computer Science


We try to enhance the performance of decision fusion by exploiting each individual classifier's output space wherever contextual information is available. Contextual information refers to the fact that, when a sample of a particular subject is presented, the similarity it shows to the different classes is subject specific. This means that if the outputs of a classifier for the samples of a specific class are well clustered and distinct from the other clusters, this information can be used to enhance class separability in the classifier's output space and so improve the individual classifier's performance, which in turn enhances the performance of fusion. Face and fingerprint are the two modalities to be combined in our case. Prior knowledge about each classifier's output space (training data at the decision level) tells us that subject-wise contextual information is present for the face classifier but not for the fingerprint classifier. The fingerprint classifier is unable to show subject-wise contextual information because of its sensitivity to the number, occurrence, and distribution of matched minutiae between two fingerprints.

Hence, in our approach we try to improve the face space as much as possible and then combine it with the fingerprint. To enhance the performance of the face classifier, we use its outputs (known as response vectors) on a validation set as training data at the fusion level for building an LDA- or nonparametric-LDA-based eigenmodel, thereby enhancing class separability.
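The step above can be sketched in code. The following is a minimal illustration, not the thesis implementation: it fits a classical (parametric) Fisher LDA eigenmodel to synthetic "response vectors", where the clustered synthetic data merely stands in for the subject-wise contextual information described in the text, and then projects the vectors into the discriminant space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical validation-set response vectors for C = 3 classes,
# 40 samples each. Each class clusters around a distinct mean,
# mimicking the subject-wise structure described in the text.
C, n_per = 3, 40
means = np.eye(C)                       # each class peaks on its own score
X = np.vstack([rng.normal(m, 0.15, size=(n_per, C)) for m in means])
y = np.repeat(np.arange(C), n_per)

# Within-class (Sw) and between-class (Sb) scatter matrices.
mu = X.mean(axis=0)
Sw = np.zeros((C, C))
Sb = np.zeros((C, C))
for c in range(C):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    d = (mc - mu).reshape(-1, 1)
    Sb += len(Xc) * (d @ d.T)

# Eigenmodel: directions maximizing between- vs. within-class scatter.
evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
order = np.argsort(evals.real)[::-1]
W = evecs[:, order[:C - 1]].real        # at most C-1 discriminant axes

Z = X @ W                               # response vectors in LDA space
```

The projection `Z` is the enhanced output space on which fusion with the fingerprint scores would subsequently operate; the nonparametric LDA variant mentioned above differs only in how the between-class scatter is estimated.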

5.2 Theoretical Basis

Let x ∈ R^n be a feature vector and {1, 2, ..., C} the set of C classes. A classifier is defined as a mapping:

D : R^n → [0, 1]^C

The output of a classifier D is called the response vector and denoted by R_D(x), a C-dimensional vector, where:

R_D(x) = (r_D^1(x), r_D^2(x), ..., r_D^C(x)),   r_D^i(x) ∈ [0, 1]   (5.1)

The components r_D^i(x) can be regarded as estimates of the posterior probabilities of (dis)similarity, or (dis)belief, provided by classifier D for each class, given a sample x.
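As a concrete instance of the mapping D, the sketch below (an illustrative choice, not prescribed by the text) turns raw per-class matching scores into a response vector in [0, 1]^C via a softmax, so that the components behave like the posterior estimates described above.

```python
import numpy as np

def response_vector(scores):
    """Map raw per-class scores to a response vector R_D(x) in [0,1]^C.

    Softmax normalization (one possible choice) makes the components
    non-negative, bounded by 1, and sum to 1, so each r_D^i(x) can be
    read as an estimate of belief in class i for the sample x.
    """
    e = np.exp(scores - np.max(scores))   # shift for numerical stability
    return e / e.sum()

# Example: three classes, the first matching best.
r = response_vector(np.array([2.0, 1.0, 0.1]))
```

Here `r.argmax()` identifies the most similar class, while the full vector retains the per-class similarity profile that the fusion stage exploits.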

