Master Thesis - Department of Computer Science
[Figure 4.4 diagram: the input $x$ is fed to the LDA-based eigenmodel, giving $D_{\mathrm{Null}}(x)$ and $D_{\mathrm{Range}}(x)$, and to the nonparametric LDA-based eigenmodel, giving $H_{\mathrm{Null}}(x)$ and $H_{\mathrm{Range}}(x)$; the sum rule combines $D_P(x)$ and $\bar{D}_P(x)$ into $\tilde{D}(x)$, and the maximum membership rule yields the crisp class label.]
Figure 4.4: The fusion architecture of the proposed method of decision fusion.
defined in Eqn. 4.29) is replaced by $\bar{D}_P(x)$, as:
$$
\bar{D}_P(x) =
\begin{bmatrix}
H_{\mathrm{Null}}(x) \\
H_{\mathrm{Range}}(x)
\end{bmatrix}
\tag{4.40}
$$
where $H$ can be $D$ or $\bar{D}$, depending upon the success of the eigensubspaces constructed from the LDA-based or nonparametric LDA-based eigenmodel. Figure 4.4 gives the diagram of the proposed decision fusion technique.
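The fusion step above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the decision vectors below are hypothetical stand-ins for the per-class memberships produced by the two eigenmodels, not the actual quantities computed in this work.

```python
import numpy as np

def sum_rule_fusion(d_p, d_p_bar):
    """Combine two per-class decision vectors by the sum rule,
    then apply the maximum membership rule to get a crisp label."""
    d_tilde = d_p + d_p_bar          # sum rule: element-wise sum per class
    return int(np.argmax(d_tilde))   # maximum membership -> crisp class label

# hypothetical per-class memberships for C = 3 classes
d_p     = np.array([0.2, 0.5, 0.3])  # from the LDA-based eigenmodel
d_p_bar = np.array([0.1, 0.6, 0.3])  # from the nonparametric LDA-based eigenmodel

label = sum_rule_fusion(d_p, d_p_bar)
print(label)  # class with the largest summed membership
```

The sum rule is attractive here because it requires no training of its own and is robust to estimation errors in the individual memberships.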
A general discussion of LDA and nonparametric LDA is given in the following:
4.4.2.1 Linear Discriminant Analysis
The objective in LDA [39] [35] (also known as Fisher's Linear Discriminant) is to find the optimal projection such that the between-class scatter is maximized and the within-class scatter is minimized. It is thus required to maximize Fisher's criterion $J$, as shown below:
$$
J = \mathrm{tr}(S_w^{-1} S_b)
\tag{4.41}
$$
where $\mathrm{tr}(\cdot)$ is the trace of the matrix, and $S_b$ and $S_w$ are the between-class and within-class scatter matrices, respectively. The within-class scatter matrix measures the scatter of the samples around their respective class expected vectors, and is expressed by:
$$
S_w = \sum_{i=1}^{C} P_i \, E\{(x - M_i)(x - M_i)^T \mid i\} = \sum_{i=1}^{C} P_i \Sigma_i.
\tag{4.42}
$$
$M_i$ and $\Sigma_i$ are the mean vector and covariance matrix of the $i$-th class, respectively, and $C$ is the total number of classes. $P_i$ is the prior probability of the $i$-th class, equal to the number of training samples of the $i$-th class divided by the total number of training samples $N$.