
Jolliffe I. Principal Component Analysis (2ed., Springer, 2002)(518s)


9.1. Discriminant Analysis

[Figure 9.2. Two data sets whose direction of separation is orthogonal to that of the first (within-group) PC.]

which there are large between-group differences. Such PCs therefore seem more useful than those based on within-group covariance matrices, but the technique should be used with some caution, as it will work well only if between-group variation dominates within-group variation.

It is well known that, for two completely specified normal populations differing only in mean, the probability of misclassification using the linear discriminant function is a monotonically decreasing function of the squared Mahalanobis distance ∆² between the two populations (Rencher, 1998, Section 6.4). Here ∆² is defined as

    ∆² = (µ₁ − µ₂)′ Σ⁻¹ (µ₁ − µ₂).    (9.1.1)

Note that we meet a number of other varieties of Mahalanobis distance elsewhere in the book. In equation (5.3.5) of Section 5.3, the Mahalanobis distance between two observations in a sample is defined, and there is an obvious similarity between (5.3.5) and the definition given in (9.1.1) for the Mahalanobis distance between two populations. Further modifications define Mahalanobis
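As a concrete illustration of (9.1.1), the following minimal NumPy sketch computes ∆² for two populations with a common covariance matrix. The means and covariance used below are invented for illustration; the equal-prior misclassification probability Φ(−∆/2) shown in the comment is the standard optimal error rate for two fully specified normal populations, consistent with the monotone-decreasing relationship described above.

```python
import math
import numpy as np

def mahalanobis_sq(mu1, mu2, sigma):
    """Squared Mahalanobis distance (9.1.1): (mu1 - mu2)' Sigma^{-1} (mu1 - mu2)."""
    diff = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    # Solve Sigma x = diff instead of forming the explicit inverse (more stable).
    return float(diff @ np.linalg.solve(np.asarray(sigma, dtype=float), diff))

# Hypothetical populations: common covariance, differing only in mean.
mu1 = np.array([0.0, 0.0])
mu2 = np.array([3.0, 0.0])
sigma = np.eye(2)

d2 = mahalanobis_sq(mu1, mu2, sigma)  # 9.0 for these illustrative values
# With equal priors, the optimal misclassification probability is
# Phi(-Delta/2) = 0.5 * erfc(Delta / (2 * sqrt(2))), decreasing in Delta^2.
p_err = 0.5 * math.erfc(math.sqrt(d2) / (2.0 * math.sqrt(2.0)))
print(d2, p_err)
```

Moving the second mean further away increases ∆² and, as the text states, lowers the misclassification probability.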
