Jolliffe, I.T. Principal Component Analysis, 2nd edition. Springer, 2002.

14. Generalizations and Adaptations of Principal Component Analysis

…decompose $\mathrm{var}(\sum_{j=1}^p x_j)$ into parts due to each component. In this respect there are similarities with a method proposed by Vermeiren et al. (2001), which they call extended principal component analysis. This also decomposes $\mathrm{var}(\sum_{j=1}^p x_j)$, but does so with rescaled PCs. Denote the $k$th such component by $z_k^E = \mathbf{a}_k^{E\prime}\mathbf{x}$, where $\mathbf{a}_k^E = c_k \mathbf{a}_k$, $\mathbf{a}_k$ is the usual vector of coefficients for the $k$th PC with $\mathbf{a}_k'\mathbf{a}_k = 1$, and $c_k$ is a rescaling constant. Vermeiren et al. (2001) stipulate that $\sum_{k=1}^p z_k^E = \sum_{j=1}^p x_j$ and then show that this condition is satisfied by $\mathbf{c} = \mathbf{A}'\mathbf{1}_p$, where the $k$th column of $\mathbf{A}$ is $\mathbf{a}_k$ and the $k$th element of $\mathbf{c}$ is $c_k$. Thus $c_k$ is the sum of the coefficients in $\mathbf{a}_k$ and will be large when all coefficients in $\mathbf{a}_k$ are of the same sign, or when a PC is dominated by a single variable. The importance of such PCs is enhanced by the rescaling. Conversely, $c_k$ is small for PCs that are contrasts between groups of variables, and rescaling makes these components less important. The rescaled or 'extended' components are, like the unscaled PCs $z_k$, uncorrelated, so that

$$\mathrm{var}\Big[\sum_{j=1}^p x_j\Big] = \mathrm{var}\Big[\sum_{k=1}^p z_k^E\Big] = \sum_{k=1}^p \mathrm{var}(z_k^E) = \sum_{k=1}^p c_k^2\,\mathrm{var}(z_k) = \sum_{k=1}^p c_k^2 l_k.$$

Hence $\mathrm{var}[\sum_{j=1}^p x_j]$ may be decomposed into contributions $c_k^2 l_k$, $k = 1, 2, \ldots, p$, from each rescaled component. Vermeiren et al. (2001) suggest that such a decomposition is relevant when the variables are constituents of a financial portfolio.

14.6.4 Subjective Principal Components

Korhonen (1984) proposes a technique in which a user has input into the form of the 'components.' The slightly tenuous link with PCA is that it is assumed that the user wishes to maximize correlation between the chosen component and one or more of the original variables.
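The extended-PCA variance decomposition in the preceding subsection can be checked numerically. The sketch below (with hypothetical data `X` and variable names of my own choosing, not from the original) computes $\mathbf{c} = \mathbf{A}'\mathbf{1}_p$ from the eigenvectors of a sample covariance matrix and confirms that the contributions $c_k^2 l_k$ sum to $\mathrm{var}(\sum_j x_j)$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data standing in for p = 4 portfolio constituents (illustrative only).
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))

S = np.cov(X, rowvar=False)          # sample covariance matrix
l, A = np.linalg.eigh(S)             # eigenvalues l_k, eigenvector columns a_k
l, A = l[::-1], A[:, ::-1]           # reorder by decreasing variance

c = A.T @ np.ones(4)                 # c = A' 1_p: c_k is the coefficient sum of a_k
contrib = c**2 * l                   # contribution c_k^2 l_k of each rescaled PC

total = np.ones(4) @ S @ np.ones(4)  # var(sum_j x_j) computed directly as 1' S 1
print(contrib.sum(), total)          # the two quantities agree
```

The agreement is exact (up to floating-point error) because $\sum_k c_k^2 l_k = \mathbf{1}_p'\mathbf{A}\mathbf{L}\mathbf{A}'\mathbf{1}_p = \mathbf{1}_p'\mathbf{S}\mathbf{1}_p$.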
The remarks following the spectral decomposition (Property A3) in Section 2.1, Property A6 in Section 2.3, and the discussion of different normalization constraints at the end of that section, together imply that the first few PCs tend to have large correlations with the variables, especially in a correlation matrix-based PCA. Korhonen's (1984) procedure starts by presenting the user with the correlations between the elements of $\mathbf{x}$ and the 'component' $\mathbf{a}_0'\mathbf{x}$, where $\mathbf{a}_0$ is the isometric vector $\frac{1}{\sqrt{p}}(1, 1, \ldots, 1)'$ (see Section 13.2). The user is then invited to choose a variable for which the correlation is desired to be larger. The implications for other correlations of modifying $\mathbf{a}_0$ so as to increase the selected correlation are displayed graphically. On the basis of this information, the user then chooses by how much to increase the correlation and hence change $\mathbf{a}_0$, giving the first subjective principal component.

If second, third, ..., subjective components are desired, emphasizing correlations with different variables, a similar procedure is repeated in the
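The first step of Korhonen's procedure can be sketched as follows. This is a minimal illustration with made-up data; `component_correlations` is a hypothetical helper (not from the source), and the graphical display and the user's interactive choice of step size are omitted, the weight bump standing in for one such user adjustment:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))  # toy data, p = 3
p = X.shape[1]

def component_correlations(a, X):
    """Correlation between each variable x_j and the component a'x."""
    z = X @ a
    return np.array([np.corrcoef(X[:, j], z)[0, 1] for j in range(X.shape[1])])

a0 = np.ones(p) / np.sqrt(p)          # isometric starting vector (1/sqrt(p))(1,...,1)'
base = component_correlations(a0, X)  # correlations shown to the user initially

# Suppose the user selects variable 0 and asks for a larger correlation with it;
# one crude way to move in that direction is to bump its weight and renormalize.
a1 = a0 + 0.4 * np.eye(p)[0]
a1 /= np.linalg.norm(a1)
updated = component_correlations(a1, X)

print(base)
print(updated)
```

Iterating this adjust-and-inspect loop until the user is satisfied yields the first subjective component; the printed vectors show how all $p$ correlations shift together when one weight is changed.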
