Jolliffe, I. T. Principal Component Analysis (2nd ed., Springer, 2002)

ridge regression, although this latter conclusion is disputed by S. Wold in the published discussion that follows the article.

Naes and Isaksson (1992) use a locally weighted version of PC regression in the calibration of spectroscopic data. PCA is done on the predictor variables, and to form a predictor for a particular observation only the k observations closest to the chosen observation in the space of the first m PCs are used. These k observations are given weights, which decrease as distance from the chosen observation increases, in a regression of the dependent variable on the first m PCs. The values of m and k are chosen by cross-validation, and the technique is shown to outperform both PC regression and PLS.

Bertrand et al. (2001) revisit latent root regression, and replace the PCA of the matrix of (p + 1) variables formed by y together with X by the equivalent PCA of y together with the PC scores Z. This makes it easier to identify predictive and non-predictive multicollinearities, and gives a simple expression for the MSE of the latent root estimator. Bertrand et al. (2001) present their version of latent root regression as an alternative to PLS or PC regression for near infrared spectroscopic data.

Marx and Smith (1990) extend PC regression from linear models to generalized linear models. Straying further from ordinary PCA, Li et al. (2000) discuss principal Hessian directions, which utilize a variety of generalized PCA (see Section 14.2.2) in a regression context. These directions are used to define splits in a regression tree, where the objective is to find directions along which the regression surface 'bends' as much as possible. A weighted covariance matrix S_W is calculated for the predictor variables, where the weights are residuals from a multiple regression of y on all the predictor variables. Given the (unweighted) covariance matrix S, their derivation of the first principal Hessian direction is equivalent to finding the first eigenvector in a generalized PCA of S_W with metric Q = S^{-1} and D = (1/n)I_n, in the notation of Section 14.2.2.

8.5 Variable Selection in Regression Using Principal Components

Principal component regression, latent root regression, and other biased regression estimates keep all the predictor variables in the model, but change the estimates from least squares estimates in a way that reduces the effects of multicollinearity. As mentioned in the introductory section of this chapter, an alternative way of dealing with multicollinearity problems is to use only a subset of the predictor variables. Among the very many possible methods of selecting a subset of variables, a few use PCs.

As noted in the previous section, the procedures due to Hawkins (1973) and Hawkins and Eplett (1982) can be used in this way. Rotation of the PCs
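To make the locally weighted PC regression of Naes and Isaksson (1992), described above, more concrete, the following is a minimal sketch under stated assumptions: the tricube weight function, the function name local_pc_regression, and the default values of m and k are illustrative choices rather than part of the original method, and in practice m and k would be chosen by cross-validation as the text indicates.

```python
# Sketch of locally weighted principal component regression:
# PCA on the predictors, k nearest observations in the space of the
# first m PCs, and a weighted regression of y on those m PC scores.
import numpy as np

def local_pc_regression(X, y, x0, m=3, k=20):
    """Predict y at x0 from the k observations nearest to x0 in PC space."""
    # Centre the predictors and obtain the first m PC loadings via the SVD.
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    A = Vt[:m].T                     # p x m loadings
    Z = Xc @ A                       # n x m PC scores
    z0 = (x0 - mu) @ A               # scores of the new observation

    # Distances in PC-score space; keep the k closest observations.
    d = np.linalg.norm(Z - z0, axis=1)
    idx = np.argsort(d)[:k]
    dk = d[idx]

    # Weights that decrease with distance (tricube kernel, an assumption).
    w = (1 - (dk / (dk.max() + 1e-12)) ** 3) ** 3

    # Weighted least-squares regression of y on the first m PC scores.
    Zk = np.column_stack([np.ones(len(idx)), Z[idx]])
    W = np.diag(w)
    beta = np.linalg.solve(Zk.T @ W @ Zk, Zk.T @ W @ y[idx])
    return beta[0] + z0 @ beta[1:]
```

Note that distances are measured in the space of the first m PC scores, not in the original predictor space, which is the feature that distinguishes this local approach from a straightforward nearest-neighbour regression on X.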
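Likewise, a rough sketch of the first principal Hessian direction as characterized above: with equal observation weights D = (1/n)I_n, finding the first eigenvector of the generalized PCA of S_W with metric Q = S^{-1} amounts to solving the generalized eigenproblem S_W b = lambda S b. The function name, the use of scipy.linalg.eigh, and the choice of the largest absolute eigenvalue are illustrative assumptions; Li et al. (2000) give the exact construction.

```python
# Sketch of the first principal Hessian direction: S_W is the covariance
# matrix of the predictors weighted by the residuals from an OLS fit of y
# on all predictors; the direction is the leading generalized eigenvector
# of S_W relative to the ordinary covariance matrix S.
import numpy as np
from scipy.linalg import eigh

def first_phd_direction(X, y):
    n, p = X.shape
    Xc = X - X.mean(axis=0)

    # Residuals from a multiple regression of y on all predictor variables.
    X1 = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    r = y - X1 @ beta

    # Ordinary and residual-weighted covariance matrices.
    S = (Xc.T @ Xc) / n
    S_W = (Xc.T * r) @ Xc / n        # sum_i r_i x_i x_i^T / n

    # Generalized eigenproblem S_W b = lambda S b (assumes S is positive
    # definite); keep the direction with the largest absolute eigenvalue.
    vals, vecs = eigh(S_W, S)
    return vecs[:, np.argmax(np.abs(vals))]
```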
