
Jolliffe I. Principal Component Analysis (2ed., Springer, 2002)(518s)

choices of the dummy observations and their variances. In a slightly different approach to the same topic, Hocking et al. (1976) give a broad class of biased estimators, which includes all the above estimators, including those derived from PC regression, as special cases. Oman (1978) shows how several biased regression methods, including PC regression, can be fitted into a Bayesian framework by using different prior distributions for β; Leamer and Chamberlain (1976) also look at a Bayesian approach to regression, and its connections with PC regression. Other biased estimators have been suggested and compared with PC regression by Iglarsh and Cheng (1980) and Trenkler (1980), and relationships between ridge regression and PC regression are explored further by Hsuan (1981). Trenkler and Trenkler (1984) extend Hsuan's (1981) results, and examine circumstances in which ridge and other biased estimators can be made close to PC regression estimators, where the latter are defined by the restrictive equation (8.1.10).

Hoerl et al. (1986) describe a simulation study in which PC regression is compared with other biased estimators and variable selection methods, and found to be inferior. However, the comparison is not entirely fair. Several varieties of ridge regression are included in the comparison, but only one way of choosing M is considered for PC regression. This is the restrictive choice of M consisting of 1, 2, ..., m, where m is the largest integer for which a t-test of the PC regression coefficient γ_m gives a significant result. Hoerl et al. (1986) refer to a number of other simulation studies comparing biased regression methods, some of which include PC regression.
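The restrictive selection rule just described (keep components 1, 2, ..., m, where m is the largest index whose coefficient γ_m passes a t-test) can be sketched roughly as follows. This is a minimal numpy illustration, not the code used in any of the studies cited; the function name and the fixed two-sided cut-off `t_crit = 2.0` (in place of a formal significance test) are assumptions for the sketch.

```python
import numpy as np

def pc_regression_t_rule(X, y, t_crit=2.0):
    """PC regression with the restrictive choice M = {1, ..., m}:
    m is the largest index k for which |t-statistic of gamma_k| > t_crit.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)                 # centre predictors
    yc = y - y.mean()                       # centre response
    # principal components of the predictors via the SVD Xc = U S Vt
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt.T                           # PC scores; columns are orthogonal
    zz = (Z ** 2).sum(axis=0)               # z_k' z_k for each component
    gamma = (Z.T @ yc) / zz                 # LS coefficients decouple (orthogonality)
    resid = yc - Z @ gamma                  # residuals from the full p-component fit
    s2 = (resid ** 2).sum() / (n - p - 1)   # residual variance estimate
    t = gamma / np.sqrt(s2 / zz)            # t-statistic for each gamma_k
    sig = np.flatnonzero(np.abs(t) > t_crit)
    m = sig.max() + 1 if sig.size else 0    # largest 'significant' index
    beta = Vt.T[:, :m] @ gamma[:m]          # back-transform to the x-scale
    return beta, m

# illustrative use on collinear simulated data (hypothetical example)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # near-collinear pair
y = X @ np.array([1.0, 1.0, 0.0, 0.0]) + 0.5 * rng.normal(size=200)
beta_hat, m = pc_regression_t_rule(X, y)
```

Note that because the rule retains every component up to the largest significant index, a single significant high-order γ_k forces all lower-variance... rather, all earlier components into M as well, which is part of why the text regards this single choice of M as an unduly narrow representative of PC regression.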
Theoretical comparisons between PC regression, least squares and ridge regression with respect to the predictive ability of the resulting regression equations are made by Gunst and Mason (1979) and Friedman and Montgomery (1985), but only for p = 2.

Essentially the same problem arises for all these biased methods as occurred in the choice of M for PC regression, namely, the question of which compromise should be chosen in the trade-off between bias and variance. In ridge regression, this compromise manifests itself in the choice of κ, and for shrinkage estimators the amount of shrinkage must be determined. Suggestions have been made regarding rules for making these choices, but the decision is usually still somewhat arbitrary.

8.4 Variations on Principal Component Regression

Marquardt's (1970) fractional rank estimator, which was described in the previous section, is one modification of PC regression as defined in Section 8.1, but it is a fairly minor modification. Another approach, suggested by Oman (1991), is to use shrinkage estimators, but instead of shrinking the least squares estimators towards zero or some other constant, the
