Jolliffe, I., Principal Component Analysis, 2nd edition, Springer, 2002


14.2. Weights, Metrics, Transformations and Centerings

… PC framework above unless the $w_{ij}$ can be written as products $w_{ij} = \omega_i \phi_j$, $i = 1, 2, \ldots, n$; $j = 1, 2, \ldots, p$, although this method involves similar ideas. The examples given by Gabriel and Zamir (1979) can be expressed as contingency tables, so that correspondence analysis rather than PCA may be more appropriate, and Greenacre (1984), too, develops generalized PCA as an offshoot of correspondence analysis (he shows that another special case of the generalized SVD (14.2.2) produces correspondence analysis, a result discussed further in Section 13.1). The idea of weighting could, however, be used in PCA for any type of data, provided that suitable weights can be defined.

Gabriel and Zamir (1979) suggest a number of ways in which special cases of their weighted analysis may be used. As noted in Section 13.6, it can accommodate missing data by giving zero weight to missing elements of $X$. Alternatively, the analysis can be used to look for ‘outlying cells’ in a data matrix, using ideas similar to those introduced in Section 6.1.5 in the context of choosing how many PCs to retain. Any particular element $x_{ij}$ of $X$ is estimated by least squares based on a subset of the data that does not include $x_{ij}$. This rank $m$ estimate ${}_m\hat{x}_{ij}$ is readily found by equating to zero a subset of weights in (14.2.5), including $w_{ij}$. The difference between $x_{ij}$ and ${}_m\hat{x}_{ij}$ provides a better measure of the ‘outlyingness’ of $x_{ij}$ relative to the remaining elements of $X$ than does the difference between $x_{ij}$ and a rank $m$ estimate, ${}_m\tilde{x}_{ij}$, based on the SVD of the entire matrix $X$. This result follows because ${}_m\hat{x}_{ij}$ is not affected by $x_{ij}$, whereas $x_{ij}$ contributes to the estimate ${}_m\tilde{x}_{ij}$.

Commandeur et al. (1999) describe how to introduce weights for both variables and observations into Meulman’s (1986) distance approach to nonlinear multivariate data analysis (see Section 14.1.1).

In the standard atmospheric science set-up, in which variables correspond to spatial locations, weights may be introduced to take account of uneven spacing between the locations where measurements are taken. The weights reflect the size of the area for which a particular location (variable) is the closest point. This type of weighting may also be necessary when the locations are regularly spaced on a latitude/longitude grid. The areas of the corresponding grid cells decrease towards the poles, and allowance should be made for this if the latitudinal spread of the data is moderate or large. An obvious strategy is to assign to the grid cells weights proportional to their areas. However, if there is strong positive correlation within cells, it can be argued that doubling the area, for example, does not double the amount of independent information, and that the weights should reflect this. Folland (1988) implies that weights should be proportional to $(\text{Area})^c$, where $c$ is between $\frac{1}{2}$ and $1$. Hannachi and O’Neill (2001) weight their data by the cosine of latitude.

Buell (1978) and North et al. (1982) derive weights for irregularly spaced atmospheric data by approximating a continuous version of PCA, based on an equation similar to (12.3.1).
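For concreteness, the weighted rank-$m$ fit underlying the Gabriel and Zamir (1979) discussion above can be sketched in code. The following is a minimal illustration, not their exact algorithm: it minimizes $\sum_{i,j} w_{ij}(x_{ij} - \sum_k a_{ik}b_{jk})^2$ by alternating weighted least squares, and setting $w_{ij} = 0$ simply drops cell $(i,j)$ from the fit, which is how missing values or a suspect cell are excluded. All function and variable names are illustrative.

```python
import numpy as np

def weighted_rank_m(X, W, m, n_iter=200, tol=1e-10, seed=0):
    """Rank-m fit of X minimizing sum(W * (X - A @ B.T)**2).

    Zero entries in W exclude the corresponding cells of X from the
    fit (missing data, or a cell being screened for outlyingness).
    """
    n, p = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, m))
    B = rng.standard_normal((p, m))
    prev = np.inf
    for _ in range(n_iter):
        # With B fixed, each row of A solves a weighted least-squares problem.
        for i in range(n):
            sw = np.sqrt(W[i])
            A[i] = np.linalg.lstsq(sw[:, None] * B, sw * X[i], rcond=None)[0]
        # With A fixed, each row of B solves one too.
        for j in range(p):
            sw = np.sqrt(W[:, j])
            B[j] = np.linalg.lstsq(sw[:, None] * A, sw * X[:, j], rcond=None)[0]
        loss = np.sum(W * (X - A @ B.T) ** 2)
        if prev - loss < tol:
            break
        prev = loss
    return A @ B.T  # cell (i, j) of the result is the rank-m estimate of x_ij
```

Setting $w_{ij} = 0$ for a single cell and comparing $x_{ij}$ with the returned estimate gives the leave-the-cell-out measure of outlyingness, ${}_m\hat{x}_{ij}$, described above.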
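The grid-cell weighting just described is straightforward to apply. The sketch below (names illustrative) forms weights proportional to $(\text{Area})^c$, using the fact that on a regular latitude/longitude grid the cell area is proportional to the cosine of latitude, and scales each column of the centred data matrix by the square root of its weight before an ordinary SVD. This is one common convention, equivalent to a generalized PCA with metric $Q = \text{diag}(w)$ (see Section 14.2.2); it is not claimed to be the exact procedure of any of the papers cited.

```python
import numpy as np

def area_weighted_pca(X, lat_deg, c=1.0):
    """PCA of gridded data X (rows: times, columns: grid points).

    Column j is weighted by sqrt(w_j), with w_j proportional to
    cos(latitude_j)**c; c = 1 weights by area, and Folland (1988)
    suggests 1/2 <= c <= 1.
    """
    w = np.cos(np.radians(lat_deg)) ** c       # one weight per column of X
    Xc = X - X.mean(axis=0)                    # remove the time mean
    U, s, Vt = np.linalg.svd(Xc * np.sqrt(w), full_matrices=False)
    var = s**2 / (X.shape[0] - 1)              # variance of each weighted PC
    return var, Vt, U * s                      # rows of Vt are vectors for the
                                               # weighted (rescaled) variables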

14.2.2 Metrics

The idea of defining PCA with respect to a metric or an inner product dates back at least to Dempster (1969, Section 7.6). Following the publication of Cailliez and Pagès (1976) it became, together with an associated ‘duality diagram,’ a popular view of PCA in France in the 1980s (see, for example, Caussinus, 1986; Escoufier, 1987). In this framework, PCA is defined in terms of a triple $(X, Q, D)$, the three elements of which are:

• the matrix $X$ is the $(n \times p)$ data matrix, which is usually but not necessarily column-centred;
• the $(p \times p)$ matrix $Q$ defines a metric on the $p$ variables, so that the distance between two observations $x_j$ and $x_k$ is $(x_j - x_k)'Q(x_j - x_k)$;
• the $(n \times n)$ matrix $D$ is usually diagonal, and its diagonal elements consist of a set of weights for the $n$ observations. It can, however, be more general, for example when the observations are not independent, as in time series (Caussinus, 1986; Escoufier, 1987).

The usual definition of covariance-based PCA has $Q = I_p$, the identity matrix, and $D = \frac{1}{n}I_n$, though to obtain the sample covariance matrix with divisor $(n-1)$ it is necessary to replace $n$ by $(n-1)$ in the definition of $D$, leading to a set of ‘weights’ that do not sum to unity. Correlation-based PCA is achieved either by standardizing $X$, or by taking $Q$ to be the diagonal matrix whose $j$th diagonal element is the reciprocal of the variance of the $j$th variable, $j = 1, 2, \ldots, p$.

Implementation of PCA with a general triple $(X, Q, D)$ is readily achieved by means of the generalized SVD, described in Section 14.2.1, with $\Phi$ and $\Omega$ from that section equal to $Q$ and $D$ from this section. The coefficients of the generalized PCs are given in the columns of the matrix $B$ defined by equation (14.2.2). Alternatively, they can be found from an eigenanalysis of $X'DXQ$ or $XQX'D$ (Escoufier, 1987); a short computational sketch is given at the end of this excerpt.

A number of particular generalizations of the standard form of PCA fit within this framework. For example, Escoufier (1987) shows that, in addition to the cases already noted, it can be used to transform variables; to remove the effect of an observation by putting it at the origin; to look at subspaces orthogonal to a subset of variables; to compare sample and theoretical covariance matrices; and to derive correspondence and discriminant analyses. Maurin (1987) examines how the eigenvalues and eigenvectors of a generalized PCA change when the matrix $Q$ in the triple is changed.

The framework also has connections with the fixed effects model of Section 3.9. In that model the observations $x_i$ are such that $x_i = z_i + e_i$, where $z_i$ lies in a $q$-dimensional subspace and $e_i$ is an error term with zero mean and covariance matrix $\frac{\sigma^2}{w_i}\Gamma$. Maximum likelihood estimation of the model, assuming a multivariate normal distribution for $e$, leads to a generalized PCA, where $D$ is diagonal with elements $w_i$ and $Q$ (which is denoted …
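As a concrete illustration of the implementation route above, the following sketch (names illustrative; $Q$ and $D$ assumed symmetric positive definite) computes a generalized PCA for a triple $(X, Q, D)$ through the ordinary SVD of $D^{1/2}XQ^{1/2}$, which yields the same eigenvalues as the eigenanalysis of $X'DXQ$ quoted from Escoufier (1987) but is numerically better behaved.

```python
import numpy as np
from scipy.linalg import sqrtm

def triple_pca(X, Q, D):
    """Generalized PCA for the triple (X, Q, D).

    Returns the eigenvalues of X'DXQ, the matrix B whose columns hold
    the coefficients of the generalized PCs (normalized so B'QB = I),
    and the PC scores XQB.
    """
    Qh = np.real(sqrtm(Q))            # symmetric square roots of Q and D
    Dh = np.real(sqrtm(D))
    U, s, Wt = np.linalg.svd(Dh @ X @ Qh, full_matrices=False)
    B = np.linalg.solve(Qh, Wt.T)     # back-transform: B = Q^(-1/2) W
    return s**2, B, X @ Q @ B
```

With $Q = I_p$ and $D = \frac{1}{n-1}I_n$ (and $X$ column-centred) this reduces to ordinary covariance-based PCA; taking $Q$ to be the diagonal matrix of reciprocal variances instead reproduces correlation-based PCA, as noted above.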

