
Jolliffe, I. T. Principal Component Analysis, 2nd edition, Springer, 2002.

9.2. Cluster Analysis

Before looking at examples of the uses just described of PCA in cluster analysis, we discuss a rather different way in which cluster analysis can be used and its connections with PCA. So far we have discussed cluster analysis on observations or individuals, but in some circumstances it is desirable to divide variables, rather than observations, into groups. In fact, by far the earliest book on cluster analysis (Tryon, 1939) was concerned with this type of application. Provided that a suitable measure of similarity between variables can be defined (the correlation coefficient is an obvious candidate), methods of cluster analysis used for observations can be readily adapted for variables.

One connection with PCA is that when the variables fall into well-defined clusters then, as discussed in Section 3.8, there will be one high-variance PC and, except in the case of 'single-variable' clusters, one or more low-variance PCs associated with each cluster of variables. Thus, PCA will identify the presence of clusters among the variables, and can be thought of as a competitor to standard cluster analysis of variables. The use of PCA in this way is fairly common in climatology (see, for example, Cohen (1983), White et al. (1991), Romero et al. (1999)). In an analysis of a climate variable recorded at stations over a large geographical area, the loadings of the PCs at the various stations can be used to divide the area into regions with high loadings on each PC. In fact, this regionalization procedure is usually more effective if the PCs are rotated (see Section 11.1), so that most analyses are done using rotated loadings.

Identifying clusters of variables may be of general interest in investigating the structure of a data set but, more specifically, if we wish to reduce the number of variables without sacrificing too much information, then we could retain one variable from each cluster. This is essentially the idea behind some of the variable selection techniques based on PCA that were described in Section 6.3.

Hastie et al. (2000) describe a novel clustering procedure for 'variables' which uses PCA applied in a genetic context. They call their method 'gene shaving.' Their data consist of p = 4673 gene expression measurements for n = 48 patients, and the objective is to classify the 4673 genes into groups that have coherent expressions. The first PC is found for these data, and a proportion of the genes (typically 10%) having the smallest absolute inner products with this PC are deleted (shaved). PCA followed by shaving is repeated for the reduced data set, and this procedure continues until ultimately only one gene remains. A nested sequence of subsets of genes results from this algorithm, and an optimality criterion is used to decide which set in the sequence is best. This gives the first cluster of genes. The whole procedure is then repeated after centering the data with respect to the 'average gene expression' in the first cluster, to give a second cluster, and so on.

Another way of constructing clusters of variables, which simultaneously finds the first PC within each cluster, is proposed by Vigneau and Qannari
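Two of the procedures described above lend themselves to short sketches. First, a minimal sketch of clustering variables with a correlation-based similarity, as in the opening paragraph of this excerpt. It assumes NumPy and SciPy are available; the function name cluster_variables, the use of 1 - |correlation| as the dissimilarity, and the choice of average linkage are illustrative assumptions, not prescribed by the text.

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_variables(X, n_clusters):
    """Group the columns (variables) of X by hierarchical clustering,
    using 1 - |correlation| as the dissimilarity between variables."""
    corr = np.corrcoef(X, rowvar=False)      # p x p correlation matrix of the variables
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)              # remove rounding noise on the diagonal
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")   # cluster labels 1..n_clusters

Second, the repeated PCA-and-shave steps of Hastie et al.'s (2000) gene shaving can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the 10% shave fraction follows the description above, but the optimality criterion used to pick the best subset in the nested sequence, and the exact re-centring on the 'average gene expression' of a finished cluster, are omitted.

def shave_sequence(X, shave_frac=0.10):
    """Nested sequence of gene subsets from repeated PCA-and-shave steps.

    X : array of shape (n_patients, p_genes); each column is one gene.
    Returns a list of index arrays, from all genes down to a single gene.
    """
    remaining = np.arange(X.shape[1])
    sequence = [remaining.copy()]
    while remaining.size > 1:
        Xc = X[:, remaining] - X[:, remaining].mean(axis=0)   # centre each remaining gene
        U, S, _ = np.linalg.svd(Xc, full_matrices=False)
        pc1 = U[:, 0] * S[0]                                  # scores on the first PC
        strength = np.abs(Xc.T @ pc1)                         # |inner product| of each gene with the first PC
        # Keep all but the shave_frac of genes with the smallest absolute inner products.
        n_keep = max(1, min(remaining.size - 1,
                            int(np.floor((1 - shave_frac) * remaining.size))))
        keep = np.argsort(strength)[-n_keep:]
        remaining = remaining[np.sort(keep)]
        sequence.append(remaining.copy())
    return sequence

Applying an optimality criterion to this nested sequence would give the first cluster; repeating the whole procedure on data re-centred with respect to that cluster's average gene expression would then give the second cluster, and so on, as the text describes.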
