Copulas: a Review and Recent Developments (2007)


3.3 Copula representation via a local measure of dependence

The characterization of copulas, as well as the choice of the dependence structure, are difficult problems. For example, the choice of the copula does not inform explicitly about the type of the dependence structure between the variables involved. As one can see, the primary task is just to choose an appropriate copula function, where the marginal distributions are treated as nuisance parameters. But what is the meaning of "appropriate"? The aim here is to give a partial answer to this question.

The key statistic in a joint Gaussian distribution is the correlation coefficient. Usually, a low correlation coefficient between two markets implies a good opportunity for an investor to diversify his investment risk. Hence, under the Gaussian assumption, an investor can significantly reduce his risk by balancing his portfolio with investments in the foreign market. Embrechts et al. (2002) discussed some pitfalls of the usual correlation coefficient as a global measure of dependence. A Gaussian copula represents the dependence structure of a joint normal distribution, but a Gaussian copula does not necessarily imply a joint normal distribution unless the marginals are also normal. There is increasing evidence that Gaussian assumptions are inappropriate in the real world. It has been found that correlations computed under different conditions can differ dramatically; see, for example, Ang et al. (2002), who studied the correlations between a portfolio and the market conditional on downside movements. Correlations conditional on large movements turn out to be higher than those conditional on small movements. This phenomenon has also been characterized as "correlation breakdown", and it is widely discussed in the literature. Boyer et al. (1999) argued that in situations of "correlation breakdown" the correlations can reveal little about the underlying nature of dependence; see also the approach suggested by Engle (2002).
Therefore, although conditional correlations provide more information about the dependence than the usual Pearson correlation, the results are sometimes misleading and need to be interpreted very carefully.

In general, local dependence measures are attractive because they offer a more precise radiography of the dependence structure than the corresponding global measures. Local correlation measures are usually defined as a correlation between X and Y given X = x and Y = y (or given X ≤ x and Y ≤ y). For example, Kotz and Nadarajah (2002) proposed a new local measure derived from the linear correlation coefficient as follows:

\[
\gamma(x,y) = \frac{E\big[\big(X - E(X \mid Y=y)\big)\big(Y - E(Y \mid X=x)\big)\big]}{\sqrt{E\big[X - E(X \mid Y=y)\big]^2 \, E\big[Y - E(Y \mid X=x)\big]^2}}\,, \qquad \gamma(x,y) \in [-1,1].
\]

The measure γ(x, y) is a radical generalization of the usual Pearson correlation coefficient and characterizes the effect of X on Y (and vice versa) conditionally on (X, Y) being at the point (x, y). The "conditional" correlation coefficient γ(x, y) overcomes the weaknesses of other known measures of local dependence; see, for

example, Doksum et al. (1994), Jones (1998), Charpentier (2003) and the references therein.

However, the measure of local dependence γ(x, y) is influenced by the marginals. This means that for the same dependence structure a change of the marginals would imply a change of the measure. From Sklar's theorem we know that the dependence structure does not depend on the marginals. Therefore, the measure γ(x, y) is not suitable for describing the dependence structure in the sense of Sklar's theorem.

In order to avoid this weakness, let us consider a simple modification based on the fact that the Spearman coefficient ρ_S of (X, Y) is equal to the Pearson correlation coefficient between F(X) = U and G(Y) = V, i.e.

\[
\rho_S = \frac{E\big[\big(U - E(U)\big)\big(V - E(V)\big)\big]}{\sqrt{\operatorname{Var}(U)\operatorname{Var}(V)}} = 12 \int_0^1 \!\! \int_0^1 \big[C(u,v) - uv\big] \, du \, dv,
\]

see, e.g., Nelsen (1999), p. 138. So, an alternative version of γ(x, y) can be given by

\[
\gamma_S(u,v) = \frac{E\big[\big(U - E(U \mid V=v)\big)\big(V - E(V \mid U=u)\big)\big]}{\sqrt{E\big[U - E(U \mid V=v)\big]^2 \, E\big[V - E(V \mid U=u)\big]^2}}\,, \qquad (u,v) \in [0,1]^2,
\]

which is already scale invariant with respect to the uniform marginals. The measure γ_S can be interpreted as a "conditional" Spearman coefficient. Nevertheless, we are unable to suggest a copula representation based on γ_S.

We propose here a local dependence measure different from the existing ones, which can be used as a new tool for the representation of bivariate copulas. The importance is that we can decompose any copula C into two parts: the marginals and the dependence structure embodied in the local dependence measure, which can be interpreted as a "local" Spearman coefficient (but not a "conditional" one, as γ_S is). We denote it by ρ_C and call it the Spearman function.

In Anjos and Kolev (2005b) it is shown that for every bivariate copula C there exists a unique continuous function ρ_C such that, for all (u, v) ∈ [0, 1]², we have

\[
C(u,v) = uv + \rho_C(u,v) \, \sqrt{uv(1-u)(1-v)}.
\]

The advantage is that ρ_C provides explicit and precise information about the underlying dependence structure.
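The decomposition above can be verified numerically. The sketch below uses the Farlie-Gumbel-Morgenstern (FGM) copula C(u,v) = uv + θuv(1−u)(1−v) purely as an illustration (the choice of copula and all implementation details are assumptions of this sketch, not part of the paper): for the FGM family the Spearman function has the closed form ρ_C(u,v) = θ√(uv(1−u)(1−v)), and the global Spearman coefficient is θ/3.

```python
# Numerical check of the Spearman-function decomposition for the
# FGM copula C(u,v) = uv + theta*u*v*(1-u)*(1-v) (illustrative choice).
import math

theta = 0.5  # FGM parameter, theta in [-1, 1]

def C_fgm(u, v):
    """FGM copula with parameter theta."""
    return u * v + theta * u * v * (1.0 - u) * (1.0 - v)

def rho_C(u, v):
    """Spearman function recovered from the copula:
    rho_C(u,v) = (C(u,v) - uv) / sqrt(uv(1-u)(1-v)), for (u,v) in (0,1)^2."""
    return (C_fgm(u, v) - u * v) / math.sqrt(u * v * (1.0 - u) * (1.0 - v))

# The decomposition C(u,v) = uv + rho_C(u,v) * sqrt(uv(1-u)(1-v)) holds exactly:
u, v = 0.3, 0.8
lhs = C_fgm(u, v)
rhs = u * v + rho_C(u, v) * math.sqrt(u * v * (1.0 - u) * (1.0 - v))
assert abs(lhs - rhs) < 1e-12

# Global Spearman's rho via rho_S = 12 * int int [C(u,v) - uv] du dv,
# approximated by a midpoint rule; the exact FGM value is theta / 3.
n = 400
h = 1.0 / n
acc = 0.0
for i in range(n):
    for j in range(n):
        ui, vj = (i + 0.5) * h, (j + 0.5) * h
        acc += C_fgm(ui, vj) - ui * vj
rho_S = 12.0 * acc * h * h
print(abs(rho_S - theta / 3.0) < 1e-4)  # prints: True
```

For the FGM family the check is trivial by construction; the point of the sketch is that the same recovery ρ_C(u,v) = (C(u,v) − uv)/√(uv(1−u)(1−v)) applies to any bivariate copula evaluated on the open unit square.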
A connection with the Gaussian copula is given below.

Theorem (Anjos and Kolev (2005b)). Let C be a copula and let Φ_r be a Gaussian copula with correlation coefficient r. Then for each pair (u, v) ∈ [0, 1]² there exists a unique smallest value

\[
r_{u,v} = \inf\big\{ r : \Phi_r\big(\Phi^{-1}(u), \Phi^{-1}(v)\big) \ge C(u,v) \big\} \in [-1, 1], \tag{7}
\]
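Since Φ_r(a, b) is nondecreasing in r, the value r_{u,v} in (7) can be found by bisection for any given copula. The sketch below is one way to do this; the quadrature-based bivariate normal CDF, the truncation at −8, and all tolerances are assumptions of this sketch (the paper prescribes no algorithm).

```python
# Sketch: compute r_{u,v} = inf{ r : Phi_r(Phi^{-1}(u), Phi^{-1}(v)) >= C(u,v) }
# by bisection, using a pure-stdlib bivariate normal CDF.
import math

def std_norm_cdf(z):
    """Univariate standard normal CDF Phi(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def std_norm_ppf(p, tol=1e-10):
    """Inverse CDF Phi^{-1}(p) by bisection (accurate enough for a sketch)."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if std_norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bvn_cdf(a, b, r, n=800):
    """Phi_r(a,b) = int_{-inf}^{a} phi(x) * Phi((b - r x)/sqrt(1 - r^2)) dx,
    evaluated by Simpson's rule on [-8, a]."""
    if a <= -8.0:
        return 0.0
    s = math.sqrt(1.0 - r * r)
    lo = -8.0
    h = (a - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        f = phi * std_norm_cdf((b - r * x) / s)
        w = 1.0 if i in (0, n) else (4.0 if i % 2 == 1 else 2.0)
        total += w * f
    return total * h / 3.0

def r_uv(C, u, v, tol=1e-6):
    """Smallest r with Phi_r(Phi^{-1}(u), Phi^{-1}(v)) >= C(u,v), as in (7)."""
    a, b = std_norm_ppf(u), std_norm_ppf(v)
    target = C(u, v)
    # Phi_r(a,b) is nondecreasing in r, so bisection on r applies.
    lo, hi = -0.999, 0.999
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bvn_cdf(a, b, mid) >= target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

indep = lambda u, v: u * v  # independence copula: r_{u,v} should be near 0
print(r_uv(indep, 0.3, 0.7))
```

For the independence copula, Φ_0(Φ⁻¹(u), Φ⁻¹(v)) = uv already attains C(u, v), so the bisection converges to a value near zero, as expected.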

