Bootstrap independence test for functional linear models


2.2 Linear independence test

Given a generic $H$-valued random element $H$ such that $E(\|H\|^2) < \infty$, its associated covariance operator $\Gamma_H$ is defined as the operator $\Gamma_H : H \to H$,
\[
\Gamma_H(h) = E\left(\langle H - \mu_H, h\rangle (H - \mu_H)\right) = E\left(\langle H, h\rangle H\right) - \langle \mu_H, h\rangle \mu_H,
\]
for all $h \in H$, where $\mu_H \in H$ denotes the expected value of $H$. From now on, it will be assumed that $E(\|X\|^2) < \infty$ and thus, as a consequence of Hölder's inequality, $E(Y^2) < \infty$. Whenever there is no possible confusion, $\Gamma_X$ will be abbreviated as $\Gamma$. It is well known that $\Gamma$ is a nuclear and self-adjoint operator. In particular, it is a compact operator of trace class and thus, by the Spectral Decomposition Theorem, there is an orthonormal basis of $H$, $\{v_j\}_{j \in \mathbb{N}}$, consisting of eigenvectors of $\Gamma$ with corresponding eigenvalues $\{\lambda_j\}_{j \in \mathbb{N}}$; that is, $\Gamma(v_j) = \lambda_j v_j$ for all $j \in \mathbb{N}$. As usual, the eigenvalues are assumed to be arranged in decreasing order ($\lambda_1 \geq \lambda_2 \geq \ldots$). Since the operator $\Gamma$ is symmetric and non-negative definite, its eigenvalues are non-negative.

In a similar way, let us consider the cross-covariance operator $\Delta : H \to \mathbb{R}$ between $X$ and $Y$ given by
\[
\Delta(h) = E\left(\langle X - \mu_X, h\rangle (Y - \mu_Y)\right) = E\left(\langle X, h\rangle Y\right) - \langle \mu_X, h\rangle \mu_Y,
\]
for all $h \in H$, where $\mu_Y \in \mathbb{R}$ denotes the expected value of $Y$. Of course, $\Delta \in H'$, and the following relation between the considered operators and the regression parameter $\Theta$ is satisfied:
\[
\Delta(\cdot) = \langle \Gamma(\cdot), \Theta\rangle. \tag{4}
\]
The Hilbert space $H$ can be expressed as the direct sum of the two orthogonal subspaces induced by the self-adjoint operator $\Gamma$: the kernel or null space of $\Gamma$, $\mathcal{N}(\Gamma)$, and the closure of the image or range of $\Gamma$, $\overline{\mathcal{R}(\Gamma)}$.
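The paper gives no code; as a purely illustrative sketch (not the authors' implementation), assuming curves observed on an equally spaced grid so that inner products in $H = L^2$ reduce to Riemann sums, the empirical covariance operator and its spectral decomposition can be approximated as follows. The function name and discretization scheme are my own.

```python
import numpy as np

def empirical_covariance_eig(X, grid_step=1.0):
    """Approximate the eigendecomposition of the empirical covariance
    operator Gamma for curves X (n samples x p grid points).

    Inner products in L^2 are approximated by a Riemann sum,
    <f, g> ~ grid_step * sum(f * g), so the operator is represented
    by the p x p matrix grid_step * (1/n) * Xc.T @ Xc.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)                  # center the curves
    gamma = grid_step * (Xc.T @ Xc) / n      # discretized operator, p x p
    # Gamma is symmetric and non-negative definite, so its eigenvalues
    # are real and non-negative (cf. the text above).
    eigvals, eigvecs = np.linalg.eigh(gamma)
    order = np.argsort(eigvals)[::-1]        # decreasing order, as in the text
    return eigvals[order], eigvecs[:, order]
```

The returned eigenpairs play the role of $\{\lambda_j\}$ and $\{v_j\}$ above, up to the discretization error of the Riemann sum.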
Thus, $\Theta$ is uniquely determined as $\Theta = \Theta_1 + \Theta_2$ with $\Theta_1 \in \mathcal{N}(\Gamma)$ and $\Theta_2 \in \overline{\mathcal{R}(\Gamma)}$. As $\Theta_1 \in \mathcal{N}(\Gamma)$, it is easy to check that $\mathrm{Var}(\langle X, \Theta_1\rangle) = 0$ and, consequently, the model introduced in (1) can be expressed as
\[
Y = \langle \Theta_2, X\rangle + \langle \Theta_1, \mu_X\rangle + b + \varepsilon.
\]
Therefore, it is not possible to distinguish between the term $\langle \Theta_1, \mu_X\rangle$ and the intercept term $b$, and consequently it is not possible to check whether $\Theta_1 = 0$ or not. Taking this into account, the hypothesis test will be restricted to checking
\[
\begin{cases} H_0 : \Theta_2 = 0 \\ H_1 : \Theta_2 \neq 0 \end{cases} \tag{5}
\]
on the basis of the available sample information.

Note that in this case, according to the relation between the operators and the regression parameter shown in (4), $\Theta_2 = 0$ if, and only if, $\Delta(h) = 0$ for all $h \in H$. Consequently, the hypothesis test in (5) is equivalent to
\[
\begin{cases} H_0 : \|\Delta\|' = 0 \\ H_1 : \|\Delta\|' \neq 0. \end{cases} \tag{6}
\]

Remark 1. It should be recalled that in previous works $\mu_X$ is assumed to be equal to $0$. The preceding reasoning then leads to the fact that $\Theta_1$ cannot be estimated from the information provided by $X$ (see, for instance, Cardot, Ferraty, Mas, and Sarda (2003)), and consequently the hypothesis testing is also restricted to the one in the preceding equations. In addition, in Cardot, Ferraty, Mas, and Sarda (2003) it is assumed for technical reasons that $\overline{\mathcal{R}(\Gamma)}$ is an infinite-dimensional space. That restriction is not imposed in the study developed here.
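To make the quantities in (6) concrete, here is a hypothetical numerical sketch, again assuming curves discretized on an equally spaced grid with Riemann-sum inner products. The resampling scheme below (resampling $Y$ independently of $X$ to mimic the null) is a deliberately simplified stand-in and is not the bootstrap calibration developed in the paper; all function names are my own.

```python
import numpy as np

def cross_cov_norm(X, Y, grid_step=1.0):
    """Empirical version of ||Delta||' for the test in (6).

    Delta(h) = E(<X - mu_X, h>(Y - mu_Y)) is represented by the function
    d(t) = Cov(X(t), Y); its dual norm is the L^2 norm of d, which is
    approximated here by a Riemann sum over the grid.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean()
    d = (Xc * Yc[:, None]).mean(axis=0)        # d(t_k) on the grid
    return np.sqrt(grid_step * np.sum(d ** 2))

def naive_bootstrap_pvalue(X, Y, B=500, grid_step=1.0, rng=None):
    """Toy bootstrap calibration of the test of (6): resampling Y with
    replacement, independently of X, imitates H0: ||Delta||' = 0.
    """
    rng = np.random.default_rng(rng)
    t_obs = cross_cov_norm(X, Y, grid_step)
    n = len(Y)
    t_boot = np.empty(B)
    for b in range(B):
        Yb = rng.choice(Y, size=n, replace=True)   # breaks the X-Y pairing
        t_boot[b] = cross_cov_norm(X, Yb, grid_step)
    return (t_boot >= t_obs).mean()
```

Under independence the observed statistic behaves like the resampled ones, so the p-value is moderate; under dependence $\|\hat\Delta\|'$ stays bounded away from zero and the p-value collapses.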
