Recursive subspace identification for in-flight modal ... - ResearchGate
FLITE EUREKA 2 1565
algorithms. Input block Hankel matrices are defined as follows:

$$
U = \begin{pmatrix}
u_1 & u_2 & u_3 & \cdots & u_j \\
u_2 & u_3 & u_4 & \cdots & u_{j+1} \\
\vdots & \vdots & \vdots & & \vdots \\
u_i & u_{i+1} & u_{i+2} & \cdots & u_{i+j-1} \\
u_{i+1} & u_{i+2} & u_{i+3} & \cdots & u_{i+j} \\
u_{i+2} & u_{i+3} & u_{i+4} & \cdots & u_{i+j+1} \\
\vdots & \vdots & \vdots & & \vdots \\
u_{2i} & u_{2i+1} & u_{2i+2} & \cdots & u_{2i+j-1}
\end{pmatrix}
= \begin{pmatrix} U_p \\ U_f \end{pmatrix},
$$
where the number of block rows $i$ in $U_p$ and $U_f$ is a user-defined index chosen large enough, i.e. $il \geq n$, and the number of columns $j$ is typically equal to $s - 2i + 1$, where $s$ is the number of available data samples. The subscript 'p' stands for 'past' and the subscript 'f' for 'future'. The output block Hankel matrices $Y$, $Y_p$, $Y_f$ are defined in a similar way.
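As an illustration, the block Hankel construction above can be sketched in numpy; the helper name `block_hankel` and the row-major layout of the sample array are our own assumptions, not part of the paper:

```python
import numpy as np

def block_hankel(u, i, j):
    """Build the 2i-block-row input Hankel matrix U described in the text.

    u : (s, m) array of input samples u_1 ... u_s (m input channels).
    Block row k (k = 0 ... 2i-1) contains u_{k+1} ... u_{k+j}, so U has
    shape (2i*m, j); the top i block rows form U_p, the bottom i form U_f.
    """
    s, m = u.shape
    assert j <= s - 2 * i + 1, "not enough samples for the requested j"
    # Stack each shifted window of j samples as one m x j block row.
    U = np.vstack([u[k:k + j].T for k in range(2 * i)])
    Up, Uf = U[: i * m], U[i * m:]   # 'past' and 'future' halves
    return U, Up, Uf
```

With $j = s - 2i + 1$ (the typical choice quoted in the text), every available sample is used exactly once per block row.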
The MOESP algorithms [21, 23–25], on which most recursive subspace identification methods are based, start from the so-called past and future data equations [26]

$$
Y_p = \Gamma_i X_p + H_i U_p + N_p,
$$
$$
Y_f = \Gamma_i X_f + H_i U_f + N_f,
$$

where $\Gamma_i = \begin{pmatrix} C^T & (CA)^T & \cdots & (CA^{i-1})^T \end{pmatrix}^T \in \mathbb{R}^{li \times n}$ is the extended observability matrix, $X_p$ (respectively $X_f$) is a past (respectively future) state sequence, $H_i$ is the block Toeplitz matrix of the (unknown) impulse response from $u$ to $y$, and $N_p$ (respectively $N_f$) is a particular combination of the past (respectively future) block Hankel matrices of the perturbations $v$ and $w$. To simultaneously remove the term $H_i U_f$ from $Y_f$ and decorrelate the noise, it is proposed to consider the quantity $Y_f \Pi_{U_f^\perp} \Xi_p$, where $\Xi_p$ is an instrumental variable composed of past input (and output) data and where^1 $\Pi_{U_f^\perp} = I_j - U_f^T (U_f U_f^T)^\dagger U_f$.
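The projection can be written down directly from its definition; the following minimal numpy sketch (our own helper name) forms $\Pi_{U_f^\perp}$ explicitly, which is only practical for moderate $j$ — the LQ route described next is the efficient alternative:

```python
import numpy as np

def project_onto_Uf_perp(Yf, Uf):
    """Compute Y_f @ Pi with Pi = I_j - U_f^T (U_f U_f^T)^dagger U_f.

    Pi projects onto the orthogonal complement of the row space of U_f,
    so the H_i U_f term of the future data equation is annihilated.
    np.linalg.pinv supplies the Moore-Penrose pseudo-inverse.
    """
    j = Uf.shape[1]
    Pi = np.eye(j) - Uf.T @ np.linalg.pinv(Uf @ Uf.T) @ Uf
    return Yf @ Pi
```

A quick sanity check is that the result has (numerically) zero correlation with the rows of $U_f$.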
Indeed, it can be proved that, under particular rank and excitation conditions [21],

$$
\lim_{s \to \infty} \frac{1}{s} Y_f \Pi_{U_f^\perp} \Xi_p
= \lim_{s \to \infty} \frac{1}{s} \Gamma_i X_f \Pi_{U_f^\perp} \Xi_p,
$$

with $s$ the number of data points.
This data compression can be efficiently computed by means of the following LQ decomposition^2

$$
\begin{pmatrix} U_f \\ \Xi_p \\ Y_f \end{pmatrix}
= \begin{pmatrix} L_{11} & 0 & 0 \\ L_{21} & L_{22} & 0 \\ L_{31} & L_{32} & L_{33} \end{pmatrix}
\begin{pmatrix} Q_1^T \\ Q_2^T \\ Q_3^T \end{pmatrix}
\qquad (1)
$$
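Following the text's footnote, the LQ decomposition can be obtained from a QR decomposition of the transpose; a minimal sketch (helper name ours):

```python
import numpy as np

def lq(A):
    """LQ decomposition A = L @ Q.T, with L lower triangular.

    As noted in the text, this is the transpose of the (reduced) QR
    decomposition of A^T: A^T = Q R  implies  A = R^T Q^T.
    """
    Q, R = np.linalg.qr(A.T)
    return R.T, Q   # L = R^T is lower triangular; Q has orthonormal columns
```

In the scheme of equation (1), one would stack `np.vstack([Uf, Xi_p, Yf])`, apply `lq`, and read the block $L_{32}$ out of the resulting lower-triangular factor at the row range of $Y_f$ and the column range of $\Xi_p$.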
since $\lim_{s \to \infty} \frac{1}{s}\, \operatorname{span\,col}\{L_{32}\} = \operatorname{span\,col}\{\Gamma_i\}$. The estimation of the observability matrix is then realized by considering the following SVD:

$$
L_{32} = \begin{pmatrix} U_1 & U_2 \end{pmatrix}
\begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix}
\begin{pmatrix} V_1 & V_2 \end{pmatrix}^T,
$$
where $U_1 \in \mathbb{R}^{il \times n}$, $S_1 \in \mathbb{R}^{n \times n}$ and $V_1 \in \mathbb{R}^{j \times n}$. An estimate for the matrices $A$ and $C$, up to a similarity transformation, can then be obtained as follows: $\hat{C}$ is equal to the first $l$ rows of $U_1$, and $\hat{A}$ is equal to $\underline{U}_1^\dagger \overline{U}_1$, with $\underline{U}_1$ and $\overline{U}_1$ shorthand notations for $U_1$ with its last, respectively first, $l$ rows removed. This MOESP scheme is named PI MOESP when $\Xi_p = U_p$ and PO MOESP when $\Xi_p = \begin{pmatrix} U_p^T & Y_p^T \end{pmatrix}^T$ [21].

^1 $\dagger$ denotes the Moore-Penrose pseudo-inverse of the matrix it follows.
^2 The LQ decomposition of a matrix $A$ is the transpose of the QR decomposition of $A^T$.