
1566 PROCEEDINGS OF ISMA2006

4 Recursive subspace identification

As introduced previously, one of the reasons for the success of subspace model identification (SMI) techniques lies in the direct correspondence between geometric operations on matrices constructed from input-output data and their implementation in terms of well-known, stable and reliable algorithms from the field of numerical linear algebra. However, most of the available batch SMI techniques are based on tools, such as the SVD, that are not suitable for online implementation because of their computational complexity. Several recursive subspace identification algorithms have been proposed in recent years [2, 5, 7, 10, 12, 14, 16, 22] to avoid such burdensome tools. More precisely, most recent recursive algorithms circumvent this problem by exploiting the similarities between recursive subspace identification and adaptive signal processing techniques for direction-of-arrival estimation [9], such as the projection approximation subspace tracking technique [27] and the propagator method [15].

Because we are concerned with the recursive estimation of the resonance frequencies and the damping ratios, we concentrate on the estimation of the matrices A and C and do not deal with the estimation of B and D. Therefore, only the extended observability matrix Γ_i needs to be estimated recursively from the updates of the input-output data u and y. To this purpose, consider the block Hankel matrices U_p, U_f, Y_p and Y_f, and assume that, at time s + 1, new data samples y_{s+1} and u_{s+1} are acquired. Then each of the block Hankel matrices is modified by the addition of a column:

$$
\begin{aligned}
u_{p,s+1} &= \begin{pmatrix} u_{j+1}^T & \cdots & u_{i+j}^T \end{pmatrix}^T \in \mathbb{R}^{mi \times 1}, \\
y_{p,s+1} &= \begin{pmatrix} y_{j+1}^T & \cdots & y_{i+j}^T \end{pmatrix}^T \in \mathbb{R}^{li \times 1}, \\
u_{f,s+1} &= \begin{pmatrix} u_{i+j+1}^T & \cdots & u_{2i+j}^T \end{pmatrix}^T \in \mathbb{R}^{mi \times 1}, \\
y_{f,s+1} &= \begin{pmatrix} y_{i+j+1}^T & \cdots & y_{2i+j}^T \end{pmatrix}^T \in \mathbb{R}^{li \times 1}.
\end{aligned}
$$

Then it is easy to show that

$$
\begin{aligned}
y_{p,s+1} &= \Gamma_i x_{j+1} + H_i u_{p,s+1} + n_{p,s+1}, \\
y_{f,s+1} &= \Gamma_i x_{i+j+1} + H_i u_{f,s+1} + n_{f,s+1},
\end{aligned}
$$

where the past and future stacked noise vectors are defined in the same way as, e.g., u_{p,s+1} and u_{f,s+1}.
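As an illustration, the stacked relation y_{f,s+1} = Γ_i x_{i+j+1} + H_i u_{f,s+1} can be verified numerically on a small noise-free state-space model. The sketch below is not the paper's code: the system matrices, dimensions and the helper construction of Γ_i and H_i are illustrative assumptions, and Python's 0-based indexing shifts the paper's x_{i+j+1} to `x[i + j]`.

```python
import numpy as np

# Hypothetical small LTI system x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k
rng = np.random.default_rng(0)
n, m, l, i = 2, 1, 1, 4                 # state, input, output dims; block rows
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = rng.standard_normal((n, m))
C = rng.standard_normal((l, n))
D = rng.standard_normal((l, m))

# Simulate a noise-free trajectory
N = 20
u = rng.standard_normal((N, m))
x = np.zeros((N + 1, n))
y = np.zeros((N, l))
for k in range(N):
    y[k] = C @ x[k] + D @ u[k]
    x[k + 1] = A @ x[k] + B @ u[k]

# Extended observability matrix Gamma_i and the lower block-triangular
# Toeplitz matrix H_i of Markov parameters
Gamma = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(i)])
H = np.zeros((l * i, m * i))
for r in range(i):
    for c in range(r + 1):
        blk = D if r == c else C @ np.linalg.matrix_power(A, r - c - 1) @ B
        H[r * l:(r + 1) * l, c * m:(c + 1) * m] = blk

# New "future" column at shift j (0-based indices)
j = 3
u_f = u[i + j: 2 * i + j].reshape(-1, 1)   # stacked future inputs
y_f = y[i + j: 2 * i + j].reshape(-1, 1)   # stacked future outputs

# Noise-free stacked relation: y_f = Gamma_i x + H_i u_f
assert np.allclose(y_f, Gamma @ x[i + j].reshape(-1, 1) + H @ u_f)
```

With noise added to y, the same relation holds up to the stacked noise vector n_{f,s+1}, which is what the recursive algorithms must average out.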

Recursive subspace identification algorithms consist, like their batch counterparts, of two steps. First, the data compression phase is updated. This can be done by updating the LQ decomposition (1) by means of Givens rotations [6], as described in [10, 12]. Second, the column space of Γ_i is updated. This is usually done by solving a minimization problem [5, 7, 10, 12, 16] in order to avoid an SVD.
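To make the second step concrete, the sketch below implements a minimal projection-approximation subspace tracking recursion in the spirit of PAST [27]: the dominant column space of a data sequence is tracked by a recursive least-squares minimization of the projection error, with no SVD. The function name, dimensions, initialization and forgetting factor are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def past_update(W, P, z, beta=0.99):
    """One PAST-style step: recursively minimize ||z - W W^T z||^2.

    W: current subspace basis estimate (p x r)
    P: inverse correlation matrix of the projected data y = W^T z (r x r)
    beta: exponential forgetting factor (1.0 = no forgetting)
    """
    y = W.T @ z
    h = P @ y
    g = h / (beta + y @ h)          # RLS gain vector
    P = (P - np.outer(g, h)) / beta
    e = z - W @ y                   # projection residual
    W = W + np.outer(e, g)          # rank-one basis update
    return W, P

# Usage: track a fixed 2-dimensional subspace of R^4 from noise-free samples.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 2)))   # true subspace basis
W = rng.standard_normal((4, 2))
P = np.eye(2)
for _ in range(500):
    z = Q @ rng.standard_normal(2)
    W, P = past_update(W, P, z, beta=1.0)

# The tracked basis should approximately span range(Q).
Wq, _ = np.linalg.qr(W)
assert np.linalg.norm(Wq - Q @ (Q.T @ Wq)) < 1e-2
```

In the identification setting, the vectors fed to such a recursion come from the updated L factor, and the tracked subspace plays the role of the column space of Γ_i.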

Two recursive subspace identification methods are considered in more detail in the following. The first method, named RPM2, is described in Section 4.1. The second one, called EIVPM [13], is given in Section 4.2.

4.1 RPM2<br />

The first idea in this algorithm is to update the LQ factorization (see Equation (1)) of the PI/PO schemes by means of Givens rotations. For that purpose, assume that, at time s + 1, a new set of input-output data vectors becomes available. Then a new column is added to the data matrices, and the decomposition must be written as

$$
\begin{pmatrix}
U_f & u_{f,s+1} \\
\Xi_p & \xi_{p,s+1} \\
Y_f & y_{f,s+1}
\end{pmatrix}
=
\begin{pmatrix}
L_{11,s} & 0 & 0 & u_{f,s+1} \\
L_{21,s} & L_{22,s} & 0 & \xi_{p,s+1} \\
L_{31,s} & L_{32,s} & L_{33,s} & y_{f,s+1}
\end{pmatrix}
\begin{pmatrix}
Q_{1,s} & 0 \\
Q_{2,s} & 0 \\
Q_{3,s} & 0 \\
0 & 1
\end{pmatrix},
$$

where $\xi_{p,s+1} = u_{p,s+1}$ for PI MOESP and $\xi_{p,s+1} = \begin{pmatrix} u_{p,s+1}^T & y_{p,s+1}^T \end{pmatrix}^T$ for PO MOESP. Givens rotations are then used twice to update this factorization. They are first applied in order to zero out the elements of the vector u_{f,s+1}, bringing the L factor to the form

$$
\begin{pmatrix}
L_{11,s+1} & 0 & 0 & 0 \\
L_{21,s+1} & L_{22,s} & 0 & \bar{z}_{p,s+1} \\
L_{31,s+1} & L_{32,s} & L_{33,s} & \bar{z}_{f,s+1}
\end{pmatrix}.
$$
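The rotation pass itself can be sketched generically: right-multiplied Givens rotations absorb an appended data column v into a lower-triangular factor L. This is the mechanism used above to zero u_{f,s+1} (there applied only against the first block row, leaving the residuals z̄_{p,s+1} and z̄_{f,s+1}); the sketch below annihilates the full column for simplicity, and the function name is hypothetical.

```python
import numpy as np

def absorb_column(L, v):
    """Annihilate an appended column v against lower-triangular L using
    right-multiplied Givens rotations, so that [L_new  0] = [L  v] G with
    G orthogonal (hence L_new L_new^T = L L^T + v v^T)."""
    L = L.astype(float).copy()
    v = v.astype(float).copy()
    for k in range(L.shape[0]):
        a, b = L[k, k], v[k]
        r = np.hypot(a, b)
        if r == 0.0:
            continue                    # nothing to annihilate in row k
        c, s = a / r, b / r
        col = L[:, k].copy()
        L[:, k] = c * col + s * v       # rotate columns k and the new one
        v = -s * col + c * v            # v[k] becomes (numerically) zero
    return L, v

# Usage: update an LQ data-compression factor with one new data column.
rng = np.random.default_rng(2)
L0 = np.tril(rng.standard_normal((5, 5)))
v0 = rng.standard_normal(5)
L1, v1 = absorb_column(L0, v0)
assert np.allclose(v1, 0)                                   # column absorbed
assert np.allclose(L1, np.tril(L1))                         # still triangular
assert np.allclose(L1 @ L1.T, L0 @ L0.T + np.outer(v0, v0)) # norm preserved
```

Because each rotation is orthogonal, only the L factor needs to be stored and updated; the growing Q factor never has to be formed, which is what makes the update cheap enough for online use.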
