Model Predictive Control

12.5 State Feedback Solution, 1-Norm and ∞-Norm Case 257

Proof: Since
$$x_k = A^k x_0 + \sum_{i=0}^{k-1} A^i \left[ B(w^p_{k-1-i})\, u_{k-1-i} + E w^a_{k-1-i} \right]$$
is a linear function of the disturbances $w^a \triangleq \{w^a_0, \ldots, w^a_{N-1}\}$ and $w^p \triangleq \{w^p_0, \ldots, w^p_{N-1}\}$ for a fixed input sequence and $x_0$, the cost function in the maximization problem (12.41) is convex and piecewise affine with respect to the optimization vectors $w^a$ and $w^p$ and the parameters $U_0$, $x_0$. The constraints in (12.44) are linear in $U_0$ and $x_0$ for any $w^a$ and $w^p$. Therefore, by Lemma 9, problem (12.41)–(12.45) can be solved by solving an mp-LP through the enumeration of all the vertices of the sets $\mathcal{W}^a \times \mathcal{W}^a \times \cdots \times \mathcal{W}^a$ and $\mathcal{W}^p \times \mathcal{W}^p \times \cdots \times \mathcal{W}^p$. The theorem follows from the mp-LP properties described in Theorem 6.4. ✷
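To make the vertex-enumeration argument concrete, the following sketch solves a one-step version of the min-max problem with additive disturbance only and ∞-norm cost as a single LP: the inner maximum is replaced by one epigraph constraint per vertex of $\mathcal{W}^a$. The system matrices, disturbance box, and input bounds below are made-up illustrative data, not an example from the text, and the fixed $x_0$ stands in for the parameter vector of the mp-LP.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the text): double integrator, additive
# disturbance w^a in the box |w_i| <= 0.1, input constraint |u| <= 1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
E = np.eye(2)
Q = np.eye(2)
R = np.array([[1.0]])
x0 = np.array([1.0, 0.5])
verts = [np.array([sx * 0.1, sy * 0.1]) for sx in (-1, 1) for sy in (-1, 1)]

m = B.shape[1]
# Decision vector z = [u, t1, t2]; minimizing t1 + t2 is the epigraph
# form of  max_w ||Q(A x0 + B u + E w)||_inf + ||R u||_inf.
c = np.concatenate([np.zeros(m), [1.0, 1.0]])
A_ub, b_ub = [], []
QB = Q @ B
for w in verts:                      # one block of constraints per vertex
    const = Q @ (A @ x0 + E @ w)
    for sign in (1.0, -1.0):         # |.| <= t1 row by row
        for i in range(QB.shape[0]):
            A_ub.append(np.concatenate([sign * QB[i], [-1.0, 0.0]]))
            b_ub.append(-sign * const[i])
for sign in (1.0, -1.0):             # ||R u||_inf <= t2
    for i in range(R.shape[0]):
        A_ub.append(np.concatenate([sign * R[i], [0.0, -1.0]]))
        b_ub.append(0.0)
bounds = [(-1.0, 1.0)] * m + [(None, None)] * 2
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
u_opt = res.x[:m]
print("worst-case-optimal u:", u_opt, "cost:", res.fun)
```

For this data the optimizer is $u = 0$ with worst-case cost $1.6$: the term $\max_w |1.5 + w_1| = 1.6$ dominates regardless of $u$, so any nonzero input only adds $\|Ru\|_\infty$. Treating $x_0$ as a free parameter instead of fixed data turns this LP into the mp-LP of the theorem.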

Remark 12.6 In the case of OL-CROC with additive disturbances only ($w^p(t) = 0$), the number of constraints in (12.45) can be reduced as explained in Remark 12.5.

12.5.2 Recursive Approach: Closed-Loop Predictions<br />

Theorem 12.3 There exists a state-feedback control law $u^*(k) = f_k(x(k))$, $f_k : \mathcal{X}_k \subseteq \mathbb{R}^n \to \mathcal{U} \subseteq \mathbb{R}^m$, solution of the CROC-CL (12.47)–(12.51) with cost (12.42) and $k = 0, \ldots, N-1$, which is time-varying, continuous and piecewise affine on polyhedra
$$f_k(x) = F^i_k x + g^i_k \quad \text{if } x \in CR^i_k, \quad i = 1, \ldots, N^r_k \qquad (12.78)$$
where the polyhedral sets $CR^i_k = \{x \in \mathbb{R}^n : H^i_k x \leq K^i_k\}$, $i = 1, \ldots, N^r_k$, are a partition of the feasible polyhedron $\mathcal{X}_k$. Moreover $f_k$, $k = 0, \ldots, N-1$, can be found by solving $N$ mp-LPs.
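Online, a law of the form (12.78) is evaluated by locating the region $CR^i_k$ containing the current state and applying the corresponding affine gain. The sketch below shows this point-location step; the two regions and gains are hypothetical data chosen to form a continuous piecewise-affine law on $[-1, 1]$, not the output of any actual mp-LP.

```python
import numpy as np

def pwa_feedback(x, regions):
    """Return F_i x + g_i for the polyhedral region CR_i containing x,
    where each region is a tuple (H, K, F, g) with CR_i = {x : H x <= K}."""
    for H, K, F, g in regions:
        if np.all(H @ x <= K + 1e-9):   # tolerance for boundary states
            return F @ x + g
    raise ValueError("x outside the feasible set X_k")

# Hypothetical partition of [-1, 1] at x = 0, with gains matched at the
# boundary so the law is continuous, as the theorem guarantees.
regions = [
    (np.array([[1.0], [-1.0]]), np.array([0.0, 1.0]),
     np.array([[-0.5]]), np.array([0.0])),   # CR_1: -1 <= x <= 0, u = -0.5 x
    (np.array([[-1.0], [1.0]]), np.array([0.0, 1.0]),
     np.array([[-1.0]]), np.array([0.0])),   # CR_2:  0 <= x <= 1, u = -x
]
print(pwa_feedback(np.array([-0.4]), regions))  # in CR_1: u = 0.2
print(pwa_feedback(np.array([0.6]), regions))   # in CR_2: u = -0.6
```

A linear scan over regions is the simplest point-location scheme; for large $N^r_k$, practical implementations replace it with a search tree built offline over the partition.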

Proof: Consider the first step $j = N-1$ of dynamic programming applied to the CROC-CL problem (12.47)–(12.49) with cost (12.42):
$$J^*_{N-1}(x_{N-1}) \triangleq \min_{u_{N-1}} \; J_{N-1}(x_{N-1}, u_{N-1}) \qquad (12.79)$$
$$\text{subj. to} \quad \begin{cases} F x_{N-1} + G u_{N-1} \leq f \\ A(w^p_{N-1}) x_{N-1} + B(w^p_{N-1}) u_{N-1} + E w^a_{N-1} \in \mathcal{X}_f \\ \qquad \forall\, w^a_{N-1} \in \mathcal{W}^a, \; w^p_{N-1} \in \mathcal{W}^p \end{cases} \qquad (12.80)$$
$$J_{N-1}(x_{N-1}, u_{N-1}) \triangleq \max_{w^a_{N-1} \in \mathcal{W}^a,\, w^p_{N-1} \in \mathcal{W}^p} \left\{ \begin{array}{l} \|Q x_{N-1}\|_p + \|R u_{N-1}\|_p \\ \; + \|P(A(w^p_{N-1}) x_{N-1} + B(w^p_{N-1}) u_{N-1} + E w^a_{N-1})\|_p \end{array} \right\}. \qquad (12.81)$$
The cost function in the maximization problem (12.81) is piecewise affine and convex with respect to the optimization vectors $w^a_{N-1}$, $w^p_{N-1}$ and the parameters $u_{N-1}$, $x_{N-1}$. Moreover, the constraints in the minimization problem (12.80) are linear in $(u_{N-1}, x_{N-1})$ for all vectors $w^a_{N-1}$, $w^p_{N-1}$. Therefore, by Corollary 12.1,
