
7.3 Solution via Recursive Approach

cost $q(x_{j-1}, u_{j-1})$ and the optimal cost-to-go $J^*_{j\to N}(x_j)$ from time $j$ onwards.

$$
\begin{aligned}
J^*_{j-1\to N}(x_{j-1}) = \min_{u_{j-1}} \quad & q(x_{j-1}, u_{j-1}) + J^*_{j\to N}(x_j) \\
\text{subj. to} \quad & x_j = g(x_{j-1}, u_{j-1}) \\
& h(x_{j-1}, u_{j-1}) \le 0 \\
& x_j \in \mathcal{X}_{j\to N}
\end{aligned}
\tag{7.13}
$$

Here the only decision variable left for the optimization is $u_{j-1}$, the input at time $j-1$. All the other inputs $u^*_j, \dots, u^*_{N-1}$ have already been selected optimally to yield the optimal cost-to-go $J^*_{j\to N}(x_j)$. We can rewrite (7.13) as

$$
\begin{aligned}
J^*_{j-1\to N}(x_{j-1}) = \min_{u_{j-1}} \quad & q(x_{j-1}, u_{j-1}) + J^*_{j\to N}(g(x_{j-1}, u_{j-1})) \\
\text{subj. to} \quad & h(x_{j-1}, u_{j-1}) \le 0 \\
& g(x_{j-1}, u_{j-1}) \in \mathcal{X}_{j\to N},
\end{aligned}
\tag{7.14}
$$

making the dependence of $x_j$ on the initial state $x_{j-1}$ explicit.
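To make the structure of (7.14) concrete, the sketch below solves a single backward step for a scalar example, treating $J^*_{j\to N}$ as a known function. The dynamics, costs, input bound, and the use of SciPy here are illustrative assumptions, not taken from the text.

```python
# One backward step of (7.14) for a scalar example: minimize over u_{j-1} only.
# g, q, J_next and the input bound are hypothetical stand-ins.
import numpy as np
from scipy.optimize import minimize

def g(x, u):                 # assumed dynamics x_j = g(x_{j-1}, u_{j-1})
    return 0.9 * x + u

def q(x, u):                 # assumed stage cost q(x_{j-1}, u_{j-1})
    return x**2 + 0.1 * u**2

def J_next(x):               # stand-in for the known cost-to-go J*_{j->N}
    return 2.0 * x**2

def dp_step(x_prev):
    # objective in the single decision variable u_{j-1}
    obj = lambda u: q(x_prev, u[0]) + J_next(g(x_prev, u[0]))
    # h(x, u) <= 0 encoded here as u^2 <= 1, written as 1 - u^2 >= 0 for SLSQP
    con = {'type': 'ineq', 'fun': lambda u: 1.0 - u[0]**2}
    res = minimize(obj, x0=[0.0], method='SLSQP', constraints=[con])
    return res.x[0], res.fun  # optimal u_{j-1} and J*_{j-1->N}(x_{j-1})

u_opt, J_opt = dp_step(0.5)
print(u_opt, J_opt)
```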

The optimization problem (7.14) suggests the following recursive algorithm backwards in time to determine the optimal control law. We start with the terminal cost and constraint
$$
J^*_{N\to N}(x_N) = p(x_N) \tag{7.15}
$$
$$
\mathcal{X}_{N\to N} = \mathcal{X}_f, \tag{7.16}
$$
and then proceed backwards
$$
\begin{aligned}
J^*_{N-1\to N}(x_{N-1}) = \min_{u_{N-1}} \quad & q(x_{N-1}, u_{N-1}) + J^*_{N\to N}(g(x_{N-1}, u_{N-1})) \\
\text{subj. to} \quad & h(x_{N-1}, u_{N-1}) \le 0, \\
& g(x_{N-1}, u_{N-1}) \in \mathcal{X}_{N\to N} \\
& \;\;\vdots \\
J^*_{0\to N}(x_0) = \min_{u_0} \quad & q(x_0, u_0) + J^*_{1\to N}(g(x_0, u_0)) \\
\text{subj. to} \quad & h(x_0, u_0) \le 0, \\
& g(x_0, u_0) \in \mathcal{X}_{1\to N} \\
& x_0 = x(0).
\end{aligned}
\tag{7.17}
$$
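As a concrete illustration, the backward recursion (7.15)–(7.17) can be carried out numerically by tabulating the cost-to-go on a grid of states. The sketch below does this for a scalar example; the dynamics, costs, horizon, and grids are illustrative assumptions, not taken from the text, and gridding with interpolation stands in for constructing $J^*_{j\to N}$ exactly.

```python
# Minimal numerical sketch of the backward recursion (7.15)-(7.17) for a
# scalar example; g, q, p, the horizon and the grids are assumptions.
import numpy as np

N = 5                                   # horizon
x_grid = np.linspace(-2.0, 2.0, 201)    # gridded state space
u_grid = np.linspace(-1.0, 1.0, 51)     # gridded input space

def g(x, u):            # dynamics x_j = g(x_{j-1}, u_{j-1})
    return 0.9 * x + u

def q(x, u):            # stage cost q(x, u)
    return x**2 + 0.1 * u**2

def p(x):               # terminal cost p(x_N)
    return 10.0 * x**2

# Terminal condition (7.15): J*_{N->N}(x_N) = p(x_N), tabulated on the grid
J = p(x_grid)

# Backward recursion (7.17): each step minimizes over the single input u_{j-1}
for j in range(N, 0, -1):
    J_next = np.empty_like(x_grid)
    for i, x in enumerate(x_grid):
        # cost-to-go of the successor state, evaluated by interpolation;
        # successors leaving the grid are penalized, a crude stand-in for X_{j->N}
        x_plus = g(x, u_grid)
        cost = q(x, u_grid) + np.interp(x_plus, x_grid, J, left=1e6, right=1e6)
        J_next[i] = cost.min()
    J = J_next          # J now tabulates J*_{j-1 -> N}

print("J*_{0->N}(0.5) approx:", np.interp(0.5, x_grid, J))
```

At each backward step the minimization is over the single input $u_{j-1}$ only, exactly as in (7.14), while the tabulated values of $J$ play the role of the cost-to-go computed at the previous step.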

The recursive algorithm (7.15)–(7.17), popularized by Bellman, is referred to as dynamic programming. The dynamic programming problem is appealing because it can be stated compactly and because at each step the optimization takes place over only one element $u_j$ of the optimization vector. This optimization is rather complex, however: it is not a standard nonlinear programming problem, since we have to construct the optimal cost-to-go $J^*_{j\to N}(x_j)$, a function defined over the subset $\mathcal{X}_{j\to N}$ of the state space. In a few special cases we know the type of function and can find it efficiently. For example, in the next chapter we will cover the case when the system is linear and the cost is quadratic. Then the optimal cost-to-go is also quadratic and can be constructed rather easily. Later in the book we will show that, when constraints are added to this problem, the optimal cost-to-go becomes piecewise quadratic and efficient algorithms for its construction are also available.
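As a preview of the linear-quadratic case mentioned above, assume (purely for illustration; this notation is not introduced in this section) dynamics $x_{k+1} = Ax_k + Bu_k$, stage cost $x^\top Q x + u^\top R u$, terminal cost $x^\top P_f x$, and no constraints. The quadratic cost-to-go $J^*_{j\to N}(x) = x^\top P_j x$ can then be built by a backward Riccati recursion, sketched below.

```python
# Hedged sketch of the unconstrained linear-quadratic case: the cost-to-go
# stays quadratic, so the backward pass only updates the matrix P.
# A, B, Q, R, Pf and N below are example data, not taken from the text.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.1]])
Pf = np.eye(2)
N = 10

P = Pf                                   # P_N = Pf, i.e. J*_{N->N}(x) = x' Pf x
for _ in range(N):
    # Riccati step: P_{j-1} = Q + A'P A - A'P B (R + B'P B)^{-1} B'P A
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

x0 = np.array([1.0, 0.0])
print("J*_{0->N}(x0) =", x0 @ P @ x0)    # optimal cost-to-go at x0
```

Here each backward step updates only a small matrix rather than a function tabulated over the state space, which is what makes the unconstrained linear-quadratic case so much cheaper than the general recursion.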
