Quantitative Local Analysis of Nonlinear Systems

by

Ufuk Topcu

B.S. (Bogazici University, Istanbul) 2003
M.S. (University of California, Irvine) 2005

A dissertation submitted in partial satisfaction
of the requirements for the degree of
Doctor of Philosophy
in
Engineering - Mechanical Engineering
in the
GRADUATE DIVISION
of the
UNIVERSITY OF CALIFORNIA, BERKELEY

Committee in charge:

Professor Andrew K. Packard, Chair
Professor Kameshwar Poolla
Professor Laurent El Ghaoui

Fall 2008
The dissertation of Ufuk Topcu is approved.

Chair    Date
         Date
         Date

University of California, Berkeley
Fall 2008
Quantitative Local Analysis of Nonlinear Systems

Copyright © 2008
by
Ufuk Topcu
Abstract

Quantitative Local Analysis of Nonlinear Systems

by

Ufuk Topcu

Doctor of Philosophy in Engineering - Mechanical Engineering

University of California, Berkeley

Professor Andrew K. Packard, Chair

This thesis investigates quantitative methods for local robustness and performance analysis of nonlinear dynamical systems with polynomial vector fields. We propose measures that quantify a system's robustness against uncertainties in initial conditions (regions-of-attraction) and external disturbances (local reachability/gain analysis). S-procedure and sum-of-squares relaxations are used to translate Lyapunov-type characterizations into sum-of-squares optimization problems. These problems are typically bilinear and nonconvex (a consequence of the local, rather than global, nature of the analysis), and their size grows rapidly with the dimension of the state and uncertainty spaces.
Our approach exploits system-theoretic interpretations of these optimization problems to reduce their complexity. We propose a methodology that incorporates simulation data into formal proof construction, enabling a more reliable and efficient search for robustness and performance certificates than the direct use of general-purpose solvers. This technique is adapted to both region-of-attraction and reachability analysis. We extend the analysis to uncertain systems by taking an intentionally simplistic and potentially conservative route, namely employing parameter-independent, rather than parameter-dependent, certificates. The conservatism is then reduced by a branch-and-bound type refinement procedure.
The main strength of these methods is their suitability for parallel computing, achieved by decomposing otherwise challenging problems into relatively tractable smaller ones. We demonstrate the proposed methods on several small- and medium-size examples in each chapter and apply each method to a benchmark example with an uncertain short-period pitch-axis model of an aircraft.

Additional practical issues leading to a more rigorous basis for the proposed methodology, as well as promising further research topics, are also addressed. We show that stability of the linearized dynamics is not only necessary but also sufficient for the feasibility of the formulations in region-of-attraction analysis. Furthermore, we generalize an upper bound refinement procedure in local reachability/gain analysis which effectively generates non-polynomial certificates from polynomial ones. Finally, broader applicability of optimization-based tools hinges on the availability of scalable, hierarchical algorithms. As an initial step in this direction, we propose a local small-gain theorem and apply it to stability-region analysis in the presence of unmodeled dynamics.

Professor Andrew K. Packard
Dissertation Committee Chair
To my parents Züleyha and Fevzi
for their support and patience,
and
to my wife Zeynep
for sharing the joy of life with me.
Contents

Contents  ii
List of Figures  v
List of Tables  ix

1 Introduction  1
    1.1 Thesis Overview and Contributions  1
    1.2 Summary of Examples  6

2 Background  8
    2.1 Semidefinite and Linear Programming  9
    2.2 Sum-of-Squares Polynomials and Sum-of-Squares Programming  12
    2.3 Generalized S-procedure and Positivstellensatz  15
    2.4 Preliminary Remarks  20
    2.5 Chapter Summary  21

3 Simulation-Aided Region-of-Attraction Analysis  22
    3.1 Characterization of Invariant Subsets of ROA and Bilinear SOS Problem  24
    3.2 Relaxation of the Bilinear SOS Problem Using Simulation Data  27
    3.3 Examples  36
    3.4 Critique  45
    3.5 Sanity Check: Does Linear Stability Imply Existence of SOS Certificates  48
    3.6 Appendix  52
    3.7 Chapter Summary  54

4 Local Stability Analysis for Uncertain Nonlinear Systems  55
    4.1 Setup and Motivation  57
    4.2 Computation of Robustly Invariant Sets  60
    4.3 Implementation Issues  64
    4.4 Sanity Check: Does Robust Stability Imply Existence of SOS Certificates  67
    4.5 Examples  69
    4.6 Chapter Summary  75
    4.7 Appendix  76

5 Extensions of the Robust Region-of-Attraction Analysis: Refinements and Non-affine Uncertainty Dependence  77
    5.1 Setup and Estimation of the Robust ROA of Systems with Affine Parametric Uncertainty  79
    5.2 Polynomial Parametric Uncertainty  84
    5.3 Branch-and-Bound Type Refinement in the Parameter Space  88
    5.4 Examples  92
    5.5 Chapter Summary  98

6 Reachability and Local Gain Analysis for Nonlinear Dynamical Systems  100
    6.1 Upper and Lower Bounds for the Reachable Set and Local Input-Output Gains  101
    6.2 Upper Bound Refinement for Reachability and L2 → L2 Gain Analysis  107
    6.3 Simulation-Based Relaxation for the Bilinear SOS Problem in Reachability Analysis  109
    6.4 Examples  115
    6.5 Region-of-Attraction Analysis for Systems with Unmodeled Dynamics using a Local Small-Gain Theorem  119
    6.6 Chapter Summary  125

7 Conclusions  126

Bibliography  129
List of Figures

3.1 Sets Y and B and points generated by the H&R algorithm. Φ_j and b_j denote the j-th column of Φ and the j-th component of b.  32
3.2 Histograms of β∗_L before CWOpt (black bars) and β∗_L after CWOpt (white bars) for ∂(V) = 2 (top), 4 (middle), and 6 (bottom).  37
3.3 The invariant subsets of the ROA (dot: ∂(V) = 2, dash: ∂(V) = 4, and solid: ∂(V) = 6 (indistinguishable from the outermost curve for the limit cycle)).  38
3.4 Invariant subset of the ROA for (E6) reported in [32] (solid surface) and that computed using the sequential procedure from section 3.3.2 (dotted surface).  41
3.5 Invariant subset of the ROA for (E7) reported in [75] (thick solid curve), that computed using the sequential procedure from section 3.3.2 (Ω_{V,γ∗}) (thin solid curve), and system trajectories (dash-dot curves).  41
3.6 Histograms of β∗_L before CWOpt (black bars) and β∗_L after CWOpt (white bars) for ∂(V) = 2 (top) and 4 (bottom).  43
3.7 A slice of the invariant subset of the ROA (solid curve) and initial conditions (with x2 = 0 and x4 = 0) for diverging trajectories (dots).  44

4.1 Invariant subsets of the ROA reported in [18] (black curve) and those computed by solving the problem in (4.13) with ∂(V) = 2 (blue curve) and ∂(V) = 4 (green curve), along with initial conditions (red stars) for some divergent trajectories of the system corresponding to α = 1.  70
4.2 Invariant subsets of the ROA with ∂(V) = 4 (green curve) and ∂(V) = 6 (blue curve), along with the unstable limit cycle (red curves) of the system corresponding to α = −1.0, −0.8, ..., 0.8, 1.0.  72
4.3 Invariant subsets of the ROA with ∂(V) = 2 (green) and ∂(V) = 4 (blue), along with initial conditions (red stars) for divergent trajectories.  73

5.1 Polytopic cover for {(ζ, ψ) ∈ R² : ζ ∈ [0, 1], ψ = ζ²} with 2 cells (red) and 4 cells (yellow). The black curve is the set {(ζ, ψ) ∈ R² : ζ ∈ [0, 1], ψ = ζ²}.  93
5.2 Curves with "◦" are the lower bounds obtained by directly solving (5.6) with D taken as the vertices of the corresponding cell; curves with "⋄" are the lower bounds obtained by applying the sequential procedure from section 4.3, taking ∆_sample (in the first step) as the center of the corresponding cell; curves with "×" (in the top figure only) and "⋆" show β_nc and β_lp, respectively.  94
5.3 Estimates of the robust ROA: from [19] (black) and using the branch-and-bound based method for ∂(V) = 2 (red) and ∂(V) = 4 (green).  95
5.4 Lower bounds for β∗_∆ with ∂(V) = 2 (green solid curve with "×" marker) and ∂(V) = 4 (blue solid curve with "⋄" marker), and β_nc (red solid curve with "◦" marker), computed at the centers of the cells generated by the B&B algorithm for the ∂(V) = 4 run. Dashed curves are (computed values of) β_{δ}, where δ is the center of the cell with the smallest lower bound at the corresponding step of the B&B refinement procedure, for ∂(V) = 2 (green curve with "×" marker) and ∂(V) = 4 (blue curve with "⋄" marker).  96
5.5 Final partition generated by the B&B algorithm for the ∂(V) = 4 run.  97
5.6 Controlled short-period aircraft dynamics with uncertain first-order linear time-invariant dynamics (δ_p := (δ1, δ2)).  97

6.1 Bounds on reachable sets due to disturbances w with ‖w‖₂² ≤ R² for Example 1 without delay (blue curve with dots: before refinement, green curve with ×: after refinement, red curve with ⋄: lower bound).  115
6.2 Bounds on reachable sets due to disturbances w with ‖w‖₂² ≤ R² for Example 2 (blue curve with dots: before refinement, green curve with ×: after refinement, red curve with ⋄: lower bound, and black circles: failed PENBMI runs).  116
6.3 Bounds on reachable sets due to disturbances w with ‖w‖₂² ≤ R² for Example 2 (blue curve with dots: before refinement, green curve with ×: after refinement, red curve with ⋄: lower bound, and black circles: failed PENBMI runs).  117
6.4 Upper bounds for ∂(V) = 2 (with ⋄) and ∂(V) = 4 (with ×) before the refinement (blue curves) and after the refinement (green curves), along with the lower bounds (red curve).  118
6.5 Feedback interconnection of ∆ and M.  119
6.6 Controlled short-period aircraft dynamics with unmodeled dynamics (δ_p := (δ1, δ2)).  124
List of Tables

3.1 Parameters used in and results of the SimLFG and CWOpt algorithms.  36
3.2 Volume ratios for (E1)-(E7).  40
3.3 Results of the SimLFG and CWOpt algorithms. Upper bounds are established by a separate run of the SimLFG algorithm with N_conv = 3000. The upper bound for ∂(V) = 4 is by a divergent trajectory, whereas the other upper bound is by the infeasibility of (3.6), (3.11), and (3.12) for the given β value. Representative computation times are on a 2.0 GHz desktop PC.  43

4.1 N_SDP (left columns) and N_decision (right columns) for different values of n and 2d.  59
4.2 Number of decision variables in (4.13) (top entry in each cell of the table) and in (4.15) (bottom entry in each cell of the table) for ∂(V) = 2, ∂(s_2δ) = 2, and ∂(s_3δ) = 0.  67
4.3 Optimal values of β in the problem (4.13) with different values of µ and ∂(V) = 2 and 4.  70
4.4 Optimal values of β in the problem (4.13) with different values of µ and ∂(V) = 4 and 6.  71
4.5 Optimal value of β in the first step, β_sample, and β_subopt with µ for ∂(V) = 2 and 4.  74

6.1 Computed (sub)optimal values of β with p(x) = xᵀx (with ∂(V) = 2 / ∂(V) = 4).  125
List of Symbols

R              The real numbers
R^n            Real n-vectors
R^{n×m}        Real n-by-m matrices
Z              The ring of integers
Z_+            Positive integers
C^1            The set of continuously differentiable real-valued functions on R^n
M ≽ 0 (x ≽ 0)  M is symmetric and positive semidefinite (for x ∈ R^n: entries of x are non-negative)
M ≻ 0 (x ≻ 0)  M is symmetric and positive definite (for x ∈ R^n: entries of x are positive)
R[x]           The set of polynomials (of certain finite degree) in x with real coefficients
Σ[x]           The set of sum-of-squares polynomials in R[x]
∂(π)           The degree of π ∈ R[x]
Ω_{f,η}        The η-sublevel set of f, i.e., for η ∈ R and f : R^n → R, Ω_{f,η} := {x ∈ R^n : f(x) ≤ η}
Acknowledgements

It has been a short stay at Berkeley, and I have enjoyed every minute of it. I would like to take this opportunity to thank the following people.

I would like to express my gratitude to Andy Packard for making my studies at Berkeley possible and for being a role model with his passion for research. I want to extend heartfelt thanks to Laurent El Ghaoui for long discussions on optimization and for his support and guidance. Special thanks to Kameshwar Poolla for being a great mentor and for his interest in my academic progression. I also want to thank Pete Seiler for valuable discussions and for providing me with a summer internship opportunity at Honeywell Labs.

Thanks to all at the BCCI for their friendship and support, and for creating a pleasant working environment.

Finally, I would like to acknowledge the financial support from the Air Force Office of Scientific Research under grant/contract number FA9550-05-1-0266 and to thank the program managers Sharon Heise and Scott Wells.
Chapter 1

Introduction

1.1 Thesis Overview and Contributions

The objective of this thesis is to develop quantitative robustness and performance analysis tools for nonlinear dynamical systems. Nonlinear systems possess local properties that are not global. For example, an asymptotically stable equilibrium point may not be globally attractive, and input-output properties may vary radically for different ranges of disturbance levels. Therefore, we emphasize local rather than global analysis and focus on the following measures of robustness: (i) inner estimates of the region-of-attraction of an equilibrium point; (ii) outer estimates of reachable sets under bounded disturbances; (iii) upper bounds for local input-output gains. In each case, we account for modeling uncertainties and extend the applicability of the proposed tools to uncertain systems.

Using Lyapunov/storage function type characterizations of these robustness measures and S-procedure type relaxations, analysis questions are translated into the verification of global nonnegativity of functions satisfying certain properties.
Polynomial optimization (more specifically, sum-of-squares programming) provides an effective framework for this verification. Therefore, we restrict our attention to systems with polynomial vector fields and polynomial robustness certificates and obtain bilinear sum-of-squares programming problems with two challenging features: nonconvexity and rapid growth of the problem size with increasing state and/or uncertainty space dimension. Our approach is based on exploiting system-theoretic interpretations of these problems to reduce their complexity. For example, we propose a methodology that incorporates simulations into formal proof generation by constructing convex outer bounds on the set of feasible Lyapunov/storage functions. Lyapunov/storage function candidates drawn from this outer-bound set either directly qualify as robustness certificates or can be used as initial seeds for further bilinear search with more efficient and reliable performance. Another example comes from the realization that, in the presence of parametric uncertainties, the dependence of the constraints in the resulting optimization problems on the two groups of indeterminate variables, namely states and uncertain parameters, is not the same. To take advantage of this difference, we choose not to reflect the dependence of robustness properties in the form of the Lyapunov/storage function candidates and instead employ parameter-independent certificates. The conservatism due to this choice is reduced by sub-partitioning the uncertainty set with a branch-and-bound type refinement procedure. A common feature of the proposed techniques is that most of the computation is "embarrassingly parallel" [1].
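The simulation-based outer-bounding idea can be illustrated with a small numerical sketch. The code below simulates trajectories of a hypothetical two-state cubic system, collects the constraints (linear in the coefficients of a quadratic candidate V(x) = xᵀPx) that V must satisfy along them, and draws one candidate from the resulting convex set by least squares. Everything here, the dynamics, the candidate class, and the fitting rule, is an illustrative stand-in, not the thesis's construction; a candidate produced this way still has to be certified by the affine SOS programs described above.

```python
import numpy as np

# Hypothetical two-state system with a cubic vector field (not the thesis
# benchmark); the origin is locally asymptotically stable.
def f(x):
    x1, x2 = x
    return np.array([-x1 + x2, -x1 - x2 + x1**3])

# 1) Simulate convergent trajectories (forward Euler) and collect states.
rng = np.random.default_rng(0)
samples = []
for _ in range(20):
    x = rng.uniform(-0.5, 0.5, size=2)
    for _ in range(200):
        samples.append(x.copy())
        x = x + 0.01 * f(x)
X = np.array(samples)

# 2) Each sampled state x_k yields a constraint that is *linear* in the
#    coefficients p = (p1, p2, p3) of V(x) = p1*x1^2 + 2*p2*x1*x2 + p3*x2^2,
#    namely Vdot(x_k) = grad V(x_k) . f(x_k) < 0.  These constraints form a
#    convex outer bound on the feasible set; here we simply pick a candidate
#    by fitting Vdot(x_k) ~ -|x_k|^2 in the least-squares sense.
def vdot_coeffs(x):
    x1, x2 = x
    f1, f2 = f(x)
    return np.array([2*x1*f1, 2*(x2*f1 + x1*f2), 2*x2*f2])

A = np.array([vdot_coeffs(x) for x in X])
b = -np.sum(X**2, axis=1)
p = np.linalg.lstsq(A, b, rcond=None)[0]
P = np.array([[p[0], p[1]], [p[1], p[2]]])

# 3) The candidate satisfies every sampled constraint (P > 0 and Vdot < 0 at
#    all samples), so it is a plausible seed for the SOS certification step.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(A @ p < 0)
```

Because the sampled constraints are only necessary conditions, the candidate lies in an outer bound of the true feasible set; the subsequent affine SOS check either certifies it directly or rejects it, exactly as described above.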
The goal of this work is to lay out a feasible path toward computationally efficient and reliable schemes for analyzing the behavior of nonlinear systems with 15 states, 5 uncertain parameters, and cubic polynomial vector fields. Heading toward this goal, the restrictions and choices (on the system description or the form of the certificates) we make throughout this thesis are motivated by the trade-off between system complexity, available computational resources, and the strength of the proofs.
The content and the contributions of the respective chapters are outlined below.

Chapter 2 presents a summary of background material focusing on the aspects needed for the development in the subsequent chapters. Mainly, this material, coupled with Lyapunov-type theorems introduced in the respective chapters, will be used to translate system analysis questions into numerical optimization problems. A brief overview of semidefinite programming, centered on the distinguishing properties of affine and bilinear semidefinite programs, is followed by an introduction to sum-of-squares polynomials and sum-of-squares programming. Finally, a generalization of the S-procedure, a central tool for handling set containment conditions, and its link to the Positivstellensatz are discussed.
In Chapter 3, we propose a method for computing invariant subsets of the region-of-attraction for asymptotically stable equilibrium points of dynamical systems with polynomial vector fields. We use polynomial Lyapunov functions as local stability certificates, certain sublevel sets of which are invariant subsets of the region-of-attraction. As with many local analysis problems, this is a nonconvex problem. Furthermore, its sum-of-squares relaxation leads to a bilinear optimization problem. We develop a method utilizing information from simulations to easily generate Lyapunov function candidates. For a given Lyapunov function candidate, checking its feasibility and assessing the size of the associated invariant subset are affine sum-of-squares optimization problems. Solutions to these problems provide invariant subsets of the region-of-attraction directly, and/or they can further be used as seeds for local bilinear search schemes or iterative coordinate-wise affine search schemes, improving the performance of these schemes. We report promising results in all these directions. Finally, it is shown that, for systems with cubic polynomial vector fields, if the linearized dynamics are exponentially stable, then the bilinear sum-of-squares programming problems (formulated to estimate the region-of-attraction) are always feasible. The material presented in this chapter is partly parallel to [73, 70].
Chapter 4 is dedicated to developing a method to compute provably invariant subsets of the region-of-attraction for asymptotically stable equilibrium points of uncertain nonlinear dynamical systems. We consider polynomial dynamics with perturbations that either obey local polynomial bounds or are described by uncertain parameters multiplying polynomial terms in the vector field. This uncertainty description is motivated both by limitations in modeling and by the bilinearity and dimension of the sum-of-squares programming problems whose solutions provide invariant subsets of the robust region-of-attraction. Finally, we discuss a sequential suboptimal solution technique suitable for parallel computing. The technique proposed in this chapter, coupled with the sequential implementation, is conservative. Yet, its computational complexity is lower than that of other methods with more elegant formulations based on parameter-dependent Lyapunov functions. Therefore, it provides a relatively feasible infrastructure for the extensions in Chapter 5. More importantly, lessons learnt from the study reported in this chapter apply to other analysis (and possibly synthesis) questions with parametric uncertainty. The material presented in this chapter is partly parallel to [68, 67].
In Chapter 5, we extend the applicability of the method proposed in Chapter 4 to handle systems with non-affine uncertainty dependence and to reduce the conservatism associated with using a common Lyapunov function for an entire family of uncertain systems. Non-affine appearances of uncertain parameters in the vector field are replaced by artificial parameters, and the graphs of the non-affine functions of the uncertain parameters are covered by bounded polytopes. Conservatism (due to parameter-independent Lyapunov functions) is reduced by partitioning the uncertainty set using a branch-and-bound type refinement procedure. The approach offers the following advantages: (i) The parameter-dependent Lyapunov functions achieved by uncertainty-space partitioning do not require an a priori parametrization of the Lyapunov function in the uncertain parameters. (ii) It leads to optimization problems with smaller semidefiniteness constraints, since uncertain parameters do not explicitly appear in the constraints. Although the size of the semidefinite programming constraints does not increase with the number of uncertain parameters, their number does, and the problem becomes challenging as the number of uncertain parameters increases. (iii) A sequential implementation for computing suboptimal solutions, which decouples these constraints into smaller, independent problems, arises naturally. This is suitable for trivial parallel computing, offering a major advantage over approaches utilizing parameter-dependent Lyapunov functions. Most of the results of this chapter are reported in [71, 72].
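The refinement loop itself is simple and embarrassingly parallel. The sketch below shows its skeleton for a scalar uncertain parameter: the function `lower_bound(a, b)` stands in for the expensive SOS computation that certifies a robustness level valid for all parameters in the cell [a, b], and the toy β(δ) and the width-dependent conservatism term are invented purely for illustration.

```python
import heapq

def refine(lower_bound, interval, steps):
    """Branch-and-bound type refinement over a scalar uncertainty interval.

    lower_bound(a, b) returns a robustness level certified, with a single
    parameter-independent certificate, for every parameter value in [a, b].
    Splitting the cell with the smallest bound shrinks the cells where the
    common certificate is most conservative."""
    a, b = interval
    cells = [(lower_bound(a, b), a, b)]        # min-heap keyed on the bound
    for _ in range(steps):
        _, a, b = heapq.heappop(cells)         # worst (smallest-bound) cell
        m = 0.5 * (a + b)
        heapq.heappush(cells, (lower_bound(a, m), a, m))
        heapq.heappush(cells, (lower_bound(m, b), m, b))
    # The bound valid over the whole interval is the worst cell bound.
    return min(bound for bound, _, _ in cells)

# Toy stand-ins: a "true" robustness level beta(d), and a cell bound whose
# conservatism decays with the cell width (as with a common certificate).
beta = lambda d: 1.0 / (1.0 + d * d)
cell_bound = lambda a, b: min(beta(a), beta(b)) - 0.1 * (b - a)

coarse = refine(cell_bound, (-1.0, 1.0), 0)    # one cell, very conservative
fine = refine(cell_bound, (-1.0, 1.0), 8)      # after 8 refinement steps
assert coarse < fine <= min(beta(-1.0), beta(1.0))
```

Each cell's bound computation is independent of the others, which is exactly the structure that makes the approach attractive for trivial parallelization.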
In Chapter 6, we analyze reachability properties and local input/output gains of systems with polynomial vector fields. Upper bounds for the reachable set and nonlinear system gains are characterized using Lyapunov/storage functions and computed by solving bilinear sum-of-squares programming problems. A procedure to refine the upper bounds by transforming polynomial Lyapunov/storage functions into non-polynomial Lyapunov functions is developed. The simulation-aided analysis methodology is adapted to reachability and local gain analysis. Finally, a local small-gain theorem is proposed and applied to robust region-of-attraction analysis for systems with unmodeled dynamics. Parts of the material presented in this chapter are reported in [69].
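For context, the classical (global, linear) small-gain condition that gets localized here can be checked in a few lines. Consider two stable first-order systems G_i: ẋ_i = -a_i x_i + u_i, y_i = c_i x_i, each with L2 gain c_i/a_i, connected in feedback (u1 = y2, u2 = y1); the numbers below are made up for illustration and are not from the thesis.

```python
import numpy as np

# Two first-order systems G_i: xdot_i = -a_i x_i + u_i, y_i = c_i x_i.
# Each has L2 (H-infinity) gain gamma_i = c_i / a_i.
a1, c1 = 2.0, 1.0
a2, c2 = 4.0, 3.0
gamma1, gamma2 = c1 / a1, c2 / a2

# Small-gain condition for the feedback loop u1 = y2, u2 = y1:
assert gamma1 * gamma2 < 1.0

# Consistency check: the closed-loop matrix is indeed Hurwitz, so the
# interconnection is stable, as the small-gain theorem predicts.
A_cl = np.array([[-a1, c2],
                 [c1, -a2]])
assert np.all(np.linalg.eigvals(A_cl).real < 0)
```

Here γ1·γ2 = 0.375 < 1 and the closed-loop eigenvalues are -1 and -5. The local version developed in this thesis replaces the global gains with gains valid only on bounded signal sets, so the stability conclusion holds on a quantified region rather than globally.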
1.2 Summary <strong>of</strong> Examples<br />
Listed below are the examples in this thesis.<br />
For all examples, the problem data<br />
and results (mainly Lyapunov functions and corresponding multipliers) are available at<br />
http://jagger.me.berkeley.edu/~utopcu/dissertation.<br />
1) Simulation-aided region-<strong>of</strong>-attraction analysis<br />
• Van der Pol dynamics §3.3.1<br />
• Examples from [20, 75, 33, 32] § 3.3.2<br />
• Controlled short period aircraft dynamics §3.3.3<br />
• Pendubot dynamics §3.3.4<br />
• Closed-loop dynamics with nonlinear observer based controller §3.3.5<br />
2) Region-<strong>of</strong>-attraction analysis for uncertain nonlinear systems<br />
• Uncertain Van der Pol dynamics and examples from [18, 78] §4.5.1<br />
• Uncertain controlled short period aircraft dynamics § 4.5.2<br />
3) Extensions <strong>of</strong> robust region-<strong>of</strong>-attraction analysis<br />
• An example from [19] § 5.4.1<br />
• Uncertain controlled short period aircraft dynamics § 5.4.2<br />
• Uncertain controlled short period aircraft dynamics with first-order uncertain dynamics<br />
§5.4.3<br />
4) <strong>Local</strong> reachability and gain analysis<br />
• Reachability analysis for an example from the literature §6.4<br />
6
• Reachability analysis for Pendubot dynamics §6.4
• L_2 → L_2 gain analysis for a system with adaptive controller §6.4
• Robust region-of-attraction analysis of controlled short period aircraft dynamics with unmodeled dynamics §6.5.1
Chapter 2

Background

The goal in the following chapters is to develop computational tools for estimating certain robustness and performance measures (e.g., regions-of-attraction and input-output gains) for nonlinear systems. The general strategy for computing these measures consists of three main steps:

i. Characterize these measures using Lyapunov-type functions that satisfy certain conditions which can be translated to set containment (or set emptiness) questions.

ii. Obtain "S-procedure" type sufficient conditions for the set containment constraints.

iii. Search for Lyapunov-type certificates and S-procedure multipliers by solving numerical optimization (or feasibility) problems.

In the following sections of this chapter, we present a summary of the background material needed to carry out this plan.
2.1 Semidefinite and Linear Programming

2.1.1 Semidefinite Programming

A semidefinite program (SDP)¹ is an optimization problem with a linear objective and a matrix semidefiniteness constraint. Formally, for c ∈ R^n and a symmetric-matrix-valued map F : R^n → R^{m×m}, an SDP can be written as

    min_{x ∈ R^n} c^T x   subject to   F(x) ≽ 0.

We mainly deal with two types of SDPs:

• Linear (affine) SDPs: For symmetric matrices F_0, F_1, …, F_n ∈ R^{m×m}, the map F takes the form F(x) = F_0 + Σ_{i=1}^n x_i F_i. In this case, the constraint F(x) ≽ 0 is called a linear matrix inequality (LMI).

• Bilinear SDPs: For symmetric matrices F_0, F_i, and F_ij in R^{m×m} with i = 1, …, n and j = 1, …, m, the map F takes the form

    F(x) = F(y, z) = F_0 + Σ_{i=1}^n y_i F_i + Σ_{i=1}^n Σ_{j=1}^m y_i z_j F_ij.   (2.1)

In this case, the constraint F(x) ≽ 0 is called a bilinear matrix inequality (BMI).
Remarks 2.1.1. Although the term "semidefinite program" is generally used for problems with an affine objective function and LMI constraints [15], we will use it here to mean optimization problems with an affine objective function and general matrix inequality constraints, and we will specify the type if needed. ⊳

¹We will use the abbreviation SDP to mean both semidefinite program and semidefinite programming. Similarly, LP will be used for both linear program and linear programming.
Affine SDPs are convex optimization problems and are considered computationally tractable, with polynomial-time algorithms [44, 15, 80]. There are several reliable and relatively efficient solvers for affine SDPs, including SeDuMi [56], DSDP [10], and SDPT3 [65].² On the other hand, bilinear SDPs are nonconvex and NP-hard in general [66]. Consequently, the state of the art of solvers for bilinear SDPs is far behind that for linear ones. We now review several strategies for solving bilinear SDPs.
2.1.2 Solution Strategies for Problems with Bilinear Matrix Inequalities

Optimization problems with BMIs provide an effective framework for many problems in controls, e.g., μ-synthesis [6] and static output feedback [34]. Although there is no general-purpose efficient BMI solver, several methods have been proposed. Global optimization schemes based on the branch-and-bound algorithm [2] and generalized Benders decomposition are discussed in [30] and [11], respectively. See also [43] for a discussion of methodological, structural, and computational aspects of problems with BMI constraints. Recently PENBMI, a solver for bilinear SDPs, was introduced [39]. It is a local optimizer, and its behavior (speed of convergence, quality of the local optimal point, etc.) depends on the point from which the optimization starts. On the other hand, note that although the function F in (2.1) is not affine in y and z jointly, it is affine in y when z is fixed (in which case the constraint becomes an LMI in y) and vice versa. This observation suggests a simple strategy to attack bilinear SDPs: first set z to some candidate solution, say z̄, and optimize over y by solving

    ȳ := argmin_y c^T [y; z̄]   subject to   F(y, z̄) ≽ 0,
²As these solvers have specific formats for describing SDPs, parsers such as YALMIP [41] and CVX [31] are extremely useful in setting up SDPs in these specific formats.
then set y to ȳ and optimize over z by solving

    z̄ := argmin_z c^T [ȳ; z]   subject to   F(ȳ, z) ≽ 0,

and alternate between these two problems as long as there is satisfactory improvement in the solution. This two-way iterative search, which we call coordinate-wise affine search, is of course a local search scheme, and the generated candidate solutions depend highly on the initial point from which the search starts (it may not reach the optimal solution, or it may require a large number of iterations to tightly approximate the optimal solution) [43]. Nevertheless, it is practically attractive since it only requires an affine SDP solver, and it has been widely used by the controls community (for example, D-K iteration in μ-synthesis [6] is based on alternating between the controller K and the D-scales). Moreover, our experience suggests that, coupled with efficient methods for generating high-quality initial points (see chapter 3), coordinate-wise affine search can be used efficiently to compute suboptimal solutions for problems with BMI constraints. Consequently, we implement coordinate-wise affine search schemes throughout this thesis and provide implementation details in the corresponding chapters.
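The alternation can be made concrete with a small numerical sketch. The toy problem below is invented for illustration (it is not from the thesis): a bilinear SDP of the form (2.1) with n = m = 1, minimizing y + z, where each coordinate-wise step, an affine SDP in the free variable, is solved by a brute-force grid scan standing in for a real solver such as SeDuMi.

```python
import numpy as np

# Toy bilinear data (invented): F(y, z) = F0 + y*F1 + y*z*F11, as in (2.1).
F0 = np.array([[0.0, 1.0], [1.0, 1.0]])
F1 = np.array([[1.0, 0.0], [0.0, 0.0]])
F11 = np.array([[0.0, 0.0], [0.0, 1.0]])

def F(y, z):
    return F0 + y * F1 + y * z * F11

def is_psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M).min() >= -tol

GRID = np.linspace(0.0, 5.0, 1001)

def affine_step(var, fixed):
    # With one variable fixed, F is affine in the other, so this step is an
    # affine SDP; here a grid scan over the free variable replaces a solver.
    feas = [v for v in GRID
            if is_psd(F(v, fixed) if var == "y" else F(fixed, v))]
    return min(feas) if feas else None

def coordinatewise_affine_search(z0, iters=5):
    y, z = None, z0
    for _ in range(iters):
        y = affine_step("y", z)   # fix z, minimize the objective over y
        z = affine_step("z", y)   # fix y, minimize the objective over z
    return y, z

y, z = coordinatewise_affine_search(z0=3.0)
print(round(y + z, 3))  # prints: 3.425
```

Starting from z̄ = 3 the iteration stalls at a suboptimal stationary point with y + z ≈ 3.43, whereas starting from z̄ = 0 it reaches the global optimum y + z = 1 at (y, z) = (1, 0), illustrating the dependence on the initial point noted above.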
2.1.3 Linear Programming

A linear program (LP) is an optimization problem with a linear objective and affine constraints. Formally, for c ∈ R^n, A ∈ R^{m×n}, and b ∈ R^m, a linear program can be written as

    min_{x ∈ R^n} c^T x   subject to   Ax ≽ b.

Several optimization problems in the following chapters will have both SDP constraints and LP constraints; namely, for c ∈ R^n, A ∈ R^{m×n}, b ∈ R^m, and a symmetric-matrix-valued map F : R^n → R^{N_SDP × N_SDP},

    min_{x ∈ R^n} c^T x   subject to   Ax ≽ b,  F(x) ≽ 0.   (2.2)
Finally, note that the constraints in (2.2) can equivalently be written as the single block-diagonal constraint

    diag( F(x), a_1^T x − b_1, …, a_m^T x − b_m ) ≽ 0,

which is another (larger) SDP constraint (here a_i^T and b_i denote the i-th rows of A and b, respectively). Therefore, the optimization in (2.2) can be solved as an SDP.
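As a concrete sketch (with invented toy data), the stacking above can be formed explicitly, and feasibility of both constraint types checked with a single eigenvalue computation:

```python
import numpy as np

def augmented_sdp_constraint(F, A, b, x):
    # Stack F(x) and the LP residuals Ax - b into the single block-diagonal
    # matrix diag(F(x), a_1^T x - b_1, ..., a_m^T x - b_m) from the text.
    Fx = F(x)
    r = A @ x - b
    n, m = Fx.shape[0], r.size
    M = np.zeros((n + m, n + m))
    M[:n, :n] = Fx
    M[n:, n:] = np.diag(r)
    return M

# Hypothetical data: F(x) = diag(x1, x2) and two LP rows Ax >= b.
F = lambda x: np.diag(x)
A = np.eye(2)
b = np.array([0.5, -1.0])

x = np.array([2.0, 0.25])
M = augmented_sdp_constraint(F, A, b, x)
feasible = bool(np.linalg.eigvalsh(M).min() >= 0)
print(feasible)  # prints: True (both the LMI and the LP rows hold at this x)
```

A solver handed M simply treats each LP row as a 1 × 1 semidefinite block.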
Both SDPs and LPs have been extensively studied, and there are excellent references on the topic, including (but not limited to) [9, 15, 79, 28, 42]. It is worth mentioning that the state of the art of LP solvers is far beyond that of SDP solvers [80, 9, 3].
2.2 Sum-of-Squares Polynomials and Sum-of-Squares Programming

Restricting our attention to systems with polynomial vector fields and searching for unknown functions in pre-specified finite-dimensional subspaces of polynomials, we will formulate the search for robustness and performance certificates as optimization problems with polynomial nonnegativity constraints. However, verifying the global nonnegativity of a multivariate polynomial is a hard problem [49]. On the other hand, if the polynomial can be represented as a sum of squares of finitely many polynomials, i.e., it is a sum-of-squares (SOS) polynomial, then it trivially follows that the polynomial is globally nonnegative. An appealing property of sum-of-squares polynomials is that checking whether a polynomial is a sum of squares can be formulated as an SDP feasibility problem. Consequently, in the following sections, the strategy for dealing with global polynomial nonnegativity constraints will be to replace nonnegativity constraints by sum-of-squares conditions.
We now review useful facts about SOS polynomials. Formally, a polynomial p in x ∈ R^n is said to be SOS if there exist p_1, …, p_M ∈ R[x] such that it can be decomposed in the form

    p(x) = Σ_{i=1}^M p_i(x)².   (2.3)

Obviously, an SOS polynomial is globally nonnegative. Therefore, the set Σ[x] of SOS polynomials in x (of some fixed degree³), defined as

    Σ[x] := { s ∈ R[x] : ∃ M < ∞, p_1, …, p_M ∈ R[x] such that s(x) = Σ_{i=1}^M p_i(x)² },

is a subset of the set of globally nonnegative polynomials. In fact, Σ[x] is a strict subset of the set of globally nonnegative polynomials except for univariate polynomials, quadratic polynomials, and quartic polynomials in two variables [53].
Let p be a polynomial in x of degree m, and let z(x) be a vector of monomials in x up to degree min{m_e ∈ Z : m/2 ≤ m_e}. Then, for some symmetric matrix Q, p can be decomposed as

    p(x) = z(x)^T Q z(x).   (2.4)

Based on this fact, the following theorem provides a characterization of SOS polynomials.

³We do not specify the degree of polynomials in Σ[x] in notation unless it leads to confusion, and we use Σ[x] to denote the set of sum-of-squares polynomials in x ∈ R^n of some fixed degree to be inferred from the context.
Theorem 2.2.1. A polynomial p in x ∈ R^n of degree 2d is SOS if and only if there exists Q ≽ 0 such that p(x) = z(x)^T Q z(x), where z is as defined above. ⊳
A proof of Theorem 2.2.1 can be found in [22]. Here, we highlight a few useful observations that lead to the proof. Suppose that p is SOS, i.e., there exist an integer M > 0 and polynomials p_1, …, p_M such that p(x) = Σ_{i=1}^M p_i(x)². Then each p_i is of degree at most d and therefore can be represented as α_i^T z(x) for some vector α_i. Consequently, the matrix Q in the theorem statement can be taken as Q = Σ_{i=1}^M α_i α_i^T, which is clearly positive semidefinite. On the other hand, if p can be represented as p(x) = z(x)^T Q z(x) with Q positive semidefinite, then p(x) can be written as p(x) = Σ_{i=1}^M p_i(x)², where, for i = 1, …, M, p_i(x) = √λ_i q_i^T z(x), λ_1, …, λ_M are the eigenvalues of Q, and the q_i are the corresponding eigenvectors.

Another important observation is that, for a given polynomial p, the entries of z may not be algebraically independent. Therefore, there may be multiple symmetric Q such that (2.4) holds. This is most easily demonstrated by an example.
Example 2.2.1. Let x ∈ R², z(x) = [x_1², x_1 x_2, x_2²]^T, Q_p ∈ R^{3×3} be a symmetric matrix, and p(x) = z(x)^T Q_p z(x). Then, for any λ ∈ R,

    p(x) = z(x)^T Q_p z(x) = z(x)^T Q_p z(x) + z(x)^T Q_h(λ) z(x),   where   Q_h(λ) := [ 0  0  λ ; 0  −2λ  0 ; λ  0  0 ],

since z(x)^T Q_h(λ) z(x) = 0 for all λ ∈ R, which follows from the relation x_1² x_2² = (x_1 x_2)². This freedom to vary λ without violating p(x) = z(x)^T Q(λ) z(x), where Q(λ) := Q_p + Q_h(λ), leads to a procedure to search for a positive semidefinite Q(λ) by choice of λ. The search for λ such that Q(λ) ≽ 0 is an affine SDP: find λ ∈ R such that Q(λ) ≽ 0. In fact, the reverse implication also holds: if there is no λ such that Q(λ) ≽ 0, then p is not SOS [49]. ⊳
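The λ-search in Example 2.2.1 can be sketched numerically. The polynomial p(x) = 2x_1⁴ + 2x_1³x_2 − x_1²x_2² + 5x_2⁴ below is a standard illustration from the SOS literature (not an example from this thesis); the one-variable affine SDP in λ is again replaced by a grid scan, and the eigendecomposition of the resulting Gram matrix recovers an explicit SOS decomposition as in the proof sketch of Theorem 2.2.1.

```python
import numpy as np

# p(x) = 2*x1^4 + 2*x1^3*x2 - x1^2*x2^2 + 5*x2^4 with z(x) = [x1^2, x1*x2, x2^2].
# Qp is one Gram matrix of p; adding Qh(lam) leaves z^T Q z unchanged because
# x1^2 * x2^2 = (x1*x2)^2.
Qp = np.array([[2.0, 1.0, 0.0],
               [1.0, -1.0, 0.0],
               [0.0, 0.0, 5.0]])

def Qh(lam):
    return np.array([[0.0, 0.0, lam],
                     [0.0, -2.0 * lam, 0.0],
                     [lam, 0.0, 0.0]])

def z(x):
    x1, x2 = x
    return np.array([x1**2, x1 * x2, x2**2])

def p(x):
    return z(x) @ Qp @ z(x)

# Affine "SDP" in the single variable lam, solved here by a grid scan:
# find lam with Q(lam) = Qp + Qh(lam) positive semidefinite.
psd_lams = [lam for lam in np.linspace(-6, 6, 1201)
            if np.linalg.eigvalsh(Qp + Qh(lam)).min() >= -1e-9]
lam = psd_lams[0]
Q = Qp + Qh(lam)

# Theorem 2.2.1: eigendecompose Q to recover p(x) = sum_i (sqrt(l_i) q_i^T z(x))^2.
eigvals, eigvecs = np.linalg.eigh(Q)
x = np.array([0.7, -1.3])
sos_value = sum(max(e, 0.0) * (eigvecs[:, i] @ z(x))**2
                for i, e in enumerate(eigvals))
print(abs(sos_value - p(x)) < 1e-8)  # prints: True (the representations agree)
```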
Theorem 2.2.2. [48, 49] The existence of an SOS decomposition of a polynomial in n variables of degree 2d can be decided by solving an SDP feasibility problem. ⊳

A useful corollary of Theorem 2.2.2 is that if the polynomial p contains decision variables, checking whether p is SOS for some choice of these decision variables is also an SDP. More precisely, if p is a polynomial in x parameterized by α ∈ R^m, then the search for α such that p(x, α) ∈ Σ[x] is an SDP feasibility problem. If p is affine in α, then this is an affine SDP.
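A sketch of this corollary in the simplest setting (hypothetical data): for the one-parameter family p(x, a) = x⁴ + a·x² + 1 with z(x) = [x², x, 1]^T, every Gram matrix has a single degree of freedom t, so deciding whether p(·, a) is SOS reduces to a one-variable feasibility search (a grid scan here, standing in for the SDP).

```python
import numpy as np

# Gram matrices of p(x, a) = x^4 + a*x^2 + 1 with z(x) = [x^2, x, 1]^T:
#   Q(a, t) = [[1, 0, t], [0, a - 2t, 0], [t, 0, 1]],
# since z^T Q z = x^4 + (2t + a - 2t)*x^2 + 1 for every t.
def gram(a, t):
    return np.array([[1.0, 0.0, t],
                     [0.0, a - 2.0 * t, 0.0],
                     [t, 0.0, 1.0]])

def is_sos(a, grid=np.linspace(-2.0, 2.0, 801)):
    # Feasibility scan over the Gram freedom t (stand-in for an SDP solve).
    return any(np.linalg.eigvalsh(gram(a, t)).min() >= -1e-9 for t in grid)

print(is_sos(0.0), is_sos(-2.0), is_sos(-2.5))  # prints: True True False
```

The family is SOS exactly for a ≥ −2; at a = −2.5 the polynomial is in fact negative at x² = 1.25, so no certificate can exist.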
By an SOS program (or SOS programming problem), we mean an optimization problem with a linear objective and SOS constraints. If the constraints are affine (bilinear) in the decision variables, then the problem is an affine (bilinear) SOS programming problem. Finally, recall that SOS programming problems can be translated to SDPs, and there are specialized software packages for this translation, namely SOSTOOLS (only for affine SOS programs) [51] and YALMIP (for both affine and bilinear SOS constraints) [41].
2.3 Generalized S-procedure and Positivstellensatz

We now discuss algebraic sufficient conditions for the set containment constraints used throughout this thesis. The S-procedure is widely used in robust control theory to obtain linear-matrix-inequality-based sufficient conditions for set containment questions involving quadratic functions [13, 26]: for quadratic functions q_0, q_1, …, q_m of the form q_i(x) = [x^T 1] Q_i [x^T 1]^T, i = 0, …, m, with symmetric matrices Q_i ∈ R^{(n+1)×(n+1)}, does the set containment constraint

    {x ∈ R^n : q_1(x) ≥ 0, …, q_m(x) ≥ 0} ⊆ {x ∈ R^n : q_0(x) ≥ 0}   (2.5)

hold? A (possibly conservative) certificate for this containment is the existence of nonnegative real numbers τ_1, …, τ_m such that Q_0 − τ_1 Q_1 − ⋯ − τ_m Q_m ≽ 0, which is an LMI. We now state a straightforward generalization of the S-procedure to the case where the quadratic functions are replaced by general scalar-valued functions.
Lemma 2.3.1. Given scalar-valued functions g_0, g_1, …, g_m : R^n → R, if there exist positive semidefinite functions s_1, …, s_m such that

    g_0(x) − Σ_{i=1}^m s_i(x) g_i(x) ≥ 0 for all x ∈ R^n,   (2.6)

then

    {x ∈ R^n : g_1(x) ≥ 0, …, g_m(x) ≥ 0} ⊆ {x ∈ R^n : g_0(x) ≥ 0}.   (2.7)
⊳
Lemma 2.3.1 provides algebraic sufficient conditions (nonnegativity of the multipliers s_i and (2.6)) for the set containment constraint (2.7). However, these algebraic conditions require verifying the global nonnegativity of certain scalar-valued functions. In order to circumvent this difficulty, we now specialize Lemma 2.3.1 to the case where g_0, g_1, …, g_m are polynomials, and we replace the nonnegativity conditions by SOS conditions that are suitable for numerical verification.

Lemma 2.3.2 (Generalized S-procedure). Given g_0, g_1, …, g_m ∈ R[x], if there exist s_1, …, s_m ∈ Σ[x] such that

    g_0 − Σ_{i=1}^m s_i g_i ∈ Σ[x],   (2.8)

then (2.7) holds. ⊳
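As a small worked instance of Lemma 2.3.2 (with data invented for illustration): to certify that the unit disk {x : 1 − x_1² − x_2² ≥ 0} is contained in the disk of radius 2 centered at (1, 0), the constant multiplier s_1 = 2 suffices, since g_0 − 2g_1 = (1 + x_1)² + x_2² is manifestly SOS. The script checks the algebraic identity and the implied containment on random samples; this is only a sanity check, the SOS identity itself being the proof.

```python
import numpy as np

rng = np.random.default_rng(0)

# Containment question: is the unit disk {g1 >= 0} inside the disk of
# radius 2 centered at (1, 0), i.e. {g0 >= 0}?  (toy data)
g1 = lambda x: 1.0 - x[0]**2 - x[1]**2
g0 = lambda x: 4.0 - (x[0] - 1.0)**2 - x[1]**2

# S-procedure certificate: with the SOS multiplier s1(x) = 2,
#   g0(x) - s1(x)*g1(x) = (1 + x1)^2 + x2^2,
# which is a sum of squares, so Lemma 2.3.2 applies.
s1 = lambda x: 2.0
sos = lambda x: (1.0 + x[0])**2 + x[1]**2

for _ in range(10000):
    x = rng.uniform(-3, 3, size=2)
    assert abs(g0(x) - s1(x) * g1(x) - sos(x)) < 1e-9   # algebraic identity
    if g1(x) >= 0:
        assert g0(x) >= 0                               # implied containment
print("certificate verified on samples")
```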
The Positivstellensatz, a central theorem from real algebraic geometry, provides generalizations of Lemma 2.3.2. For its statement, a few definitions are needed.

Definition 2.3.1. Given {g_1, …, g_t} ⊂ R[x], the multiplicative monoid generated by the g_j is the set of all finite products of the g_j, including 1 (i.e., the empty product). It is denoted M(g_1, …, g_t). For completeness, define M(∅) := {1}. ⊳

Definition 2.3.2. Given {f_1, …, f_r} ⊂ R[x], the cone generated by the f_i is

    P(f_1, …, f_r) := { s_0 + Σ_{i=1}^m s_i b_i : m ∈ Z_+, s_i ∈ Σ[x], b_i ∈ M(f_1, …, f_r) }. ⊳

Definition 2.3.3. Given {h_1, …, h_u} ⊂ R[x], the ideal generated by the h_k is

    I(h_1, …, h_u) := { Σ_k h_k p_k : p_k ∈ R[x] }. ⊳
With these definitions, we can state the following theorem from [12, Theorem 4.2.2]:

Theorem 2.3.1 (Positivstellensatz). Given polynomials {f_1, …, f_r}, {g_1, …, g_t}, and {h_1, …, h_u} in R[x], the following are equivalent:

i. The set

    { x ∈ R^n : f_1(x) ≥ 0, …, f_r(x) ≥ 0, g_1(x) ≠ 0, …, g_t(x) ≠ 0, h_1(x) = 0, …, h_u(x) = 0 }

is empty.

ii. There exist polynomials f ∈ P(f_1, …, f_r), g ∈ M(g_1, …, g_t), and h ∈ I(h_1, …, h_u) such that

    f + g² + h = 0.
⊳
Example 2.3.1. We now give a proof of Lemma 2.3.2 to demonstrate the use of the Positivstellensatz. Note that the set containment constraint (2.7) holds if and only if

    {x : g_1(x) ≥ 0, …, g_m(x) ≥ 0, −g_0(x) ≥ 0, g_0(x) ≠ 0} = ∅.   (2.9)

Theorem 2.3.1, applied to (2.9), gives that (2.9) holds if and only if there exist s_(·) ∈ Σ[x] and k ∈ Z_+ such that

    s + s_0(−g_0) + Σ_{i=1}^m s_i g_i + Σ_{i=1}^m s_{0i}(−g_0) g_i + Σ_{i=1}^m Σ_{j=i}^m s_{ij} g_i g_j + ⋯ + s_{0⋯m}(−g_0) ∏_{i=1}^m g_i + g_0^{2k} = 0.   (2.10)

Setting k = 1 and all s_(·) = 0 except s_0, s_{01}, …, s_{0m}, we obtain the sufficient condition

    −g_0 [ s_0 + Σ_{j=1}^m s_{0j} g_j − g_0 ] = 0.   (2.11)

Since g_0 is not identically zero, (2.8) follows from (2.11) by renaming s_{0i} as s_i. ⊳
Lemma 2.3.3. Let g ∈ R[x] be positive definite, h ∈ R[x], γ > 0, s_1, s_2 ∈ Σ[x], and let l ∈ R[x] be positive definite with l(0) = 0. Suppose that

    −[(γ − g)s_1 + h s_2 + l] ∈ Σ[x]   (2.12)

holds. Then, it follows that

    {x ∈ R^n : g(x) ≤ γ, x ≠ 0} ⊂ {x ∈ R^n : h(x) < 0}.   (2.13)
⊳

Proof. Note that (2.13) holds if and only if

    {x ∈ R^n : γ − g(x) ≥ 0, l(x) ≠ 0, h(x) ≥ 0} = ∅,

and the Positivstellensatz, applied to the last condition, yields (2.12) after appropriate simplifications [37].

An independent proof of Lemma 2.3.3 can be obtained without using the Positivstellensatz. Suppose (2.12) holds, and let x ∈ {x ∈ R^n : g(x) ≤ γ, x ≠ 0}. First, suppose s_2(x) = 0. Since (2.12) implies (γ − g(x))s_1(x) + h(x)s_2(x) + l(x) ≤ 0, while (γ − g(x))s_1(x) ≥ 0 and l(x) > 0, this leads to a contradiction. Hence, s_2(x) > 0 for all x ∈ Ω_{g,γ} \ {0}. Moreover, (2.12) gives h(x)s_2(x) ≤ −l(x), so h(x) ≤ −l(x)/s_2(x). Consequently, h(x) < 0. □
2.3.1 Affine versus Bilinear Sufficient Conditions

Let A_1, A_2 : R[x] → R[x] be affine maps on R[x], let f_1, f_2 ∈ R[x], and consider the constraint

    {x ∈ R^n : A_1(f_1(x)) ≥ 0} ⊆ {x ∈ R^n : A_2(f_2(x)) ≥ 0}.   (2.14)

We will use the generalized S-procedure to handle mainly two types of questions:

Question 1: Given f_1 ∈ R[x] and f_2 ∈ R[x], does (2.14) hold?

Question 2: Given f_2 ∈ R[x], does there exist f_1 ∈ R[x] such that (2.14) holds (possibly along with other constraints on f_1)?

The corresponding S-procedure-based sufficient conditions are:

Sufficient condition for Question 1: existence of s_2 ∈ Σ[x] such that

    A_2(f_2(x)) − A_1(f_1(x)) s_2(x) ∈ Σ[x].   (2.15)

Sufficient condition for Question 2: existence of s_2 ∈ Σ[x] and f_1 ∈ R[x] (possibly along with other constraints on f_1) such that

    A_2(f_2(x)) − A_1(f_1(x)) s_2(x) ∈ Σ[x].   (2.16)

The main difference between these conditions is that the constraint (2.16) is bilinear in its decision variables f_1 and s_2, whereas the constraint (2.15) is affine in its decision variable s_2 (f_1 is fixed in this case and does not contain any decision variables). Consequently, S-procedure-based sufficient conditions lead to affine SDPs for Question 1 and bilinear SDPs for Question 2.
2.4 Preliminary Remarks

• Throughout this thesis, we only consider causal nonlinear input-output systems with no time delay, represented by ordinary differential equations of the form

    ẋ(t) = f(x(t), w(t)),   z(t) = h(x(t)),   (2.17)

where x is the state vector, w denotes the input/disturbance, and z denotes the output. Occasionally, we will drop the time variable t in the notation and use

    ẋ = f(x, w),   z = h(x)   (2.18)

for short.

• In several places, a relationship between an algebraic condition on some real variables and input/output/state properties of a dynamical system is claimed. In nearly all of these types of statements, we use the same symbol for a particular real variable in the algebraic statement as for the corresponding signal in the dynamical system. This could be a source of confusion, so care on the reader's part is required.

• In the examples in the following chapters, we occasionally do not provide the exact numerical values used in the computations; we state either the form or an approximation of the specific expression. Similarly, we do not always provide the certifying Lyapunov functions and multipliers. All missing data and results for all examples in this thesis are available at http://jagger.me.berkeley.edu/~utopcu/dissertation.
2.5 Chapter Summary

We presented a summary of background material, focusing on the aspects needed for the development in the subsequent chapters. Mainly, this material, coupled with Lyapunov-type theorems introduced in the respective chapters, will be used to translate system analysis questions into numerical optimization problems. A brief overview of semidefinite programming, centered on the distinguishing properties of affine and bilinear semidefinite programs, was followed by an introduction to sum-of-squares polynomials and sum-of-squares programming. Finally, a generalization of the S-procedure, a central tool for handling set containment conditions, and its link to the Positivstellensatz were discussed.
Chapter 3

Simulation-Aided Region-of-Attraction Analysis

For a dynamical system, the region-of-attraction (ROA) of a locally asymptotically stable equilibrium point is an invariant set such that all trajectories emanating from points in this set converge to the equilibrium point. For nonlinear dynamics, research has focused on determining invariant subsets of the ROA, because a closed-form characterization of the exact ROA may be too complicated. Working with the exact ROA is avoided for several reasons: (i) there is usually no systematic (numerical) procedure for computing its exact shape, and (ii) a complicated characterization of the ROA has limited value for post-analysis purposes. Among all methods, those based on Lyapunov functions are dominant in the literature [24, 27, 75, 21, 20, 46, 58, 59, 32, 63, 62]. These methods compute a Lyapunov function as a local stability certificate, and sublevel sets of this Lyapunov function, in which the function decreases along the flow, provide invariant subsets of the ROA.
Using sum-of-squares (SOS) relaxations for polynomial nonnegativity [49], it is possible to search for polynomial Lyapunov functions for systems with polynomial and/or rational dynamics using semidefinite programming [46, 58, 59, 32]. However, the SOS relaxation of the problem of computing invariant subsets of the ROA leads to nonconvex optimization problems with bilinear matrix inequality constraints (namely, bilinear SOS problems).

By contrast, simulating a nonlinear system of moderate size, except those governed by stiff differential equations, is computationally efficient. Therefore, extensive simulation is a tool used in real applications. Although the information from simulations is inconclusive, i.e., it cannot be used to find provably invariant subsets of the ROA, it provides insight into the system behavior. For example, if, using Lyapunov arguments, a function certifies that a set P is in the ROA, then that function must be positive and decreasing on any solution trajectory initiating in P. Using a finite number of points on finitely many convergent trajectories and a linear parametrization of the Lyapunov function V, those constraints become affine, and the feasible polytope (in V-coefficient space) is a convex outer bound on the set of coefficients of valid Lyapunov functions. It is intuitive that drawing samples from this set to seed the bilinear SDP solvers may improve the performance of the solvers. In fact, if there are a large number of simulation trajectories, samples from the set often are themselves suitable Lyapunov functions (without further optimization). Information from simulations is also used in [52] and [55] for computing approximate Lyapunov functions.

Effectively, we are relaxing the bilinear problem (using a very specific system-theoretic interpretation of the problem) to a linear problem, and the true feasible set is a subset of the linear problem's feasible set. By contrast, a general relaxation for bilinear problems, based on replacing bilinear terms by new variables and nonconvex equality constraints by convex inequality constraints, is proposed in [14]. This general relaxation increases the dimension of the decision variable space, so that the true feasible set is a low-dimensional manifold in the relaxed feasible space. There may be efficient ways to correctly "project" solutions of the relaxed problem into appropriate solutions of the original problem, but we do not pursue this.
3.1 Characterization of Invariant Subsets of the ROA and the Bilinear SOS Problem

Consider the autonomous nonlinear dynamical system

    ẋ(t) = f(x(t)),   (3.1)

where x(t) ∈ R^n is the state vector and f : R^n → R^n is locally Lipschitz with f(0) = 0, i.e., the origin is an equilibrium point of (3.1). Let φ(t; x_0) denote the solution to (3.1) at time t with the initial condition x(0) = x_0. If the origin is asymptotically stable but not globally attractive, one often wants to know which trajectories converge to the origin as time approaches ∞. The region-of-attraction R_0 of the origin for the system (3.1) is

    R_0 := { x_0 ∈ R^n : lim_{t→∞} φ(t; x_0) = 0 }.

A modification of a similar result in [76] provides a characterization of invariant subsets of the ROA in terms of sublevel sets of appropriately chosen Lyapunov functions.

Lemma 3.1.1. Let γ ∈ R be positive. If there exists a C¹ function V : R^n → R such that

    Ω_{V,γ} is bounded,   (3.2)
    V(0) = 0 and V(x) > 0 for all x ∈ R^n \ {0},   (3.3)
    Ω_{V,γ} \ {0} ⊂ {x ∈ R^n : ∇V(x) f(x) < 0},   (3.4)

then for all x_0 ∈ Ω_{V,γ}, the solution of (3.1) exists, satisfies φ(t; x_0) ∈ Ω_{V,γ} for all t ≥ 0, and lim_{t→∞} φ(t; x_0) = 0, i.e., Ω_{V,γ} is an invariant subset of R_0. ⊳
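Lemma 3.1.1 also explains the role simulation plays in section 3.2: along any convergent trajectory, a valid certificate V must be positive and strictly decreasing, which yields affine constraints on V's coefficients. The sketch below checks these necessary conditions along one simulated trajectory; it assumes the time-reversed Van der Pol vector field as a stand-in for the §3.3.1 example, a quadratic V obtained from the linearization, and a hand-rolled RK4 integrator.

```python
import numpy as np

# Time-reversed Van der Pol dynamics (origin asymptotically stable); this
# specific form is assumed here purely for illustration.
def f(x):
    return np.array([-x[1], x[0] + (x[0]**2 - 1.0) * x[1]])

# Quadratic Lyapunov candidate V(x) = x^T P x from the linearization:
# solve A^T P + P A = -I via the Kronecker (vectorization) identity.
A = np.array([[0.0, -1.0], [1.0, -1.0]])
n = 2
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -np.eye(n).flatten()).reshape(n, n)
V = lambda x: x @ P @ x

def rk4_trajectory(x0, h=0.01, steps=2000):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        x = xs[-1]
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        xs.append(x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return xs

# Simulation-based necessary conditions on V: positive and strictly
# decreasing along a convergent trajectory (cf. section 3.2).
traj = rk4_trajectory([0.5, 0.5])
vals = [V(x) for x in traj]
print(all(v > 0 for v in vals),
      all(b < a for a, b in zip(vals, vals[1:])),
      vals[-1] < 1e-6)  # prints: True True True
```

Each sampled point contributes one affine inequality on the coefficients of a linearly parameterized V; here a single quadratic candidate already satisfies all of them, illustrating how simulation data prunes the candidate set before any SDP is solved.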
In order to enlarge the computed invariant subset of the ROA by choice of V, we define a variable-sized region Ω_{p,β}, where p ∈ R[x] is a fixed positive definite convex polynomial, and maximize β while imposing the constraint Ω_{p,β} ⊆ Ω_{V,γ} along with the constraints (3.2)-(3.4). This can be written as

    β*(𝒱) := max_{β>0, V ∈ 𝒱} β   subject to   (3.2)-(3.4) and Ω_{p,β} ⊆ Ω_{V,γ}.   (3.5)

Here 𝒱 denotes the set of candidate Lyapunov functions over which the maximum is computed, for example all continuously differentiable functions.

Remarks 3.1.1. The objective in (3.5) is to compute a tight inner estimate of the ROA by choice of V. The β-sublevel sets of the fixed shape factor p are used to scalarize this objective. However, note that for two fixed functions V_1 and V_2 satisfying the constraints of (3.5) with β equal to β_1 and β_2, respectively, even when 0 < β_1 < β_2 are the largest positive scalars such that Ω_{p,β_1} ⊆ Ω_{V_1,γ} and Ω_{p,β_2} ⊆ Ω_{V_2,γ} hold, it is not necessarily true that Ω_{V_1,γ} ⊆ Ω_{V_2,γ}. In the literature, mainly for quadratic Lyapunov functions, the volume of the computed sublevel sets is used as the objective function (e.g., in [20]). The reason for using the shape factor p instead of the volume in the optimization objective is that it may be possible to choose p to reflect the intent of the analyst or to utilize prior knowledge about the system (see section 3.3.2 for the latter). ⊳

The problem in (3.5) is an infinite-dimensional problem. In order to make it amenable to numerical optimization (specifically SOS optimization), we restrict 𝒱 to be all polynomials of some fixed degree and use SOS sufficient conditions for polynomial nonnegativity.
Using simple generalizations <strong>of</strong> the S-procedure (Lemmas 2.3.2 and 2.3.3), we obtain sufficient<br />
conditions for set containment constraints. Specifically, let l 1 and l 2 be a positive<br />
definite polynomials (typically ɛx T x for some small real number ɛ). Then, since l 1 is radially<br />
unbounded, the constraint<br />
V − l 1 ∈ Σ[x] (3.6)<br />
and V (0) = 0 are sufficient conditions for (3.2) and (3.3). By Lemma 2.3.2, if s 1 ∈ Σ[x],<br />
then<br />
− [(β − p)s 1 + (V − γ)] ∈ Σ[x] (3.7)<br />
implies the set containment Ω p,β ⊆ Ω V,γ , and by Lemma 2.3.3, if s 2 , s 3 ∈ Σ[x], then<br />
− [(γ − V )s 2 + ∇V fs 3 + l 2 ] ∈ Σ[x] (3.8)<br />
is a sufficient condition for (3.4). Using these sufficient conditions, a lower bound on β∗(V) can be defined as

β∗_B(V, S) := max_{V ∈ V, β, s_i ∈ S_i} β  subject to  (3.6)-(3.8), V(0) = 0, and β > 0.    (3.9)

Here, the sets V and S_i are prescribed finite-dimensional subsets of polynomials. Although β∗_B depends on these subspaces, this dependence will not always be explicitly notated. Note that since the conditions (3.6)-(3.8) are only sufficient conditions,

β∗_B(V, S) ≤ β∗(V) ≤ β∗(C¹).
The optimization problem in (3.9) is bilinear because of the product terms βs_1 in (3.7) and V s_2 and ∇V f s_3 in (3.8). However, the problem has more structure than a general BMI problem. If V is fixed, the problem becomes affine in S = {s_1, s_2, s_3}, and vice versa. In section 3.2, we will construct a convex outer bound on the set of feasible V, sample from this outer bound set to obtain candidate V's, and then solve (3.9) for S, holding V fixed. This "qualifies" V as a certificate, and then further BMI optimization (e.g., coordinate-wise affine search) is executed, using this (V, S) pair as an initial seed.
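The structure exploited here, convexity in each coordinate block with the other block held fixed, is the same one that drives coordinate-wise search on any bilinear problem. The toy alternation below illustrates the mechanics on a scalar bilinear objective of our own choosing; it is an illustration of the alternation, not the SOS problem itself:

```python
def cwopt_toy(u, v, iters=200):
    # Minimize f(u, v) = (u*v - 2)^2 + 0.1*(u^2 + v^2), which is bilinear
    # in the product u*v: convex in u for fixed v and vice versa, so each
    # half-step has a closed-form minimizer and decreases f monotonically.
    for _ in range(iters):
        # d/du = 2*(u*v - 2)*v + 0.2*u = 0  =>  u = 4*v / (2*v**2 + 0.2)
        u = 4 * v / (2 * v**2 + 0.2)
        # ...and symmetrically over v with u held fixed.
        v = 4 * u / (2 * u**2 + 0.2)
    return u, v

u, v = cwopt_toy(1.0, 1.0)
# The iteration settles at u = v = sqrt(1.9), where 2*u**2 + 0.2 = 4.
```

As with (3.9), the alternation only finds a local optimum; the quality of the seed matters, which is precisely why the simulation-based seeding of section 3.2 is useful.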
3.2 Relaxation of the Bilinear SOS Problem Using Simulation Data
The usefulness of simulation in understanding the ROA for a given system is undeniable. Faced with the task of performing a stability analysis (e.g., "for a given p, is Ω_{p,β} contained in the ROA?"), a pragmatic, fruitful, and wise approach begins with a linearized analysis and at least a modest number of simulation runs. Certainly, just one divergent trajectory starting in Ω_{p,β} certifies that Ω_{p,β} ⊄ R_0. Conversely, a large collection of only convergent trajectories hints that indeed Ω_{p,β} ⊂ R_0 may hold. Suppose this latter condition is true, and let C be the set of N_conv trajectories c converging to the origin with initial conditions in Ω_{p,β}. In the course of simulation runs, divergent trajectories d whose initial conditions are not in Ω_{p,β} may also be discovered; let the set of d's be denoted by D and N_div be the number of elements of D. Although C and D depend on β and the manner in which Ω_{p,β} is sampled, this is not explicitly notated.

With β and γ fixed, the set of Lyapunov functions which certify that Ω_{p,β} ⊂ R_0, using conditions (3.6)-(3.8), is simply

{V ∈ R[x] : (3.6)-(3.8) hold for some s_i ∈ Σ[x]}.

Of course, this set could be empty, but it must be contained in the convex set

{V ∈ R[x] : (3.10) holds},
where

∇V(c(t))f(c(t)) < 0,   l_1(c(t)) ≤ V(c(t)),   V(c(0)) ≤ γ,   and   γ + δ ≤ V(d(t))    (3.10)

for all c ∈ C, d ∈ D, and t ≥ 0, where δ is a fixed (small) positive constant. Informally, these conditions simply say that any V which verifies Ω_{p,β} ⊂ R_0 using the conditions (3.6)-(3.8) must, on the trajectories starting in Ω_{p,β}, be decreasing and take on values between 0 and γ. Moreover, V must be greater than γ on divergent trajectories. In fact, with the exception of the strengthened lower bound on V (beyond mere positivity), the conditions in (3.10) are even necessary for any V ∈ C¹ which verifies Ω_{p,β} ⊂ R_0 using conditions (3.2)-(3.4).
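The trajectory-wise conditions (3.10) are inexpensive to check numerically: one integration plus pointwise evaluations. A sketch for the Van der Pol dynamics of section 3.3.1, using a hand-rolled RK4 integrator and the quadratic candidate discussed in section 3.4.1 (the step size, horizon, and γ are our choices):

```python
def f(x):
    # Van der Pol dynamics of section 3.3.1 (stable origin).
    return (-x[1], x[0] + (x[0] ** 2 - 1.0) * x[1])

def rk4_step(x, h):
    k1 = f(x)
    k2 = f((x[0] + h/2*k1[0], x[1] + h/2*k1[1]))
    k3 = f((x[0] + h/2*k2[0], x[1] + h/2*k2[1]))
    k4 = f((x[0] + h*k3[0], x[1] + h*k3[1]))
    return (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def check_310(x0, V, gradV, gamma, steps=1500, h=0.01):
    # Conditions (3.10) for one convergent trajectory c:
    # V(c(0)) <= gamma, l1(c(t)) <= V(c(t)), and dV/dt < 0 along c.
    l1 = lambda x: 1e-6 * (x[0]**2 + x[1]**2)
    x = x0
    if V(x) > gamma:
        return False
    for _ in range(steps):
        g, fx = gradV(x), f(x)
        if g[0]*fx[0] + g[1]*fx[1] >= 0.0 or V(x) < l1(x):
            return False
        x = rk4_step(x, h)
    return True

# The quadratic candidate discussed in section 3.4.1:
V = lambda x: 0.32*x[0]**2 - 0.25*x[0]*x[1] + 0.31*x[1]**2
gradV = lambda x: (0.64*x[0] - 0.25*x[1], -0.25*x[0] + 0.62*x[1])
```

Running this candidate along a trajectory from (0.5, 0.5) passes with γ = 1 and fails when γ is set below V(c(0)), as (3.10) dictates.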
3.2.1 Affine Relaxation Using Simulation Data
Let V be linearly parameterized as

V := {V ∈ R[x] : V(x) = z(x)^T α},

where α ∈ R^{n_b} and z is an n_b-dimensional vector of polynomials in x. Given z(x), the constraints in (3.10) can be viewed as constraints on α ∈ R^{n_b}, yielding the convex set

{α ∈ R^{n_b} : (3.10) holds for V = z(x)^T α}.

For each c ∈ C and d ∈ D, let T_c and T_d be finite subsets of the interval [0, ∞) including the origin. A polytopic outer bound for this set, described by finitely many constraints, is

Y_sim := {α ∈ R^{n_b} : (3.11) holds},
where

[∇z(c(τ_c))f(c(τ_c))]^T α < 0,   l_1(c(τ_c)) ≤ z(c(τ_c))^T α,   z(c(0))^T α ≤ γ,   and   z(d(τ_d))^T α ≥ γ + δ    (3.11)

for all c ∈ C, τ_c ∈ T_c, d ∈ D, and τ_d ∈ T_d. Note that z(c(0))^T α ≤ γ in (3.11) provides necessary conditions for Ω_{p,β} ⊆ Ω_{V,γ} since c(0) ∈ Ω_{p,β} for all c ∈ C. In practice, we replace the strict inequality in (3.11) by [∇z(c(τ_c))f(c(τ_c))]^T α ≤ −l_3(c(τ_c)), where l_3 is a fixed, positive definite polynomial imposing a bound on the rate of decay of V along the trajectories. More compactly and for future reference, express the inequalities represented by (3.11), with this modification, as Φ^T α ≼ b.
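Assembling Φ and b is mechanical once z(x) is fixed. The sketch below does so for the quadratic parameterization z(x) = [x_1^2, x_1x_2, x_2^2]^T; the helper names and the sample states are ours, and the hard-coded l_1 = l_3 = 10⁻⁶ x^T x follows section 3.3:

```python
def build_polytope(conv_trajs, div_trajs, f, gamma, delta):
    # Each returned (phi, b) pair encodes phi . alpha <= b for
    # z(x) = [x1^2, x1*x2, x2^2]:
    #   d/dt z(c)^T alpha <= -l3(c)        (decay, practical form of (3.11)),
    #   -z(c)^T alpha <= -l1(c)            (l1 <= V on convergent samples),
    #   z(c(0))^T alpha <= gamma,          (initial condition in Omega_{V,gamma})
    #   -z(d)^T alpha <= -(gamma + delta)  (V > gamma on divergent samples).
    l1 = l3 = lambda x: 1e-6 * (x[0]**2 + x[1]**2)
    z = lambda x: (x[0]**2, x[0]*x[1], x[1]**2)
    def zdot(x):
        fx = f(x)   # chain rule: d/dt of each monomial along the flow
        return (2*x[0]*fx[0], fx[0]*x[1] + x[0]*fx[1], 2*x[1]*fx[1])
    rows = []
    for c in conv_trajs:
        rows.append((z(c[0]), gamma))
        for x in c:
            rows.append((zdot(x), -l3(x)))
            rows.append((tuple(-v for v in z(x)), -l1(x)))
    for d in div_trajs:
        for x in d:
            rows.append((tuple(-v for v in z(x)), -(gamma + delta)))
    return rows

def feasible(rows, alpha, tol=0.0):
    return all(sum(p*a for p, a in zip(phi, alpha)) <= b + tol for phi, b in rows)

# Stand-in data: Van der Pol dynamics, two convergent samples, one divergent.
f = lambda x: (-x[1], x[0] + (x[0]**2 - 1.0) * x[1])
rows = build_polytope([[(0.5, 0.5), (0.3, -0.4)]], [[(2.5, 2.5)]], f, 1.0, 0.01)
```

With these rows, the candidate α = (0.32, −0.25, 0.31) from section 3.4.1 lands inside the polytope, while the zero polynomial does not.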
The constraint that ∇V f be negative on a sublevel set of V implies that ∇V f is negative on a neighborhood of the origin. While a large number of sample points from the trajectories will approximately enforce this, in some cases (e.g., an exponentially stable linearization) it is easy to express analytically as a constraint on the low-order terms of the polynomial Lyapunov function. For instance, assume V has a positive-definite quadratic part, and that a separate eigenvalue analysis has established that the linearization of (3.1) at the origin, i.e., ẋ = ∇f(0)x, is asymptotically stable. Define

L(Q) := (∇f(0))^T Q + Q (∇f(0)),

where Q^T = Q ≻ 0 is such that x^T Q x is the quadratic part of V. Then, if (3.8) holds, it must be that

L(Q) ≺ 0.    (3.12)
Let

Y_lin := {α ∈ R^{n_b} : Q = Q^T ≻ 0 and (3.12) holds}.

It is well-known that Y_lin is convex [15]. Again, in practice, (3.12) is replaced by the condition L(Q) ≼ −εI for some small real number ε. Furthermore, define

Y_SOS := {α ∈ R^{n_b} : (3.6) holds}.
By [49], Y_SOS is convex. Since Y_sim, Y_lin, and Y_SOS are convex,

Y := Y_sim ∩ Y_lin ∩ Y_SOS

is a convex set in R^{n_b}. Equations (3.11) and (3.12) constitute a set of necessary conditions for (3.6)-(3.8); thus, we have

Y ⊇ B := {α ∈ R^{n_b} : ∃ s_2, s_3 ∈ Σ[x] such that (3.6)-(3.8) hold}.

This inclusion is depicted in Figure 3.1 for an illustrative example. Since (3.8) is not jointly convex in V and the multipliers, the shaded region B in Figure 3.1 may not be convex and may not even be connected.
Remarks 3.2.1. Note that, although we do not state it explicitly, the first constraint in (3.11) provides necessary conditions for the set containment Ω_{p,β} ⊆ Ω_{V,γ}, since c(0) ∈ Ω_{p,β} for all c ∈ C. ⊳
A point in Y can be computed by solving an affine (feasibility) SDP with the constraints (3.6), (3.11), and (3.12). An arbitrary point in Y may or may not be in B. However, if we generate a collection A := {α^(k)}_{k=0}^{N_V−1} of N_V points distributed approximately uniformly in Y, it may be that some of the points are in B. A point generation algorithm is discussed in the next section. A requirement of the algorithm is that Y be convex and bounded. Convexity has already been established. If the set Y determined by the constraints (3.6), (3.11), and (3.12) is not bounded, constraints on the absolute values of the components of α are imposed.
3.2.2 Hit-and-Run Random Point Generation Algorithm
The following random point generation algorithm is used to generate points in Y [61]:

Hit-and-run random point generation (H&R) algorithm: Given a positive integer N_V and α^(0) ∈ Y, set k = 0.

i. Generate a random direction ζ^(k) = y^(k)/‖y^(k)‖, where y^(k) ∼ N(0, I_{n_b}).

ii. Compute the minimum value t̲^(k) ≤ 0 and the maximum value t̄^(k) ≥ 0 such that α^(k) + t̲^(k)ζ^(k) ∈ Y and α^(k) + t̄^(k)ζ^(k) ∈ Y.

iii. Pick w^(k) from a uniform distribution on [0, 1].

iv. Set α^(k+1) = w^(k)(α^(k) + t̲^(k)ζ^(k)) + (1 − w^(k))(α^(k) + t̄^(k)ζ^(k)).

v. Set k = k + 1.

vi. If k = N_V, return A. Otherwise, go to (i).

⊳
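Restricted to the polytopic constraints Φ^T α ≼ b alone (i.e., dropping the SOS and LMI line searches of (3.13)), the H&R loop reduces to a ratio test per constraint. A sketch on a stand-in box polytope of our choosing:

```python
import math
import random

def hit_and_run(rows, b, alpha0, n_samples, rng):
    # H&R over the polytope {alpha : Phi^T alpha <= b} only; the SOS and
    # LMI line searches of (3.13) are omitted in this sketch.
    pts, a = [], list(alpha0)
    n = len(a)
    for _ in range(n_samples):
        y = [rng.gauss(0.0, 1.0) for _ in range(n)]
        nrm = math.sqrt(sum(v * v for v in y))
        zeta = [v / nrm for v in y]                      # step (i)
        t_lo, t_hi = -1e9, 1e9
        for row, bj in zip(rows, b):                     # step (ii): ratio test
            num = bj - sum(r * ai for r, ai in zip(row, a))
            den = sum(r * zi for r, zi in zip(row, zeta))
            if den > 1e-12:
                t_hi = min(t_hi, num / den)
            elif den < -1e-12:
                t_lo = max(t_lo, num / den)
        w = rng.random()                                 # steps (iii)-(iv)
        a = [ai + (w * t_lo + (1.0 - w) * t_hi) * zi for ai, zi in zip(a, zeta)]
        pts.append(list(a))
    return pts

# Unit box as a stand-in polytope: |alpha_i| <= 1.
rows = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
b = [1.0, 1.0, 1.0, 1.0]
samples = hit_and_run(rows, b, [0.0, 0.0], 200, random.Random(0))
```

Every iterate is a convex combination of two points of the polytope, so all samples stay feasible, mirroring the argument below for Y.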
Figure 3.1 shows the initial point (big dot), an arbitrary point satisfying (3.6), (3.11), and (3.12), and the points generated by the H&R algorithm (small dots), along with the random directions from step (i) (dashed line segments) for the illustrative example. Y is convex and points are generated as convex combinations of points in Y (step (iv)); therefore, the algorithm generates N_V points in Y. Since Y is compact, for sufficiently large N_V, the points of A become approximately uniformly distributed in Y [61].
Step (ii) of the H&R algorithm involves solving a linear SOS optimization problem and an LMI problem:

t̄^(k)_SOS := max_{t≥0} t  s.t.  z(x)^T (α^(k) + tζ^(k)) ∈ Σ[x],
t̄^(k)_lin := max_{t≥0} t  s.t.  L(Q^(k) + tΛ^(k)) ≼ −εI,    (3.13)
Figure 3.1. Sets Y and B and points generated by the H&R algorithm. Φ_j and b_j denote the j-th column of Φ and the j-th component of b.
where Q^(k) = (Q^(k))^T ∈ R^{n×n} and Λ^(k) = (Λ^(k))^T ∈ R^{n×n} are such that x^T Q^(k) x and x^T Λ^(k) x are the quadratic parts of z(x)^T α^(k) and z(x)^T ζ^(k), respectively. t̲^(k) and t̄^(k) are then determined by the elementary expressions

t̄^(k) := min { min_{j : Φ_j^T ζ^(k) > 0} (b_j − Φ_j^T α^(k)) / (Φ_j^T ζ^(k)),  t̄^(k)_SOS,  t̄^(k)_lin },
t̲^(k) := max { max_{j : Φ_j^T ζ^(k) < 0} (b_j − Φ_j^T α^(k)) / (Φ_j^T ζ^(k)),  t̲^(k)_SOS,  t̲^(k)_lin },

where t̲^(k)_SOS and t̲^(k)_lin are computed by replacing max_{t≥0} with min_{t≤0} in (3.13).
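The LMI line search t̄_lin can itself be carried out by doubling and bisection on t when an eigenvalue routine is available; in the 2×2 case a closed form suffices. Below, A is the linearization of the Van der Pol dynamics at the origin, Q is the quadratic part of the candidate V from section 3.4.1, and Λ is an arbitrary illustrative direction:

```python
import math

def lmax(M):
    # Largest eigenvalue of a symmetric 2x2 matrix M.
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))

def L(A, Q):
    # L(Q) = A^T Q + Q A for 2x2 matrices.
    AT = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    P, R = mul(AT, Q), mul(Q, A)
    return [[P[i][j] + R[i][j] for j in range(2)] for i in range(2)]

def t_lin(A, Q, Lam, eps=0.01):
    # max t >= 0 with lambda_max(L(Q + t*Lam)) <= -eps, by doubling + bisection.
    # (Assumes the feasible set of t is a bounded interval containing 0.)
    ok = lambda t: lmax(L(A, [[Q[i][j] + t * Lam[i][j] for j in range(2)]
                              for i in range(2)])) <= -eps
    assert ok(0.0)
    hi = 1.0
    while ok(hi):
        hi *= 2.0
    lo = 0.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo

A = [[0.0, -1.0], [1.0, -1.0]]          # Van der Pol linearization at the origin
Q = [[0.32, -0.125], [-0.125, 0.31]]    # quadratic part of the V of section 3.4.1
Lam = [[1.0, 0.0], [0.0, 0.0]]          # hypothetical search direction
```

In the actual algorithm, an SDP solver returns t̄_lin directly; the bisection above is just a transparent way to see what the line search computes.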
3.2.3 Algorithms
Since a feasible value of β is not known a priori, an iterative strategy to simulate and collect convergent and divergent trajectories is necessary. This process, coupled with the H&R algorithm, constitutes the Lyapunov function candidate generation.
Simulation and Lyapunov function generation (SimLFG) algorithm: Given a positive definite convex p ∈ R[x], a vector of polynomials z(x), constants β_SIM, N_conv, N_V, β_shrink ∈ (0, 1), and empty sets C and D, set γ = 1, N_more = N_conv, N_div = 0.

i. Integrate (3.1) from N_more initial conditions in the set {x ∈ R^n : p(x) = β_SIM}.

ii. If there is no diverging trajectory, add the trajectories to C and go to (iii). Otherwise, add the divergent trajectories to D and the convergent trajectories to C, let N_d denote the number of diverging trajectories found in the last run of (i), and set N_div to N_div + N_d. Set β_SIM to the minimum of β_shrink·β_SIM and the minimum value of p along the diverging trajectories. Set N_more to N_more − N_d, and go to (i).

iii. At this point C has N_conv elements. For each i = 1, ..., N_conv, let τ_i satisfy c_i(τ) ∈ Ω_{p,β_SIM} for all τ ≥ τ_i. Eliminate times in T_i that are less than τ_i.

iv. Find a feasible point for (3.6), (3.11), and (3.12). If (3.6), (3.11), and (3.12) are infeasible, set β_SIM = β_shrink·β_SIM, and go to (iii). Otherwise, go to (v).

v. Generate N_V Lyapunov function candidates using the H&R algorithm, and return β_SIM and the Lyapunov function candidates.

⊳
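The simulation stage of SimLFG needs only an integrator and a convergence/divergence test; everything else is bookkeeping. A minimal sketch for the Van der Pol dynamics (the step size and the norm thresholds are our choices):

```python
def vdp(x):
    # Van der Pol dynamics of section 3.3.1 (stable origin, unstable cycle).
    return (-x[1], x[0] + (x[0] ** 2 - 1.0) * x[1])

def rk4_step(g, x, h):
    k1 = g(x)
    k2 = g((x[0] + h/2*k1[0], x[1] + h/2*k1[1]))
    k3 = g((x[0] + h/2*k2[0], x[1] + h/2*k2[1]))
    k4 = g((x[0] + h*k3[0], x[1] + h*k3[1]))
    return (x[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def classify(g, x0, h=0.01, steps=4000, r_conv=1e-3, r_div=1e3):
    # Label a trajectory as convergent or divergent by thresholding its norm;
    # trajectories that do neither within the horizon stay "undecided".
    x = x0
    for _ in range(steps):
        x = rk4_step(g, x, h)
        n2 = x[0]**2 + x[1]**2
        if n2 < r_conv**2:
            return "conv", x
        if n2 > r_div**2:
            return "div", x
    return "undecided", x
```

Step (ii) of SimLFG then only has to route "conv" results into C, "div" results into D, and shrink β_SIM accordingly.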
The suitability of a Lyapunov function candidate is assessed by solving two optimization problems. Both problems require bisection, and each bisection step involves a linear SOS problem. Alternative linear formulations appear in section 3.6; these do not require bisection, but generally involve higher degree polynomial expressions.
Optimization Problem 3.2.1. Given V ∈ R[x] (from the SimLFG algorithm) and positive definite l_2 ∈ R[x], define

γ∗_L := max_{γ>0, s_2∈S_2, s_3∈S_3} γ  subject to  s_2, s_3 ∈ Σ[x], γ > 0,
        −[(γ − V)s_2 + ∇V f s_3 + l_2] ∈ Σ[x].    (3.14)

⊳
If Problem 3.2.1 is feasible, then γ∗_L > 0, and we define the following problem.

Optimization Problem 3.2.2. Given V ∈ R[x], p ∈ R[x], and γ∗_L, solve

β∗_L := max_{β>0, s_1∈S_1} β  subject to  s_1 ∈ Σ[x], β > 0,
        −[(β − p)s_1 + (V − γ∗_L)] ∈ Σ[x].    (3.15)

⊳

Although γ∗_L and β∗_L depend on S_1, S_2, and S_3, this is not explicitly notated.
Assuming Problem 3.2.1 is feasible, it is true that

Ω_{p,β∗_L}\{0} ⊆ Ω_{V,γ∗_L}\{0} ⊂ {x ∈ R^n : ∇V(x)f(x) < 0},

so V certifies that Ω_{p,β∗_L} ⊂ R_0. Solutions to Problems 3.2.1 and 3.2.2 provide a feasible point for the problem in (3.9). This feasible point can be further improved by solving the problem in (3.9) using iterative coordinate-wise affine optimization schemes, one of which is given next.
Coordinate-wise optimization (CW Opt) algorithm: Given V ∈ R[x], positive definite l_1, l_2 ∈ R[x], a constant ε_iter, and a maximum number of iterations N_iter, set k = 0.

i. Solve Problems 3.2.1 and 3.2.2.

ii. Given s_1, s_2, s_3, and γ∗_L from step (i), set γ in (3.7)-(3.8) to γ∗_L, solve (3.9) for V and β, and set β∗_L = β∗_B.

iii. If k = N_iter or the increase in β∗_L between successive applications of (ii) is less than ε_iter, return V, γ∗_L, and β∗_L. Otherwise, set k to k + 1 and go to (i).

⊳
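Both Problems 3.2.1 and 3.2.2 reduce to bisection on a scalar with a feasibility test at each step. The skeleton below keeps that monotone feasible/infeasible structure but replaces the SOS test with a sampled containment check for a quadratic V of our choosing (for which the exact answer is γ∗ = λ_min = 1):

```python
import math

def bisect_max(feasible, lo, hi, iters=60):
    # Largest value in [lo, hi] at which `feasible` holds, assuming
    # feasibility is monotone: true on [lo, optimum], false beyond.
    assert feasible(lo) and not feasible(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Toy stand-in for the per-step SOS test: is Omega_{V,gamma} inside the
# unit ball, for V(x) = 2 x1^2 + x2^2?
def inside_unit_ball(gamma, m=720):
    for k in range(m):
        th = 2 * math.pi * k / m
        x1 = math.sqrt(gamma / 2.0) * math.cos(th)   # boundary of Omega_{V,gamma}
        x2 = math.sqrt(gamma) * math.sin(th)
        if x1 * x1 + x2 * x2 > 1.0:
            return False
    return True

gamma_star = bisect_max(inside_unit_ball, 0.0, 4.0)
```

In the actual algorithm, `feasible` is a linear SOS feasibility problem for fixed γ (or β); the bisection wrapper is unchanged.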
3.2.4 Discussion of Algorithms
The algorithms (SimLFG, Problems 3.2.1 and 3.2.2, and CW Opt) yield lower bounds on β∗(C¹), as they produce a Lyapunov function which certifies that a particular value of β satisfies Ω_{p,β} ⊂ R_0. Upper bounds (i.e., values of β that are not certifiable) may also be obtained. More specifically, diverging trajectories found in the course of simulation runs provide upper bounds on β∗(C¹), while inconsistency of the constraints (3.6), (3.11), and (3.12) provides upper bounds on β∗_B. A diverging trajectory with an initial condition x_0 satisfying p(x_0) = β proves that Ω_{p,β} cannot be a subset of the ROA, i.e., β∗(C¹) < β. Furthermore, restricting the Lyapunov function candidates to

V_z := {z(x)^T α : α ∈ R^{n_b}}

has additional implications. Infeasibility of any of the constraints (3.6), (3.11), and (3.12) for some value of β (recall that (3.11) implicitly depends on β) verifies

β∗_B(V_z, S) ≤ β∗(V_z) < β,

regardless of the subspaces constituting S (see section 3.4.1 for an example). Moreover, the gap between the value of β proven unachievable and what we actually certify, namely a lower bound on β∗_B(V_z, S), can be used as a measure of the suboptimality introduced by the finiteness of the degree of the multipliers and the fact that the bilinear search and the coordinate-wise linear search are only local optimization schemes.
Table 3.1. Parameters used in and results of the SimLFG and CW Opt algorithms.

                        ∂(V) = 2   ∂(V) = 4   ∂(V) = 6
β_SIM (initial)             4.0        4.0        4.0
β_shrink                    0.95       0.95       0.95
N_V                         50         50         50
N_conv                      100        200        500
N_div                       5          6          18
β∗_L (before CW Opt)        1.33       1.75       1.05
β∗_L (after CW Opt)         1.57       2.14       2.34
3.3 Examples

In the examples, l_i(x) = 10⁻⁶ x^T x for i = 1, 2, 3.
3.3.1 Van der Pol Dynamics

The Van der Pol dynamics

ẋ_1 = −x_2,
ẋ_2 = x_1 + (x_1^2 − 1)x_2
have a stable equilibrium point at the origin and an unstable limit cycle. The limit cycle is the boundary of the ROA. We applied the SimLFG and CW Opt algorithms with p(x) = x^T x. Parameters used in and results of these runs are shown in Table 3.1. Figure 3.2 shows the values of β∗_L before and after applying the CW Opt algorithm. Figure 3.3 shows the computed invariant subsets of the ROA.
We also performed further optimization by initializing PENBMI with V's generated by the
Figure 3.2. Histograms of β∗_L before CW Opt (black bars) and β∗_L after CW Opt (white bars) for ∂(V) = 2 (top), 4 (middle), and 6 (bottom).
Figure 3.3. The invariant subsets of the ROA (dot: ∂(V) = 2, dash: ∂(V) = 4, and solid: ∂(V) = 6 (indistinguishable from the outermost curve for the limit cycle)).
SimLFG algorithm. Practically every seeded PENBMI run terminated with the same β∗_B value, which is the largest known (at least to us) value of β for which (3.9) is feasible with the prescribed families of Lyapunov functions and multipliers. In addition, we performed 10 unseeded PENBMI runs for ∂(V) = 4 and 6. Of these runs, 90% and 50%, respectively, terminated successfully (with an optimal value of β equal to that from the seeded PENBMI runs). Moreover, unseeded PENBMI runs took longer than seeded PENBMI runs. For comparison, seeded PENBMI runs took 3-8 and 11-24 seconds for ∂(V) = 4 and 6, respectively, on a desktop PC, whereas unseeded PENBMI runs took 50-250 and 1000-2500 seconds, respectively.
Remarks 3.3.1. It is worth mentioning that the invariant subsets of the ROA certified by ∂(V) = 4 and 6 Lyapunov functions are larger than those in [58, 59]. A reason for this improvement is that we use higher degree multipliers in (3.7)-(3.8). This demonstrates the benefit of using higher degree multipliers when computationally tolerable.
3.3.2 Examples from the Literature
We present results obtained using the method from the previous section for the systems in (3.16)-(3.22). (E_1)-(E_3) are from [20], (E_4) and (E_7) are from [75], and (E_5) and (E_6) are from [33] and [32], respectively. Since the dynamics in (E_1)-(E_7) have no physical meaning and there is no p given, we applied the SimLFG algorithm sequentially: apply the SimLFG algorithm with p(x) = x^T x and N_V = 1 for ∂(V) = 2; call the quadratic Lyapunov function obtained V̂; set p to V̂ and apply the SimLFG algorithm with this p and N_V = 1 for ∂(V) = 4. For (E_5)-(E_7), we further applied the CW Opt algorithm with N_iter = 10. Table 3.2 shows the ratio of the volume of the invariant subset of the ROA obtained using this procedure to that reported in the corresponding references. Empirical volumes of sublevel sets of V are computed by randomly sampling a hypercube containing the sublevel set. Values in Table 3.2 are volumes normalized by π and 4π/3 for 2- and 3-dimensional problems, respectively. For (E_4), (E_6), and (E_7), we also empirically verified that the invariant subsets of the ROA reported in the corresponding references are contained in those computed by this sequential procedure. Figures 3.4 and 3.5 show the invariant subsets of the ROA for (E_6) and (E_7) computed here and reported in [32] and [75], respectively.
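The empirical volume computation mentioned above is plain Monte Carlo: sample a hypercube containing the sublevel set, count hits, and scale by the cube volume. For an ellipse the exact area is available, which makes a convenient check (the example V, the box, and the sample count are ours):

```python
import math
import random

def mc_volume(V, gamma, box, n, rng):
    # Fraction of uniform samples from `box` landing in {x : V(x) <= gamma},
    # scaled by the box volume.
    hits = 0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in box]
        if V(x) <= gamma:
            hits += 1
    vol = 1.0
    for lo, hi in box:
        vol *= hi - lo
    return vol * hits / n

V = lambda x: 2.0 * x[0]**2 + x[1]**2
est = mc_volume(V, 1.0, [(-1.0, 1.0), (-1.0, 1.0)], 200000, random.Random(1))
# Exact area of {2 x1^2 + x2^2 <= 1} is pi / sqrt(2); normalizing by pi
# as in Table 3.2 should give about 1/sqrt(2) = 0.7071.
normalized = est / math.pi
```

The same routine works in any dimension, with the normalizations π and 4π/3 used for the 2- and 3-dimensional entries of Table 3.2.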
(E_1):  ẋ_1 = x_2,   ẋ_2 = −2x_1 − 3x_2 + x_1^2 x_2.    (3.16)
Table 3.2. Volume ratios for (E_1)-(E_7).

example   volume ratio     example   volume ratio
(E_1)     16.7/10.2        (E_2)     0.99/0.85
(E_3)     37.2/23.5        (E_4)     1.00/0.28
(E_5)     62.3/7.3         (E_6)     35.0/15.3
(E_7)     1.44/0.70
(E_2):  ẋ_1 = x_2,   ẋ_2 = −2x_1 − x_2 + x_1 x_2^2 − x_1^5 + x_1 x_2^4 + x_2^5.    (3.17)

(E_3):  ẋ_1 = x_2,   ẋ_2 = x_3,   ẋ_3 = −4x_1 − 3x_2 − 3x_3 + x_1^2 x_2 + x_1^2 x_3.    (3.18)

(E_4):  ẋ_1 = −x_2,   ẋ_2 = −x_3,   ẋ_3 = −0.915x_1 + (1 − 0.915x_1^2)x_2 − x_3.    (3.19)

(E_5):  ẋ_1 = x_2 + 2x_2 x_3,   ẋ_2 = x_3,   ẋ_3 = −0.5x_1 − 2x_2 − x_3.    (3.20)

(E_6):  ẋ_1 = −x_1 + x_2 x_3^2,   ẋ_2 = −x_2 + x_1 x_2,   ẋ_3 = −x_3.    (3.21)

(E_7):  ẋ_1 = −0.42x_1 − 1.05x_2 − 2.3x_1^2 − 0.5x_1 x_2 − x_1^3,   ẋ_2 = 1.98x_1 + x_1 x_2.    (3.22)
Figure 3.4. Invariant subset of the ROA for (E_6) reported in [32] (solid surface) and that computed using the sequential procedure from section 3.3.2 (dotted surface).
Figure 3.5. Invariant subset of the ROA for (E_7) reported in [75] (thick solid curve), that computed using the sequential procedure from section 3.3.2 (Ω_{V,γ∗}) (thin solid curve), and system trajectories (dash-dot curves).
3.3.3 Controlled Short Period Aircraft Dynamics
Consider a cubic polynomial approximation of the short period pitch-axis model of an aircraft¹

ẋ_p = [c_1(x_p); q_2(x_p); x_1] + [l_b^T x_p; b_2; 0] u,

where x_p = [x_1 x_2 x_3]^T; x_1, x_2, and x_3 denote the pitch rate, the angle of attack, and the pitch angle, respectively; c_1 is a cubic polynomial; q_2 is a quadratic polynomial; l_12 and l_b are vectors in R^3; and b_2 ∈ R. The control input, the elevator deflection, is determined by ẋ_4 = −0.864y_1 − 0.321y_2 and u = 2x_4, where x_4 is the controller state and the plant output is y = [x_1 x_3]^T. Define x := [x_p^T x_4]^T. We applied the SimLFG and CW Opt algorithms with p(x) = x^T x, N_V = 1, β_SIM = 15.0, and β_shrink = 0.85. Results of these runs are shown in Table 3.3. In summary, the following bound for β∗(C¹) has been established:

16.31 ≤ β∗(C¹) < 17.32.
This example will be used repeatedly in the subsequent chapters to demonstrate the proposed methods. Therefore, we also provide typical computation times for this example in Table 3.3. Additionally, we applied the SimLFG and CW Opt algorithms with N_V = 50. Figure 3.6 shows the values of β∗_L before and after applying the CW Opt algorithm.
3.3.4 Pendubot Dynamics
The pendubot is an underactuated two-link pendulum with torque action only on the<br />
first link. A third-order polynomial approximation of the closed-loop dynamics (with a linear
¹The vector field is obtained by setting δ_1 = 1.52 and δ_2 = 0 in the model with uncertainty in section 4.5.2.
Table 3.3. Results of the SimLFG and CW Opt algorithms. Upper bounds are established by a separate run of the SimLFG algorithm with N_conv = 3000. The upper bound for ∂(V) = 4 is by a divergent trajectory, whereas the upper bound for ∂(V) = 2 is by the infeasibility of (3.6), (3.11), and (3.12) for the given β value. Representative computation times are on a 2.0 GHz desktop PC.

                          ∂(V) = 2     ∂(V) = 4
N_conv                    300          500
β∗_L (before CW Opt)      8.26         9.82
β∗_L (after CW Opt)       9.34         16.31
upper bound               13.79        17.32
typical computation time  40 seconds   6 minutes
Figure 3.6. Histograms of β∗_L before CW Opt (black bars) and β∗_L after CW Opt (white bars) for ∂(V) = 2 (top) and 4 (bottom).
Figure 3.7. A slice of the invariant subset of the ROA (solid curve) and initial conditions (with x_2 = 0 and x_4 = 0) for diverging trajectories (dots).
quadratic regulator to balance the pendubot about its upright position) is

ẋ_1 = x_2,
ẋ_2 = 782x_1 + 135x_2 + 689x_3 + 90x_4,
ẋ_3 = x_4,
ẋ_4 = 279x_1 x_3^2 − 1425x_1 − 257x_2 + 273x_3^3 − 1249x_3 − 171x_4.
Here, x_1 and x_3 are the angular positions of the first link and the second link (relative to the first link), respectively. We applied the SimLFG algorithm sequentially as described in section 3.3.2 and the CW Opt algorithm with 10 iterations, and obtained β∗_L = 1.69. Conversely, we found a diverging trajectory with an initial condition x̄ with p(x̄) = 1.95, proving that

1.69 ≤ β∗(C¹) < 1.95.

Figure 3.7 shows the x_2 = 0, x_4 = 0 slice of the invariant subset of the ROA along with initial conditions (with x_2 = 0 and x_4 = 0) for some diverging trajectories.
3.3.5 Closed-loop Dynamics with Nonlinear Observer Based Controller
For the dynamics

ẋ_1 = u,
ẋ_2 = −x_1 + x_1^3/6 − u,
y = x_2,
where x_1 and x_2 are the states, u is the control input, and y is the output, an observer L with polynomial vector field ż = L(y, z) with ∂(L) = 3 and a control law of the form u = −145.9z_1 + 12.3z_2, where z_1 and z_2 are the observer states, were computed in [57]. The application of the SimLFG algorithm with ∂(V) = 2 and p from [57] and the CW Opt algorithm with N_iter = 4 led to β∗_L = 0.32. We also applied the CW Opt algorithm (initialized with the quadratic V found in the first application) with ∂(V) = 4 and N_iter = 6 and obtained β∗_L = 0.52. Conversely, we found a diverging trajectory with an initial condition (x̄, z̄) satisfying p(x̄, z̄) = 0.54, proving that 0.52 ≤ β∗(C¹) < 0.54.
3.4 Critique

3.4.1 Sampling vs. Simulating
A common question we get is "why simulate to get the sample points; why not just sample some region and impose ∇V(x)f(x) < 0 there?" There are a few answers to this. Intuitively, even running a few simulations gives insight into the system behavior. Engineers commonly use simulation to assess rough measures of stability robustness and the ROA. Moreover, as converse Lyapunov theorems [76] implicitly define a certifying Lyapunov function in terms of the flow, it makes sense to sample the flow when looking for a Lyapunov function of a specific form. Furthermore, we have the following observation demonstrating that merely sampling some region and imposing ∇V(x)f(x) < 0 there can be misleading.
Consider the Van der Pol dynamics with p(x) = x^T x and let S_β denote a finite sample of Ω_{p,β}. It can be shown that the set of quadratic positive definite functions V that satisfy

S_{1.8}\{0} ⊂ {x ∈ R^n : ∇V(x)f(x) < 0}    (3.23)
is nonempty. In fact, for V(x) = 0.32x_1^2 − 0.25x_1x_2 + 0.31x_2^2, (3.23) is satisfied (actually, for all x ∈ Ω_{p,1.8}, ∇V(x)f(x) ≤ −l_3(x)). This naively suggests drawing samples from the set of quadratic positive definite functions satisfying (3.23) in order to try to prove that
Ω_{p,1.8} ⊂ R_0. However, simulations reveal a contradicting fact: using trajectories with initial conditions in S_{1.8} for ∂(V) = 2, i.e., with z(x) = [x_1^2, x_1x_2, x_2^2]^T, the constraints (3.6), (3.11) (with γ = 1), and (3.12) turn out to be infeasible. This verifies that no quadratic Lyapunov function can prove Ω_{p,1.8} ⊂ R_0 using conditions (3.6)-(3.8), with the additional constraint that V̇(x) ≤ −10⁻⁶x^Tx on all trajectories starting in Ω_{p,1.8}. Recall, though, that using quartic Lyapunov functions we know β∗(V_z, S) ≥ 2.14. By these observations, we have the
following series of inclusions for the subsets of the positive definite quadratic polynomials:

{V : V certifies Ω_{p,β} ⊂ R_0 using (3.6)-(3.8)}
  ⊂ {V : ∇V(c_s(τ))f(c_s(τ)) < 0 ∀τ, ∀s ∈ S_β}
  ⊂ {V : ∇V(s)f(s) < 0 ∀s ∈ S_β},
where c_s denotes the trajectory with the initial condition s ∈ S_β. Therefore, merely sampling instead of using simulations leads to a larger outer set from which the samples for V are taken in step (v) of the SimLFG algorithm, and it is less likely to find a function that certifies that Ω_{p,β} ⊂ R_0.
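The parenthetical claim about this V is easy to reproduce: on a dense grid over Ω_{p,1.8}, the pointwise decrease condition ∇V(x)f(x) ≤ −l_3(x) holds at every sample, even though, as just argued, no quadratic V passes the trajectory-based constraints. A check (the grid resolution is our choice):

```python
def dV(x):
    # Gradient of V = 0.32 x1^2 - 0.25 x1 x2 + 0.31 x2^2 dotted with the
    # Van der Pol vector field.
    g = (0.64 * x[0] - 0.25 * x[1], -0.25 * x[0] + 0.62 * x[1])
    f = (-x[1], x[0] + (x[0] ** 2 - 1.0) * x[1])
    return g[0] * f[0] + g[1] * f[1]

ok = True
n, r2 = 60, 1.8
for i in range(-n, n + 1):
    for j in range(-n, n + 1):
        x = (1.342 * i / n, 1.342 * j / n)     # grid on the bounding square
        if x[0] ** 2 + x[1] ** 2 <= r2:        # keep points inside Omega_{p,1.8}
            ok = ok and dV(x) <= -1e-6 * (x[0] ** 2 + x[1] ** 2)
```

The grid test passing while the SOS/trajectory conditions fail is exactly the gap between the second and third sets in the chain of inclusions above.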
46
3.4.2 Extensions and Limitations
The number of decision variables N_decision and the size N_SDP of the matrix in the SDP for checking the existence of an SOS decomposition for a degree-2d polynomial in n variables grow polynomially with n if d is fixed, and vice versa [49]. However, N_SDP and N_decision become practically intractable for state-of-the-art SDP solvers even for a moderate number of variables with fixed d. Moreover, using higher degree Lyapunov functions and/or higher degree multipliers, as well as higher degree vector fields, makes the problems larger; in fact, the growth of the problem size with a simultaneous increase in n and d is exponential. Based on these limitations, we believe that systems with cubic vector fields are likely to be within the scope of analysis tools based on SOS programming up to 6-7 states with quartic Lyapunov functions and 15-16 states with quadratic Lyapunov functions. It is worth re-emphasizing that bilinearity introduces extra complexity, as seen in sections 3.3.1 and 3.3.3.
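The growth rates quoted here come from counting monomials: the Gram matrix of an SOS constraint of degree 2d in n variables has side binom(n+d, d), and such a polynomial has binom(n+2d, 2d) coefficients. A quick tabulation for the state counts mentioned above:

```python
from math import comb

def gram_side(n, two_d):
    # A degree-2d SOS polynomial in n variables is z(x)^T W z(x) with z
    # collecting all monomials of degree <= d, so W is N x N with:
    return comb(n + two_d // 2, two_d // 2)

def num_coeffs(n, two_d):
    # Number of coefficients of a polynomial of degree <= 2d in n variables.
    return comb(n + two_d, two_d)

for n, two_d in [(7, 4), (16, 2)]:
    print(n, two_d, gram_side(n, two_d), num_coeffs(n, two_d))
```

For example, a quartic SOS constraint in 7 variables already has a 36×36 Gram matrix and 330 coefficient equations, which is roughly where current solvers remain comfortable.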
H&R, SimLF G and CW Opt algorithms become more efficient using parallel computing.<br />
Steps <strong>of</strong> the proposed analysis method can be summarized as follows: Given a set G<br />
<strong>of</strong> initial conditions<br />
i. Generate a random sample <strong>of</strong> initial conditions in G;<br />
ii. Simulate the system starting from these initial conditions;<br />
iii. Solve the convex feasibility problem with constraints (3.6), (3.11), and (3.12) (i.e.,<br />
find a feasible point in Y);<br />
iv. Sample Y for different Lyapunov function candidates;<br />
v. Assess these Lyapunov function candidates;<br />
vi. Further optimize by initializing CW Opt from qualified Lyapunov function candidates drawn from Y.
Note that the H&R algorithm, and consequently steps (i) and (iv), can be trivially parallelized.
Similarly, simulations from different initial conditions, evaluating appropriate functions on<br />
these simulation trajectories, and assessing different Lyapunov function candidates are steps<br />
suitable for parallel computing. Moreover, step (iii) is only a feasibility problem with a large number of LP constraints and a few small LMIs (as opposed to an optimization problem) and may be solved utilizing a parallel implementation of the alternating projection method [8].
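Step (iii) asks only for a feasible point, which is why alternating projections are a natural fit. The sketch below illustrates the method on two simple convex sets, a halfspace standing in for one of the LP constraints and a Euclidean ball standing in for the small LMIs; the sets and starting point are illustrative assumptions.

```python
import numpy as np

def project_halfspace(x, a, b):
    # Projection onto {x : a.x <= b} (one linear inequality, a stand-in
    # for a trajectory-based LP constraint).
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def project_ball(x, c, r):
    # Projection onto a Euclidean ball centered at c with radius r
    # (a simple stand-in for projecting onto the small LMI constraints).
    d = np.linalg.norm(x - c)
    return x if d <= r else c + r * (x - c) / d

def alternating_projections(x0, a, b, c, r, iters=500):
    # Von Neumann-style alternating projections: for closed convex sets
    # with nonempty intersection, the iterates approach a feasible point.
    x = x0.astype(float)
    for _ in range(iters):
        x = project_ball(project_halfspace(x, a, b), c, r)
    return x
```

For example, starting from [3, 2] with the halfspace x1 + x2 <= 0 and the unit ball, the iterates settle on a point satisfying both constraints.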
Based on the state of the art of SDP solvers [9], the bottleneck in simulation-aided ROA analysis is the size of a single SDP that needs to be solved on a single processor.
An interesting complement to the simulation-aided methodology may be exploiting parallel solvers for SDPs. Although not yet mature, parallel algorithms for SDPs have been reported (see, for example, [36, 10] and references therein).
3.5 Sanity Check: Does Linear Stability Imply Existence of SOS Certificates?
Recall that any feasible solution for the optimization problem (3.9) with constraints

V ∈ R[x], V(0) = 0, s_1 ∈ Σ[x], s_2 ∈ Σ[x], s_3 ∈ Σ[x], γ > 0, β > 0,   (3.24a)
V − l_1 ∈ Σ[x],   (3.24b)
−[(β − p)s_1 + (V − γ)] ∈ Σ[x],   (3.24c)
−[(γ − V)s_2 + ∇V f s_3 + l_2] ∈ Σ[x],   (3.24d)
characterizes an invariant subset of the ROA. When l_1 and l_2 have positive definite quadratic parts, a necessary condition for these constraints is that the linearized dynamics be asymptotically stable (which, in fact, is equivalent to local asymptotic stability of the nonlinear dynamics). In this section, we focus on systems governed by ordinary differential equations of the form

ẋ = f(x) = Ax + f_2(x) + f_3(x),   (3.25)

where f_2 and f_3 are vectors of quadratic and cubic polynomials, respectively, and A ∈ R^{n×n}, and prove that asymptotic stability of the linearized dynamics is also sufficient for the feasibility of the constraints (3.24) (for sufficiently small γ > 0). This result supports the claim that the local nonlinear analysis proposed in this chapter extends the classical linearization-based analysis for nonlinear systems, which only assures the existence of an infinitesimally small ROA but does not quantify its size [76]. To this end, we need the following preliminary results.
Let z(x) be a vector of degree-2 monomials in x with no repetition and let n_z denote the length of z(x). Let L_i ∈ R^{n×n_z} be such that L_i z(x) = x_i x.

Fact 3.5.1. Let Q ∈ R^{n×n} be symmetric, f(x) = x^T Q x, and g(x) = c_1 x_1^2 + c_2 x_2^2 + ··· + c_n x_n^2. Then f(x)g(x) can be written in the form

f(x)g(x) = z(x)^T H z(x) = z(x)^T [ c_1 L_1^T Q L_1 + ··· + c_n L_n^T Q L_n ] z(x). ⊳

Proof. f(x)g(x) = x^T Q x (c_1 x_1^2 + ··· + c_n x_n^2) = Σ_{i=1}^n c_i x_i^2 (x^T Q x). Then x_i^2 (x^T Q x) = (x_i x)^T Q (x_i x) = z(x)^T L_i^T Q L_i z(x), and the result follows. ∎
Lemma 3.5.1. Let Q ∈ R^{n×n} be positive definite and define f(x) = x^T Q x. Let c_1, ..., c_n > 0 and g(x) = c_1 x_1^2 + c_2 x_2^2 + ··· + c_n x_n^2. Then f(x)g(x) can be decomposed as

f(x)g(x) = z(x)^T H z(x),

where H can be chosen positive definite. ⊳

Proof. By Fact 3.5.1, there is a decomposition of f(x)g(x) in the form

f(x)g(x) = z(x)^T H z(x) = z(x)^T [ c_1 L_1^T Q L_1 + ··· + c_n L_n^T Q L_n ] z(x).

Let v ∈ R^{n_z} be nonzero. Then

v^T H v = v^T [ c_1 L_1^T Q L_1 + ··· + c_n L_n^T Q L_n ] v = v^T L^T diag(c_1 Q, ..., c_n Q) L v,

where L := [ L_1^T ··· L_n^T ]^T. Note that L has full column rank since every entry of z(x) can be written as x_j x_k for some 1 ≤ j, k ≤ n; hence Lv ≠ 0, v^T H v > 0, and consequently H ≻ 0. ∎
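Fact 3.5.1 and Lemma 3.5.1 can be checked numerically. The sketch below builds the L_i matrices for n = 2, where z(x) = [x_1^2, x_1 x_2, x_2^2], verifies the polynomial identity at random points, and confirms that H is positive definite; the specific Q and c are arbitrary illustrative choices.

```python
import numpy as np

n = 2
# z(x) = [x1^2, x1*x2, x2^2]: degree-2 monomials without repetition.
# L_i maps z(x) to x_i * x, e.g. L1 z(x) = [x1^2, x1 x2]^T = x1 * x.
L = [np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]]),
     np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])]

def H_from(Q, c):
    # H = sum_i c_i L_i^T Q L_i, as in Fact 3.5.1.
    return sum(ci * Li.T @ Q @ Li for ci, Li in zip(c, L))

rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)        # positive definite
c = [1.5, 0.7]                     # positive weights

H = H_from(Q, c)

# Check the polynomial identity f(x)g(x) = z(x)^T H z(x) at random points.
for _ in range(100):
    x = rng.standard_normal(n)
    z = np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])
    fg = (x @ Q @ x) * (c[0] * x[0] ** 2 + c[1] * x[1] ** 2)
    assert abs(fg - z @ H @ z) < 1e-9 * max(1.0, abs(fg))

# Lemma 3.5.1: H is positive definite because the stacked [L1; L2] has
# full column rank and diag(c1 Q, c2 Q) is positive definite.
assert np.linalg.eigvalsh(H).min() > 0
print("identity holds and H > 0")
```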
Proposition 3.5.1. Let f be an n-vector of cubic polynomials in x satisfying f(0) = 0, and let P ≻ 0, R_1 ≻ 0, R_2 ≻ 0, p(x) := x^T P x, l_1(x) := x^T R_1 x, and l_2(x) := x^T R_2 x. If there exists Q ≻ 0 such that A^T Q + QA ≺ 0, then the constraints in (3.24) are feasible. ⊳

Proof. The proof is constructive. Let z(x) be as defined above, and let Q̃ ≻ 0 satisfy A^T Q̃ + Q̃A ≼ −2R_2 and Q̃ ≽ R_1 (such a Q̃ can be obtained by properly scaling Q). Let ε := λ_min(R_2), V(x) := x^T Q̃ x, and let H ≻ 0 be such that (x^T x)V(x) = z(x)^T H z(x) (which exists by Lemma 3.5.1). Let M_2 ∈ R^{n×n_z} and symmetric M_3 ∈ R^{n_z×n_z} satisfy

∇V f_2(x) = x^T M_2 z(x),
∇V f_3(x) = z(x)^T M_3 z(x).
Define

s_1(x) := λ_max(Q̃)/λ_min(P),
c_2 := λ_max(M_3 + (1/(2ε)) M_2^T M_2)/λ_min(H),
s_2(x) := c_2 x^T x,
γ := ε/(2c_2),
β := γ/(2s_1),
s_3(x) := 1.

Clearly, s_1 ∈ Σ[x], s_2 ∈ Σ[x], and s_3 ∈ Σ[x]. Note that V(x) − l_1(x) = x^T(Q̃ − R_1)x ∈ Σ[x], since Q̃ − R_1 ≽ 0.
Furthermore,

b_1(x) := −[(γ − V)s_2 + ∇V f s_3 + l_2] = [x; z(x)]^T B_1 [x; z(x)],

where

B_1 := [ −γc_2 I − R_2 − (A^T Q̃ + Q̃A)   −M_2/2
          −M_2^T/2                         c_2 H − M_3 ],

and

B_1 ≽ [ (ε/2) I     −M_2/2
        −M_2^T/2    c_2 H − M_3 ] ≽ 0

by the Schur complement formula. Consequently, b_1(x) ∈ Σ[x]. Finally,
−[(β − p)s_1 + (V − γ)] = [1; x]^T B_2 [1; x],

where

B_2 := [ −βs_1 + γ   0
          0           s_1 P − Q̃ ],

B_2 ≽ 0 (since −βs_1 + γ = γ/2 > 0 and s_1 P − Q̃ ≽ λ_max(Q̃)I − Q̃ ≽ 0), and consequently −[(β − p)s_1 + (V − γ)] ∈ Σ[x]. ∎
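The construction in the proof can be sanity-checked numerically. The sketch below instantiates it for the hypothetical scalar system ẋ = −x + 2x² + x³ (so A = −1, z(x) = x², H = 1) with l_1 = l_2 = p = x², and verifies that the resulting Gram matrices B_1 and B_2 are positive semidefinite; the system and all constants are illustrative choices, not taken from the text.

```python
import numpy as np

# Hypothetical scalar system xdot = -x + 2x^2 + x^3 (A = -1, f2 = 2x^2,
# f3 = x^3); all quantities below follow the constructive proof.
A, b, c = -1.0, 2.0, 1.0
R1 = R2 = 1.0                      # l1(x) = l2(x) = x^2
P = 1.0                            # p(x) = x^2
Qt = 1.0                           # A*Qt + Qt*A = -2 <= -2*R2 and Qt >= R1
eps = R2                           # lambda_min(R2)
# With z(x) = x^2: (x^T x) V(x) = x^4 = z H z, so H = 1.
H = 1.0
M2 = 2.0 * b                       # grad(V) f2 = 2b x^3 = x * M2 * z
M3 = 2.0 * c                       # grad(V) f3 = 2c x^4 = z * M3 * z
c2 = (M3 + M2 * M2 / (2.0 * eps)) / H
gamma = eps / (2.0 * c2)
s1 = Qt / P                        # lambda_max(Qt) / lambda_min(P)
beta = gamma / (2.0 * s1)

B1 = np.array([[-gamma * c2 - R2 - 2.0 * A * Qt, -M2 / 2.0],
               [-M2 / 2.0, c2 * H - M3]])
B2 = np.array([[-beta * s1 + gamma, 0.0],
               [0.0, s1 * P - Qt]])
assert np.linalg.eigvalsh(B1).min() >= -1e-9   # B1 >= 0
assert np.linalg.eigvalsh(B2).min() >= -1e-9   # B2 >= 0

# Spot-check b1(x) = -[(gamma - V) s2 + grad(V) f + l2] = [x, x^2] B1 [x, x^2]^T.
for x in np.linspace(-0.5, 0.5, 11):
    V, s2 = x * x, c2 * x * x
    fx = A * x + b * x ** 2 + c * x ** 3
    b1 = -((gamma - V) * s2 + 2.0 * x * fx + x * x)
    w = np.array([x, x * x])
    assert abs(b1 - w @ B1 @ w) < 1e-9
print("construction verified: gamma =", gamma, "beta =", beta)
```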
3.6 Appendix<br />
Problems 3.2.1 and 3.2.2 compute lower bounds on the largest value <strong>of</strong> γ and β such<br />
that, for given V and p, Ω V,γ \{0} ⊂ {x ∈ R n : ∇V (x)f(x) < 0} and Ω p,β ⊂ Ω V,γ . We<br />
propose alternative formulations, that do not require line search, to compute similar lower<br />
bounds. Labeled γa ∗ and βa, ∗ these are generally different than γL ∗ and β∗ L<br />
. For h, g ∈ R[x]<br />
and a positive integer d, define<br />
Optimization Problem 3.6.1.

μ_o(h, g) := inf_{x≠0} h(x) subject to g(x) = 0.

Optimization Problem 3.6.2.

μ*(h, g, d) := sup_{μ>0, r∈R[x]} μ subject to (h − μ)(x_1^{2d} + ··· + x_n^{2d}) − g r ∈ Σ[x].
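To make μ_o concrete, here is a small numerical illustration with hypothetical choices of h and g; for h(x) = x_1² + x_2² and g(x) = x_1 + x_2 − 1, the zero set of g is a line and μ_o is the squared distance from the origin to that line, 1/2.

```python
import numpy as np

# mu_o(h, g) = inf of h over the nonzero part of the zero set of g.
# Illustrative choice: h(x) = x1^2 + x2^2, g(x) = x1 + x2 - 1; the zero
# set {g = 0} is the line x1 + x2 = 1, and the infimum 1/2 is attained
# at (1/2, 1/2).
def h(x):
    return x[0] ** 2 + x[1] ** 2

# Parametrize the zero set of g as x = (t, 1 - t) and minimize over t.
ts = np.linspace(-10.0, 10.0, 20001)
mu_o = min(h((t, 1.0 - t)) for t in ts)
assert abs(mu_o - 0.5) < 1e-6
```

Problem 3.6.2 replaces this nonconvex minimization with an SOS certificate that, as the next lemma shows, can only lower-bound μ_o.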
Lemma 3.6.1. μ*(h, g, d) ≤ μ_o(h, g). ⊳

Proof. If there is no μ for which there exists r ∈ R[x] such that (h − μ)(x_1^{2d} + ··· + x_n^{2d}) − g r ∈ Σ[x], then μ*(h, g, d) = −∞ and the inequality holds trivially. Therefore, let μ be such that there exists r ∈ R[x] with (h − μ)(x_1^{2d} + ··· + x_n^{2d}) − g r ∈ Σ[x]. Then, for every nonzero x with g(x) = 0, (h(x) − μ)(x_1^{2d} + ··· + x_n^{2d}) ≥ 0, and hence μ ≤ h(x). Consequently, μ ≤ μ_o(h, g) for every such μ, and therefore μ*(h, g, d) ≤ μ_o(h, g). ∎
Lemma 3.6.2. Let g, h : R^n → R be continuous, h be positive definite, g(0) = 0, and g(x) < 0 for all nonzero x ∈ O, a neighborhood of the origin. Define γ_o := μ_o(h, g). Then, the connected component of {x ∈ R^n : h(x) < γ_o} containing the origin is a subset of {x ∈ R^n : g(x) < 0} ∪ {0}. ⊳
Proof. Suppose not, and let x ≠ 0 be in the connected component of {x ∈ R^n : h(x) < γ_o} containing the origin with g(x) ≥ 0. Then, there exists a continuous function ϑ : [0, 1] → R^n such that ϑ(0) = 0, ϑ(1) = x, and h(ϑ(t)) < γ_o for all t ∈ [0, 1]. Since g(0) = 0 and g(x) < 0 for all nonzero x ∈ O, there exists 0 < ε < 1 such that g(ϑ(ε)) < 0. Since g(ϑ(1)) = g(x) ≥ 0 and g and ϑ are continuous, there exists t* ∈ (ε, 1] such that g(ϑ(t*)) = 0, which implies h(ϑ(t*)) ≥ γ_o. This contradiction shows that x ∈ {x ∈ R^n : g(x) < 0}. ∎
Corollary 3.6.1. Let V ∈ R[x] be a positive definite C^1 function satisfying (3.12) and V(0) = 0. Then, for all γ such that 0 < γ < μ_o(V, ∇V f), the connected component of Ω_{V,γ} containing the origin is an invariant subset of the ROA. ⊳

Proof. Since the quadratic part of V is a Lyapunov function for the linearized system, there exists a neighborhood O of the origin such that ∇V(x)f(x) < 0 for all nonzero x ∈ O. By the preceding lemma, the connected component of Ω_{V,γ} containing the origin, which is a subset of the connected component of {x ∈ R^n : V(x) < μ_o(V, ∇V f)} containing the origin, is contained in {x ∈ R^n : ∇V(x)f(x) < 0} ∪ {0}. The corollary then follows from standard Lyapunov arguments [76].
<br />
Corollary 3.6.2. For a positive integer d_1, define γ*_a := μ*(V, ∇V f, d_1). Then, for any γ < γ*_a, the connected component of Ω_{V,γ} containing the origin is an invariant subset of the ROA. ⊳

Proof. By Lemma 3.6.1, γ < γ*_a ≤ μ_o(V, ∇V f). The result then follows from Corollary 3.6.1. ∎
Corollary 3.6.3. Let 0 < γ < γ*_a, let d_2 be a positive integer, and let V, p ∈ R[x] be positive definite with p convex. Define β*_a := μ*(p, V − γ, d_2). Then, for any β < β*_a, Ω_{p,β} ⊂ Ω_{V,γ} and Ω_{p,β} ⊂ R_0. ⊳
3.7 Chapter Summary<br />
We proposed a method for computing invariant subsets of the region-of-attraction for asymptotically stable equilibrium points of dynamical systems with polynomial vector fields. We used polynomial Lyapunov functions as local stability certificates whose certain sublevel sets are invariant subsets of the region-of-attraction. Like many local analysis problems, this is a nonconvex problem, and its sum-of-squares relaxation leads to a bilinear optimization problem. We developed a method utilizing information from simulations for easily generating Lyapunov function candidates. For a given Lyapunov function candidate, checking its feasibility and assessing the size of the associated invariant subset are affine sum-of-squares optimization problems. Solutions to these problems provide invariant subsets of the region-of-attraction directly, and they can further be used as seeds for local bilinear search schemes or iterative coordinate-wise affine search schemes to improve the performance of those schemes. We reported promising results in all these directions.
Chapter 4<br />
Local Stability Analysis for Uncertain Nonlinear Systems
We consider the problem of computing invariant subsets of the region-of-attraction (ROA) for uncertain systems with polynomial nominal vector fields and local polynomial uncertainty descriptions. The literature on ROA analysis for systems with uncertain dynamics includes a generalization of Zubov's method [17] and an iterative algorithm that asymptotically gives the robust ROA for systems with time-varying perturbations [45]. Systems with parametric uncertainties are considered in [18, 57, 74]. The focus in [18] is on computing the largest sublevel set of a given Lyapunov function that can be certified to be an invariant subset of the ROA. References [57, 74] propose parameter-dependent Lyapunov functions, which lead to potentially less conservative results (compared to parameter-independent Lyapunov functions) at the expense of increased computational complexity. Similar to other problems in local analysis of dynamical systems based on Lyapunov arguments and S-procedure and
SOS relaxations [63, 58, 59, 57, 73, 60], our formulation leads to optimization problems<br />
with bilinear matrix inequality (BMI) constraints.<br />
The uncertainty description detailed in section 4.1 contains two types: uncertain components in the vector field that obey local polynomial bounds, and/or uncertain parameters appearing affinely and multiplying polynomial terms. Using this description, we develop a bilinear SDP to compute robustly invariant subsets of the ROA. The number of BMIs (and consequently the number of variables) in this problem increases exponentially with the sum of the number of components of the vector field containing uncertainty with polynomial bounds and the number of uncertain parameters. One (suboptimal) way to deal with this difficulty is first to compute a Lyapunov function for a particular system (imposing extra robustness constraints) and then determine the largest sublevel set in which the computed Lyapunov function serves as a local stability certificate for the whole family of systems. Once a Lyapunov function is determined in the first step, the second step involves solving smaller decoupled linear SDPs. Therefore, this sequential procedure is well suited for parallel computation, leading to a relatively efficient numerical implementation. Moreover, the method proposed in chapter 3 (see also [73, 70]), which uses simulation to aid in the nonconvex search for Lyapunov functions, extends easily to robust ROA analysis using simulation data for finitely many systems from the family of possible systems (e.g., systems corresponding to the vertices of the uncertainty polytope). For the examples in this chapter, we implement this generalization of the simulation-aided ROA analysis method along with the sequential suboptimal solution technique.
4.1 Setup and Motivation<br />
We now introduce the uncertainty description used in the rest <strong>of</strong> the chapter and explain<br />
its usefulness in ROA analysis based on computing Lyapunov functions using SOS<br />
programming. Consider the system governed by<br />
ẋ(t) = f(x(t)) = f 0 (x(t)) + ϕ(x(t)) + ψ(x(t)), (4.1)<br />
where f 0 , ϕ, ψ : R n → R n are locally Lipschitz. Assume that f 0 is known, ϕ ∈ ∆ ϕ , and<br />
ψ ∈ ∆ ψ , where<br />
∆ ϕ := {ϕ : ϕ l (x) ≼ ϕ(x) ≼ ϕ u (x) ∀x ∈ G},<br />
∆ ψ := {ψ : ψ(x) = Ψ(x)α ∀x ∈ G, α l ≼ α ≼ α u }.<br />
Here, G is a given subset of R^n containing the origin, and ϕ_l and ϕ_u are n-dimensional vectors of known polynomials satisfying

ϕ_l(x) ≼ 0 ≼ ϕ_u(x) for all x ∈ G,
α, α_l, α_u ∈ R^N, and Ψ is a matrix of known polynomials. Let ϕ_i, ϕ_{l,i}, ϕ_{u,i}, α_i, α_{l,i}, and α_{u,i} denote the i-th entries of ϕ, ϕ_l, ϕ_u, α, α_l, and α_u, respectively. Define¹

Δ := Δ_ϕ + Δ_ψ.
We assume that f_0(0) = 0, ϕ(0) = 0 for all ϕ ∈ Δ_ϕ (i.e., ϕ_l(0) = 0 and ϕ_u(0) = 0), and ψ(0) = 0 for all ψ ∈ Δ_ψ (i.e., Ψ(0) = 0), which assures that all systems in (4.1) have a
common equilibrium point at the origin. In order to be able to use SOS programming, we<br />
restrict our attention to the case where f 0 , ϕ l , ϕ u , and Ψ have only polynomial entries and<br />
G is defined as<br />
G := {x ∈ R^n : g(x) ≽ 0, g_i ∈ R[x], i = 1, ..., m}.
¹ For subsets X_1 and X_2 of a vector space X, X_1 + X_2 := {x_1 + x_2 : x_1 ∈ X_1, x_2 ∈ X_2}.
Note that the entries of ϕ do not have to be polynomial, but they have to satisfy local polynomial bounds.
Motivation for this kind of system description stems from the following sources:
i. Perturbations as in (4.1) may be due to modeling errors, aging, disturbances, and environmental uncertainties, which may be present in any realistic problem. Prior knowledge about the system may provide local bounds on the entries of ϕ and/or bounds for the parametric uncertainties α. Moreover, uncertainties that do not change the system order can always be represented as in (4.1) (see p. 339 in [38]).
ii. Analysis of dynamical systems using SOS programming is often limited to systems with polynomial or rational vector fields. In [47], a procedure for recasting non-rational vector fields into rational ones at the expense of increasing the state dimension is proposed. Another way to deal with non-polynomial vector fields is to approximate the vector field by its Taylor series expansion about the origin. For practical purposes, only a finite number of terms can be used. Finite-term approximations are relatively accurate in a restricted region containing the origin; however, they are not exact. On the other hand, it may be possible to represent terms for which the error between the exact vector field and its finite-term approximation obeys local polynomial bounds using ϕ in (4.1).
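As a concrete instance of this idea, the following sketch covers the Taylor-truncation error of sin(x) with local polynomial bounds on G = [−1, 1]. The bound x⁴/120 follows from the Lagrange remainder and is an illustrative choice, not one prescribed by the text.

```python
import math

# Hypothetical non-polynomial term: sin(x) in some vector-field entry.
# Truncate to its cubic Taylor polynomial and cover the remainder
# phi(x) = sin(x) - (x - x^3/6) with polynomial bounds on G = [-1, 1]:
# |phi(x)| <= |x|^5/120 <= x^4/120 on G, so take
# phi_l(x) = -x^4/120 and phi_u(x) = x^4/120 (both vanish at 0, as the
# uncertainty description requires).
def phi(x):
    return math.sin(x) - (x - x ** 3 / 6.0)

for i in range(-1000, 1001):
    x = i / 1000.0                        # grid over G = [-1, 1]
    assert -x ** 4 / 120.0 <= phi(x) <= x ** 4 / 120.0
print("local polynomial bounds verified on G")
```

Inside G the truncated cubic model plus this ϕ ∈ Δ_ϕ exactly covers the original non-polynomial dynamics.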
iii. SOS programming can be used to analyze systems with polynomial vector fields. The number N_decision of decision variables,

N_decision = (1/2) [ binom(n+d, d)² + binom(n+d, d) ] − binom(n+2d, 2d),
Table 4.1. N_SDP (left columns) and N_decision (right columns) for different values of n and 2d.

 n \ 2d |      4     |      6      |      8      |     10
 -------|------------|-------------|-------------|------------
    2   |   6      6 |   10     27 |   15     75 |   21    165
    5   |  21    105 |   56   1134 |  126   6714 |  252    2e4
    9   |  55    825 |  220    1e4 |  715    2e5 |    ⋆      ⋆
   14   | 120   4200 |  680      ⋆ |    ⋆      ⋆ |    ⋆      ⋆
   16   | 153   6936 |    ⋆      ⋆ |    ⋆      ⋆ |    ⋆      ⋆
and the size of the matrix in the SDP,

N_SDP = binom(n+d, d),

for checking existence of an SOS decomposition for a degree-2d polynomial in n variables grow polynomially with n if d is fixed, and vice versa [49]. However, N_SDP and N_decision become practically intractable for state-of-the-art SDP solvers even for moderate values of n with fixed d (see Table 4.1, where solid lines in the original table represent a fuzzy boundary between tractable and intractable SDPs). Moreover, using higher degree Lyapunov functions and/or higher degree multipliers (used in the sufficient conditions for certain set containment constraints in section 4.2), as well as higher degree vector fields, increases the problem size; in fact, the growth of the problem size under a simultaneous increase in n and d is exponential. Therefore, in order to be able to use SOS programming, one may have to simplify the dynamics by truncating higher degree terms in the vector field. In this case, ϕ_l and ϕ_u provide local bounds on the truncated terms. This is discussed further at the end of section 4.2. It is also worth mentioning that bilinearity, a common feature of the optimization problems for local analysis using Lyapunov arguments (see section 4.2), introduces extra complexity [35] and therefore creates a further need to simplify the system dynamics.
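These counts can be reproduced directly from binomial coefficients; the sketch below checks a few entries of Table 4.1 using the standard Gram-matrix counts N_SDP = C(n+d, d) and N_decision = N_SDP(N_SDP + 1)/2 − C(n+2d, 2d).

```python
from math import comb

def n_sdp(n, d):
    # Side length of the Gram matrix: the number of monomials in n
    # variables of degree at most d.
    return comb(n + d, d)

def n_decision(n, d):
    # Free parameters in the Gram matrix of a degree-2d polynomial:
    # entries of a symmetric matrix minus the matched coefficients.
    m = comb(n + d, d)
    return (m * (m + 1)) // 2 - comb(n + 2 * d, 2 * d)

# Reproduce entries of Table 4.1: (n, 2d) -> (N_SDP, N_decision).
assert (n_sdp(2, 2), n_decision(2, 2)) == (6, 6)      # n=2,  2d=4
assert (n_sdp(2, 3), n_decision(2, 3)) == (10, 27)    # n=2,  2d=6
assert (n_sdp(5, 2), n_decision(5, 2)) == (21, 105)   # n=5,  2d=4
assert (n_sdp(9, 2), n_decision(9, 2)) == (55, 825)   # n=9,  2d=4
```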
In summary, the representation in (4.1) and the definitions of Δ_ϕ and Δ_ψ are motivated by uncertainties introduced by limitations in modeling and/or analysis.
4.2 Computation <strong>of</strong> Robustly Invariant Sets<br />
In this section, we develop tools for computing invariant subsets of the robust ROA. The robust ROA is the intersection of the ROAs for all possible systems governed by (4.1) and is formally defined, assuming that the origin is an asymptotically stable equilibrium point of (4.1) for all δ ∈ Δ, as follows.
Definition 4.2.1. The robust ROA R_0^r of the origin for systems governed by (4.1) is

R_0^r := ⋂_{δ∈Δ} {x_0 ∈ R^n : lim_{t→∞} φ(t; x_0, δ) = 0},

where φ(t; x_0, δ) denotes the solution of (4.1) with the initial condition x_0. ⊳
The robust ROA is an open and connected subset of R^n containing the origin and is invariant under the flow of all possible systems described by (4.1) [45]. We focus on computing invariant subsets of the robust ROA characterized by sublevel sets of appropriate Lyapunov functions. Since the uncertainty description for ϕ and ψ holds only for x ∈ G, we will also require the computed invariant set to be a subset of G. To this end, we modify Lemma 3.1.1 such that a stricter version of condition (3.4) holds for (4.1) for all δ ∈ Δ (i.e., for all ϕ ∈ Δ_ϕ and ψ ∈ Δ_ψ). Namely, for µ ≥ 0, we replace (3.4) by
Ω V \ {0} ⊂ {x ∈ R n : ∇V (x)f(x) < −µV (x)} (4.2)<br />
Remarks 4.2.1. With nonzero µ, (4.2) not only assures that trajectories starting in Ω_V stay in Ω_V and converge to the origin but also imposes a bound on the rate of exponential decay of V certifying the convergence, and it provides an implicit threshold for the level of disturbances that could drive the system out of Ω_V. Therefore, one may consider the stability property implied by (4.2) with nonzero µ to be more desirable in practice. With this in mind, all subsequent derivations contain the µV term; the relaxed condition in equation (3.4) can be recovered by setting µ = 0.
⊳<br />
Proposition 4.2.1. For µ ≥ 0, if there exists a continuously differentiable function<br />
V : R n → R such that, for all δ ∈ ∆,<br />
V (0) = 0 and V (x) > 0 for all x ≠ 0, (4.3)<br />
Ω V is bounded, (4.4)<br />
Ω V ⊆ G, and (4.5)<br />
Ω V \ {0} ⊂ {x ∈ R n : ∇V (x)(f 0 (x) + δ(x)) < −µV (x)} , (4.6)<br />
hold, then for all x_0 ∈ Ω_V and for all δ ∈ Δ, the solution of (4.1) exists, satisfies φ(t; x_0, δ) ∈ Ω_V for all t ≥ 0, and lim_{t→∞} φ(t; x_0, δ) = 0; i.e., Ω_V is an invariant subset of R_0^r. ⊳
Proof. Proposition 4.2.1 follows from Lemma 3.1.1. Indeed, for any given system ẋ = f_0(x) + δ(x), (4.6) assures that (4.2) is satisfied. Then, for any fixed δ ∈ Δ and for all x_0 ∈ Ω_V, φ(t; x_0, δ) exists and satisfies φ(t; x_0, δ) ∈ Ω_V for all t ≥ 0, lim_{t→∞} φ(t; x_0, δ) = 0, and Ω_V is an invariant subset of {x_0 ∈ R^n : φ(t; x_0, δ) → 0}. Therefore, Ω_V is an invariant subset of R_0^r. ∎
Remarks 4.2.2. Proposition 4.2.1 is conservative because Ω_V is invariant for both time-invariant and time-varying perturbations. In fact, the conclusion holds for time-varying δ (= ϕ + ψ) as long as ϕ_l(x) ≼ ϕ(x, t) ≼ ϕ_u(x) and α_l ≼ α(t) ≼ α_u for all x ∈ G and t ≥ 0. Recall that, in the uncertain linear system literature, e.g. [7], the notion of quadratic stability is similar, where a single quadratic Lyapunov function proves the stability of an entire family of uncertain linear systems.
⊳<br />
Note that Δ has infinitely many elements; therefore, there are infinitely many constraints in (4.6). Now, define

E_ϕ := {ϕ : ϕ_i ∈ R[x] and ϕ_i is equal to ϕ_{l,i} or ϕ_{u,i}},
E_ψ := {ψ : ψ(x) = Ψ(x)α, α_i is equal to α_{l,i} or α_{u,i}},

and

E_Δ := E_ϕ + E_ψ.

E_Δ, a finite subset of Δ, can be used to transform condition (4.6) into a finite set of constraints that are more suitable for numerical verification:
Proposition 4.2.2. If<br />
Ω V \ {0} ⊆ {x ∈ R n : ∇V (x)(f 0 (x) + δ(x)) < −µV (x)} (4.7)<br />
holds for all δ ∈ E ∆ , then (4.6) holds for all δ ∈ ∆.<br />
⊳<br />
Proof. Let x̃ ∈ Ω_V be nonzero and let δ ∈ Δ. Then x̃ ∈ G by (4.5); therefore, there exist l_1, ..., l_n, k_1, ..., k_N (depending on x̃) with 0 ≤ l_i ≤ 1 and 0 ≤ k_i ≤ 1 such that

δ(x̃) = Lϕ_l(x̃) + (I − L)ϕ_u(x̃) + Ψ(x̃)(Kα_l + (I − K)α_u),

where L and K are diagonal with L_ii = l_i and K_ii = k_i. Hence, there exist nonnegative numbers ν_δ (determined from the l's and k's) for δ ∈ E_Δ with Σ_{δ∈E_Δ} ν_δ = 1 such that δ(x̃) = Σ_{δ∈E_Δ} ν_δ δ(x̃). Consequently, by

∇V(x̃)(f_0(x̃) + δ(x̃)) = ∇V(x̃)(f_0(x̃) + Σ_{δ∈E_Δ} ν_δ δ(x̃)) = Σ_{δ∈E_Δ} ν_δ ∇V(x̃)(f_0(x̃) + δ(x̃)) < −Σ_{δ∈E_Δ} ν_δ µV(x̃) = −µV(x̃),

the condition (4.6) holds. ∎
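The convexity argument underlying the proof, namely that ∇V(x)(f_0(x) + δ(x)) is affine in the uncertain parameters at each fixed x, so its maximum over the parameter box is attained at a vertex, can be spot-checked numerically. The one-state system and box below are hypothetical illustrations.

```python
import random
from itertools import product

# Toy instance of (4.1): n = 1, f0(x) = -x, psi(x) = Psi(x) @ alpha with
# Psi(x) = [x^2, x^3] (so Psi(0) = 0) and alpha in the box
# [-1, 1] x [-2, 2]; V(x) = x^2.
def vdot(x, a1, a2):
    fx = -x + a1 * x ** 2 + a2 * x ** 3
    return 2.0 * x * fx                     # grad(V) (f0 + psi)

box = [(-1.0, 1.0), (-2.0, 2.0)]
vertices = list(product(*box))              # the set E_Delta for this instance

rng = random.Random(1)
for _ in range(1000):
    x = rng.uniform(-2.0, 2.0)
    a1 = rng.uniform(*box[0])
    a2 = rng.uniform(*box[1])
    # Affine dependence on (a1, a2): the value at any interior alpha is
    # bounded by the maximum over the four vertices.
    vmax = max(vdot(x, v1, v2) for v1, v2 in vertices)
    assert vdot(x, a1, a2) <= vmax + 1e-12
```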
In order to enlarge the computed invariant subset of the robust ROA, we define a variable-sized region Ω_{p,β} := {x ∈ R^n : p(x) ≤ β}, where p ∈ R[x] is a fixed, positive definite, convex polynomial, and maximize β while imposing the constraints (4.3)-(4.5), (4.7), and Ω_{p,β} ⊆ Ω_V. This can be written as an optimization problem:
β*(𝒱) := max_{V∈𝒱, β>0} β subject to   (4.8a)

V(0) = 0 and V(x) > 0 for all x ≠ 0, Ω_V is bounded,   (4.8b)

Ω_{p,β} = {x ∈ R^n : p(x) ≤ β} ⊆ Ω_V, Ω_V ⊆ G, and   (4.8c)

Ω_V \ {0} ⊆ {x ∈ R^n : ∇V(x)(f_0(x) + δ(x)) < −µV(x)} for all δ ∈ E_Δ.   (4.8d)

Here, 𝒱 denotes the set of candidate Lyapunov functions over which the maximum is defined (e.g., 𝒱 may be equal to C^1).
In order to make the problem in (4.8) amenable to numerical optimization (specifically SOS programming), we restrict V to be a polynomial in x of fixed degree and use the SOS sufficient condition for polynomial nonnegativity. Using Lemmas 2.3.2 and 2.3.3, we obtain sufficient conditions for the set containment constraints. Specifically, let l_1 and l_2 be positive definite polynomials (typically εx^T x for some small positive real number ε). Then, since l_1 is radially unbounded, the constraint

V − l_1 ∈ Σ[x]   (4.9)
and V (0) = 0 are sufficient conditions for the constraints in (4.8b). By Lemma 2.3.2, if<br />
s 1 ∈ Σ[x] and s 4k ∈ Σ[x] for k = 1, . . . , m, then<br />
− [(β − p)s 1 + (V − 1)] ∈ Σ[x] (4.10)<br />
g k − (1 − V )s 4k ∈ Σ[x], k = 1, . . . , m, (4.11)<br />
imply the first and second constraints in (4.8c), respectively. By Lemma 2.3.3, if s 2δ , s 3δ ∈<br />
Σ[x] for δ ∈ E ∆ , then<br />
− [(1 − V )s 2δ + (∇V (f 0 + δ) + µV )s 3δ + l 2 ] ∈ Σ[x] (4.12)<br />
is a sufficient condition for the feasibility of the constraint in (4.8d). Using these sufficient conditions, a lower bound on β*(𝒱) can be defined via an optimization problem:
Proposition 4.2.3. Let β*_B be defined as

β*_B(𝒱_poly, 𝒮) := max_{V, β, s_1, s_2δ, s_3δ, s_4k} β subject to (4.9)-(4.12),   (4.13)

s_1 ∈ Σ[x], s_2δ ∈ Σ[x], s_3δ ∈ Σ[x], s_4k ∈ Σ[x], V(0) = 0, V ∈ 𝒱_poly, s_1 ∈ 𝒮_1, s_2δ ∈ 𝒮_2δ, s_3δ ∈ 𝒮_3δ, s_4k ∈ 𝒮_4k, and β > 0. Here, 𝒱_poly ⊂ 𝒱 and the 𝒮's are prescribed finite-dimensional subsets of R[x]. Then, β*_B(𝒱_poly, 𝒮) ≤ β*(𝒱_poly). ⊳
4.3 Implementation Issues<br />
The optimization problem in (4.13) provides a recipe to compute subsets of R^n that are invariant under the flow of all possible systems described by (4.1). The number of constraints in (4.13) (and consequently the number of decision variables, since each new constraint introduces new variables) increases exponentially with N and n − n̄, where n̄ is defined as the number of entries of the vectors ϕ_l and ϕ_u satisfying ϕ_{l,i}(x) = ϕ_{u,i}(x) = 0 for all x ∈ G. Namely, there are 2^{(n−n̄)+N} SOS conditions in (4.13) due to the constraint in (4.12). Revisiting the
discussion in item (iii) at the end of section 4.1, we note that covering the high degree vector field with low degree uncertainty reduces the dimension of the SOS constraints but increases (exponentially, depending on n − n̄) the number of constraints. Consequently, the utility of this approach depends on n − n̄ and is problem dependent. Example 3 in section 4.5.1 illustrates this technique. This difficulty can be partially alleviated by accepting suboptimal solutions for (4.13) in a sequential manner:
4.3.1 Sequential Suboptimal Solution Technique
• Fix a finite sample Δ_sample of Δ (for example, Δ_sample can be chosen as the center of Δ) and solve

max_{V, β, s_1, s_2δ, s_3δ, s_4k} β subject to   (4.14)

V − l_1 ∈ Σ[x],
−[(β − p)s_1 + (V − 1)] ∈ Σ[x],
g_k − (1 − V)s_4k ∈ Σ[x], k = 1, ..., m,
−[(1 − V)s_2δ + (∇V(f_0 + δ) + µV)s_3δ + l_2] ∈ Σ[x] for δ ∈ Δ_sample,
s_1 ∈ Σ[x], s_2δ ∈ Σ[x], s_3δ ∈ Σ[x], s_4k ∈ Σ[x],

with V(0) = 0, V ∈ 𝒱_poly, s_1 ∈ 𝒮_1, s_2δ ∈ 𝒮_2δ, s_3δ ∈ 𝒮_3δ, s_4k ∈ 𝒮_4k, and β > 0. Let V_sample denote the optimizing V in (4.14).
• For each δ ∈ E_Δ, compute

γ_δ := max_{γ>0, s_2δ∈𝒮_2δ, s_3δ∈𝒮_3δ} γ subject to   (4.15)

−[(γ − V_sample)s_2δ + (∇V_sample(f_0 + δ) + µV_sample)s_3δ + l_2] ∈ Σ[x],
s_2δ ∈ Σ[x], s_3δ ∈ Σ[x],

and define

γ_subopt := min {γ_δ : δ ∈ E_Δ}.

At this point, Ω_{V_sample, γ_subopt} is an invariant subset of the robust ROA.
• Determine the largest sublevel set Ω_{p, β_subopt} of p contained in Ω_{V_sample, γ_subopt} by solving

β_subopt := max_{s_1∈𝒮_1, β} β subject to

−[(β − p)s_1 + V_sample − γ_subopt] ∈ Σ[x],
s_1 ∈ Σ[x]. ⊳
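The second step can be sketched end to end on a toy problem. Below, the SOS feasibility test in (4.15) is replaced by a sampling-based oracle (an assumption for illustration only), for the hypothetical family ẋ = −x + δx³ with δ ∈ [0.5, 2], V(x) = x², and µ = 0; the analytic answer is γ_δ = 1/δ, so γ_subopt = 0.5.

```python
def gamma_delta(delta, mu=0.0, tol=1e-4):
    # Largest gamma such that {0 < V <= gamma} lies in {Vdot < -mu V}
    # for xdot = -x + delta*x^3 with V = x^2. The sampling-based check
    # below is a stand-in for the SOS feasibility test in (4.15).
    def feasible(gamma):
        for k in range(1, 201):
            x2 = gamma * k / 200.0                       # sample V in (0, gamma]
            vdot = -2.0 * x2 + 2.0 * delta * x2 * x2     # Vdot as a function of x^2
            if vdot >= -mu * x2:
                return False
        return True

    lo, hi = 0.0, 10.0
    while hi - lo > tol:                                  # bisection on gamma
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

vertices = [0.5, 2.0]                                     # E_Delta for delta in [0.5, 2]
gamma_subopt = min(gamma_delta(d) for d in vertices)
assert abs(gamma_delta(2.0) - 0.5) < 1e-3                 # analytic value 1/delta
assert abs(gamma_subopt - 0.5) < 1e-3
```

The calls to gamma_delta for different vertices are independent, which is exactly the decoupling exploited below.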
While the sequential procedure sacrifices optimality (i.e., β_subopt ≤ β*_B), it has practical computational advantages. The constraints in (4.12) decouple in the problem (4.15); in fact, for each δ ∈ E_Δ, the problem in (4.15) contains only a single constraint from (4.12). Table 4.2 shows the number of decision variables in (4.13) (top entry in each cell) and the number of decision variables in (4.15) (bottom entry in each cell) for ∂(V) = 2, ∂(s_2δ) = 2, and ∂(s_3δ) = 0. Despite a rapid increase in the number of decision variables in (4.13) with n − n̄ + N, the number of decision variables in (4.15) does not vary with n − n̄ + N. Of course, the number of decoupled small problems that have to be solved in the second step of the sequential procedure still grows exponentially with n − n̄ + N. However, the problems in (4.15) can be solved independently for different δ ∈ E_Δ, and therefore the computations can be
Table 4.2. Number of decision variables in (4.13) (top entry in each cell of the table) and the number of decision variables in (4.15) (bottom entry in each cell of the table) for ∂(V) = 2, ∂(s_{2δ}) = 2, and ∂(s_{3δ}) = 0. Entries are shown below as "(4.13) / (4.15)".

n−n̄+N \ n |    2     |     6      |      10      |      14
    1      |  24 / 10 |  458 / 218 |  2588 / 1266 |   8718 / 4306
    3      | 164 / 10 | 3510 / 218 | 20312 / 1266 |  69002 / 4306
    5      | 324 / 10 | 6998 / 218 | 40568 / 1266 | 137898 / 4306
trivially parallelized. The advantages of this decoupling may be better appreciated by noting that one of the main difficulties in solving large-scale SDPs is the memory requirement of interior-point type algorithms [9]. Consequently, it is possible to perform ROA analysis for systems with a reasonably large number of states and/or uncertain parameters using the proposed suboptimal solution technique.
4.4 Sanity Check: Does Robust Stability Imply Existence of SOS Certificates?
The solution of the optimization problem in (4.13) provides a characterization of invariant subsets of the robust ROA. When the constraints in (4.13) are feasible with l_1 and l_2 having positive definite quadratic parts, the linearization of the system (4.1) is robustly stable with a common Lyapunov function. In this section, we show, for systems with cubic vector fields, that if the linearization is robustly stable with a common quadratic Lyapunov function, then there exist feasible solutions for the constraints in (4.13). For notational simplicity, the development is only carried out for the parametric uncertainty case.
The result is straightforward to extend to the more general uncertainty description in (4.1)<br />
with φ l and φ u vectors <strong>of</strong> cubic polynomials.<br />
Let ∆ be a bounded polytope in R^N and, for α ∈ ∆, consider the uncertain nonlinear dynamics²

ẋ = f_0(x) + Σ_{i=1}^N α_i f_i(x),    (4.16)

where f_0, f_1, …, f_N are vectors of cubic polynomials.
The vector field in (4.16) can be rewritten in the form

ẋ = f_α(x) = A_α x + f_{2,α}(x) + f_{3,α}(x),    (4.17)

where

A_α := ∇f_0(0) + Σ_{i=1}^N α_i ∇f_i(0),
f_{2,α}(x) := f_{0,2}(x) + Σ_{i=1}^N α_i f_{i,2}(x),
f_{3,α}(x) := f_{0,3}(x) + Σ_{i=1}^N α_i f_{i,3}(x),

and, for i = 0, …, N, f_{i,2} and f_{i,3} are the quadratic and cubic polynomial parts of f_i.
Proposition 4.4.1. Let f_0, …, f_N be cubic polynomials in x satisfying f_0(0) = … = f_N(0) = 0, P ≻ 0, R_1 ≻ 0, R_2 ≻ 0, p(x) = xᵀPx, l_1(x) = xᵀR_1x, and l_2(x) = xᵀR_2x. For δ ∈ ∆, let A_δ be such that A_δx is the linear (in x) part of f_0(x) + Σ_{i=1}^N δ_i f_i(x). If there exists Q ≻ 0 satisfying A_δᵀQ + QA_δ ≺ 0 for all δ ∈ E_∆, then the constraints in (4.13) are feasible. ⊳
Proof. Let z(x) be a vector of degree-2 monomials in x with no repetition and let n_z denote the length of z(x). Let Q̃ ≻ 0 satisfy A_δᵀQ̃ + Q̃A_δ ≼ −2R_2 for all δ ∈ E_∆ and Q̃ ≽ R_1 (such a Q̃ can be obtained by properly scaling Q). Let ɛ = λ_min(R_2), V(x) := xᵀQ̃x, and let H be a positive definite Gram matrix for (xᵀx)V(x). Let M_{2δ} ∈ R^{n×n_z} and M_{3δ} ∈ R^{n_z×n_z} be such that xᵀM_{2δ}z(x) and z(x)ᵀM_{3δ}z(x) are the cubic and quartic (in x) parts of ∇V(f_0(x) + Σ_{i=1}^N δ_i f_i(x)), respectively. Define s_1(x) = λ_max(Q̃)/λ_min(P), s_{2δ}(x) = c_{2δ}xᵀx with c_{2δ} = λ_max(M_{3δ} + (1/(2ɛ))M_{2δ}ᵀM_{2δ})/λ_min(H), s_{3δ}(x) = 1, γ = min {ɛ/(2c_{2δ}) : δ ∈ E_∆}, and β = γ/(2s_1). Then V − l_1 and −[(β − p)s_1 + (V − γ)] are SOS since they are positive semidefinite quadratic polynomials. For δ ∈ E_∆,

b_δ(x) = −[(γ − V)s_{2δ} + (∂V/∂x)f_δ s_{3δ} + l_2] = [xᵀ z(x)ᵀ] B_δ [xᵀ z(x)ᵀ]ᵀ,

where

B_δ = [ −γc_{2δ}I − R_2 − (A_δᵀQ̃ + Q̃A_δ)    −M_{2δ}/2
        −M_{2δ}ᵀ/2                           c_{2δ}H − M_{3δ} ]

is positive semidefinite, and consequently b_δ ∈ Σ[x].

²Note that, unlike previous sections, ∆ denotes a polytope in R^N.
4.5 Examples<br />
In the following examples, p(x) = xᵀx (except for example (2) in section 4.5.1), l_1(x) = 10⁻⁶xᵀx, and l_2(x) = 10⁻⁶xᵀx. All computations use the generalization of the simulation-based ROA analysis method from chapter 3.
4.5.1 Examples from the Literature<br />
(1) Consider the following system from [18]:

ẋ_1 = x_2
ẋ_2 = −x_2 + α(−x_1 + x_1³),
where α ∈ [1, 3] is a parametric uncertainty. We solved problem (4.13) with ∂(V ) = 2<br />
and ∂(V ) = 4 for µ = 0, 0.01, 0.05, 0.1, 0.15, and 0.2. Note that µ = 0.244 is an upper<br />
bound for the value <strong>of</strong> µ for which the problem in (4.13) can be feasible. 3 Figure 4.1<br />
3 An upper bound µ̄ on the values of µ for which the problem in (4.13) can be feasible can be established by computing the maximum value of µ for which the linearized uncertain dynamics are quadratically stable.
Figure 4.1. Invariant subsets of the ROA reported in [18] (black curve) and those computed by solving the problem in (4.13) with ∂(V) = 2 (blue curve) and ∂(V) = 4 (green curve), along with initial conditions (red stars) for some divergent trajectories of the system corresponding to α = 1.
Table 4.3. Optimal values of β in the problem (4.13) for different values of µ, with ∂(V) = 2 and 4.

∂(V) \ µ |   0   | 0.01  | 0.05  |  0.1  | 0.15  |  0.2
    2    | 0.623 | 0.603 | 0.494 | 0.404 | 0.277 | 0.137
    4    | 0.771 | 0.763 | 0.742 | 0.720 | 0.698 | 0.676
shows the invariant subset <strong>of</strong> the robust ROA reported in [18] (black curve) and those<br />
computed here with ∂(V ) = 2 (blue curve) and ∂(V ) = 4 (green curve) for µ = 0 along<br />
with two points (red stars) that are initial conditions for divergent trajectories <strong>of</strong> the<br />
system corresponding to α = 1. Table 4.3 shows the optimal values <strong>of</strong> β in the problem<br />
(4.13) with ∂(V ) = 2 and 4 for different values <strong>of</strong> µ.<br />
Here, µ̄ is defined as

µ̄ := max_{µ≥0, P=Pᵀ≽0} µ subject to A_δᵀP + PA_δ + µP ≺ 0 for all δ ∈ E_∆,

where A_δ := ∇_x(f_0 + δ)|_{x=0}.
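The divergent trajectories cited as evidence in example (1) can be reproduced with a short fixed-step integration. The sketch below uses classical RK4 for α = 1; the initial conditions in the usage note are chosen for illustration and are not necessarily the ones plotted in Figure 4.1.

```python
# RK4 integration of example (1) with alpha = 1:
#   x1' = x2,  x2' = -x2 + alpha*(-x1 + x1**3).
def f(x, alpha=1.0):
    x1, x2 = x
    return (x2, -x2 + alpha * (-x1 + x1 ** 3))

def simulate(x0, alpha=1.0, dt=0.01, T=30.0, blowup=10.0):
    """Return the final state, or None if the trajectory leaves a large
    ball (taken here as numerical evidence of divergence)."""
    x = tuple(x0)
    for _ in range(int(T / dt)):
        k1 = f(x, alpha)
        k2 = f((x[0] + 0.5 * dt * k1[0], x[1] + 0.5 * dt * k1[1]), alpha)
        k3 = f((x[0] + 0.5 * dt * k2[0], x[1] + 0.5 * dt * k2[1]), alpha)
        k4 = f((x[0] + dt * k3[0], x[1] + dt * k3[1]), alpha)
        x = (x[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
             x[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)
        if x[0] ** 2 + x[1] ** 2 > blowup ** 2:
            return None
    return x
```

For instance, the initial condition (0.3, 0), with p(x_0) = 0.09 well inside the certified level 0.623, converges to the origin, while (1.2, 0), beyond the unstable equilibria at x_1 = ±1, diverges.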
Table 4.4. Optimal values of β in the problem (4.13) for different values of µ, with ∂(V) = 4 and 6.

∂(V) \ µ |   0   | 0.01  | 0.05  |  0.1  |  0.2  |  0.5  | 0.75
    4    | 0.773 | 0.767 | 0.741 | 0.708 | 0.640 | 0.517 | 0.406
    6    | 0.826 | 0.820 | 0.803 | 0.787 | 0.750 | 0.651 | 0.573
(2) Consider the system

ẋ_1 = −x_2 + 0.2αx_2
ẋ_2 = x_1 + (x_1² − 1)x_2,
where α ∈ [−1, 1] [57]. For easy comparison with the results in [57], let p(x) = 0.378x_1² − 0.274x_1x_2 + 0.278x_2² and µ = 0. In [57], it was shown that Ω_{p,0.545} (with a single parameter-independent quartic V), Ω_{p,0.772} (with the pointwise maximum of two parameter-independent quartic V's), Ω_{p,0.600} (with a single parameter-dependent quartic (in state) V), and Ω_{p,0.806} (with the pointwise maximum of two parameter-dependent quartic (in state) V's) are contained in the robust ROA. On the other hand, the solution of problem (4.13)
with ∂(V ) = 4 and ∂(V ) = 6 certifies that Ω p,0.773 and Ω p,0.826 are subsets <strong>of</strong> the robust<br />
ROA, respectively. Figure 4.2 shows invariant subsets <strong>of</strong> the robust ROA computed<br />
using ∂(V ) = 4 (green curve) and ∂(V ) = 6 (blue curve) along with the unstable limit<br />
cycle (red curves) of the system corresponding to α = −1.0, −0.8, …, 0.8, 1.0. In order to demonstrate the effect of the parameter µ on the size of the invariant subsets of the robust ROA verifiable by solving the optimization problem in (4.13), the analysis is repeated with µ = 0.01, 0.05, 0.1, 0.2, 0.5, and 0.75. Note that µ = 0.769 is an upper bound for the value of µ for which the problem in (4.13) can be feasible. Table 4.4
shows the optimal values <strong>of</strong> β in the problem (4.13) with ∂(V ) = 4 and 6 for different<br />
values <strong>of</strong> µ.<br />
Figure 4.2. Invariant subsets of the ROA with ∂(V) = 4 (green curve) and ∂(V) = 6 (blue curve) along with the unstable limit cycle (red curves) of the system corresponding to α = −1.0, −0.8, …, 0.8, 1.0.
(3) Consider the system governed by

ẋ = [ −2x_1 + x_2 + x_1³ + 1.58x_2³
      −x_1 − x_2 + 0.13x_2³ + 0.66x_1²x_2 ] + ϕ(x),    (4.18)

where ϕ satisfies the bounds

−0.76x_2² ≤ ϕ_1(x) ≤ 0.76x_2²
−0.19(x_1² + x_2²) ≤ ϕ_2(x) ≤ 0.19(x_1² + x_2²)

in the set G = {x ∈ R² : g(x) = xᵀx ≤ 2.1}. Figure 4.3 shows invariant subsets of
the robust ROA computed with ∂(V ) = 2 (green) and ∂(V ) = 4 (blue) along with two<br />
points (red stars) that are initial conditions for divergent trajectories.<br />
4.5.2 Controlled Short Period Aircraft Dynamics<br />
We apply the robust ROA analysis to the controlled short-period aircraft dynamics with two parametric uncertainties. Let x_1, x_2, and x_3 denote the pitch rate, the angle of attack,
Figure 4.3. Invariant subsets of the ROA with ∂(V) = 2 (green) and ∂(V) = 4 (blue) along with initial conditions (red stars) for divergent trajectories.
and the pitch angle, respectively. Then, the uncertain dynamics are

ẋ_p = [ c_01(x_p) + δ_1c_11(x_p) + δ_1²q_31(x_p)
        q_02(x_p) + δ_1l_12ᵀx_p + δ_2q_22(x_p)
        x_1 ]
    + [ l_bᵀx_p + b_11 + b_12δ_1
        b_21 + b_22δ_2
        0 ] u,    (4.19)
where x p = [x 1 x 2 x 3 ] T , c 01 and c 11 are cubic polynomials, q 02 , q 22 , and q 31 are quadratic<br />
polynomials, l 12 and l b are vectors in R 3 , b 11 , b 12 , b 21 , and b 22 are real scalars, and u,<br />
the elevator deflection, is the control input (see the Appendix for the values <strong>of</strong> the missing<br />
parameters). δ 1 ∈ [0.99, 2.05] models variations in the center <strong>of</strong> gravity in the longitudinal<br />
direction and δ 2 ∈ [−0.1, 0.1] models variations in the mass. The control input is determined<br />
by ẋ 4 = −0.864y 1 + −0.321y 2 and u = 2x 4 , where x 4 is the controller state and the plant<br />
output y = [x 1 x 3 ] T . Define x := [ x T p<br />
x 4<br />
] T . The dependence <strong>of</strong> the vector field on δ1 is<br />
not affine (because <strong>of</strong> the δ 2 1<br />
term) and the method proposed in this chapter is not directly<br />
Table 4.5. Optimal value of β in the first step, β_sample, and β_subopt, computed with µ = 0 for ∂(V) = 2 and 4.

          | ∂(V) = 2 | ∂(V) = 4
β_sample  |   9.01   |  16.11
β_subopt  |   4.12   |   0.43
applicable. Therefore, let us replace δ_1² by an artificial parameter δ_3 and cover the graph

{(δ_1, δ_2, δ_3) ∈ R³ : δ_3 = δ_1², δ_1 ∈ [0.99, 2.05], δ_2 ∈ [−0.1, 0.1]}

by the polytope⁴

{(δ_1, δ_2, δ_3) ∈ R³ : 3.04δ_1 − 2.31 ≤ δ_3 ≤ 3.04δ_1 − 2.03, δ_1 ∈ [0.99, 2.05], δ_2 ∈ [−0.1, 0.1]}.
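These two affine bounds are the chord and the midpoint tangent of δ_1 ↦ δ_1² over [0.99, 2.05], with intercepts quoted to two decimals; a quick numerical check of the exact, unrounded bounds:

```python
# Exact affine bounds covering the graph of g(d1) = d1**2 on [a, b]:
# the chord (upper bound, by convexity) and the tangent at the midpoint
# (lower bound).  Both have slope a + b = 3.04; the exact intercepts
# -a*b = -2.0295 and -((a+b)/2)**2 = -2.3104 round to the quoted values.
a, b = 0.99, 2.05
slope = a + b                       # 3.04
b_up = -a * b                       # chord intercept (upper bound)
b_lo = -((a + b) / 2.0) ** 2        # midpoint-tangent intercept (lower bound)

def covers(d1, eps=1e-9):
    return (slope * d1 + b_lo - eps
            <= d1 ** 2
            <= slope * d1 + b_up + eps)

# sample the interval densely and confirm the graph lies in the cover
covered = all(covers(a + (b - a) * k / 1000.0) for k in range(1001))
```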
We implemented the sequential suboptimal solution technique with µ = 0 for ∂(V) = 2 and ∂(V) = 4 using ∆_sample = (1.52, 0, 1.52²) (in the first step). Table 4.5 shows the optimal value of β in the first step, called β_sample, and β_subopt (computed in the final step). To further assess the suboptimality of the results, we performed 500 simulations for the uncertain system setting the uncertain parameters to the vertex values and found a diverging trajectory with the initial condition x_0 satisfying p(x_0) = 8.51 (for (δ_1, δ_2, δ_3) = (2.05, 0.1, 3.92)). The gap between the value β_subopt and the values β_sample and p at the initial point of this divergent trajectory may be due to the finite-dimensional parametrization of V, the issues mentioned in Remark 4.2.2, the fact that we only use sufficient conditions, and/or the suboptimality of the sequential implementation used for this example. Extensions proposed in the following chapter aim at partially reducing this suboptimality.
4 This covering polytope is obtained using the method proposed in section 5.2.<br />
4.6 Chapter Summary<br />
We proposed a method to compute provably invariant subsets <strong>of</strong> the region-<strong>of</strong>-attraction<br />
for the asymptotically stable equilibrium points <strong>of</strong> uncertain nonlinear dynamical systems.<br />
We considered polynomial dynamics with perturbations that either obey local polynomial bounds or are described by uncertain parameters multiplying polynomial terms in the vector field. This uncertainty description is motivated both by limitations in modeling and by the bilinearity and dimension of the sum-of-squares programming problems whose solutions provide invariant subsets of the region-of-attraction. We demonstrated the method on three examples from the literature and on a controlled short-period aircraft dynamics example.
As pointed out throughout the chapter, the methodology developed in this chapter is<br />
restrictive and conservative due to the following:<br />
• Affine dependence <strong>of</strong> the vector field on the uncertainty is assumed;<br />
• Only polytopic uncertainty sets are allowed;<br />
• A common Lyapunov function is used to certify the stability of an entire family of systems.

These restrictive assumptions/choices have been made mainly due to computational considerations, and the tools developed in this chapter provide the infrastructure for the extensions in the following chapter:

• Extension to non-polytopic uncertainty sets and non-affine uncertainty dependence;

• Reduction of conservatism by a branch-and-bound type refinement procedure in the “uncertainty space,” using the tools of this chapter repeatedly for subsets of the uncertainty set.
4.7 Appendix<br />
Parameters for the uncertain controlled short period aircraft dynamics:<br />
c_01(x_p) = −0.24366x_2³ + 0.082272x_1x_2 + 0.30492x_2² + 0.015426x_2x_3 − 3.1883x_1 − 2.7258x_2 − 0.59781x_3
l_b = [0  −0.041136  0]ᵀ
b_11 = 1.594150
q_02(x_p) = −0.054444x_2² + 0.10889x_2x_3 − 0.054444x_3² + 0.91136x_1 − 0.64516x_2 − 0.016621x_3
b_21 = 0.0443215
c_11(x_p) = 0.30765x_2³ + 0.099232x_2² + 0.12404x_1 + 0.90912x_2 + 0.023258x_3
b_12 = −0.06202
l_12 = [0  0.00045754  0]ᵀ
q_22(x_p) = −0.054444x_2² + 0.10889x_2x_3 − 0.054444x_3² − 0.6445x_2 − 0.016621x_3
b_22 = 0.044321
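For reference, the parameters above can be assembled into the closed-loop vector field of (4.19). The sketch below is for sanity checks only; since q_31 is not listed in this appendix, it is passed in as an argument (any choice with q_31(0) = 0 leaves the origin an equilibrium, and the zero default is a placeholder, not the true value).

```python
# Closed-loop vector field from (4.19) with u = 2*x4 and
# x4' = -0.864*y1 - 0.321*y2, y = (x1, x3), using the appendix values.
def f_closed(x, d1, d2, q31=lambda xp: 0.0):   # q31: placeholder default
    x1, x2, x3, x4 = x
    xp = (x1, x2, x3)
    c01 = (-0.24366 * x2**3 + 0.082272 * x1 * x2 + 0.30492 * x2**2
           + 0.015426 * x2 * x3 - 3.1883 * x1 - 2.7258 * x2 - 0.59781 * x3)
    c11 = (0.30765 * x2**3 + 0.099232 * x2**2 + 0.12404 * x1
           + 0.90912 * x2 + 0.023258 * x3)
    q02 = (-0.054444 * x2**2 + 0.10889 * x2 * x3 - 0.054444 * x3**2
           + 0.91136 * x1 - 0.64516 * x2 - 0.016621 * x3)
    q22 = (-0.054444 * x2**2 + 0.10889 * x2 * x3 - 0.054444 * x3**2
           - 0.6445 * x2 - 0.016621 * x3)
    lb_xp = -0.041136 * x2            # l_b^T x_p
    l12_xp = 0.00045754 * x2          # l_12^T x_p
    u = 2.0 * x4                      # controller output
    dx1 = (c01 + d1 * c11 + d1**2 * q31(xp)
           + (lb_xp + 1.594150 - 0.06202 * d1) * u)
    dx2 = q02 + d1 * l12_xp + d2 * q22 + (0.0443215 + 0.044321 * d2) * u
    dx3 = x1
    dx4 = -0.864 * x1 - 0.321 * x3    # controller state equation
    return (dx1, dx2, dx3, dx4)
```

For instance, f_closed((0, 0, 0, 0), d1, d2) returns the zero vector for any (d1, d2), confirming that the origin is an equilibrium of the closed loop.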
Chapter 5<br />
Extensions <strong>of</strong> the Robust<br />
Region-<strong>of</strong>-Attraction <strong>Analysis</strong>:<br />
Refinements and Non-affine<br />
Uncertainty Dependence<br />
In this chapter, we extend the applicability <strong>of</strong> the method proposed in the previous<br />
chapter for estimating the robust ROA for uncertain nonlinear dynamical systems in two<br />
directions:<br />
i. The attention in chapter 4 was restricted to affine dependence on uncertain parameters. Here, we propose an extension to non-affine (in particular, polynomial) parameter dependence based on replacing non-affine appearances of uncertain parameters in the vector field by artificial parameters (increasing the dimension of the uncertain parameter space) and covering the graph of non-affine functions of uncertain parameters by bounded polytopes in the appended uncertain parameter space.
ii. In chapter 4, we characterized invariant subsets of the robust ROA using parameter-independent Lyapunov functions, i.e., a common Lyapunov function is used to certify the computed invariant subset of the ROA over the entire parameter uncertainty set. Similar to quadratic stability analysis [7], where a single quadratic Lyapunov function proves the stability of an entire family of uncertain linear systems, this characterization is conservative, i.e., it leads to conservative inner estimates of the robust ROA. In order to reduce the conservatism, we now propose a branch-and-bound type refinement procedure where the uncertainty set is partitioned and a different parameter-independent Lyapunov function is computed for each cell of the partition.
The refinement procedure computes lower and upper bounds on a “measure” <strong>of</strong> the size<br />
<strong>of</strong> the computed invariant subset <strong>of</strong> the robust ROA (detailed in section 5.1) at each iteration.<br />
The gap between lower and upper bounds decreases as the partition gets finer and<br />
localizes the optimal value <strong>of</strong> this measure (which would be achieved if a different Lyapunov<br />
function could be computed for every singleton in the uncertainty set). As another remedy<br />
for the conservativeness <strong>of</strong> parameter-independent Lyapunov functions, polynomially<br />
parameter-dependent Lyapunov functions are proposed in [18, 57]. Although SOS optimization<br />
can be used with parameter-dependent Lyapunov functions, the ensuing optimization<br />
problem is harder than that for parameter-independent Lyapunov functions mainly because<br />
uncertain parameters are treated as new independent variables in addition to state variables.<br />
Moreover, choosing a polynomially parameter-dependent basis for the Lyapunov<br />
function may be less intuitive than the parameter-independent case. Motivated by these<br />
difficulties, we restrict our attention to parameter-independent Lyapunov functions and<br />
polytopic uncertainty sets (non-polytopic uncertainty sets can be handled by using a polytopic<br />
cover). This approach <strong>of</strong>fers two main advantages: (i) It is potentially more flexible<br />
than using parameter-dependent Lyapunov functions since it does not require an a priori<br />
parametrization <strong>of</strong> the Lyapunov function in the uncertain parameters. It simply reduces<br />
the conservatism by partitioning the uncertainty set (see section 5.4.1). (ii) It leads to optimization<br />
problems with smaller semidefiniteness constraints since uncertain parameters do<br />
not explicitly appear in the constraints (see section 5.1). Although the size <strong>of</strong> the semidefinite<br />
programming constraints does not increase with the number of uncertain parameters, their number does, and the problem becomes challenging as the number of uncertain parameters increases. We partially alleviate this difficulty by accepting suboptimal solutions obtained using the sequential implementation from section 4.3. Recall that this implementation is suitable for parallel computing, offering a major advantage over approaches utilizing parameter-dependent Lyapunov functions.
5.1 Setup and Estimation <strong>of</strong> the Robust ROA <strong>of</strong> <strong>Systems</strong><br />
with Affine Parametric Uncertainty<br />
Consider the system governed by

ẋ(t) = f(x, δ) = f_0(x(t)) + Σ_{i=1}^{m} δ_i f_i(x(t)) + Σ_{j=1}^{m_pu} g_j(δ)f_{m+j}(x(t)),    (5.1)

where f_0, f_1, …, f_m, f_{m+1}, …, f_{m+m_pu} : Rⁿ → Rⁿ are vector-valued polynomial functions satisfying f_0(0) = … = f_{m+m_pu}(0) = 0, and g_1, …, g_{m_pu} are scalar-valued continuous
functions, and δ takes values in a bounded polytope ∆.¹ Note that for the case m_pu = 0, the results from chapter 4 apply. Next, we review the results of chapter 4 for this special case as a basis for their extension to the case where g_1, …, g_{m_pu} are in R[δ].
For a subset D ⊂ R^m, define

M_D := ⋂_{δ∈D} {x ∈ Rⁿ : ∇V(x)f(x, δ) < 0}.
Proposition 5.1.1. If there exists a continuously differentiable function V : Rⁿ → R such that

V(0) = 0 and V(x) > 0 for all x ≠ 0,    (5.2)
Ω_V = {x ∈ Rⁿ : V(x) ≤ 1} is bounded, and    (5.3)
Ω_V \ {0} ⊂ M_∆,    (5.4)

then for all x_0 ∈ Ω_V and for all δ ∈ ∆, the solution ϕ(t; x_0, δ) of (5.1) exists, satisfies ϕ(t; x_0, δ) ∈ Ω_V for all t ≥ 0, and lim_{t→∞} ϕ(t; x_0, δ) = 0, i.e., Ω_V is an invariant subset of the robust ROA. ⊳
Proposition 5.1.2. For the vector field in (5.1) with m_pu = 0, M_∆ = M_{E_∆}. ⊳

Proof. Proposition 5.1.2 follows from the more general result in Proposition 4.2.2.
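The vertex reduction rests on the fact that, for each fixed x, ∇V(x)f(x, δ) is affine in δ when m_pu = 0, and an affine function attains its extrema over a polytope at vertices. A small random-sampling illustration of that fact (the coefficients below are arbitrary stand-ins, not taken from any system in this chapter):

```python
# An affine function of delta attains its maximum over a box at a vertex:
# its value at any convex combination of the vertices never exceeds the
# largest vertex value.
import itertools, random

def affine(delta, c, c0):
    return sum(ci * di for ci, di in zip(c, delta)) + c0

random.seed(0)
c, c0 = (0.7, -1.3), 0.2                      # arbitrary coefficients
bounds = [(-1.0, 2.0), (0.5, 3.0)]            # a box Delta in R^2
vertices = list(itertools.product(*bounds))   # E_Delta (4 points)
vmax = max(affine(v, c, c0) for v in vertices)

def random_point():
    """A random convex combination of the vertices, i.e. a point of Delta."""
    w = [random.random() for _ in vertices]
    s = sum(w)
    return tuple(sum(wi * v[k] for wi, v in zip(w, vertices)) / s
                 for k in range(2))

below = all(affine(random_point(), c, c0) <= vmax + 1e-12
            for _ in range(10000))
```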
<br />
Similar to previous chapters, we introduce a fixed shape factor p (a positive definite, convex polynomial) and maximize β subject to the conditions in Proposition
1 Unlike the previous chapter, we only consider the parametric uncertainty case in this chapter and ∆ is<br />
a polytope in R m .<br />
5.1.1 and Ω p,β ⊆ Ω V :<br />
β_∆^opt(V) := max_{V∈V, β>0} β subject to    (5.5a)
V(0) = 0 and V(x) > 0 for all x ≠ 0,    (5.5b)
Ω_V = {x ∈ Rⁿ : V(x) ≤ 1} is bounded,    (5.5c)
Ω_{p,β} ⊆ Ω_V,    (5.5d)
Ω_V \ {0} ⊂ M_{E_∆}.    (5.5e)
Finally, using the generalized S-procedure and the Positivstellensatz, we relax this problem into an SOS programming problem: Let l_1 and l_2 be positive definite polynomials (typically ɛxᵀx for some small real number ɛ). Let V_poly ⊆ V and let S_1, S_2, and S_3 be prescribed finite-dimensional subsets of R[x], and denote S = (S_1, S_2, S_3). For a polytopic subset D of ∆, define β_D(V_poly, S) as

β_D(V_poly, S) := max_{V∈V_poly, β, s_1∈S_1, s_{2δ}∈S_2, s_{3δ}∈S_3} β subject to

s_1 ∈ Σ[x], s_{2δ} ∈ Σ[x], s_{3δ} ∈ Σ[x], for all δ ∈ E_D,    (5.6a)
β > 0, V(0) = 0, V ∈ V_poly,    (5.6b)
V − l_1 ∈ Σ[x],    (5.6c)
−[(β − p)s_1 + (V − 1)] ∈ Σ[x], and    (5.6d)
−[(1 − V)s_{2δ} + ∇V(f_0 + Σ_{i=1}^m δ_i f_i)s_{3δ} + l_2] ∈ Σ[x], for all δ ∈ E_D.    (5.6e)
The feasibility of the constraints in (5.6) is sufficient for the feasibility of the constraints in (5.5). Therefore, β_∆(V_poly, S) ≤ β_∆^opt(V). For simplicity, we will hereafter omit the dependence of β_∆ on V_poly and S and the dependence of β_∆^opt on V in the notation (unless explicitly needed).
The optimization in (5.6) is naturally converted to a bilinear semidefinite program (SDP), with three “types” of decision variables: the free parameters in V, the free parameters in the s polynomials, and the free parameters introduced by the SOS constraints. The SDP is bilinear in the free parameters in V and the multipliers s, as evidenced by the product terms (e.g., Vs_{2δ}, ∇Vfs_{3δ}, etc.). We have made significant pragmatic progress in obtaining
high-quality solutions to (5.6), using simulation to first derive a convex outer-bound on<br />
the set <strong>of</strong> feasible V parameters (see chapter 3 and [70]) and then drawing samples from<br />
that set to initialize the nonconvex search from a viable starting point. Nevertheless, the<br />
nonconvexity of the feasible set is not to be taken lightly, and the result of any numerical attempt to compute β_D should itself be treated as a lower bound on β_D. If the optimization yields a feasible point, the achieved objective value may be lower than the optimal objective value.
For this reason, upper bounds on the optimal objective value are also useful.<br />
5.1.1 Upper Bounds<br />
Trajectories of (5.1) that do not converge to the origin provide upper bounds for β_∆^opt and consequently for β_∆. Let β^nc be a value of p attained on such a non-convergent trajectory. Since every trajectory entering an invariant subset of the robust ROA has to converge to the origin, Ω_{p,β^nc} cannot be a subset of the robust ROA; hence, the inequality β_∆ ≤ β_∆^opt < β^nc holds.
In order to establish another upper bound, fix β > 0. If there exists V ∈ V certifying that Ω_{p,β} is in the robust ROA through (5.5b)-(5.5e), then V has to be (i) positive for all nonzero x ∈ Rⁿ, and (ii) less than or equal to 1 on, and decreasing along, every trajectory of (5.1) starting in Ω_{p,β} and converging to the origin. Therefore, if no V ∈ V satisfies properties (i) and (ii), then one can conclude that there is no V ∈ V satisfying (5.5b)-(5.5e) (i.e., certifying that Ω_{p,β} is in the robust ROA through (5.5b)-(5.5e)). Consequently, such a value of β, call it
β^lp, provides an upper bound on β_∆^opt(V). In the case V = {V : V(x) = αᵀz(x)}, where z(x) is a basis vector for V (e.g., a vector of polynomials) and α is a vector of real scalars (the decision variables in V), constraints on V become affine constraints on α. When these conditions are imposed for finitely many x (on convergent trajectories starting in Ω_{p,β}) and finitely many δ ∈ ∆, checking their feasibility yields a linear programming problem with the constraints on α

0 < z(x)ᵀα ≤ 1,
αᵀ∇z(x)ᵀ(f_0(x) + Σ_{i=1}^m δ_i f_i(x)) < 0.    (5.7)
In practice, we replace the strict inequalities in (5.7) by the non-strict inequalities ɛ_1xᵀx ≤ z(x)ᵀα and αᵀ∇z(x)ᵀf(x) ≤ −ɛ_2xᵀx, where ɛ_1 and ɛ_2 are two small positive constants. Effectively, the set of α satisfying (5.7) is an outer bound for the set of α such that V(x) = z(x)ᵀα certifies that Ω_{p,β} is in the robust ROA through (5.5b)-(5.5e). This outer bound becomes tighter as the number of constraints in (5.7) increases and/or as additional necessary conditions are imposed, such as those from divergent trajectories and from the stability of the linearized dynamics (this latter case would lead to a convex feasibility problem instead of a mere linear programming problem). See section 3.4 and [70] for a related discussion.
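The constraint-collection step can be illustrated on a toy problem. The sketch below uses the hypothetical one-state system ẋ = −x + x³ (not from this chapter) with basis z(x) = [x², x⁴], collects the ɛ-relaxed constraints (5.7) along a sampled convergent trajectory, and eliminates candidate coefficient vectors α that violate them. A surviving candidate is not thereby certified, but an eliminated one is certifiably outside the feasible set:

```python
# Sampled, epsilon-relaxed constraints (5.7) for the toy system
# x' = -x + x**3 with V(x) = a1*x**2 + a2*x**4 (basis z = [x^2, x^4]).
EPS1 = EPS2 = 1e-8

def violates(alpha, xs):
    a1, a2 = alpha
    for x in xs:
        V = a1 * x**2 + a2 * x**4                   # z(x)^T alpha
        dV = (2*a1*x + 4*a2*x**3) * (-x + x**3)     # alpha^T grad z(x)^T f(x)
        if not (EPS1 * x * x <= V <= 1.0 and dV <= -EPS2 * x * x):
            return True
    return False

# samples on a convergent trajectory from x = 0.5 (forward Euler steps)
xs, x = [], 0.5
for _ in range(100):
    xs.append(x)
    x += 0.05 * (-x + x**3)

candidates = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)]
survivors = [a for a in candidates if not violates(a, xs)]
```

Here (−1, 0) is eliminated because it fails positivity, while both genuinely decreasing candidates survive; in the full method, the surviving set is an outer bound on the feasible α, and its emptiness certifies β as an upper bound.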
Remarks 5.1.1. For any subset D of ∆, β_D provides another upper bound for β_∆. However, computing β_D requires solving a nonconvex optimization problem. Therefore, practically, β_D cannot be used as a reliable upper bound. ⊳
5.2 Polynomial Parametric Uncertainty<br />
We extend the development in section 5.1 to systems <strong>of</strong> the form (5.1) with m pu = 1<br />
and a polynomial g 1 . We later propose a generalization for m pu ≥ 1.<br />
Replacing g_1(δ) by an artificial parameter φ, the dynamics in (5.1) can be written as

ẋ(t) = f_0(x(t)) + Σ_{i=1}^m δ_i f_i(x(t)) + φf_{m+1}(x(t)).    (5.8)

Our approach is based on covering the graph of g,

{(ζ, g(ζ)) ∈ R^{m+1} : ζ ∈ ∆},

by a bounded polytope Γ ⊂ R^{m+1}. Then, the dependence of the vector field in (5.8) on the parameters (δ, φ) is affine and (δ, φ) takes values in the bounded polytope Γ. Therefore, results from section 5.1 are applicable to the system in (5.8) by replacing ∆ with Γ.
A polytope covering the graph of g can be obtained by bounding g from below by an affine function a_lᵀδ + b_l and from above by another affine function a_uᵀδ + b_u over the set ∆:

Γ(a_l, a_u, b_l, b_u) := {(ζ, ψ) ∈ R^{m+1} : ζ ∈ ∆, a_lᵀζ + b_l ≤ ψ, a_uᵀζ + b_u ≥ ψ}.

Here, a_l, a_u ∈ R^m and b_l, b_u ∈ R. Then, the volume, Volume(Γ(a_l, a_u, b_l, b_u)), of the polytope Γ is a linear function of its arguments a_l, a_u, b_l, and b_u:

Volume(Γ(a_l, a_u, b_l, b_u)) := ∫_∆ [(a_u − a_l)ᵀζ + (b_u − b_l)] dζ_1 … dζ_m
                               = (a_u − a_l)ᵀ ∫_∆ ζ dζ + (b_u − b_l) ∫_∆ dζ.

The polytope with smallest volume among such covering polytopes can be computed
through the optimization

Volume* := min_{a_l, a_u, b_l, b_u} Volume(Γ(a_l, a_u, b_l, b_u)) subject to
g(δ) − (a_lᵀδ + b_l) ≥ 0, ∀δ ∈ ∆,    (5.9)
g(δ) − (a_uᵀδ + b_u) ≤ 0, ∀δ ∈ ∆.
Using Lemma 2.3.1, an upper bound for Volume ∗ can be computed by a linear SOS optimization<br />
problem. To this end, let affine functions h i , i = 1, . . . , N, provide an inequality<br />
description for ∆, i.e.,<br />
∆ = {ζ ∈ R m : h i (ζ) ≥ 0, i = 1, . . . , N} .<br />
Proposition 5.2.1. The value of the optimization problem

min_{a_l, a_u, b_l, b_u, σ_{ui}∈S_{ui}, σ_{li}∈S_{li}} Volume(Γ(a_l, a_u, b_l, b_u)) subject to

−g(δ) + (a_uᵀδ + b_u) − Σ_{i=1}^N σ_{ui}(δ)h_i(δ) ∈ Σ[δ],    (5.10a)
g(δ) − (a_lᵀδ + b_l) − Σ_{i=1}^N σ_{li}(δ)h_i(δ) ∈ Σ[δ],    (5.10b)
σ_{ui} ∈ Σ[δ], σ_{li} ∈ Σ[δ], i = 1, …, N,

is an upper bound for Volume*. Here, the S's are finite-dimensional subsets of R[δ]. ⊳
Remarks 5.2.1.<br />
i. Note that Volume(Γ(a_l, a_u, b_l, b_u)) = Volume(Γ(0, a_u, 0, b_u)) − Volume(Γ(0, a_l, 0, b_l)), and therefore the optimizing values of the variables a_l, a_u, b_l, and b_u in Proposition 5.2.1 can equivalently be computed by two smaller optimization problems:

min_{a_u, b_u, σ_ui ∈ S_ui} Volume(Γ(0, a_u, 0, b_u)) subject to (5.10a) and σ_ui ∈ Σ[δ], i = 1, …, N,
and<br />
max_{a_l, b_l, σ_li ∈ S_li} Volume(Γ(0, a_l, 0, b_l)) subject to (5.10b) and σ_li ∈ Σ[δ], i = 1, …, N.
ii. Higher-order relaxations for semialgebraic set containment based on the Positivstellensatz [49] can be used for less conservative results at the expense of increased computational cost.
⊳<br />
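As a concrete sanity check of this construction, the following sketch (plain numerics, not an SOS solver) verifies a minimal-volume affine pair for the scalar function g(ζ) = ζ^2 on ∆ = [0, 1]: by convexity the secant ψ = ζ is a valid upper bound, and the tangent at the midpoint, ψ = ζ − 1/4, is a volume-optimal lower bound. These particular bound choices are worked out here for illustration and are not taken from the text.

```python
import numpy as np

# g(z) = z^2 on Delta = [0, 1].
# Upper affine bound: the secant through (0, 0) and (1, 1):  psi = z.
# Lower affine bound: the tangent at z0 = 1/2:               psi = z - 1/4.
a_u, b_u = 1.0, 0.0
a_l, b_l = 1.0, -0.25

z = np.linspace(0.0, 1.0, 1001)
g = z**2
assert np.all(a_l * z + b_l <= g + 1e-12)   # lower bound holds on Delta
assert np.all(g <= a_u * z + b_u + 1e-12)   # upper bound holds on Delta

# Volume(Gamma) = int_Delta [(a_u - a_l) z + (b_u - b_l)] dz = 1/4 here
volume = (a_u - a_l) * 0.5 + (b_u - b_l)
```

Note that the volume is linear in (a_l, a_u, b_l, b_u), exactly as used in the objective of (5.9).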
In case m_pu ≥ 1, the (m+1)-dimensional polytopes Γ_1, …, Γ_{m_pu} covering the graphs of g_1, …, g_{m_pu}, respectively, can be determined by applying the procedure proposed in this section repeatedly. Then, a polytope covering the graph of (g_1, …, g_{m_pu}) can be constructed as the intersection

˜Γ := ˜Γ_1 ∩ … ∩ ˜Γ_{m_pu},

where, for i = 1, …, m_pu,

˜Γ_i := { (ζ, ψ) ∈ R^{m+m_pu} : (ζ, ψ_i) ∈ Γ_i }.
The following proposition characterizes the extreme points of ˜Γ. For a similar result see [4].
Proposition 5.2.2. For i = 1, …, m_pu, let a_li^T δ + b_li and a_ui^T δ + b_ui be affine functions bounding g_i over ∆ from below and above, respectively. Then, the set of vertices of ˜Γ is

E_˜Γ := ⋃_{ζ ∈ E_∆} { (ζ, ψ_1, …, ψ_{m_pu}) ∈ R^{m+m_pu} : ψ_i = a_αi^T ζ + b_αi, α ∈ {l, u}, i = 1, …, m_pu }.
⊳
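Proposition 5.2.2 suggests a direct enumeration of the vertices of ˜Γ: at each vertex ζ of ∆, each ψ-coordinate independently takes either its lower- or upper-bound value. A minimal sketch (the helper name `polytope_vertices` is hypothetical):

```python
import itertools
import numpy as np

def polytope_vertices(E_Delta, a_l, a_u, b_l, b_u):
    """Vertices of the intersection polytope covering the graph of
    (g_1, ..., g_mpu), per Proposition 5.2.2.

    E_Delta : list of vertices zeta of the parameter polytope (arrays in R^m)
    a_l, a_u: (mpu, m) arrays of lower/upper slope vectors a_li, a_ui
    b_l, b_u: length-mpu arrays of offsets b_li, b_ui
    """
    mpu = len(b_l)
    verts = []
    for zeta in E_Delta:
        # each alpha in {l, u}^mpu picks one bound per psi-component
        for alpha in itertools.product((0, 1), repeat=mpu):
            psi = [(a_u[i] @ zeta + b_u[i]) if alpha[i]
                   else (a_l[i] @ zeta + b_l[i])
                   for i in range(mpu)]
            verts.append(np.concatenate([zeta, psi]))
    return np.array(verts)
```

The vertex count is |E_∆| · 2^{m_pu}, which is the source of the exponential growth discussed in section 5.3.4.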
Proof. For j = 0, 1, 2, let θ^[j] = (ζ^[j], ψ_1^[j], …, ψ_{m_pu}^[j]) ∈ E_˜Γ be such that θ^[0] = λθ^[1] + (1 − λ)θ^[2] for some λ ∈ (0, 1). Since ζ^[0] = λζ^[1] + (1 − λ)ζ^[2], ζ^[0] ∈ E_∆, and ζ^[1], ζ^[2] ∈ ∆, it follows that ζ^[0] = ζ^[1] = ζ^[2] (by the definition of vertices (extreme points) [54]). Now, fix an arbitrary i ∈ {1, …, m_pu}. If ψ_i^[0] = a_li^T ζ^[0] + b_li, then

ψ_i^[0] = a_li^T ζ^[0] + b_li = λψ_i^[1] + (1 − λ)ψ_i^[2].

Since θ^[1], θ^[2] ∈ ˜Γ, we have ψ_i^[1] ≥ a_li^T ζ^[0] + b_li and ψ_i^[2] ≥ a_li^T ζ^[0] + b_li. If either of the last two inequalities were strict, we would reach the contradiction

a_li^T ζ^[0] + b_li = ψ_i^[0] = λψ_i^[1] + (1 − λ)ψ_i^[2] > a_li^T ζ^[0] + b_li,

since λ ∈ (0, 1). Hence, ψ_i^[0] = ψ_i^[1] = ψ_i^[2]. The same reasoning applies when ψ_i^[0] = a_ui^T ζ^[0] + b_ui. Since i ∈ {1, …, m_pu} is arbitrary, θ^[0] = θ^[1] = θ^[2], and θ^[0] is an extreme point of ˜Γ [54]. Now, let ˜θ = (˜ζ, ˜ψ_1, …, ˜ψ_{m_pu}) ∉ E_˜Γ. Then, two cases are possible:

i. ˜θ ∉ ˜Γ: ˜θ cannot be a vertex of ˜Γ since ˜Γ is a convex polytope.

ii. ˜θ ∈ ˜Γ: either ˜ζ is in the interior of ∆, and consequently ˜θ cannot be a vertex of ˜Γ, or ˜ζ ∈ E_∆ and, since ˜θ ∉ E_˜Γ, the inequalities a_li^T ˜ζ + b_li ≤ ˜ψ_i ≤ a_ui^T ˜ζ + b_ui are strict for at least one i. In the latter case, there exists λ ∈ (0, 1) such that ˜ψ_i = λ(a_li^T ˜ζ + b_li) + (1 − λ)(a_ui^T ˜ζ + b_ui), so ˜θ is a proper convex combination of two distinct points of ˜Γ and therefore is not an extreme point of ˜Γ. □
Remarks 5.2.2.<br />
i. Proposition 5.2.2 gives one specific procedure to cover the graph of a polynomial function by a convex polytope. Further research that advances graph covering strategies and quantifies the trade-off between the number of vertices and the volume of the covering polytope would be applicable to the robust ROA problem.
ii. The proposition can be used with bounded non-polynomial functions g_1, …, g_{m_pu} as long as affine upper and lower bounds are provided.
⊳
5.3 Branch-and-Bound Type Refinement in the Parameter<br />
Space<br />
The optimization problem in (5.6), when applied with D = ∆, provides a method for computing invariant subsets of the robust ROA characterized by a single Lyapunov function. Therefore, the results of (5.6) may be conservative: the certified invariant subset may be small relative to the robust ROA. On the other hand, a less conservative estimate of the robust ROA can be obtained by solving (5.6) for each δ ∈ ∆ with D = {δ}: the resulting value β*_∆, where, for a subset D ⊆ ∆, β*_D is defined as

β*_D := min_{δ ∈ D} β_{{δ}},   (5.11)

is greater than or equal to β_∆.² However, computing β*_∆ requires solving an optimization problem for each δ ∈ ∆ and is consequently impractical.
In the following, we propose an informal "branch-and-bound" type procedure for computing lower and upper bounds for β*_∆, i.e., for localizing the value of β*_∆. The method is based on computing a different Lyapunov function for each cell of a finite partition of ∆. Therefore, it potentially leads to less conservative estimates of the robust ROA compared to directly solving (5.6) with D = ∆, and to more conservative estimates compared to Ω_{p,β*_∆}.
5.3.1 Branch-and-Bound Algorithm<br />
Branch-and-bound is an algorithmic method for global optimization. The method is<br />
based on two steps: first the search region is covered by smaller subregions (branching) and<br />
then upper and lower bounds for the objective function restricted to each subregion are<br />
computed (bounding) [40]. A formal B&B application converges to the global optimum<br />
2 Note that for a singleton {δ}, E {δ} = {δ}.<br />
88
if the difference between the upper and lower bounds uniformly converges to zero as the<br />
“size” <strong>of</strong> the subregions goes to zero. Without specific convergence guarantees, these steps<br />
are repeated refining the partition <strong>of</strong> the search region until the gap between the upper<br />
and lower bounds gets smaller than a prescribed tolerance or a maximum number <strong>of</strong> subpartitioning<br />
is reached.<br />
For a subset D ⊆ ∆, let L(D) and U(D) be lower and upper bounds for β*_D. For a partition 𝒟 of ∆, define

L_𝒟 := min_{D ∈ 𝒟} L(D),   U_𝒟 := min_{D ∈ 𝒟} U(D).

Then, it follows that

L_𝒟 ≤ β*_∆ ≤ U_𝒟.
Using L_𝒟 and U_𝒟, the following branch-and-bound type refinement can be implemented for localizing β*_∆ [5].

Branch-and-Bound (B&B) Algorithm: Given an initial partition 𝒟⁰ of ∆, a positive integer N_iter, and a positive scalar ɛ > 0,

• k ← 0;
• compute L_{𝒟^k} and U_{𝒟^k};
• while k ≤ N_iter and U_{𝒟^k} − L_{𝒟^k} > ɛ
  – k ← k + 1;
  – pick ˜D ∈ 𝒟^{k−1} such that β_˜D = L_{𝒟^{k−1}};
  – split ˜D into ˜D_I and ˜D_II;
  – form 𝒟^k from 𝒟^{k−1} by removing ˜D and adding ˜D_I and ˜D_II;
  – compute L_{𝒟^k} and update U_{𝒟^k};
• return 𝒟_exit := 𝒟^k and the corresponding lower and upper bounds.
⊳
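A minimal sketch of the loop above, with the lower- and upper-bound computations abstracted as callables (in the text these come from the SOS problem (5.6) and section 5.3.3; here they are placeholders supplied by the caller, and the interval representation of cells is our own choice):

```python
def branch_and_bound(delta, lower, upper, split, n_iter=50, eps=1e-2):
    """B&B refinement over the uncertainty set.

    delta : initial cell (e.g. an interval (lo, hi))
    lower, upper : callables returning L(D) and U(D) for a cell D
    split : callable returning two subcells of a cell
    """
    cells = [delta]
    L = {delta: lower(delta)}
    U_best = min(upper(D) for D in cells)
    for _ in range(n_iter):
        L_part = min(L[D] for D in cells)       # L over the current partition
        if U_best - L_part <= eps:
            break
        worst = min(cells, key=lambda D: L[D])  # cell attaining the lower bound
        cells.remove(worst)
        for child in split(worst):
            cells.append(child)
            L[child] = lower(child)
            U_best = min(U_best, upper(child))
    return cells, min(L[D] for D in cells), U_best
```

For instance, with interval cells, `split` can bisect and `lower`/`upper` can wrap any valid bound computations; the returned gap shrinks as the cells are refined.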
Next, we discuss the generation of the lower and upper bounds used in the B&B Algorithm.
5.3.2 Computing Lower Bounds<br />
A lower bound L_{𝒟^k} for β*_∆ associated with the partition 𝒟^k can be obtained as L_{𝒟^k} := min_{D ∈ 𝒟^k} β_D, since, for each δ ∈ ∆, there exists D ∈ 𝒟^k such that δ ∈ D. Furthermore, for every D ∈ 𝒟^k, β_∆ ≤ β_D since D ⊆ ∆. Then, taking the minimum of β_D over D ∈ 𝒟^k, we obtain the inequality β_∆ ≤ L_{𝒟^k}. Consequently, the inequality

β_∆ ≤ L_{𝒟^k} ≤ β*_∆

holds.
Remarks 5.3.1. If the problem in (5.6) is infeasible for some D ∈ 𝒟^k, then L_{𝒟^k} = 0. Therefore, further refinement of the partition is needed in order to get a nonzero lower bound.
⊳<br />
5.3.3 Computing Upper Bounds<br />
Divergent trajectories and the infeasibility of certain necessary conditions for the constraints in (5.11) provide upper bounds for β*_∆. β^nc, the smallest value of p attained on a non-convergent simulation trajectory, is an upper bound for β*_∆. β^lp, as defined in section 5.1.1, provides an upper bound for β*_∆, but only when the constraints in (5.7) are imposed for a singleton {δ} ⊂ ∆ (see (5.11)).
Remarks 5.3.2. The inequalities L_{𝒟^k} ≤ L_{𝒟^{k+1}} ≤ β*_∆ hold for k = 0, 1, 2, …, since for each D ∈ 𝒟^{k+1} there exists D′ ∈ 𝒟^k such that D ⊆ D′. When U_{𝒟^k} is defined as the smallest value of the upper bounds found up to the k-th step of the B&B Algorithm, the inequality β*_∆ ≤ U_{𝒟^{k+1}} ≤ U_{𝒟^k} holds. Therefore, the B&B Algorithm generates a sequence of upper and lower bounds for β*_∆ that satisfy

L_{𝒟^k} ≤ L_{𝒟^{k+1}} ≤ β*_∆ ≤ U_{𝒟^{k+1}} ≤ U_{𝒟^k} for k = 0, 1, 2, …,

and the value of β*_∆ is better localized as k increases.
⊳
5.3.4 Implementation Issues<br />
The B&B type refinement procedure with the lower bound computed through the problem<br />
in (5.6) provides a pragmatic approach for robust local stability analysis. However, as noted in section 4.3, the number of constraints in (5.6), and consequently the number of decision variables, increases exponentially with m + m_pu because (5.6e) contains a SOS constraint for each vertex of the uncertainty polytope. The increase in the problem size may render the problem in (5.6) computationally challenging for even modest values of m + m_pu. To partially alleviate this difficulty, we use the sequential suboptimal solution technique proposed in section 4.3 for the lower bound computation, along with a generalization of the simulation-aided technique from chapter 3 in the first step.
Remarks 5.3.3. Let D ⊆ ∆ and D_sample ⊂ D be a singleton. Then, the value β_{D_sample} − β^{subopt}_D is always nonnegative and can be interpreted as a measure of the potential improvement in the lower bound for β*_∆ from further sub-dividing D in the B&B refinement procedure. Therefore, it may be used as a stopping criterion in the B&B algorithm. However, we re-emphasize that β_{D_sample} is computed by solving a nonconvex optimization problem. ⊳
Finally, the following continuity result (based on a stricter version of Proposition 5.1.1 from chapter 4, namely Propositions 4.2.2 and 4.2.1, which replaces ∇V(x)f(x) < 0 by ∇V(x)f(x) < −µV(x) for some positive scalar µ) is a first step toward a convergence analysis of the implementation of the B&B algorithm for estimating the robust ROA.

Lemma 5.3.1. Let µ > 0 and 0 < γ < 1 be given. Let V_0 be a positive definite quadratic polynomial, s_20 ∈ Σ[x] be a quadratic polynomial, and s_30 > 0 be such that

b(x) := −[ (1 − V_0)s_20 + (∇V_0 f_0 + µV_0)s_30 + l_2 ] ∈ Σ[x].

Then, there exist ε > 0 and α > 0 such that

b_δ(x) := −[ (γ − V_0)(s_20 + αx^T x) + (∇V_0 (f_0 + Σ_{i=1}^m δ_i f_i) + (µ/2)V_0)s_30 + l_2 ] ∈ Σ[x]

for all δ ∈ R^m with δ_i ∈ [−ε, ε].
⊳
Proof. b_δ(x) can be written as

b_δ(x) = b(x) + ˜b_δ(x)
= b(x) + [ (1 − γ)s_20(x) − γαx^T x + αV_0(x)x^T x − Σ_{i=1}^m δ_i ∇V_0(x)f_i(x)s_30 + (µ/2)V_0(x)s_30 ].

Using ideas from the proofs of Lemma 3.5.1 and Proposition 3.5.1, it can be shown that there exist α > 0 and ε > 0 such that ˜b_δ ∈ Σ[x] for all δ ∈ R^m with δ_i ∈ [−ε, ε]. □
5.4 Examples<br />
In all examples, l_1(x) = l_2(x) = 10^{−6} x^T x and p(x) = x^T x.
Figure 5.1. Polytopic cover for {(ζ, ψ) ∈ R^2 : ζ ∈ [0, 1], ψ = ζ^2} with 2 cells (red) and 4 cells (yellow). The black curve is the set {(ζ, ψ) ∈ R^2 : ζ ∈ [0, 1], ψ = ζ^2}.
5.4.1 An Example From the Literature<br />
Consider the system [19] governed by

ẋ_1 = −x_1 + (−6x_2 + x_2^2 + x_1^3) δ + (4x_2 − x_2^2) δ^2,
ẋ_2 = 3x_1 − 2x_2 + (−10x_1 + 6x_2 + x_1 x_2) δ + (12x_1 − 4x_2) δ^2,
where the uncertain scalar parameter δ takes values in [0, 1]. Following the procedure from section 5.2, we replaced δ^2 by an artificial parameter φ. With the initial partition {[0, 1]}, we applied the refinement procedure for ∂(V) = 2 and ∂(V) = 4. Figure 5.1 shows the polytopic cover for {(ζ, ψ) ∈ R^2 : ζ ∈ [0, 1], ψ = ζ^2} with 2 cells (red) and 4 cells (yellow). Upper and lower bounds for β*_{[0,1]} are shown in Figure 5.2 (top for ∂(V) = 2 and bottom for ∂(V) = 4). Curves with "◦" show the lower bounds obtained by directly solving (5.6) with D taken as the vertices of the corresponding cell, curves with "⋄" show the lower bounds obtained by applying the sequential procedure from section 4.3 taking ∆_sample (in the first step) as the center of the corresponding cell, and curves with "×" (in the top figure only) and "⋆" show β^nc and β^lp, respectively.
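The 2- and 4-cell covers of Figure 5.1 can be reproduced in the spirit of section 5.2; the sketch below (hypothetical helpers `cover_square` and `cover_volume`, using the secant as the per-cell upper bound and the midpoint tangent as the per-cell lower bound) shows how the total cover volume shrinks as cells are added:

```python
import numpy as np

def cover_square(num_cells):
    """Per-cell affine bounds for g(z) = z^2 over a partition of [0, 1].
    Returns a list of (lo, hi, (a_l, b_l), (a_u, b_u)) tuples."""
    edges = np.linspace(0.0, 1.0, num_cells + 1)
    cells = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        a_u, b_u = lo + hi, -lo * hi     # secant through (lo, lo^2), (hi, hi^2)
        mid = 0.5 * (lo + hi)
        a_l, b_l = 2 * mid, -mid**2      # tangent at the cell midpoint
        cells.append((lo, hi, (a_l, b_l), (a_u, b_u)))
    return cells

def cover_volume(cells):
    # sum over cells of int_lo^hi [(a_u - a_l) z + (b_u - b_l)] dz
    return sum((hi - lo) * ((au - al) * (lo + hi) / 2 + (bu - bl))
               for lo, hi, (al, bl), (au, bu) in cells)
```

With these choices each cell contributes (hi − lo)^3/4, so cover_volume(cover_square(2)) = 1/16 and cover_volume(cover_square(4)) = 1/64: halving the cell width cuts the total volume by a factor of eight.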
Figure 5.2. Curves with "◦" show the lower bounds obtained by directly solving (5.6) with D taken as the vertices of the corresponding cell, curves with "⋄" show the lower bounds obtained by applying the sequential procedure from section 4.3 taking ∆_sample (in the first step) as the center of the corresponding cell, and curves with "×" (in the top figure only) and "⋆" show β^nc and β^lp, respectively.
Figure 5.3. Estimates of the robust ROA: from [19] (black), and using the branch-and-bound based method for ∂(V) = 2 (red) and ∂(V) = 4 (green).
5.4.2 Controlled Short Period Aircraft Dynamics<br />
We apply the branch-and-bound type refinement procedure to the uncertain controlled short period pitch axis model of an aircraft used in Example 4.5.2 with ∂(V) = 2 and ∂(V) = 4, using the two-step implementation from section 5.3.4 on a computer cluster with 9 processors: after the first B&B iteration, the cell with the smallest lower bound for β*_∆ is subdivided into 3 subcells, and the cells with the 2nd, 3rd, and 4th smallest lower bounds for β*_∆ are subdivided into 2 subcells. Figure 5.4 shows the lower bounds for β*_∆ with ∂(V) = 2 (green solid curve with "×" marker) and ∂(V) = 4 (blue solid curve with "⋄" marker) and β^nc (red solid curve with "◦" marker) computed at the centers of the cells generated by the B&B Algorithm for the ∂(V) = 4 run. Dashed curves show (computed values of) β_{{δ}} where δ is the center of the cell with the smallest lower bound at the corresponding step of the B&B refinement procedure for ∂(V) = 2 (green curve with "×" marker) and ∂(V) = 4 (blue curve with "⋄" marker). Figure 5.5 shows the final partition generated by the B&B algorithm for the ∂(V) = 4 run. β^nc, the smallest value of p attained on non-convergent simulation trajectories, is 8.56, obtained for (δ_1, δ_2) = (2.039, −0.099) and the initial condition (0.17, 2.65, −0.10, 1.24).

Figure 5.4. Lower bounds for β*_∆ with ∂(V) = 2 (green solid curve with "×" marker) and ∂(V) = 4 (blue solid curve with "⋄" marker) and β^nc (red solid curve with "◦" marker) computed at the centers of the cells generated by the B&B Algorithm for the ∂(V) = 4 run. Dashed curves show (computed values of) β_{{δ}} where δ is the center of the cell with the smallest lower bound at the corresponding step of the B&B refinement procedure for ∂(V) = 2 (green curve with "×" marker) and ∂(V) = 4 (blue curve with "⋄" marker).
5.4.3 Controlled Short Period Aircraft Dynamics with Unmodeled Dynamics<br />
Consider the closed-loop dynamics in Figure 5.6, where uncertain first-order dynamics are introduced between the controller output (v) and the plant input (u) from section 5.4.2:

u(s) = (1.25 + G(s, δ_3, δ_4)) v(s) = [ 1.25 + 0.75δ_3 (s − δ_4)/(s + δ_4) ] v(s).   (5.12)
Here, δ_3 ∈ [−1, 1] and δ_4 ∈ [10^{−2}, 10^2] are uncertain parameters, and G(s, δ_3, δ_4) is introduced to examine the effect of unmodeled dynamics on the ROA. Let ẋ_5 = −δ_4 x_5 − δ_4 v and u = 1.5δ_3 x_5 + (1.25 + 0.75δ_3)v be a realization of (5.12), and let x = [x_p^T x_4 x_5]^T denote the state of the closed-loop dynamics. We apply the B&B based refinement procedure with ∂(V) = 2 for two cases: (i) (δ_1, δ_2) set to the nominal values (1.52, 0); (ii) δ_1 and δ_2 treated as uncertain parameters with the bounds δ_1 ∈ [0.99, 2.05] and δ_2 ∈ [−0.1, 0.1]. Note that in case (ii) the uncertain parameters appear in the vector field as δ_1, δ_2, δ_3, δ_4, δ_1δ_3, δ_2δ_3, and δ_1^2. Consequently, the covering polytopes are in R^7 with up to 128 (distinct) vertices. In case (i), Ω_{p,4.90} is shown to be in the robust ROA, whereas, in case (ii), Ω_{p,2.80} is certified to be in the robust ROA.

Figure 5.5. Final partition generated by the B&B algorithm for the ∂(V) = 4 run.

Figure 5.6. Controlled short period aircraft dynamics with uncertain first-order linear time-invariant dynamics (δ_p := (δ_1, δ_2)). The block diagram consists of the controller ẋ_4 = A_c x_4 + B_c y, v = C_c x_4, the uncertain input path u = (1.25 + 0.75δ_3 (s − δ_4)/(s + δ_4)) v, and the plant ẋ_p = f_p(x_p, δ_p) + B(x_p, δ_p)u with y = [x_1 x_3]^T.
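As a numerical cross-check (not part of the original analysis), the state-space realization of the uncertain input path can be compared against the transfer function in (5.12) at a few frequencies, for arbitrary admissible parameter values:

```python
import numpy as np

d3, d4 = 0.4, 3.0   # arbitrary values with d3 in [-1, 1], d4 in [1e-2, 1e2]
for s in 1j * np.array([0.1, 1.0, 10.0]):
    tf = 1.25 + 0.75 * d3 * (s - d4) / (s + d4)           # from (5.12)
    # realization: x5 = -d4/(s + d4) * v, u = 1.5*d3*x5 + (1.25 + 0.75*d3)*v
    ss = 1.5 * d3 * (-d4 / (s + d4)) + (1.25 + 0.75 * d3)
    assert abs(tf - ss) < 1e-12
```

The identity follows from 1.5δ_3 (−δ_4/(s + δ_4)) + 0.75δ_3 = 0.75δ_3 (s − δ_4)/(s + δ_4).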
5.5 Chapter Summary<br />
We extended the applicability of the method proposed in the previous chapter to compute invariant subsets of the region-of-attraction for the asymptotically stable equilibrium points of polynomial dynamical systems with bounded parametric uncertainty:

• Systems with non-affine uncertainty dependence can now be handled.

• Conservatism associated with using a common Lyapunov function for an entire family of uncertain systems has been reduced by a branch-and-bound type refinement procedure in the uncertain parameter space.
The approach put forth here offers some advantages:

• The parameter-dependent Lyapunov functions achieved by uncertainty-space partitioning do not require an a priori parametrization of the Lyapunov function in the uncertain parameters. Conservatism (due to parameter-independent Lyapunov functions) is simply reduced by partitioning the uncertainty set.

• It leads to optimization problems with smaller semidefiniteness constraints, since uncertain parameters do not explicitly appear in the constraints (see section 5.1). Although the size of the semidefinite programming constraints does not increase with the number of uncertain parameters, their number does, and the problem becomes challenging as the number of uncertain parameters increases.

• A sequential implementation (similar to section 4.3) for computing suboptimal solutions, which decouples these constraints into smaller, independent problems, arises naturally. This is suitable for trivially parallel computation, offering a major advantage over approaches utilizing parameter-dependent Lyapunov functions.
Chapter 6

Reachability and Local Gain Analysis for Nonlinear Dynamical Systems
We consider the problem of computing upper bounds for reachable sets and local input-to-output gains of nonlinear dynamical systems with polynomial vector fields. Similar problems were studied in [37, 60, 50, 23, 77]. Following [37, 60], we characterize upper bounds on reachable sets and local input-output gains due to bounded L_2 and/or L_∞ disturbances using Lyapunov/storage functions, and we formulate bilinear SOS programming problems to estimate these bounds (with certificates). Motivated by the difficulties associated with bilinear programming, we adapt the simulation-data-based relaxations (see chapter 3) to formal proof construction for reachability and local gain analysis. We extend the upper bound refinement procedure proposed in [60] in the context of reachability to L_2 → L_2 gain analysis and provide a general proof. Finally, we propose a local small-gain theorem which enables (robust) region-of-attraction analysis based on Lyapunov/storage functions constructed for individual systems in an interconnection of systems.
6.1 Upper and Lower Bounds for the Reachable Set and Local Input-Output Gains
Consider the nonlinear dynamical system<br />
ẋ(t) = f(x(t), w(t)), (6.1)<br />
where x(t) ∈ R^n, w(t) ∈ R^{n_w}, and f is an n-vector with elements in R[(x, w)] such that f(0, 0) = 0. Let φ(t; x_0, w) denote the solution to (6.1) at time t with the initial condition x(0) = x_0 driven by the input/disturbance w. For a piecewise continuous map u : [0, ∞) → R^m, define the L_2 and L_∞ norms, respectively, as

‖u‖_2 := ( ∫_0^∞ u(t)^T u(t) dt )^{1/2},
‖u‖_∞ := sup_{t≥0} ‖u(t)‖.
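For a sampled signal on a uniform time grid, these norms can be approximated directly; a small sketch (the signal u(t) = e^{−t} is chosen here for illustration only):

```python
import numpy as np

# Discrete approximations of the L2 and Linf norms of a sampled signal,
# assuming a uniform time grid with step dt.
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
u = np.exp(-t)                       # u(t) = e^{-t}
L2 = np.sqrt(np.sum(u**2) * dt)      # close to sqrt(1/2) for e^{-t}
Linf = np.max(np.abs(u))             # equals 1.0, attained at t = 0
```

For u(t) = e^{−t}, ∫_0^∞ u^2 dt = 1/2, so ‖u‖_2 = 1/√2 ≈ 0.707 and ‖u‖_∞ = 1.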
6.1.1 Upper Bounds for the Reachable Set<br />
For ‖w‖_2 ≤ R, the set G_{R^2} of points reachable from the origin under (6.1) is defined as

G_{R^2} := { φ(T; 0, w) ∈ R^n : T ≥ 0, ‖w‖_2^2 ≤ R^2 }.

Lemma 6.1.1, adapted from the Lyapunov-like argument in [13, §6.1.1], provides a characterization of sets containing G_{R^2} [37, 60].
Lemma 6.1.1. If there exists a continuously differentiable function V such that

V(x) > 0 for all x ∈ R^n\{0} with V(0) = 0, and   (6.2)
∇V f(x, w) ≤ w^T w for all x ∈ Ω_{V,R^2}, w ∈ R^{n_w},   (6.3)

then G_{R^2} ⊆ Ω_{V,R^2}.
⊳
Given this characterization of upper bounds for the reachable set, one may ask two questions: (i) for given R such that ‖w‖_2 ≤ R, what is a tight upper bound for G_{R^2}? and (ii) given β > 0 and a positive definite function p, what is the largest value of R such that G_{R^2} ⊆ Ω_{p,β}? We will focus on the second question and consider the optimization

R^2_{reach,opt}(V, β) := max_{R^2 > 0, V ∈ V} R^2 subject to (6.2), (6.3), and Ω_{V,R^2} ⊆ Ω_{p,β},   (6.4)
where V denotes the set of candidate Lyapunov functions over which the maximum is computed. In order to make the problem in (6.4) amenable to numerical optimization (specifically SOS optimization), we restrict V and p to be polynomials of some fixed degree. Using the generalized S-procedure and SOS sufficient conditions for polynomial nonnegativity, a lower bound on R_reach,opt(V, β) can be determined through a SOS programming problem.
Proposition 6.1.1. Let β > 0, let l_1 be a positive definite polynomial satisfying l_1(0) = 0, and let R_reach be defined by

R^2_reach(V_poly, S, β) := max_{V ∈ V_poly, R^2 > 0, s_1 ∈ S_1, s_2 ∈ S_2} R^2 subject to   (6.5)
V(0) = 0, s_1 ∈ Σ[x], and s_2 ∈ Σ[(x, w)],   (6.6)
V − l_1 ∈ Σ[x],   (6.7)
(β − p) − (R^2 − V)s_1 ∈ Σ[x],   (6.8)
−[ (R^2 − V)s_2 + ∇V f(x, w) − w^T w ] ∈ Σ[(x, w)],   (6.9)

where V_poly ⊂ V and the S's are prescribed finite-dimensional subsets of R[x] and R[(x, w)]. Then,

R_reach(V_poly, S_1, β) ≤ R_reach,opt(V, β).
⊳
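Before attempting the SOS program, Lemma 6.1.1 can be exercised numerically on a toy system of our own choosing (not from the text). For ẋ = −x + w with V(x) = x^2, completing the square gives ∇V f = −2x^2 + 2xw = −x^2 − (x − w)^2 + w^2 ≤ w^T w everywhere, so every disturbance with ‖w‖_2 ≤ R should keep the trajectory inside Ω_{V,R^2}:

```python
import numpy as np

# Verify Lemma 6.1.1 numerically for xdot = -x + w with V(x) = x^2:
# trajectories driven by any w with ||w||_2 <= R stay in {x : x^2 <= R^2}.
rng = np.random.default_rng(0)
R, dt, N = 1.0, 1e-3, 5000
for _ in range(20):
    w = rng.normal(size=N)
    w *= R / np.sqrt(np.sum(w**2) * dt)    # normalize so ||w||_2 = R
    x = 0.0
    for k in range(N):                     # explicit Euler from x(0) = 0
        x += dt * (-x + w[k])
        assert x**2 <= R**2 + 1e-6         # remains in Omega_{V, R^2}
```

Random sampling of course cannot certify the containment; that is exactly what the SOS certificate in Proposition 6.1.1 provides.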
6.1.2 Upper Bounds for the Local L_2 → L_2 Gain

Consider the dynamical system governed by

ẋ(t) = f(x(t), w(t)),
z(t) = h(x(t)),   (6.10)

where x, w, and f are as before and h is an n_z-vector of polynomials such that h(0) = 0. We use the following lemma, which characterizes an upper bound for the induced L_2 → L_2 gain from w to z of the system (6.10).
Lemma 6.1.2. [60] If there exist a real scalar γ > 0 and a continuously differentiable function V such that

V(0) = 0 and V(x) ≥ 0,   (6.11)
∇V f(x, w) ≤ w^T w − γ^{−2} z^T z for all x ∈ Ω_{V,R^2} and w ∈ R^{n_w},   (6.12)

then the system in (6.10) with x(0) = 0 satisfies ‖z‖_2 ≤ γR whenever ‖w‖_2 ≤ R.
⊳
Similar to the reachability analysis, one may ask two related questions: (i) for given R such that ‖w‖_2 ≤ R, what is a tight upper bound for the L_2 → L_2 gain? and (ii) given γ, what is the largest value of R such that ‖z‖_2 ≤ γR whenever ‖w‖_2 ≤ R and x(0) = 0? We use Proposition 6.1.2 to address the second question (the first can be handled similarly).
Proposition 6.1.2. For given γ > 0, let R_{L_2} be defined by

R^2_{L_2}(V_poly, S, γ) := max_{V ∈ V_poly, R^2 > 0, s_1 ∈ S_1} R^2 subject to   (6.13)
V(0) = 0, s_1 ∈ Σ[(x, w)],   (6.14)
V ∈ Σ[x],   (6.15)
−[ (R^2 − V)s_1 + ∇V f(x, w) − w^T w + γ^{−2} z^T z ] ∈ Σ[(x, w)],   (6.16)

where V_poly and the S's are prescribed finite-dimensional subsets of R[x] and R[(x, w)], respectively. Then, R_{L_2}(V_poly, S, γ) is a lower bound for the largest value of R such that ‖z‖_2 ≤ γR whenever ‖w‖_2 ≤ R and x(0) = 0.
⊳
Remarks 6.1.1. Incorporating L_∞ norm bounds on the disturbance (in addition to L_2 norm bounds) in reachability and L_2 → L_2 gain analysis only requires a straightforward application of the generalized S-procedure. For example, for ρ > 0, if the conditions in Proposition 6.1.1 hold with (6.9) replaced by

−[ (R^2 − V)s_2 + ∇V f(x, w) − w^T w ] − s_3(ρ^2 − w^T w) ∈ Σ[(x, w)],

where s_3 ∈ Σ[(x, w)], then for x(0) = 0 and all T ≥ 0,

x(T) ∈ { φ(t; 0, w) ∈ R^n : ‖w‖_2 ≤ R, ‖w‖_∞ ≤ ρ, t ≥ 0 }.

Similarly, when the conditions of Proposition 6.1.2 hold with (6.16) replaced by

−[ (R^2 − V)s_1 + ∇V f(x, w) − w^T w + γ^{−2} z^T z ] − s_2(ρ^2 − w^T w) ∈ Σ[(x, w)],

where s_2 ∈ Σ[(x, w)], then, for x(0) = 0, ‖w‖_∞ ≤ ρ and ‖w‖_2 ≤ R imply that ‖z‖_2 ≤ γR. ⊳
6.1.3 Upper Bounds for the Local L_∞ → L_∞ Gain

The following result, reminiscent of [77, 50], provides an upper bound for the L_∞ → L_∞ gain.

Lemma 6.1.3. For ρ > 0 and ɛ > 0, if there exist a real scalar γ > 0 and a continuously differentiable function V such that V(0) < 1 and

∇V f(x, w) ≤ −ɛ for all x ∈ {x ∈ R^n : V(x) = 1} and all w ∈ W,   (6.17)
Ω_V = {x ∈ R^n : V(x) ≤ 1} ⊆ {x ∈ R^n : |h(x)| ≤ ργ},   (6.18)

where W := {w ∈ R^{n_w} : ‖w‖_∞ ≤ ρ}, then for the system in (6.10) with x(0) ∈ Ω_V, ‖z‖_∞ ≤ γρ whenever ‖w‖_∞ ≤ ρ.
⊳

By the generalized S-procedure, the conditions

−ɛ − ∇V f(x, w) − r(1 − V) − Σ_{i=1}^{n_w} s_i(ρ^2 − w_i^2) ∈ Σ[(x, w)],   (6.19)
(γ^2 ρ^2 − h^2) − s_0(1 − V) ∈ Σ[x],   (6.20)
s_0 ∈ Σ[x], s_1, …, s_{n_w} ∈ Σ[(x, w)],   (6.21)
r ∈ R[x], V ∈ R[x], V(0) < 1   (6.22)

are sufficient for the conditions in Lemma 6.1.3.
Remarks 6.1.2. Let E_W be the set of extreme points of W. When f(x, w) is affine in w (for fixed x),

−ɛ − ∇V f(x, w) − r_w(1 − V) ∈ Σ[x] for all w ∈ E_W,

where r_w ∈ R[x], is also a sufficient condition for (6.17) which does not depend explicitly on w, but there is one SOS condition for each vertex. This condition is suitable for adaptations of the sequential suboptimal solution strategy introduced in section 4.3.
⊳
6.1.4 Lower Bounds<br />
We now discuss lower bounds on reachable sets and local input-output gains. These lower bounds will be used to assess the suboptimality of the upper bound computations.
Lower bound for the reachable set: Fix β > 0 and let R^2_reach(V_poly, S, β) be as defined in (6.5)-(6.9). Then, for any positive T, it follows that

max_w { p(x(t)) : x(0) = 0 and ∫_0^T w(t)^T w(t) dt ≤ R^2_reach, 0 ≤ t ≤ T }   (6.23)
≤ max_w { p(x(t)) : x(0) = 0 and ∫_0^∞ w(t)^T w(t) dt ≤ R^2_reach, t ≥ 0 }   (6.24)
≤ β.   (6.25)

Lower bound for the L_2 → L_2 gain: Fix γ > 0 and let R^2_{L_2}(V_poly, S, γ) be as defined in (6.13)-(6.16). Then, for any positive T, it follows that

max_w { ∫_0^∞ z(t)^T z(t) dt : x(0) = 0 and ∫_0^T w(t)^T w(t) dt ≤ R^2_{L_2} }   (6.26)
≤ max_w { ∫_0^∞ z(t)^T z(t) dt : x(0) = 0 and ∫_0^∞ w(t)^T w(t) dt ≤ R^2_{L_2} }   (6.27)
≤ γ^2 R^2_{L_2}.   (6.28)
Remarks 6.1.3.

i. Lower bounds for the L_∞ → L_∞ gain can be similarly defined.

ii. The problems in (6.23) and (6.26) are optimal control problems which can be attacked by any nonlinear programming solver [42, 29] after appropriate parametrization of the input and/or the state. On the other hand, [60] adapted an iterative scheme from [64] for computing solutions to these optimal control problems based on the first-order conditions for optimality [16]. In the following section, we use this technique to compute the reported lower bounds.
⊳
6.2 Upper Bound Refinement for Reachability and L_2 → L_2 Gain Analysis
Conditions (6.2)-(6.3) provide a relation, in terms of an appropriate Lyapunov function V, between the L_2 norms of the disturbance signals and the sets reachable under these disturbances. However, this relation may be conservative because ∇V f(x, w) ≤ w^T w is required to hold everywhere in Ω_{V,R^2}. To reduce this conservatism, [60] proposed a refinement procedure which, once V is determined by solving (6.13)-(6.16), partitions Ω_{V,R^2} into smaller annular regions and analyzes V in each subregion separately, imposing potentially weaker conditions. Here, we provide an independent and more general proof and also an extension to L_2 → L_2 gain analysis. To this end, let g : R → R be a piecewise continuous function satisfying 0 < g(τ) ≤ 1 for all τ ≥ 0 and suppose that

∇V f(x, w) ≤ g(V(x)) w^T w for all x ∈ Ω_{V,R^2}, w ∈ R^{n_w}.   (6.29)
Define

K(x) := ∫_0^{V(x)} (1/g(τ)) dτ   (6.30)

and R_ext by

R^2_ext := ∫_0^{R^2} (1/g(τ)) dτ.   (6.31)

Then R^2_ext ≥ R^2,

∇K(x) = (1/g(V(x))) ∇V(x),

{x ∈ R^n : V(x) ≤ R^2} = {x ∈ R^n : K(x) ≤ R^2_ext},

and K is a positive semidefinite differentiable function (vanishing only where V vanishes) satisfying

∇K f(x, w) ≤ w^T w, ∀x ∈ Ω_{K,R^2_ext}, ∀w ∈ R^{n_w}.
Consequently, this strengthens the reachability result by certifying a larger bound on the admissible disturbance signals: if x(0) = 0, then

‖w‖_2 ≤ R_ext ⇒ x(t) ∈ Ω_{K,R^2_ext} = Ω_{V,R^2} ⊆ Ω_{p,β} for all t ≥ 0.
Remarks 6.2.1. Effectively, K is a non-polynomial Lyapunov/storage function constructed<br />
via the polynomial Lyapunov/storage function V.<br />
⊳<br />
For given V and R, the search for g satisfying (6.29) can be formulated as a sequence of SOS programming problems. To this end, we restrict g to be piecewise constant.¹ Let m > 0 be an integer, define ɛ := R^2/m, and partition the set Ω_{V,R^2} into m subregions

Ω_{V,R^2,k} := {x ∈ R^n : (k − 1)ɛ ≤ V(x) ≤ kɛ} for k = 1, . . . , m.

If

∇V f(x, w) ≤ g_k w^T w for all x ∈ Ω_{V,R^2,k}, w ∈ R^{n_w},   (6.32)

holds for some g_k > 0, then for any k ≤ m, the system in (6.1) with piecewise continuous w starting from the origin satisfies

∫_0^T w^T w dt < ɛ (1/g_1 + · · · + 1/g_k) ⇒ V(φ(T; 0, w)) ≤ kɛ.

Note that (6.32) already holds with g_k = 1. Therefore, it may be possible to make ɛ(1/g_1 + · · · + 1/g_k) (in particular for k = m) greater than R^2 k/m (in particular R^2) by minimizing g_k such that (6.32) holds. A sufficient condition for (6.32) is the existence of SOS polynomials s_{1k} and s_{2k} such that

g_k w^T w − ∇V f(x, w) − s_{1k}(kɛ − V) − s_{2k}(V − (k − 1)ɛ) ∈ Σ[(x, w)].
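The bookkeeping behind this refinement is simple: once the m annulus subproblems return multipliers g_k ∈ (0, 1], the certified disturbance budget grows from R^2 = ɛm to R^2_ext = ɛ(1/g_1 + · · · + 1/g_m). A minimal sketch of that arithmetic (the g_k values themselves would come from the SOS subproblems, which are not reproduced here):

```python
def extended_budget(R2, g):
    """Given level-set width eps = R2/m and per-annulus multipliers g_k in
    (0, 1] certifying  grad V . f <= g_k w'w  on the k-th annulus, return
    the refined budget R_ext^2 = eps * sum_k 1/g_k  (always >= R2)."""
    m = len(g)
    eps = R2 / m
    assert all(0.0 < gk <= 1.0 for gk in g), "each g_k must lie in (0, 1]"
    return eps * sum(1.0 / gk for gk in g)

# With g_k = 1 on every annulus the refinement changes nothing ...
assert abs(extended_budget(4.0, [1.0] * 10) - 4.0) < 1e-12
# ... while any g_k < 1 strictly enlarges the certified budget.
```

For example, g_k = 0.5 on all ten annuli doubles the budget from 4.0 to 8.0.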
Similar ideas apply to the L_2 → L_2 gain analysis, where (6.29) is replaced by

∇V f(x, w) ≤ g(V(x)) (w^T w − γ^{−2} z^T z), ∀x ∈ Ω_{V,R^2}, ∀w ∈ R^{n_w}.   (6.33)

¹The same procedure can be applied using a piecewise polynomial g instead of a piecewise constant one.
Let K and R_ext be as defined before. Then (6.33) is equivalent to

∇K f(x, w) ≤ w^T w − γ^{−2} z^T z, ∀x ∈ Ω_{K,R^2_ext}, ∀w ∈ R^{n_w},

and it follows that, starting from x(0) = 0,

‖w‖_2 ≤ R_ext ⇒ ‖z‖_2 ≤ γ R_ext.
6.3 Simulation-Based Relaxation for the Bilinear SOS Problem<br />
in Reachability <strong>Analysis</strong><br />
The usefulness of simulation data in local stability analysis was demonstrated in chapter 3 and [73], where a methodology that incorporates simulation data into formal proof construction for estimating regions-of-attraction was proposed. In this section, we propose a similar technique for reachability analysis of nonlinear dynamical systems with polynomial vector fields. To this end, for given β > 0 and disturbance level R^2 > 0, we ask whether the reachable set G_{R^2} is contained in the set Ω_{p,β}. Certainly, just one disturbance signal w (with ‖w‖_2 ≤ R) that leads to a system trajectory on which p takes a value larger than β certifies that G_{R^2} ⊄ Ω_{p,β}. Conversely, a large collection of input signals w with ‖w‖_2^2 ≤ R^2 that do not drive the system out of Ω_{p,β} hints at the likelihood that indeed G_{R^2} ⊆ Ω_{p,β}. In this latter case, let W be a finite collection of signals
W := { (w, x) : w ∈ L_2[0, ∞), ‖w‖_2^2 ≤ R^2, and p(φ(t; 0, w)) ≤ β for all t ≥ 0 }.
With β and R^2 fixed, the set of Lyapunov functions which certify that G_{R^2} ⊆ Ω_{p,β}, using conditions (6.7)-(6.9), is simply

{V ∈ R[x] : (6.7)-(6.9) hold for some s_i ∈ Σ[x]}.
Of course, this set could be empty, but it must be contained in the convex set {V ∈ R[x] : (6.34) holds}, where

l_1(x(t)) ≤ V(x(t)),   (6.34a)
V(x(t)) ≤ R^2,   (6.34b)
∇V(x(t)) f(x(t), w(t)) ≤ w(t)^T w(t)   (6.34c)

for all (w, x) ∈ W and t ≥ 0.
6.3.1 Affine Relaxation<br />
Let V be linearly parameterized as

V := {V ∈ R[x] : V(x) = ϕ(x)^T α},

where α ∈ R^{n_b} and ϕ is an n_b-dimensional vector of polynomials in x. Given ϕ(x), the constraints in (6.34) can be viewed as constraints on α ∈ R^{n_b}, yielding the convex set

{α ∈ R^{n_b} : (6.34) holds for V = ϕ(x)^T α}.
For each (w, x) ∈ W, let T_w be a finite subset of the interval [0, ∞). A polytopic outer bound for this set described by finitely many constraints is Y_sim := {α ∈ R^{n_b} : (6.35) holds}, where

l_1(x(t)) ≤ ϕ(x(t))^T α,   (6.35a)
ϕ(x(t))^T α ≤ R^2,   (6.35b)
∇(ϕ(x(t))^T α) f(x(t), w(t)) ≤ w(t)^T w(t)   (6.35c)

for all (w, x) ∈ W and t ∈ T_w.
Additionally, let Q^T = Q ≻ 0 be such that x^T Q x is the quadratic part of V, and define Y_lin and Y_SOS as follows: Y_lin := {α ∈ R^{n_b} : Q = Q^T ≻ 0 and Lin(Q) ⪯ 0}, where, for some λ > 1,

Lin(Q) := [ ∇_x f(0,0)^T Q + Q ∇_x f(0,0)    Q ∇_w f(0,0)
            ∇_w f(0,0)^T Q                   −λI ],

and Y_SOS := {α ∈ R^{n_b} : (6.7) holds}. Since Y_sim, Y_lin, and Y_SOS are convex,

Y := Y_sim ∩ Y_lin ∩ Y_SOS

is a convex set in R^{n_b}. The constraints in (6.35) and Lin(Q) ⪯ 0 constitute a set of necessary conditions for (6.7)-(6.9); thus, we have

Y ⊇ B := {α ∈ R^{n_b} : ∃ s_1 ∈ Σ[x], s_2 ∈ Σ[(x, w)] such that (6.7)-(6.9) hold}.
With this definition <strong>of</strong> the convex outer-bound set Y, the algorithms introduced in section<br />
3.2.3 can be used with minor modifications.<br />
6.3.2 Algorithms<br />
Since an appropriate (feasible and not too conservative) value of R^2 is not known a priori, an iterative strategy to simulate and collect trajectories is necessary. This process, when coupled with the H&R algorithm, constitutes the Lyapunov function candidate generation.

Simulation and Lyapunov function generation (SimLFG) algorithm: Given a positive definite convex p ∈ R[x], a vector of polynomials ϕ(x), positive constants β, R^2, N_SIM (integer), N_V (integer), σ_shrink ∈ (0, 1), and an initially empty set W:
i. Generate N_SIM input signals w with ‖w‖_2^2 ≤ R^2 and integrate (6.1) from the origin.

ii. If all trajectories stay in Ω_{p,β}, add each (w, x) to W and go to step (iii). Otherwise, set R^2 to σ_shrink R^2 and go to step (i).

iii. Find a feasible point for (6.7), (6.35), and Lin(Q) ⪯ 0. If these constraints are infeasible, set R^2 = σ_shrink R^2 and go to step (i). Otherwise, go to step (iv).

iv. Generate N_V Lyapunov function candidates using the H&R algorithm, and return R^2 and the Lyapunov function candidates.
⊳<br />
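The control flow of SimLFG can be sketched as below. The three callables are hypothetical placeholders, not implementations from the text: `simulate` stands for steps (i)-(ii) (integrate (6.1) under N_SIM sampled inputs and report whether all trajectories stayed in Ω_{p,β}), `feasible_point` for the feasibility check of step (iii), and `hit_and_run` for the H&R sampling of step (iv).

```python
def sim_lfg(simulate, feasible_point, hit_and_run,
            R2, n_sim, n_v, sigma_shrink, max_outer=50):
    """Skeleton of the SimLFG loop.  `simulate(R2, n_sim)` returns
    (trajectories, all_stayed_inside); `feasible_point(W, R2)` returns a
    feasible alpha or None; `hit_and_run(alpha0, W, R2, n_v)` returns n_v
    Lyapunov-function candidates.  All three are placeholder oracles."""
    W = []
    for _ in range(max_outer):
        trajs, all_inside = simulate(R2, n_sim)       # step (i)
        if not all_inside:                            # step (ii): shrink
            R2 *= sigma_shrink
            continue
        W.extend(trajs)
        alpha0 = feasible_point(W, R2)                # step (iii)
        if alpha0 is None:
            R2 *= sigma_shrink
            continue
        return R2, hit_and_run(alpha0, W, R2, n_v)    # step (iv)
    raise RuntimeError("no feasible disturbance level found")
```

The loop shrinks R^2 geometrically until both the simulations and the convex feasibility problem succeed, exactly mirroring the shrink-and-retry structure of steps (i)-(iii).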
Step (i) of the SimLFG algorithm requires generating input signals w with ‖w‖_2^2 ≤ R^2. In the current implementation of this algorithm, we use randomly generated piecewise constant input signals, as well as signals generated by the lower bound computation using the given shape factor p and additional randomly generated shape factors. The suitability of a Lyapunov function candidate is assessed by solving the following optimization problem.
Affine Problem: Given V ∈ R[x] (from the SimLFG algorithm), p ∈ R[x], and β, define

R^2_L := max_{R^2>0, s_1∈S_1, s_2∈S_2} R^2 subject to
s_1 ∈ Σ[x], s_2 ∈ Σ[(x, w)], R^2 > 0,
(β − p) − (R^2 − V) s_1 ∈ Σ[x],   (6.36)
−[(R^2 − V) s_2 + ∇V f(x, w) − w^T w] ∈ Σ[(x, w)].
Although R^2_L depends on the allowable degrees of s_1 and s_2, this dependence is not explicitly notated. Note that, for given V and fixed R^2, the problem in (6.36) is affine in the multipliers and can be solved by (relatively) efficient linear SDP solvers such as [56] via a line search on the parameter R^2.
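The line search on R^2 can be organized as a simple bracket-and-bisect loop around a feasibility oracle. In the sketch below, `feasible` is a hypothetical stand-in for solving the affine problem (6.36) at a fixed R^2 and reporting whether it is feasible; feasibility is assumed to be monotone in R^2 (feasible for all values below some threshold).

```python
def max_feasible_R2(feasible, lo=0.0, hi=1.0, tol=1e-6, max_expand=60):
    """Largest R2 (to within tol) for which feasible(R2) holds.
    `feasible` is a placeholder for one affine SDP solve of (6.36)."""
    if not feasible(lo):
        raise ValueError("problem infeasible even at R2 = lo")
    while feasible(hi) and max_expand > 0:   # grow the bracket
        lo, hi = hi, 2.0 * hi
        max_expand -= 1
    while hi - lo > tol:                     # bisect [lo, hi]
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# With a toy monotone oracle (feasible iff R2 <= 3.7) the search
# converges to the threshold.
best = max_feasible_R2(lambda R2: R2 <= 3.7)
```

Each oracle call costs one linear SDP, so the bisection multiplies the per-candidate cost by only a logarithmic factor in the desired accuracy.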
Assuming the Affine Problem is feasible, it is true that G_{R^2_L} ⊆ Ω_{V,R^2_L} ⊆ Ω_{p,β}. The solution to the Affine Problem provides a feasible point for the problem in (6.5). This feasible point can be further improved by computing suboptimal solutions for the problem in (6.5) using iterative coordinate-wise affine optimization schemes based on solving (6.6)-(6.9) alternately for V with fixed multipliers and vice versa. However, as noted in the previous paragraph, this requires a line search on R^2 at each step. Alternatively, the following coordinate-wise affine search scheme iterates without a line search by a change of variables.
Coordinate-wise optimization (CWOpt) algorithm without line search: For β > 0, if the problem in (6.6)-(6.9) has the solution R^2_reach(V_poly, S, β), V, s_1, and s_2, then

1/R^2_reach(V_poly, S, β) = min_{K∈V_poly, 1/R^2>0, s̃_1∈S_1, s̃_2∈S_2} 1/R^2 subject to   (6.37)
s̃_1 ∈ Σ[x], s̃_2 ∈ Σ[(x, w)], and K ∈ R[x],   (6.38)
K − l_1/R^2 ∈ Σ[x],   (6.39)
(β − p) − (1 − K) s̃_1 ∈ Σ[x],   (6.40)
−[(1 − K) s̃_2 + ∇K f(x, w) − (1/R^2) w^T w] ∈ Σ[(x, w)].   (6.41)

In fact, the optimal values of s̃_1, s̃_2, and K are equal to R^2_reach s_1, s_2, and V/R^2_reach, respectively. This correspondence makes it possible to implement a coordinate-wise affine search algorithm that does not require a line search on R^2 once a feasible solution for (6.6)-(6.9) is obtained. Indeed, let V_current and R^2_current be such that (6.6)-(6.9) are feasible. Then, a coordinate-wise affine search scheme initialized with K_current(x) = V_current(x)/R^2_current for computing suboptimal solutions for (6.5) is composed of the following optimization problems solved alternately.
Given K_current, solve:

[1/R^2_*, s̃_1^*, s̃_2^*] = argmin_{1/R^2>0, s̃_1∈S_1, s̃_2∈S_2} 1/R^2 subject to
s̃_1 ∈ Σ[x], s̃_2 ∈ Σ[(x, w)],
(β − p) − (1 − K_current) s̃_1 ∈ Σ[x],   (6.42)
−[(1 − K_current) s̃_2 + ∇K_current f(x, w) − (1/R^2) w^T w] ∈ Σ[(x, w)].

Given s̃_1^* and s̃_2^*, solve:

[1/R^2_*, K_current] = argmin_{1/R^2>0, K∈V_poly} 1/R^2 subject to
K ∈ R[x],
K − l_1/R^2 ∈ Σ[x],
(β − p) − (1 − K) s̃_1^* ∈ Σ[x],   (6.43)
−[(1 − K) s̃_2^* + ∇K f(x, w) − (1/R^2) w^T w] ∈ Σ[(x, w)].
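The alternation between (6.42) and (6.43) can be organized as the following skeleton, where the two solver callables are placeholders for the affine SDPs; the stopping parameters play the role of the ε_iter and N_iter values used in the examples of section 6.4. Since each step re-optimizes 1/R^2 over a feasible set containing the previous iterate, the sequence of 1/R^2 values is non-increasing.

```python
def cw_opt(solve_multiplier_step, solve_lyapunov_step, K0,
           n_iter=20, eps_iter=1e-2):
    """Coordinate-wise scheme: alternate (6.42) (multipliers with K fixed)
    and (6.43) (K with multipliers fixed).  Both callables are placeholder
    SDP solves returning their optimal 1/R^2; iterate until the
    improvement in 1/R^2 drops below eps_iter or n_iter is reached."""
    K = K0
    inv_R2_prev = float("inf")
    inv_R2 = None
    for _ in range(n_iter):
        inv_R2, s1, s2 = solve_multiplier_step(K)     # problem (6.42)
        inv_R2, K = solve_lyapunov_step(s1, s2)       # problem (6.43)
        if inv_R2_prev - inv_R2 < eps_iter:
            break
        inv_R2_prev = inv_R2
    return 1.0 / inv_R2, K
```

Because the iteration minimizes 1/R^2 rather than maximizing R^2, monotone progress is obtained without any inner line search.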
Remarks 6.3.1. A coordinate-wise affine search scheme for the bilinear SOS problems in L_2 → L_2 gain analysis is as follows. Let V_current and R^2_current be feasible for (6.13)-(6.16) and define K_current = V_current/R^2_current. Then, a coordinate-wise affine search scheme for computing suboptimal solutions for (6.13)-(6.16) is composed of the following optimization problems solved alternately.

Given K_current, solve:

[1/R^2_*, s̃_1^*] = argmin_{1/R^2>0, s̃_1∈S_1} 1/R^2 subject to
s̃_1 ∈ Σ[(x, w)],
−[(1 − K_current) s̃_1 + ∇K_current f(x, w) − (1/R^2) w^T w + (1/R^2) γ^{−2} z^T z] ∈ Σ[(x, w)].

Given s̃_1^*, solve:

[1/R^2_*, K_current] = argmin_{1/R^2>0, K∈V_poly} 1/R^2 subject to
K ∈ Σ[x],
−[(1 − K) s̃_1^* + ∇K f(x, w) − (1/R^2) w^T w + (1/R^2) γ^{−2} z^T z] ∈ Σ[(x, w)].
Figure 6.1. Bounds on reachable sets due to disturbance w with ‖w‖_2^2 ≤ R^2 for Example 1 without delay, plotted as β versus R^2 (blue curve with dots: before refinement; green curve with ×: after refinement; red curve with ⋄: lower bound).

⊳
6.4 Examples<br />
Example 1<br />
(Reachability analysis for a system from the literature) Consider the nonlinear system from [37, 60]

ẋ_1 = −x_1 + x_2 − x_1 x_2^2,   ẋ_2 = u − x_1^2 x_2 + w,
y = x_2,   u = −y.
Let the shape factor be p(x) = x_1^2 + x_2^2. For l_1(x) = 10^{−6} x^T x, ∂(V) = 2, ∂(s_1) = 0, ∂(s_2) = 2 (with no constant and linear terms), N_SIM = 100, N_V = 1, ε_iter = 0.01, N_iter = 20, and m = 10, we applied the SimLFG and CWOpt algorithms and the refinement procedure for different values of β. Figure 6.1 shows the upper bounds for β versus R^2, before and after applying the refinement procedure, along with the lower bounds.
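As a cheap sanity check on the Example 1 setup (not on the reported bounds themselves), one can forward-Euler the closed loop from the origin under a single admissible disturbance and record the peak of p; any such peak is a valid lower bound on β for the corresponding R^2. The unit-amplitude step that switches off at t = 1 below is an arbitrary choice of admissible input with ‖w‖_2^2 = 1.

```python
def simulate_example1(w_fun, T=10.0, dt=1e-3):
    """Forward-Euler simulation of Example 1 (u = -y = -x2) from the
    origin; returns the maximum of p(x) = x1^2 + x2^2 along the
    trajectory and the disturbance energy int w^2 dt."""
    x1 = x2 = 0.0
    p_max = 0.0
    energy = 0.0
    for k in range(int(T / dt)):
        w = w_fun(k * dt)
        dx1 = -x1 + x2 - x1 * x2 ** 2
        dx2 = -x2 - x1 ** 2 * x2 + w       # u = -y = -x2
        x1 += dt * dx1
        x2 += dt * dx2
        energy += dt * w * w
        p_max = max(p_max, x1 ** 2 + x2 ** 2)
    return p_max, energy

# One admissible input: w = 1 on [0, 1), zero afterwards, so ||w||_2^2 = 1.
p_max, energy = simulate_example1(lambda t: 1.0 if t < 1.0 else 0.0)
```

The resulting peak of p is, by construction, below the certified upper bound β for R^2 = 1 in Figure 6.1.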
Figure 6.2. Bounds on reachable sets due to disturbance w with ‖w‖_2^2 ≤ R^2 for Example 1 with input time delay, plotted as β versus R^2 (blue curve with dots: before refinement; green curve with ×: after refinement; red curve with ⋄: lower bound; black circles: failed PENBMI runs).
Next, we introduced t_d = 0.4 units of time delay at the input and modeled the delay using a (balanced) first-order Pade approximation

ẋ_3(t) = −(2/t_d) x_3(t) + (2/√t_d) y(t),

and replaced the input by

u(t) = −(2/√t_d) x_3(t) + y(t),

where x_3 is the state of the delay dynamics, and used p(x) = x_1^2 + x_2^2 + 0.01 x_3^2. Figure 6.2 shows the upper and lower bounds for β versus R^2 before and after applying the refinement procedure. We also solved the problem in (6.5)-(6.9) for fixed values of β using PENBMI. When these runs return a feasible solution, the returned value of R^2 is usually equal to or close to the best known value. However, PENBMI runs often terminate with numerical problems or infeasible results (although a solution is known to exist using the procedure proposed here). For these values of β, we put black circles around the corresponding dots on the blue solid curve in Figure 6.2.
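A quick numerical check that this realization is indeed a balanced first-order Pade approximation of the delayed feedback: with the output equation written as u = −(2/√t_d)x_3 + y, the map from y to u is G(s) = (s − 2/t_d)/(s + 2/t_d), an all-pass with DC gain −1, matching u(t) = −y(t − t_d) at low frequency. (The sign of the output matrix entry C below is chosen precisely so that this DC match holds.)

```python
td = 0.4
A, B = -2.0 / td, 2.0 / td ** 0.5      # delay state: x3' = A x3 + B y
C, D = -2.0 / td ** 0.5, 1.0           # output:       u = C x3 + D y

def G(s):
    """Transfer function from y to u of the first-order delay model."""
    return C * B / (s - A) + D

# All-pass behavior: |G(jw)| = 1 at every frequency, and G(0) = -1,
# the DC gain of the exact delayed feedback u(t) = -y(t - td).
dc = G(0.0)
mags = [abs(G(1j * w)) for w in (0.25, 1.0, 5.0, 25.0)]
```

The balanced choice |B| = |C| = 2/√t_d makes the single Hankel singular value of the approximation appear symmetrically in the input and output maps.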
Figure 6.3. Bounds on reachable sets due to disturbance w with ‖w‖_2^2 ≤ R^2 for Example 2, plotted as β versus R^2 (blue curve with dots: before refinement; green curve with ×: after refinement; red curve with ⋄: lower bound; black circles: failed PENBMI runs).
Example 2
(Reachability analysis for pendubot dynamics) Consider the pendubot dynamics from chapter 3 with a disturbance signal inserted at the input channel:

ẋ_1 = x_2
ẋ_2 = 45w + 782x_1 + 135x_2 + 689x_3 + 90x_4
ẋ_3 = x_4
ẋ_4 = 279x_1 x_3^2 + 273x_3^3 − 85w − 1425x_1 − 257x_2 − 1249x_3 − 171x_4.

Here, x_1 and x_3 are the angular positions of the first link and the second link (relative to the first link), and w is the scalar disturbance. Figure 6.3 shows the upper bounds (computed with the parameters used in Example 1, except for N_SIM = 600) and lower bounds for β versus R^2 before and after applying the refinement procedure. We put black circles around the corresponding dots on the blue solid curve in Figure 6.3 for failed PENBMI runs.
Figure 6.4. Upper bounds on the L_2 → L_2 gain for ∂(V) = 2 (with ⋄) and ∂(V) = 4 (with ×), before refinement (blue curves) and after refinement (green curves), along with the lower bounds (red curve), plotted as γ versus R.
Example 3
(L_2 → L_2 gain analysis for a system with an adaptive controller) Consider the system with an adaptive controller

ẋ_1 = −x_1 + w + r + x_1 z_x + z_r r
ẋ_m = −x_m + r
ż_x = −x_1^2 + x_1 x_m
ż_r = −x_1 r + x_m r,

where x_1 is the plant state, x_m is the state of the reference model, z_x and z_r are adaptation gains, r is the reference input, and w is the disturbance. We applied the L_2 → L_2 gain analysis with the output −1.8x_1 + w + r + x_1 z_x + z_r r. Figure 6.4 shows the lower and upper (before and after refinement) bounds for ∂(V) = 2 and ∂(V) = 4.
Figure 6.5. Feedback interconnection of ∆ and M (z is the output of M driving ∆, and w is the output of ∆ driving M).
6.5 Region-<strong>of</strong>-Attraction <strong>Analysis</strong> for <strong>Systems</strong> with Unmodeled<br />
Dynamics using a <strong>Local</strong> Small-Gain Theorem<br />
We prove a local small-gain type theorem and use it to analyze the effect <strong>of</strong> unmodeled<br />
dynamics, satisfying certain gain inequalities, on the regions-<strong>of</strong>-attraction. Consider the<br />
system interconnection in Figure 6.5. Let<br />
ẋ_1(t) = f_1(x_1(t), w(t)),
z(t) = h_1(x_1(t))   (6.44)

be a realization of M, where x_1 ∈ R^{n_1}, and f_1 and h_1 are vectors of polynomials satisfying f_1(0, 0) = 0 and h_1(0) = 0.
Proposition 6.5.1. Consider the system interconnection in Figure 6.5 with ∆ a stable linear time-invariant system satisfying ‖∆‖_∞ < 1. Let 0 < γ < 1, R > 0, and p be a positive definite function with p(0) = 0. If there exists a continuously differentiable positive definite function V satisfying V(0) = 0 and

∇V f_1(x_1, w) ≤ w^T w − (1/γ^2) z^T z − p(x_1) for all w ∈ R^{n_w} and x_1 ∈ Ω_{V,R^2},   (6.45)

then, for all x_1(0) ∈ {x_1 ∈ R^{n_1} : V(x_1) ≤ R^2} and ∆ starting from rest, x_1(t) ∈ Ω_{V,R^2} and x_1(t) → 0 as t → ∞. ⊳
Proof. Let

ẋ_2(t) = A x_2(t) + B z(t),
w(t) = C x_2(t) + D z(t)

be a realization of ∆ with x_2 ∈ R^{n_2}. By the Kalman-Yakubovich-Popov lemma [25], there exist P ≻ 0 and ɛ > 0 such that

[ A^T P + P A + C^T C    P B + C^T D
  B^T P + D^T C          −I + D^T D ] + ɛI ⪯ 0.   (6.46)

Let Q(x_2) = x_2^T P x_2. Then, for all x_2 ∈ R^{n_2} and z ∈ R^{n_z}, (6.46) implies that

(d/dt) Q(x_2) = (∂Q(x_2)/∂x_2)(A x_2 + B z) ≤ −ɛ x_2^T x_2 − ɛ z^T z − w^T w + z^T z
≤ z^T z − w^T w − ɛ x_2^T x_2.   (6.47)

Now, let (x_1, x_2) ∈ Ω_{V+Q,R^2}. By (6.45) and (6.47),

(d/dt)(V(x_1) + Q(x_2)) ≤ (1 − 1/γ^2) z^T z − p(x_1) − ɛ x_2^T x_2 ≤ −p(x_1) − ɛ x_2^T x_2.   (6.48)

Integrate (6.48) from 0 to T ≥ 0, with x_2(0) = 0, to get

V(x_1(T)) ≤ V(x_1(T)) + Q(x_2(T)) ≤ −∫_0^T (p(x_1) + ɛ x_2^T x_2) dt + V(x_1(0)) ≤ V(x_1(0)) ≤ R^2,

which implies that, for all t ≥ 0, x_1(t) ∈ Ω_{V,R^2} whenever x_1(0) ∈ Ω_{V,R^2} and x_2(0) = 0. Convergence of (x_1, x_2), and in particular of x_1, follows from Lemma 3.1.1 using the inequality in (6.48). 
Remarks 6.5.1.
Note that Proposition 6.5.1 only states that Ω_{V,R^2} is invariant; it does not assure the invariance of the sublevel sets Ω_{V,ξ} for ξ ≤ R^2. Hence, V may increase along the trajectories of the closed-loop system; but, if V(x_1(0)) ≤ R^2 and ∆ starts from rest, then V cannot exceed R^2, and x_1 converges to the origin along every trajectory with x_1(0) ∈ Ω_{V,R^2} and x_2(0) = 0. ⊳
In the proof of Proposition 6.5.1, the linear time-invariance property of ∆ is only used to establish the inequality (6.47). Therefore, if ∆ is any dynamical system satisfying (6.47), then the result of Proposition 6.5.1 still applies.
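The KYP/bounded-real inequality (6.46) can be verified numerically for a concrete ∆. The example below, ∆(s) = 0.5/(s + 1) with candidate certificate P = 1 and ɛ = 0.05, is an assumption chosen purely for illustration (it satisfies ‖∆‖_∞ = 0.5 < 1); for a 2×2 symmetric matrix, negative trace together with positive determinant implies both eigenvalues are negative.

```python
# Bounded-real/KYP check for Delta(s) = 0.5/(s + 1):
# realization A = -1, B = 1, C = 0.5, D = 0 (illustrative assumption).
A, B, C, D = -1.0, 1.0, 0.5, 0.0
P, eps = 1.0, 0.05                     # candidate certificate in (6.46)

# The KYP matrix of (6.46) with the eps*I term folded in.
M = [[A * P + P * A + C * C + eps, P * B + C * D],
     [B * P + D * C, -1.0 + D * D + eps]]

# 2x2 symmetric M is negative definite iff trace(M) < 0 and det(M) > 0.
trace = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
```

In higher dimensions the same certificate would be found by a semidefinite program rather than checked by hand.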
Proposition 6.5.2. Consider the system interconnection in Figure 6.5. Let 0 < γ < 1, R > 0, and p and q be positive definite functions. Let

ẋ_2(t) = f_2(x_2(t), z(t)),
w(t) = h_2(x_2(t))   (6.49)

be a realization of ∆ with x_2 ∈ R^{n_2} such that there exists a continuously differentiable positive definite function Q satisfying Q(0) = 0 and

∇Q f_2(x_2, z) ≤ z^T z − w^T w − q(x_2) for all z ∈ R^{n_z} and x_2 ∈ R^{n_2}.   (6.50)

If there exists a continuously differentiable positive definite function V with V(0) = 0 satisfying (6.45), then, for x_2(0) = 0 and all x_1(0) ∈ Ω_{V,R^2}, (x_1(t), x_2(t)) ∈ Ω_{V+Q,R^2} for all t ≥ 0 and lim_{t→∞}(x_1(t), x_2(t)) = (0, 0); in particular, x_1(t) ∈ Ω_{V,R^2} for all t ≥ 0 and lim_{t→∞} x_1(t) = 0. ⊳
Conditions in Proposition 6.5.2 can be relaxed to obtain the following result.<br />
Proposition 6.5.3. Consider the system interconnection in Figure 6.5. Let 0 < γ < 1, R > R̃ > 0, and p be a positive definite function. Let (6.49) be a realization of ∆ such that there exists a continuously differentiable positive definite function Q satisfying Q(0) = 0 and

∇Q f_2(x_2, z) ≤ z^T z − w^T w for all z ∈ R^{n_z} and x_2 ∈ R^{n_2}.   (6.51)

If there exists a continuously differentiable positive definite function V satisfying V(0) = 0 and (6.45), then, for x_2(0) = 0 and all x_1(0) ∈ Ω_{V,R̃^2}, (x_1(t), x_2(t)) ∈ Ω_{V+Q,R̃^2} for all t ≥ 0 and, in particular, x_1(t) ∈ Ω_{V,R̃^2} for all t ≥ 0. Moreover, lim_{t→∞} x_1(t) = 0. ⊳
Proof. Define S := V + Q. By (6.45) and (6.51),

(d/dt) S(x_1, x_2) = (d/dt)(V(x_1) + Q(x_2)) ≤ −p(x_1) ∀x_1 ∈ Ω_{V,R^2}, ∀x_2 ∈ R^{n_2}.

Let x_1(0) ∈ Ω_{V,R̃^2} and x_2(0) = 0, and integrate this last inequality to get

S(x_1(t), x_2(t)) − S(x_1(0), x_2(0)) = V(x_1(t)) + Q(x_2(t)) − V(x_1(0)) ≤ −∫_0^t p(x_1(τ)) dτ,

from which x_1(t) ∈ Ω_{V,R̃^2} and (x_1(t), x_2(t)) ∈ Ω_{S,R̃^2} follow. With x_1(0) ∈ Ω_{V,R̃^2} and x_2(0) = 0, S is monotonically non-increasing along trajectories and bounded below (by zero). Therefore, lim_{t→∞} ∫_0^t Ṡ(τ) dτ exists and is finite. Since S is bounded and positive definite, (x_1(t), x_2(t)) is uniformly bounded for t ≥ 0, and consequently (ẋ_1(t), ẋ_2(t)) = (f_1(x_1(t), h_2(x_2(t))), f_2(x_2(t), h_1(x_1(t)))) is uniformly bounded for all t ≥ 0. Hence (x_1(t), x_2(t)) is uniformly continuous on [0, ∞). Ṡ is uniformly continuous in t on [0, ∞) because Ṡ is a continuous function of (x_1, x_2) on the compact set Ω_{S,R̃^2} and (x_1(t), x_2(t)) is uniformly continuous in t. Therefore, by Barbalat's lemma [38], Ṡ(t) → 0 as t → ∞. Consequently, p(x_1(t)) → 0 and x_1(t) → 0 as t → ∞. 
Remarks 6.5.2. The conditions in (6.47), (6.50), and (6.51) are supposed to hold globally in x_2. On the other hand, if these conditions are imposed only locally, i.e., on sublevel sets of Q, then results similar to Propositions 6.5.1-6.5.3 can still be derived. ⊳
6.5.1 Controlled Short Period Aircraft Dynamics<br />
We now apply Proposition 6.5.2 to the robust ROA analysis for the aircraft model used in previous chapters (see section 4.5.2). To this end, let 1 > γ > 0 and µ > 0. Let ẋ = f(x, w) and z = h(x) be the realization of the system from the input w to the output z shown in Figure 6.6 (where the plant dynamics and the controller dynamics are as in chapter 4). By Proposition 6.5.1, for linear time-invariant ∆ with ‖∆‖_∞ < 1 and starting from rest, if there exists a positive definite V such that V(0) = 0 and

∇V f(x, w) ≤ w^T w − (1/γ^2) z^T z − µV(x) for all w ∈ R^{n_w} and x ∈ Ω_{V,R^2},

then Ω_{V,R^2} is invariant and all trajectories of the closed-loop system with x_1(0) ∈ Ω_{V,R^2} converge to the origin. In order to enlarge the computed Ω_{V,R^2} through the choice of V, we introduce a fixed, positive definite, convex polynomial p (as before), impose the constraint Ω_{p,β} ⊆ Ω_{V,R^2}, and maximize β. This can be written as the following optimization problem.
max_{V∈V, β>0, R^2>0} β subject to
V(x) > 0 for all nonzero x, V(0) = 0,
∇V f(x, w) ≤ w^T w − (1/γ^2) z^T z − µV(x) ∀x ∈ Ω_{V,R^2}, ∀w ∈ R^{n_w},   (6.52)
Ω_{p,β} ⊆ Ω_{V,R^2}.
Using the generalized S-procedure and SOS relaxations, a lower bound for the optimal value of the optimization problem (6.52) can be formulated as a bilinear SOS optimization problem. We computed suboptimal solutions to this problem for two cases with p(x) = x^T x, γ = 0.99, and µ = 10^{−6}, using a variant of the coordinate-wise affine search scheme:
• (no parametric uncertainty in the plant: δ_1 = 1.52 and δ_2 = 0) The computed values of β are 4.24 and 6.67 for ∂(V) = 2 and ∂(V) = 4, respectively.

• (with parametric uncertainty in the plant: δ_1 ∈ [0.99, 2.05] and δ_2 ∈ [−0.1, 0.1]) We implemented a branch-and-bound type refinement procedure coupled with a sequential suboptimal solution technique (similar to that in section 4.3). The computed values of β are 2.39 and 4.14 for ∂(V) = 2 and ∂(V) = 4, respectively.
Figure 6.6. Controlled short period aircraft dynamics with unmodeled dynamics (δ_p := (δ_1, δ_2)): the plant ẋ_p = f_p(x_p, δ_p) + B(x_p, δ_p) u with measured output y = [x_1 x_3]^T, the controller ẋ_4 = A_c x_4 + B_c y, v = C_c x_4, and the uncertainty ∆ entering the input channel through the gains 0.75 and 1.25.
6.5.2 Summary <strong>of</strong> the Results for the Controlled Short Period Aircraft<br />
Dynamics Example<br />
We have used the controlled short period pitch axis model of an aircraft to demonstrate the methods proposed in chapters 3-6 (section 4.5.2, sections 5.4.2-5.4.3, and section 6.5.1). Namely, we examined the following cases:

i. nominal dynamics (section 4.5.2),

ii. dynamics with parametric uncertainty in the plant (section 5.4.2),

iii. nominal plant dynamics with uncertain first-order linear time-invariant dynamics (section 5.4.3),

iv. dynamics with parametric uncertainty in the plant and uncertain first-order linear time-invariant dynamics (section 5.4.3),

v. nominal plant dynamics with unmodeled dynamics satisfying (6.51) (section 6.5.1).

Computed (sub)optimal values of β with p(x) = x^T x, where x is the state of the closed-loop dynamics, are shown in Table 6.1.
Table 6.1. Computed (sub)optimal values of β with p(x) = x^T x (entries: ∂(V) = 2 / ∂(V) = 4).

                                                        without plant uncertainty   with plant uncertainty
no unmodeled dynamics                                   9.38 / 16.11                5.45 / 7.93
with multiplicative uncertainty δ_3 (s − δ_4)/(s + δ_4) 4.90 / –                    2.80 / –
with unmodeled dynamics satisfying (6.51)               4.24 / 6.67                 2.39 / 4.14
6.6 Chapter Summary<br />
We analyzed reachability properties and local input/output gains of systems with polynomial vector fields. Upper bounds for the reachable sets and nonlinear system gains were characterized using Lyapunov/storage functions and computed by solving bilinear sum-of-squares programming problems. A procedure to refine the upper bounds by transforming polynomial Lyapunov/storage functions into non-polynomial Lyapunov functions was developed. The simulation-aided analysis methodology was adapted to reachability analysis. Finally, a local small-gain theorem was proposed and applied to the robust region-of-attraction analysis of systems with unmodeled dynamics.
Chapter 7<br />
Conclusions<br />
This thesis investigated quantitative methods for local robustness and performance analysis of nonlinear dynamical systems with polynomial vector fields. We proposed measures to quantify a system's robustness against uncertainties in initial conditions (regions-of-attraction) and external disturbances (local reachability/gain analysis). S-procedure and sum-of-squares relaxations were used to translate Lyapunov-type characterizations into sum-of-squares optimization problems. These problems are typically bilinear/nonconvex (a consequence of the analysis being local rather than global), and their size grows rapidly with the state/uncertainty space dimension.
Our approach was based on exploiting system-theoretic interpretations of these optimization problems to reduce their complexity. We proposed a methodology incorporating simulation data into formal proof construction, enabling a more reliable and efficient search for robustness and performance certificates compared to the direct use of general purpose solvers. This technique was adapted both to region-of-attraction and to reachability analysis.
We extended the analysis to uncertain systems by taking an intentionally simplistic and potentially conservative route, namely employing parameter-independent, rather than parameter-dependent, certificates. The resulting conservatism was then reduced by a branch-and-bound type refinement procedure. The main thrust of these methods is their suitability for parallel computing, achieved by decomposing otherwise challenging problems into relatively tractable smaller ones. We demonstrated the proposed methods on several small/medium size examples in each chapter and applied each method to a benchmark example with an uncertain short period pitch axis model of an aircraft.
Additional practical issues, leading to a more rigorous basis for the proposed methodology as well as to promising further research topics, were also addressed. We showed that stability of the linearized dynamics is not only necessary but also sufficient for the feasibility of the formulations in region-of-attraction analysis. Furthermore, we generalized an upper bound refinement procedure in local reachability/gain analysis which effectively generates non-polynomial certificates from polynomial ones. As an initial step toward developing optimization-based hierarchical algorithms for nonlinear system analysis, we proposed a local small-gain theorem and applied it to stability region analysis in the presence of unmodeled dynamics.
Summaries <strong>of</strong> the material presented in each chapter were provided at the end <strong>of</strong> respective<br />
chapters. Here, we discuss several promising future research directions.<br />
• In section 6.5, we proposed a local small-gain type theorem which enabled us to perform robust region-of-attraction analysis in the presence of unmodeled dynamics. This should be regarded as an initial step toward an analysis strategy in which complicated systems are modeled as interconnections of smaller nonlinear systems, and the input-output relations of these smaller systems are examined to draw conclusions about the behavior of the actual system. The integral quadratic constraint (IQC) methodology, coupled with the local analysis tools proposed in this thesis, may provide a feasible path toward developing algorithms with (relatively) better scalability properties.
• As emphasized at several places throughout this thesis, the main thrust of the proposed techniques is the suitability of most of the computation to trivial parallelization. However, we have not examined the possibility of parallelizing the solution of the semidefinite programs that arise. In fact, the size of a single semidefinite program that has to be solved on a single processor sets the limit on the applicability of the proposed algorithms. Therefore, parallel semidefinite programming solvers would certainly find extensive use in optimization-based system analysis and synthesis.
• Although parallel computing is one of the most powerful tools for attacking challenging problems in many fields of engineering and science, it has not yet found a major role in controls. We demonstrated its effectiveness in a few problems, and we believe that many classical problems in controls, as well as possibly new and interesting ones, fall within the scope of parallel computing.
• Finally, understanding the trade-off between system/model complexity, available computational resources, and the strength of the proofs in the context of nonlinear system analysis may lead to methodologies with better scalability properties.
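The "trivially parallel" structure referred to above comes from the fact that the certificate for each parameter subregion produced by the branch-and-bound refinement can be computed independently. The sketch below is purely illustrative: `certify` is a hypothetical placeholder standing in for one SOS/SDP subproblem, and a thread pool stands in for what would, in practice, be processes or cluster nodes each running a semidefinite programming solver.

```python
from concurrent.futures import ThreadPoolExecutor

def certify(subregion):
    """Placeholder for solving one SOS/SDP subproblem on a parameter
    subregion; here it merely returns a mock certificate level."""
    lo, hi = subregion
    return (subregion, (hi - lo) ** 2)  # stand-in for a computed bound

def analyze_in_parallel(subregions, workers=4):
    # Each subregion is certified independently of the others -- the
    # trivially parallel structure exploited by branch-and-bound
    # refinement.  Threads are used only for illustration; real use
    # would distribute the SDP subproblems across processes/machines.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(certify, subregions))

if __name__ == "__main__":
    cells = [(0.0, 0.5), (0.5, 1.0), (1.0, 1.5)]
    print(analyze_in_parallel(cells))
```

The scalability bottleneck discussed above appears exactly here: each call to `certify` must still fit on a single processor, which is why parallel SDP solvers would extend the reach of the approach.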