
Lyapunov Based Analysis and Controller Synthesis for Polynomial Systems using Sum-of-Squares Optimization

by

Zachary William Jarvis-Wloszek

B.S.E. (Princeton University) 1999
M.S. (University of California, Berkeley) 2001

A dissertation submitted in partial satisfaction of the requirements for the degree of

Doctor of Philosophy

in

Engineering-Mechanical Engineering

in the

GRADUATE DIVISION

of the

UNIVERSITY OF CALIFORNIA, BERKELEY

Committee in charge:
Professor Andrew K. Packard, Chair
Professor J. Karl Hedrick
Professor Laurent El Ghaoui

Fall 2003


The dissertation of Zachary William Jarvis-Wloszek is approved:

Chair: ____________________    Date: ____________

       ____________________    Date: ____________

       ____________________    Date: ____________

University of California, Berkeley
Fall 2003


Lyapunov Based Analysis and Controller Synthesis for Polynomial Systems using Sum-of-Squares Optimization

Copyright Fall 2003
by
Zachary William Jarvis-Wloszek


Abstract

Lyapunov Based Analysis and Controller Synthesis for Polynomial Systems using Sum-of-Squares Optimization

by

Zachary William Jarvis-Wloszek

Doctor of Philosophy in Engineering-Mechanical Engineering

University of California, Berkeley

Professor Andrew K. Packard, Chair

This thesis considers a Lyapunov based approach to analysis and controller synthesis for systems whose dynamics are described by polynomials. We restrict the candidate Lyapunov functions as well as the controllers to be polynomials, so that the conditions in the Lyapunov theorem involve only polynomials. The Positivstellensatz delineates the exact manner to ascertain (i.e., "certify") whether the theorem's conditions hold. For computational reasons we further restrict the choice of certificates to those which, with fixed Lyapunov functions and controllers, can be checked using sum-of-squares optimization. Following these steps, we pose convex or coordinatewise convex (convex in one variable when the others are held fixed) iterative algorithms to search for Lyapunov functions and controllers.

We provide a basic review of polynomials, the Positivstellensatz and the sum-of-squares optimization results, which gives the necessary background to follow the subsequent developments that lead to our proposed algorithms. First, we consider global stability by constructing convex algorithms to search for Lyapunov functions that demonstrate semi-global exponential stability. We then extend these algorithms in a coordinatewise convex form for both state and output feedback controller design. Additionally, we include a convex procedure to quantify a system's performance by bounding the induced norm from disturbances to outputs. Examples are included for illustration.

Since we do not always desire global results, we provide two algorithmic approaches to prove local stability. These approaches are coordinatewise convex and estimate the size of the system's region of attraction by finding the largest level set of a Lyapunov function on which the stability theorem's conditions hold. An example provides a graphical comparison of the two approaches. As with the global case, we then extend these algorithms to allow for state and output feedback controller design. Also, we derive bounds for the largest peak disturbance under which an invariant set remains invariant, and bounds for the local induced gain from disturbances to outputs on the set.

Additionally, we extend the two local asymptotic stability algorithms to discrete time polynomial systems. Unfortunately, the structure of the local asymptotic stability Lyapunov theorem in discrete time does not allow for controller design using our iterative approach.

Professor Andrew K. Packard
Dissertation Committee Chair


To Sarah for her support and encouragement


Contents

1 Introduction
  1.1 Thesis Overview
  1.2 Thesis Contributions
  1.3 Summary of Examples

2 Polynomial Background
  2.1 Sum-of-Squares Polynomials
    2.1.1 Properties of the Set of SOS Polynomials
    2.1.2 Computational Aspects of SOS Polynomials
  2.2 The Positivstellensatz
    2.2.1 Examples
    2.2.2 Theorems Related to the Positivstellensatz
  2.3 Chapter Summary

3 Global Analysis and Controller Synthesis
  3.1 Stability Background
    3.1.1 Definitions
    3.1.2 Stability Theorems
  3.2 Convex Stability Tests
    3.2.1 Stability Examples
  3.3 Disturbance Analysis
  3.4 State Feedback Controller Design
    3.4.1 Iterative State Feedback Design Algorithms
    3.4.2 State Feedback Design Example: Non-Holonomic System
    3.4.3 State Feedback Design Example: Nonlinear Spring-Mass System
  3.5 Output Feedback Controller Design
    3.5.1 Iterative Output Feedback Design Algorithms
    3.5.2 Output Feedback Design Example
  3.6 Chapter Summary

4 Local Stability and Controller Synthesis
  4.1 Local Stability Background
  4.2 Convex Stability Tests
    4.2.1 Expanding D Algorithm
    4.2.2 Expanding Interior Algorithm
    4.2.3 Estimating the Region of Attraction Example
  4.3 Disturbance Analysis
    4.3.1 Reachable Set Bounds under Unit Energy Disturbances
    4.3.2 Set Invariance under Peak Bounded Disturbances
    4.3.3 Induced $L_2 \to L_2$ Gain
  4.4 State Feedback Controller Design
    4.4.1 Expanding D Algorithm for State Feedback Design
    4.4.2 Expanding Interior Algorithm for State Feedback Design
    4.4.3 State Feedback Design Example
  4.5 Output Feedback Controller Design
    4.5.1 Expanding D Algorithm for Output Feedback Design
    4.5.2 Expanding Interior Algorithm for Output Feedback Design
    4.5.3 Output Feedback Design Example
  4.6 Chapter Summary

5 Discrete Time Containment & Stability
  5.1 Set Invariance for Discrete Time Systems
    5.1.1 Set Invariance Example
    5.1.2 Set Invariance under Disturbances
    5.1.3 Set Invariance under Disturbances Example
  5.2 Discrete Time Stability Background
  5.3 Convex Stability Tests
    5.3.1 Expanding D Algorithm
    5.3.2 Expanding Interior Algorithm
    5.3.3 Estimating the Region of Attraction Example
  5.4 Chapter Summary

6 Conclusions and Recommendations

Bibliography

A Semidefinite Programming


Acknowledgements

I would like to thank everyone at Berkeley who has made my time here interesting and exciting. Things would not have been the same without all my friends and fellow graduate students, especially Ryan White, Derek Caveney and the BCCI lab.

I would also like to thank Pete Seiler for writing the polynomial software class that I used in almost every line of my code; he has always been a great source of helpful questions, comments, and one line counter examples.

Additionally, I would like to thank my committee for taking a genuine interest in my project. Particular thanks go to Andy Packard for giving me the space to go off and take my own approach to exploring these topics.


Chapter 1

Introduction

In this thesis, we consider the problems of stability analysis and controller synthesis for nonlinear systems with polynomial dynamics. Our approach uses the primary tool of nonlinear control, Lyapunov functions. The standard method of analysis consists of checking whether one can pick a candidate Lyapunov function such that, when combined with the system, it meets the requirements of a Lyapunov theorem; these requirements are detailed as needed at the beginnings of Chapters 3-5. The synthesis approach is similar in that it searches for a candidate Lyapunov function as well as a controller that will make the system fit the assumptions of a Lyapunov theorem. Any Lyapunov function that admits such a controller is called a Control Lyapunov Function (CLF).

Given a CLF, there are many ways to design a controller to meet the Lyapunov assumptions, and many of the important design advances are detailed in [17]. Most of these techniques are based on backstepping, which is a control design procedure that uses the control input to sequentially counteract the system's linear and nonlinear dynamics. This approach is often quite successful, but it does rely on the system's dynamics having special structure. Additional transformations can be introduced to widen the types of structure for which the backstepping concept will work; however, some structure is still required.

We approach the candidate Lyapunov function and controller search from a different angle: optimization. Recent advances in convex optimization [21], when paired with the important Positivstellensatz from real algebraic geometry [3], allow us to use convex optimization to search for polynomials that meet certain conditions. If we restrict the Lyapunov functions and controllers to be polynomials, this allows us to formulate the conditions of the Lyapunov theorems as the constraints of a convex optimization problem.

The ideas behind the optimization approach to Lyapunov analysis and synthesis are not new, since they are at the heart of the Linear Matrix Inequality (LMI) approach to linear systems, which is well illustrated in [5]. We use the new optimization results to create algorithms that are either convex or stepwise convex to design Lyapunov functions as well as state and output feedback controllers. These algorithms extend the early results of [20] and [13].

1.1 Thesis Overview

This thesis is centered on the questions of proving stability and designing stabilizing controllers for polynomial systems. Our approach is based on classic Lyapunov function results that can be turned into computationally tractable optimization problems using recent theoretical results. In this section we give an outline of this thesis and the topics encountered along the way.


Chapter 2 provides background material to introduce the important polynomial properties and concepts that are exploited for analysis and synthesis in the later chapters. First, we introduce the basic polynomial definitions that allow us to define sum-of-squares polynomials, which in turn allows us to introduce the very important Theorem 2, from [21]. This theorem makes the essential link between the existence of certain polynomials and convex optimization, and when coupled with Theorem 4, the Positivstellensatz, provides our motivation for working with polynomial systems. After introducing these theoretical results we apply them to a series of examples and briefly present a few interesting extensions.

Chapter 3 considers the problems of proving global stability for polynomial systems. After a review of pertinent elements of Lyapunov stability theory, we pose lemmas for global asymptotic stability and semi-global exponential stability of polynomial systems. Upon investigation of these, we find that we can construct a pair of algorithms to solve the semi-global exponential stability problem whose feasibility properties are superior to those of the global asymptotic stability problem. We then extend these stability algorithms to consider both state and output feedback controller design problems and provide a test system to illustrate their utilization. Additionally, we provide a convex method to bound a system's induced $L_2 \to L_2$ gain from disturbance to output.

Chapter 4 expands on the global stability results of the previous chapter to study local stability. We provide two complementary algorithms to search for Lyapunov functions to demonstrate local asymptotic stability and approximate the system's region of attraction. After demonstrating the aspects of the two algorithmic approaches, we look at techniques to do local disturbance analysis, including finding the maximum peak disturbance such that a region remains invariant and a local induced $L_2 \to L_2$ gain bound. We then extend the stability algorithms, as in the global case, to both state and output feedback controller design. We apply the synthesis algorithms to find simple controllers that stabilize increasingly difficult versions of the test system from Chapter 3.

Chapter 5 looks at applying the polynomial techniques developed in Chapter 2 to discrete time polynomial systems. The discrete time formulation allows us to pose simple invariant set problems in a straightforward manner without using Lyapunov functions. We then consider the discrete time Lyapunov function framework and find applicable versions of the local stability algorithms from Chapter 4. Unfortunately, due to the nature of the discrete time Lyapunov theorem, we cannot extend the local stability algorithms to do any form of controller synthesis.

Chapter 6 presents conclusions and gives recommendations for future work.

1.2 Thesis Contributions

This thesis makes several contributions in the areas of stability analysis and controller synthesis for polynomial systems. These contributions are discussed below.

1. Nonlinear Stability Analysis: We provide convex algorithms to construct polynomial Lyapunov functions for polynomial dynamic systems that prove either global or local stability. The global algorithms presented in Chapter 3 construct Lyapunov functions that make the system semi-globally exponentially stable, while the two approaches to local stability in both Chapters 4 and 5 yield stepwise convex algorithms to prove that the system is locally asymptotically stable. As well as constructing


Lyapunov functions, these algorithms provide estimates of the region of attraction. The local algorithms are applied to both continuous time and discrete time polynomial systems, and importantly, if the system's linearization is stable, then the local algorithms will always be feasible.

2. Nonlinear Disturbance Analysis: We investigate the effects of external disturbances on polynomial systems by constructing a number of convex Lyapunov based algorithms to bound their effects. In Chapter 3 we provide a convex bounding procedure for the global induced $L_2 \to L_2$ gain from disturbances to system outputs. In Chapter 4, we extend this bound so that we can apply it on local invariant regions of state space to quantify local system performance. Additionally, we provide bounds for the largest peak value that a disturbance can have so that a given set remains invariant, as well as bounds for a system's reachable set under unit energy disturbances, which previously appeared in [13]. We construct a non-Lyapunov based disturbance peak bound for set invariance of discrete time systems in Chapter 5.

3. Nonlinear Controller Synthesis: We construct stepwise convex algorithms to design stabilizing polynomial controllers for polynomial systems for both the global and the local cases. In Chapter 3, the global synthesis algorithms find both state and output feedback controllers to make the closed loop system semi-globally exponentially stable. As in the local stability case, we use two different approaches to design both state and output feedback locally stabilizing controllers in Chapter 4, as well as estimate their domain of attraction. One of the algorithms for state feedback controller design previously appeared in [13]. Additionally, the local algorithms are feasible as long as the


system's linearization is controllable; however, the local approaches are not applicable to discrete time controller design.

1.3 Summary of Examples

Examples of the algorithms and techniques constructed in this thesis are provided for the following topics in the sections indicated. Additionally, all the software necessary to run the examples listed below is provided at http://jagger.berkeley.edu/~zachary.

1. Stability Analysis
   • Semi-global Exponential Stability: §3.2.1
   • Local Asymptotic Stability: §4.2.3
   • Discrete Time Set Invariance: §5.1.1
   • Discrete Time Local Asymptotic Stability: §5.3.3

2. Disturbance Analysis
   • Global Induced $L_2 \to L_2$ Gain Bound: §3.4.3 and §3.5.2
   • Local Induced $L_2 \to L_2$ Gain Bound: §4.4.3
   • Bounding Maximum Disturbance Peak for Set Invariance: §4.4.3
   • Discrete Time Set Invariance under Disturbances: §5.1.3

3. State Feedback Controller Design
   • Semi-global Exponential Stabilization: §3.4.2 and §3.4.3
   • Local Asymptotic Stabilization: §4.4.3

4. Output Feedback Controller Design
   • Semi-global Exponential Stabilization: §3.5.2
   • Local Asymptotic Stabilization: §4.5.3


Chapter 2

Polynomial Background

The properties of real polynomials, and especially one very important subset of them, form the basis for the results of later chapters; this chapter provides the necessary background for the later results as well as pointers to references for more in-depth treatments of the well developed fields that these topics touch upon.

First, let $\mathbb{R}$ denote the real numbers and $\mathbb{Z}_+$ denote the set of nonnegative integers, $\{0, 1, \ldots\}$. Using this notation we can make the formal definitions that will be used in almost every result.

Definition 1 (Monomial) Every $\alpha \in \mathbb{Z}_+^n$ defines a function $m_\alpha : \mathbb{R}^n \to \mathbb{R}$, called a monomial. Given a specific $\alpha \in \mathbb{Z}_+^n$, the monomial $m_\alpha$ maps $x \in \mathbb{R}^n$ into $m_\alpha(x) = x^\alpha := x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}$. The degree of a monomial is defined as $\deg m_\alpha := \sum_{i=1}^n \alpha_i$.

Definition 2 (Polynomial) A polynomial $p$ is defined as a linear combination of a finite set of monomials $\{m_{\alpha_j}\}_{j=1}^k$. Given a set of scalar reals $\{c_j\}_{j=1}^k \subset \mathbb{R}$, a polynomial $p$ is


defined as:
$$p := \sum_{j=1}^k c_j m_{\alpha_j}$$
or in terms of its action on $x \in \mathbb{R}^n$,
$$p(x) = \sum_{j=1}^k c_j m_{\alpha_j}(x) = \sum_{j=1}^k c_j x^{\alpha_j}.$$
Using the definition of degree for a monomial, the degree of $p$ is defined as $\deg p := \max_j (\deg m_{\alpha_j})$.

The set of polynomials with real coefficients and common independent variables, say, $x_1, \ldots, x_n$, is often denoted as $\mathbb{R}[x_1, \ldots, x_n]$ to emphasize that these polynomials form a ring. To eliminate reference to a particular set of independent variables, we will denote the set of all polynomials in $n$ variables with real coefficients as $\mathcal{R}_n$, with the assumption that if $p \in \mathcal{R}_n$ and $f \in \mathcal{R}_n$ then $p$ and $f$ are functions of the same independent variables.

Additionally, define a subset of $\mathcal{R}_n$, $\mathcal{R}_{n,d} := \{p \in \mathcal{R}_n \mid \deg p \le d\}$; this is just the set of all polynomials in $n$ variables that have maximum degree $d$. If all the monomials of polynomial $p$ are of the same degree, say $d$, then $p$ is called homogeneous and it obeys the relation $p(\lambda x) = \lambda^d p(x)$ for any scalar $\lambda$.

Another subset of $\mathcal{R}_n$ is the set of positive semidefinite (PSD) polynomials, which are nonnegative on all of $\mathbb{R}^n$. This set is defined as $\mathcal{P}_n := \{p \in \mathcal{R}_n \mid p(x) \ge 0,\ \forall x \in \mathbb{R}^n\}$. Also define $\mathcal{P}_{n,d} := \mathcal{P}_n \cap \mathcal{R}_{n,d}$.

Following standard notation for the real numbers, we will define any of these sets raised to an integer power, $m$, to denote an $m$-vector whose elements are drawn from the indicated set; as an example, $\mathcal{R}_n^m$ denotes an $m$-vector of polynomials in $n$ variables.
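As a concrete illustration of Definitions 1 and 2 (a sketch, not part of the thesis's software), a polynomial can be stored as a map from exponent tuples $\alpha \in \mathbb{Z}_+^n$ to coefficients $c_\alpha$; evaluation and degree then follow the definitions directly. The names `poly_eval` and `poly_deg` are hypothetical helpers introduced here for illustration only.

```python
# A minimal sketch of Definitions 1-2: a polynomial in n variables stored as
# a dict from exponent tuples alpha in Z_+^n to real coefficients c_alpha.
from math import prod

def poly_eval(p, x):
    """Evaluate p(x) = sum_alpha c_alpha * x^alpha for x in R^n."""
    return sum(c * prod(xi**ai for xi, ai in zip(x, alpha))
               for alpha, c in p.items())

def poly_deg(p):
    """deg p = max over monomials of sum_i alpha_i."""
    return max(sum(alpha) for alpha in p)

# p(x1, x2) = x1^2 * x2 + 3*x2 : degree 3, not homogeneous
p = {(2, 1): 1.0, (0, 1): 3.0}
print(poly_deg(p))           # 3
print(poly_eval(p, (2, 5)))  # 2^2 * 5 + 3*5 = 35.0
```

A homogeneous polynomial in this representation satisfies $p(\lambda x) = \lambda^d p(x)$, which can be checked numerically with `poly_eval` as well.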


2.1 Sum-of-Squares Polynomials

A very important subset of the polynomials are the Sum-of-Squares (SOS) polynomials. Let $\Sigma_n$ be the set of all SOS polynomials in $n$ variables, which is defined as
$$\Sigma_n := \left\{ s \in \mathcal{R}_n \;\middle|\; \exists M < \infty,\ \exists \{p_i\}_{i=1}^M \subset \mathcal{R}_n \text{ such that } s = \sum_{i=1}^M p_i^2 \right\}$$
The SOS polynomials take their name from the fact that they can be represented as sums of squares of other polynomials. Additionally, define $\Sigma_{n,d} = \Sigma_n \cap \mathcal{R}_{n,d}$.

2.1.1 Properties of the Set of SOS Polynomials

Since every $s \in \Sigma_n$ is a sum of squared polynomials, it is clear that $s(x) \ge 0$, $\forall x \in \mathbb{R}^n$, which implies that $\Sigma_n \subseteq \mathcal{P}_n$. An interesting question is whether the set of SOS polynomials is equal to or strictly contained in the set of positive semidefinite polynomials.

Hilbert showed that, when restricted to homogeneous polynomials, there are only three cases of $n, d$ such that $\Sigma_{n,d} = \mathcal{P}_{n,d}$. These results can be translated to general polynomials, see §3.2 in [21], to prove that $\Sigma_{n,d} = \mathcal{P}_{n,d}$ only for

• Polynomials in one variable, $n = 1$.
• Quadratic polynomials, $d = 2$.
• Quartics in two variables, $n = 2$, $d = 4$.

Thus, in general $\Sigma_{n,d} \subset \mathcal{P}_{n,d}$. Hilbert's method to construct a polynomial in $\mathcal{P}_n \setminus \Sigma_n$ is very complicated and he did not use it to demonstrate any examples. In [29], Reznick gives an overview of the technique used, its relation to Hilbert's 17th problem, as


well as a series of examples derived by more modern methods. One of the first examples exhibited dates from 1965 and is the Motzkin polynomial, $M(x, y, z)$, given below, from [19].
$$M(x, y, z) = x^4 y^2 + x^2 y^4 + z^6 - 3x^2 y^2 z^2$$
This polynomial can be shown to be positive semidefinite using the arithmetic-geometric inequality $\frac{a+b+c}{3} \ge (abc)^{1/3}$ with $(a, b, c) = (x^4 y^2, x^2 y^4, z^6)$. By methods to be described later, §2.1.2, it can be shown not to be SOS.

2.1.2 Computational Aspects of SOS Polynomials

Working with polynomials in $\mathcal{P}_n$ can be difficult since there is no full parameterization of the set, nor, in general, are there efficient tests to check if a given polynomial is in the set. However, given the number of variables, $n$, and the degree of the polynomials, $d$, we can form a full parameterization of $\Sigma_{n,d}$, which directly leads to an efficient semidefinite programming test to check if a polynomial is SOS (see Appendix A for a brief overview of semidefinite programming).

A full parameterization of fixed degree SOS polynomials

First we note that SOS polynomials must always be of even degree, so we will consider the parameterization of the set $\Sigma_{n,2d}$ for some $n, d \in \mathbb{Z}_+$. The following lemma provides the starting point for the parameterization.

Lemma 1 If $s \in \Sigma_{n,2d}$, then there exist $p_i \in \mathcal{R}_{n,d}$, $i = 1, \ldots, M$, for some finite $M$ such that
$$s = \sum_{i=1}^M p_i^2$$


This lemma is a restricted version of Theorem 1 in [28], which gives tighter restrictions on the $p_i$'s when $s$ is known.

Using Lemma 1, we can pose a full parameterization, often referred to as the "Gram matrix" approach, [7]. First, define $z_{n,d}$ to be the vector of all monomials in $n$ variables of degree less than or equal to $d$, ordered in the following manner. Given $\alpha, \beta \in \mathbb{Z}_+^n$, $x^\alpha$ precedes $x^\beta$ if $\deg x^\alpha < \deg x^\beta$, or if $\deg x^\alpha = \deg x^\beta$ and the first entry of $\alpha - \beta$ that is strictly negative is preceded by a strictly positive entry. As an example, with $n = 2$, $d = 2$,
$$z_{2,2}(x) := \begin{bmatrix} 1 \\ x_1 \\ x_2 \\ x_1^2 \\ x_1 x_2 \\ x_2^2 \end{bmatrix}$$
For a general pair of $n$ and $d$, $z_{n,d}(x)$ will be a $\binom{n+d}{d}$-vector.

With the definition of $z_{n,d}(x)$ it is possible to characterize a polynomial, $p \in \mathcal{R}_{n,2d}$, as
$$p(x) = z_{n,d}^*(x)\, Q\, z_{n,d}(x)$$
where $Q$ is the "Gram" matrix. The idea of representing polynomials as quadratic forms of vectors of monomials predates [7] by many years. The earliest quadratic representation, dating from 1968, [4], started a framework which was used to find homogeneous polynomial Lyapunov functions in [36]. The following result, which first appeared as Proposition 2.3 in [7], generalizes the earlier works and establishes when a polynomial is SOS.
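The construction of $z_{n,d}$ and its length can be sketched in a few lines (an illustration assuming the graded ordering described above; `z_monomials` is a hypothetical helper, not the thesis's polynomial software):

```python
# Sketch: exponent tuples for z_{n,d}, graded by degree and then
# reverse-lexicographic within each degree, as described in the text.
from itertools import product
from math import comb

def z_monomials(n, d):
    """All exponent tuples in n variables of total degree <= d."""
    alphas = [a for a in product(range(d + 1), repeat=n) if sum(a) <= d]
    return sorted(alphas, key=lambda a: (sum(a), tuple(-ai for ai in a)))

z22 = z_monomials(2, 2)
print(z22)                         # [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
print(len(z22) == comb(2 + 2, 2))  # True: the vector has (n+d choose d) = 6 entries
```

The printed order matches $z_{2,2}(x) = (1, x_1, x_2, x_1^2, x_1 x_2, x_2^2)$ above.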


Theorem 1 Fix $p \in \mathcal{R}_{n,2d}$. Then $p \in \Sigma_{n,2d}$ if and only if there exists a $Q \succeq 0$ such that $p(x) = z_{n,d}(x)^* Q z_{n,d}(x)$.

Proof:
($\Rightarrow$) If $p \in \Sigma_{n,2d}$ then via Lemma 1 we know that there exist $p_i \in \mathcal{R}_{n,d}$, $i = 1, \ldots, M$, such that $p = \sum_{i=1}^M p_i^2$. Writing each of these polynomials as $p_i(x) = q_i^* z_{n,d}(x)$ with $q_i$ a real vector of appropriate dimension, we have
$$p(x) = \sum_{i=1}^M \left( q_i^* z_{n,d}(x) \right)^2 = \sum_{i=1}^M z_{n,d}^*(x)\, q_i q_i^*\, z_{n,d}(x) = z_{n,d}^*(x) \left( \sum_{i=1}^M q_i q_i^* \right) z_{n,d}(x) = z_{n,d}^*(x)\, Q\, z_{n,d}(x)$$
From its construction it is clear that $Q \succeq 0$.

($\Leftarrow$) Since $Q \succeq 0$, we can factor $Q = \sum_{i=1}^r q_i q_i^*$ where $r$ is the rank of $Q$. Then, reversing the argument above,
$$p(x) = z_{n,d}^*(x) \left( \sum_{i=1}^r q_i q_i^* \right) z_{n,d}(x) = \sum_{i=1}^r z_{n,d}^*(x)\, q_i q_i^*\, z_{n,d}(x) = \sum_{i=1}^r \left( q_i^* z_{n,d}(x) \right)^2 \stackrel{(a)}{=} \sum_{i=1}^r p_i^2(x)$$
where (a) comes from defining $p_i(x) := q_i^* z_{n,d}(x)$. ✷
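The ($\Leftarrow$) direction of the proof is constructive and can be mirrored numerically: eigendecompose a PSD Gram matrix $Q$ into $\sum_i q_i q_i^*$, so that each $q_i^* z_{n,d}(x)$ is one SOS term. A sketch assuming numpy (not the thesis's implementation):

```python
# Numerical sketch of the (<=) direction of Theorem 1: factor a PSD Gram
# matrix Q as sum_i q_i q_i^T via its eigendecomposition, so that
# z^T Q z = sum_i (q_i^T z)^2 is an explicit SOS decomposition.
import numpy as np

def sos_terms(Q):
    """Return vectors q_i with Q = sum_i q_i q_i^T (Q assumed PSD)."""
    w, V = np.linalg.eigh(Q)
    return [np.sqrt(lam) * V[:, i] for i, lam in enumerate(w) if lam > 1e-12]

# Gram matrix of p(x) = x1^4 + x1^2 x2^2 + x2^4 in z = (1, x1, x2, x1^2, x1x2, x2^2)
Q = np.diag([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
qs = sos_terms(Q)

x1, x2 = 0.7, -1.3
z = np.array([1, x1, x2, x1**2, x1 * x2, x2**2])
sos_value = sum(float(q @ z) ** 2 for q in qs)
print(np.isclose(sos_value, z @ Q @ z))  # True: the factorization matches z^T Q z
```

The number of recovered terms is at most the length of $z_{n,d}$, consistent with Corollary 1 below.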


Corollary 1 The number of terms for an SOS decomposition can be chosen to be the number of elements in $z_{n,d}$ or fewer.

This theorem gives necessary and sufficient conditions for a polynomial to be SOS; however, for $p \in \Sigma_{n,2d}$ there are, in general, many symmetric $Q$ such that $p(x) = z_{n,d}^*(x) Q z_{n,d}(x)$, and some are not positive semidefinite, as the following example shows.

Example 1 Take $p \in \mathcal{R}_{2,4}$ to be such that $p(x) = x_1^4 + x_1^2 x_2^2 + x_2^4$. Both $Q_1$ and $Q_2$ below are such that $z_{2,2}^*(x)\, Q_i\, z_{2,2}(x) = p(x)$:
$$Q_1 = \begin{bmatrix} 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&1&0&0 \\ 0&0&0&0&1&0 \\ 0&0&0&0&0&1 \end{bmatrix}, \qquad Q_2 = \begin{bmatrix} 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&1&0&1 \\ 0&0&0&0&-1&0 \\ 0&0&0&1&0&1 \end{bmatrix}$$
Note that $Q_1 \succeq 0$ while $Q_2 \not\succeq 0$. $Q_1$'s positive semidefiniteness shows that $p \in \Sigma_{2,4}$, while $Q_2 \not\succeq 0$ shows nothing.

An LMI Test for SOS

Theorem 1 gives the complete parameterization of the SOS polynomials for a given number of variables and fixed degree; however, it does not give a method to check if a given polynomial is SOS.

In an effort to understand the set of matrices $Q$ that make $z_{n,d}^*(x)\, Q\, z_{n,d}(x) = p(x)$ for some $p \in \mathcal{R}_{n,2d}$, pick the standard basis for symmetric matrices, $\{E_i\}$, of the appropriate
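Example 1 can be checked numerically (a sketch assuming numpy; indices follow $z_{2,2} = (1, x_1, x_2, x_1^2, x_1 x_2, x_2^2)$):

```python
# Check of Example 1: both Gram matrices reproduce
# p(x) = x1^4 + x1^2 x2^2 + x2^4, but only Q1 is positive semidefinite.
import numpy as np

Q1 = np.diag([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
Q2 = np.zeros((6, 6))
Q2[3, 3] = Q2[5, 5] = 1.0
Q2[3, 5] = Q2[5, 3] = 1.0
Q2[4, 4] = -1.0

def p_from_gram(Q, x1, x2):
    z = np.array([1, x1, x2, x1**2, x1 * x2, x2**2])
    return z @ Q @ z

x1, x2 = 1.7, -0.4
target = x1**4 + x1**2 * x2**2 + x2**4
print(np.isclose(p_from_gram(Q1, x1, x2), target))  # True
print(np.isclose(p_from_gram(Q2, x1, x2), target))  # True
print(np.linalg.eigvalsh(Q1).min() >= 0)            # True: Q1 is PSD
print(np.linalg.eigvalsh(Q2).min() >= 0)            # False: Q2 is indefinite
```

Only the PSD representative certifies membership in $\Sigma_{2,4}$; the indefinite one is simply another point of the affine family discussed next.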


size, $\binom{n+d}{d} \times \binom{n+d}{d}$. Working out $z_{n,d}^*(x) \left( \sum_i q_i E_i \right) z_{n,d}(x)$ and equating coefficients with $p(x)$ shows that the set of matrices that make the equality hold is an affine subspace of the symmetric matrices, as was shown in [22].

Given $p \in \mathcal{R}_{n,2d}$, let $Q_0$ be any symmetric matrix such that
$$z_{n,d}^*(x)\, Q_0\, z_{n,d}(x) = p(x)$$
and let $\{Q_i\}_{i=1}^{n_q}$ be the set of symmetric matrices such that
$$z_{n,d}^*(x)\, Q_i\, z_{n,d}(x) = 0$$
With this setup, we can define the affine subspace of symmetric matrices related to $p$ as
$$\mathcal{Q}_p := \{Q \mid z_{n,d}^*(x)\, Q\, z_{n,d}(x) = p(x)\} = \left\{ Q_0 + \sum_{i=1}^{n_q} \lambda_i Q_i \;\middle|\; \lambda_i \in \mathbb{R},\ i = 1, \ldots, n_q \right\}$$
The following example illustrates the general procedure for finding the set of $Q_i$'s that define the subspace.

Example 2 For $p \in \mathcal{R}_{2,4}$, find some symmetric $Q_0$ such that $z_{2,2}^*(x)\, Q_0\, z_{2,2}(x) = p(x)$. Picking the standard basis for symmetric matrices, we write $z_{2,2}^*(x)\, Q\, z_{2,2}(x) = 0$ as
$$\begin{bmatrix} 1 \\ x_1 \\ x_2 \\ x_1^2 \\ x_1 x_2 \\ x_2^2 \end{bmatrix}^* \begin{bmatrix} q_1 & q_2 & q_3 & q_4 & q_5 & q_6 \\ q_2 & q_7 & q_8 & q_9 & q_{10} & q_{11} \\ q_3 & q_8 & q_{12} & q_{13} & q_{14} & q_{15} \\ q_4 & q_9 & q_{13} & q_{16} & q_{17} & q_{18} \\ q_5 & q_{10} & q_{14} & q_{17} & q_{19} & q_{20} \\ q_6 & q_{11} & q_{15} & q_{18} & q_{20} & q_{21} \end{bmatrix} \begin{bmatrix} 1 \\ x_1 \\ x_2 \\ x_1^2 \\ x_1 x_2 \\ x_2^2 \end{bmatrix} = 0$$
Equating terms, we have


Monomial      Coefficient
1             q₁
x₁            2q₂
x₂            2q₃
x₁²           q₇ + 2q₄
x₁x₂          2q₈ + 2q₅
x₂²           q₁₂ + 2q₆
x₁³           2q₉
x₁²x₂         2q₁₀ + 2q₁₃
x₁x₂²         2q₁₁ + 2q₁₄
x₂³           2q₁₅
x₁⁴           q₁₆
x₁³x₂         2q₁₇
x₁²x₂²        q₁₉ + 2q₁₈
x₁x₂³         2q₂₀
x₂⁴           q₂₁

and since each coefficient must be identically zero, we can find the subspace of matrices such that z*_{2,2}(x) Q z_{2,2}(x) = 0 to be

{Q | z*_{2,2}(x) Q z_{2,2}(x) = 0}
  = { λ₁(2E₇ − E₄) + λ₂(E₈ − E₅) + λ₃(2E₁₂ − E₆) + λ₄(E₁₀ − E₁₃) + λ₅(E₁₁ − E₁₄) + λ₆(2E₁₉ − E₁₈) | λ₁, …, λ₆ ∈ R }
  = { Σ_{i=1}^{6} λ_i Q_i | λ_i ∈ R },

which shows how to find Q_p for a polynomial with a fixed number of variables and degree.

In [22], Powers and Wörmann got as far as finding the affine subspace, which allowed them to state the equivalence p ∈ Σ_{n,2d} iff ∃Q ∈ Q_p such that Q ≽ 0. However, they did not recognize that checking whether there exist λ_i's making Q₀ + Σ λ_i Q_i ≽ 0 is convex, just an LMI feasibility problem; instead they proposed a less efficient search method using quantifier elimination. In [21], Parrilo realized that the existence of a Q ≽ 0 can be decided as an LMI and gave the following theorem.
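Combining the particular Gram matrix Q₁ of Example 1 with Example 2's null-space basis element 2E₁₉ − E₁₈ gives a one-parameter family Q(λ) for p(x) = x₁⁴ + x₁²x₂² + x₂⁴. A real search would hand this affine family to an SDP solver; the illustrative sketch below instead scans λ over a grid, testing positive semidefiniteness exactly with rational arithmetic via the principal-minor criterion (a symmetric matrix is PSD iff every principal minor is nonnegative).

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    # Cofactor expansion; fine for tiny matrices.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1)**j * M[0][j] * det(minor)
    return total

def is_psd(M):
    # PSD iff every principal minor (all index subsets) is nonnegative.
    n = len(M)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = [[M[i][j] for j in idx] for i in idx]
            if det(sub) < 0:
                return False
    return True

def Q(lam):
    # Q(lam) = Q1 + lam*(E18 - 2*E19): entries (4,6) = lam, (5,5) = 1 - 2*lam
    # in the 1-based indexing of Example 2.
    lam = Fraction(lam)
    M = [[Fraction(0)]*6 for _ in range(6)]
    M[3][3] = M[5][5] = Fraction(1)
    M[4][4] = 1 - 2*lam
    M[3][5] = M[5][3] = lam
    return M

# Every PSD member of the family certifies p = x1^4 + x1^2 x2^2 + x2^4 in Σ_{2,4}.
grid = [Fraction(n, 4) for n in range(-8, 9)]
feasible = [l for l in grid if is_psd(Q(l))]
print(feasible[0], feasible[-1])   # -1 1/2
```

For this structured family the answer is visible by hand: PSD requires 1 − 2λ ≥ 0 and 1 − λ² ≥ 0, so λ ∈ [−1, 1/2]; the scan recovers exactly that interval. Theorem 2 below replaces the scan with a single LMI feasibility problem.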


Theorem 2 ([21], Theorem 3.3) Given p ∈ R_{n,2d}, find the relevant affine subspace Q_p = {Q₀ + Σ_i λ_i Q_i | λ_i ∈ R}. Then p ∈ Σ_{n,2d} iff the following LMI is feasible:

∃λ  s.t.  Q₀ + Σ_i λ_i Q_i ≽ 0.

Proof:
From Theorem 1, we know that p ∈ Σ_{n,2d} iff there exists a Q ≽ 0 such that z*_{n,d}(x) Q z_{n,d}(x) = p(x), so we need only search over Q_p, which is exactly the LMI given. ✷

Parrilo also introduced the following important extension, which can be proved in a manner similar to Theorem 2.

Theorem 3 ([21], §3.2) Given a finite set {p_i}_{i=0}^m ⊂ R_n, checking the existence of {a_i}_{i=1}^m ⊂ R such that

p₀ + Σ_{i=1}^m a_i p_i ∈ Σ_n

is an LMI feasibility problem.

This theorem is very useful since it allows us to answer questions like the one in the following example.

Example 3 Given p₀, p₁ ∈ R_n, does there exist k ∈ R_n such that

p₀ + k p₁ ∈ Σ_n?   (2.1)

To answer this question, write k as a linear combination of its monomials {m_j}, k = Σ_{j=1}^s a_j m_j. Rewriting (2.1) using this decomposition,

p₀ + k p₁ = p₀ + Σ_{j=1}^s a_j (m_j p₁)


which, since (m_j p₁) ∈ R_n, can be checked by Theorem 3.

A software package, SOSTOOLS [23, 24], was written to aid in solving the LMIs that result from Theorem 3. This package sets up the LMIs from the polynomial problems, does some smart preprocessing to reduce problem size, and uses Sturm's SeDuMi semidefinite programming solver [33] to solve the LMIs.

Additional computational gains can be had by exploiting polynomial symmetries [9] and by using the Newton polytope algorithm presented in [7] to reduce the number of monomials in the Gram matrix formulation, which makes the resulting LMIs smaller with fewer free parameters. These computational improvements are set to appear in the next release of SOSTOOLS.

2.2 The Positivstellensatz

Having introduced SOS polynomials, it is now possible to make the algebraic definitions that are necessary to present one of the seminal theorems of real algebraic geometry, which generalizes many known results, including the S-procedure, as shown in §2.2.1.

Definition 3 Given {g₁, …, g_t} ⊂ R_n, the Multiplicative Monoid generated by the g_j's is the set of all finite products of the g_j's, including the empty product, defined to be 1. It is denoted M(g₁, …, g_t). For completeness, define M(∅) := {1}.

An example: M(g₁, g₂) = { g₁^{k₁} g₂^{k₂} | k₁, k₂ ∈ Z₊ }.


Definition 4 Given {f₁, …, f_s} ⊂ R_n, the Cone generated by the f_i's is

P(f₁, …, f_s) := { s₀ + Σ_i s_i b_i | s_i ∈ Σ_n, b_i ∈ M(f₁, …, f_s) }.

For completeness, note that P(∅) := Σ_n.

Remembering that if f ∈ R_n and s ∈ Σ_n, then s f² ∈ Σ_n, allows us to write any cone as a sum of 2^s terms. Note that this reduction in the number of free SOS polynomials need not be beneficial.

An example: P(f₁, f₂) = { s₀ + s₁f₁ + s₂f₂ + s₃f₁f₂ | s₀, …, s₃ ∈ Σ_n }.

Definition 5 Given {h₁, …, h_u} ⊂ R_n, the Ideal generated by the h_k's is

I(h₁, …, h_u) := { Σ_k h_k p_k | p_k ∈ R_n }.

For completeness, note that I(∅) := {0}.

With these definitions we can state the following theorem, which is a version of the original theorem in [31] restricted to R_n.

Theorem 4 (Positivstellensatz [3, Theorem 4.2.2]) Given sets of polynomials {f₁, …, f_s}, {g₁, …, g_t}, and {h₁, …, h_u} in R_n, the following are equivalent:

1. The set

{ x ∈ Rⁿ | f₁(x) ≥ 0, …, f_s(x) ≥ 0, g₁(x) ≠ 0, …, g_t(x) ≠ 0, h₁(x) = 0, …, h_u(x) = 0 }

is empty,


2. There exist polynomials f ∈ P(f₁, …, f_s), g ∈ M(g₁, …, g_t), and h ∈ I(h₁, …, h_u) such that

f + g² + h = 0.

2.2.1 Examples

To reinforce the usefulness of the Positivstellensatz (P-satz), consider the range of the following examples, which become convex, and thus tractable, when the P-satz is combined with the results of Theorem 3.

Positivstellensatz Certificates

The LMI-based tests for SOS polynomials from Theorem 3 can be used to prove that the set emptiness condition from the P-satz holds, by finding specific f, g, and h such that f + g² + h = 0. These f, g, and h are known as P-satz certificates, since they certify that the equality holds. The following theorem states precisely how semidefinite programming can be used to search for certificates.

Theorem 5 (Theorem 4.8, [21]) Given polynomials {f₁, …, f_s}, {g₁, …, g_t}, and {h₁, …, h_u} in R_n, if the set

{ x ∈ Rⁿ | f_i(x) ≥ 0, g_j(x) ≠ 0, h_k(x) = 0, i = 1, …, s, j = 1, …, t, k = 1, …, u }

is empty, then the search for bounded-degree Positivstellensatz refutations can be done using semidefinite programming. If the degree bound is chosen large enough, the semidefinite programs will be feasible and give the refutation certificates.


The proof of this theorem involves writing out all of the terms of P(f₁, …, f_s), M(g₁, …, g_t), and I(h₁, …, h_u) to form the equality constraint f + g² + h = 0. For a fixed degree d, set the term g ∈ M(g₁, …, g_t) so that it has degree greater than or equal to d/2, then pick each of the free polynomials in f and h to have degree at least d. Now solve the LMI with the equality constraint as well as the SOS constraints on the free polynomials in f. Searching over all g for each d eventually finds the Positivstellensatz certificates.

An LMI test for P_n

Using the P-satz, we can now test whether a polynomial p ∈ R_n is in P_n. If p ∈ P_n, then p(x) ≥ 0 for all x ∈ Rⁿ. Equivalently, {x ∈ Rⁿ | p(x) < 0} is empty, or in the P-satz format,

{ x ∈ Rⁿ | −p(x) ≥ 0, p(x) ≠ 0 } is empty.

This condition holds iff there exist f ∈ P(−p) and g ∈ M(p) such that f + g² = 0. Using the definitions of the cone and the monoid, p ∈ P_n iff there exist s₀, s₁ ∈ Σ_n and k ∈ Z₊ such that

s₀ − p s₁ + p^{2k} = 0.

If we fix k and the degree of s₁ to be d, we can rewrite the condition above as: p ∈ P_n iff ∃ s₁ such that

s₁ ∈ Σ_{n,d}
p s₁ − p^{2k} ∈ Σ_{n,d̂}   (2.2)

with d̂ = max(2k deg p, d + deg p). For fixed k and d we know, via Theorem 3, that checking the conditions in (2.2) is just an LMI, so for fixed k and d we have an LMI sufficient


condition for a polynomial to be PSD.

The S-Procedure

What does the familiar S-procedure look like in the Positivstellensatz formalism? Given symmetric n × n matrices {A_i}_{i=0}^m, the S-procedure states: if there exist nonnegative scalars {λ_i}_{i=1}^m such that A₀ − Σ_{i=1}^m λ_i A_i ≽ 0, then

∩_{i=1}^m { x ∈ Rⁿ | x*A_i x ≥ 0 } ⊂ { x ∈ Rⁿ | x*A₀x ≥ 0 }.

Rephrased as a set emptiness question, we would like to know whether

W := { x ∈ Rⁿ | x*A₁x ≥ 0, …, x*A_m x ≥ 0, −x*A₀x ≥ 0, x*A₀x ≠ 0 }

is empty.

If the λ_i exist, define Q := A₀ − Σ_{i=1}^m λ_i A_i. By assumption Q ≽ 0 and thus x*Qx ∈ Σ_n. Define g(x) := x*A₀x ∈ M(x*A₀x) as well as

f(x) := (x*Qx)(−x*A₀x) + Σ_{i=1}^m λ_i (−x*A₀x)(x*A_i x).

By their nonnegativity, each λ_i ∈ Σ_n, and because x*Qx ∈ Σ_n we know that the function f is in the cone P(x*A₁x, …, x*A_m x, −x*A₀x). An easy rearrangement gives f + g² = 0, which illustrates that f and g are Positivstellensatz certificates that prove that W is empty.

A generalized S-Procedure

The S-procedure given above can be generalized to deal with non-quadratic functions and non-scalar weights in the following way. Given {p_i}_{i=0}^m ⊂ R_n, if there exist


{s_i}_{i=1}^m ⊂ Σ_n such that

p₀ − Σ_{i=1}^m s_i p_i = q

with q ∈ Σ_n, which for fixed-degree s_i's can be checked with Theorem 3, then

∩_{i=1}^m { x ∈ Rⁿ | p_i(x) ≥ 0 } ⊂ { x ∈ Rⁿ | p₀(x) ≥ 0 }.

The related set emptiness question asks whether

W := { x ∈ Rⁿ | p₁(x) ≥ 0, …, p_m(x) ≥ 0, −p₀(x) ≥ 0, p₀(x) ≠ 0 }

is empty. Similar to the standard S-procedure approach, define g := p₀ ∈ M(p₀) as well as

f := −q p₀ − Σ_{i=1}^m s_i p₀ p_i.

Since q as well as the s_i's are SOS, f ∈ P(p₁, …, p_m, −p₀). Verifying f + g² = 0,

f + g² = −q p₀ − Σ_{i=1}^m s_i p₀ p_i + p₀²
       = −( p₀ − Σ_{i=1}^m s_i p_i ) p₀ − Σ_{i=1}^m s_i p₀ p_i + p₀²
       = 0,

illustrating that f and g provide certificates that the set W is empty.

2.2.2 Theorems Related to the Positivstellensatz

The multidimensional moment problem, which considers when a sequence of numbers gives the moments of some nonnegative Borel measure on Rⁿ, has a long relation with SOS polynomials [1]. Many interesting results about SOS polynomials have been generated from this approach, and two theorems that are especially interesting for polynomial


optimization are presented below.

The setup is as follows: let {f_i}_{i=1}^m ⊂ R_n and define

K := { x | f₁(x) ≥ 0, …, f_m(x) ≥ 0 }.

Theorem 6 (Corollary 3, [30]) If K is compact and p ∈ R_n is such that p(x) > 0 for all x ∈ K, then p ∈ P(f₁, …, f_m).

This tells us that if a polynomial is positive on K, then it is in the cone generated by the polynomials that describe K. With one additional assumption this result can be strengthened further.

Theorem 7 (Lemma 4.1, [25]) Let K be compact. A polynomial p ∈ R_n with p(x) > 0 for all x ∈ K belongs to the set

{ s₀ + f₁s₁ + ⋯ + f_m s_m | s₀, …, s_m ∈ Σ_n }

if and only if there is a polynomial g in the set with the property that g⁻¹([0, ∞)) is compact in Rⁿ.

These theorems can be used to define certificate searches with fewer terms than the P-satz would require; however, these smaller searches can require polynomials of much higher degree. In [32], Stengle provides a simple example that requires unbounded-degree polynomial certificates using Theorem 7, but has degree-four certificates if the P-satz is used, as shown in [6].
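The certificate algebra of §2.2.1 is easy to sanity-check numerically. The sketch below builds a toy instance of the generalized S-procedure (all polynomials here are hypothetical choices, not from the text): p₁ = 1 − x², p₀ = 2 − x², multiplier s₁ = 1, so q = p₀ − s₁p₁ = 1 is SOS, and it verifies the certificate identity f + g² = 0 at random points along with the implied set containment.

```python
import random

# Hypothetical instance: containment {p1 >= 0} ⊂ {p0 >= 0} on the real line.
p0 = lambda x: 2 - x**2
p1 = lambda x: 1 - x**2
s1 = lambda x: 1.0
q  = lambda x: p0(x) - s1(x) * p1(x)     # = 1 identically, an SOS polynomial

# Certificates: g = p0 in the monoid M(p0); f in the cone P(p1, -p0).
g = p0
f = lambda x: -q(x)*p0(x) - s1(x)*p0(x)*p1(x)

random.seed(2)
for _ in range(100):
    x = random.uniform(-5, 5)
    # The algebraic identity f + g^2 = 0 holds pointwise.
    assert abs(f(x) + g(x)**2) < 1e-9
    # And the containment it certifies holds as well.
    if p1(x) >= 0:
        assert p0(x) >= 0
```

The assertions only test the identity at sample points; the P-satz guarantees it as a polynomial identity, which is what makes the emptiness of W a certified fact rather than a sampled observation.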


Applications to Polynomial Optimization

Consider the following optimization problem

min_{x ∈ Rⁿ} f₀(x)
s.t. f₁(x) ≥ 0
     ⋮
     f_m(x) ≥ 0   (2.3)

with {f_i}_{i=0}^m ⊂ R_n and no assumed convexity. Let the optimal value be f⋆ > −∞ with f₀(x⋆) = f⋆.

Lasserre [18] noticed that if the feasible region is compact, we can always satisfy the requirements of Theorem 7 by adding an additional constraint on the norm of x, f_{m+1}(x) := a − ‖x‖² ≥ 0, since f_{m+1}⁻¹([0, ∞)) is clearly compact. Additionally, if γ is a lower bound on f⋆, then f₀(x) > γ at any feasible point, which implies that f₀(x) − γ > 0 on {x | f₁(x) ≥ 0, …, f_{m+1}(x) ≥ 0}. Using the theorem, we can rewrite the optimization (2.3) as

max γ
s.t. f₀ − γ = s₀ + f₁s₁ + ⋯ + f_{m+1}s_{m+1}
     s₀, …, s_{m+1} ∈ Σ_n   (2.4)

which Theorem 3 shows to be an LMI as long as the degree of the SOS polynomials is fixed; however, the degree for which the equality constraint in (2.4) holds is unknown. By increasing the maximum degree of the s_i's, this approach allows for a series of convex relaxations of the nonconvex problem (2.3), which are shown in [18] to converge monotonically to f⋆. A software package that carries out this algorithm is described in [12].
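The shape of (2.4) can be seen in a degenerate instance that needs no SDP solver: with no constraints and a univariate quadratic f₀, "f₀ − γ is SOS" is the same as "f₀ − γ ≥ 0 everywhere", and the optimal γ has a closed form. The sketch below (an illustrative toy, not the general algorithm, which requires semidefinite programming) computes γ⋆ this way and cross-checks it against a brute-force sampled minimum.

```python
# Degenerate instance of (2.4): no constraints, f0(x) = x^2 - 2x + 3.
# For a univariate quadratic a*x^2 + b*x + c with a > 0, f0 - gamma is SOS
# (equivalently, nonnegative) iff gamma <= c - b^2/(4a), so the relaxation
# is tight: gamma_star = f_star in closed form.
a, b, c = 1.0, -2.0, 3.0
gamma_star = c - b*b / (4*a)        # completing the square

# Cross-check against a brute-force sampled minimum of f0.
f0 = lambda x: a*x*x + b*x + c
sampled_min = min(f0(i / 1000.0) for i in range(-5000, 5001))
assert abs(gamma_star - sampled_min) < 1e-6
print(gamma_star)   # 2.0
```

In the genuinely multivariate, constrained case the equivalence between nonnegativity and SOS breaks down, which is exactly why (2.4) is a relaxation hierarchy rather than a single exact program.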


2.3 Chapter Summary

In this chapter we provided the polynomial background for all of the results in the following chapters. Most importantly, we illustrated the convex optimization approach to checking whether a given polynomial is an SOS polynomial. Additionally, we introduced the Positivstellensatz and illustrated some of its applications, which we will expand on in the following chapters.


Chapter 3

Global Analysis and Controller Synthesis

If we consider the system

ẋ(t) = f(x(t))   (3.1)

for x(t) ∈ Rⁿ with f ∈ R_n^n as well as f(0) = 0, we can pose many global system-theoretic questions about its behavior as searches for SOS polynomials. However, first we need a few definitions that will allow us to make the Lyapunov-based stability arguments that will be at the heart of the polynomial searches.

Define the flow of the system (3.1) starting from a point x₀ ∈ Rⁿ and evolving forward for t time units to be φ_t(x₀). Additionally, for a differentiable scalar function V, defined on the same state space as (3.1), define its derivative with respect to time, V̇, as the dot product between its gradient, ∇V, and f:

V̇(x) := ∇V(x)* f(x).
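The definition of V̇ is the chain rule in disguise: along any trajectory, d/dt V(φ_t(x₀)) = ∇V(φ_t(x₀))* f(φ_t(x₀)). The sketch below checks this numerically for a hypothetical scalar example, ẋ = −x³ with V(x) = x², whose flow happens to be known in closed form.

```python
import math

# Example system xdot = -x^3 (scalar), with closed-form flow
# phi_t(x0) = x0 / sqrt(1 + 2*x0^2*t).  Take V(x) = x^2, so the chapter's
# definition gives Vdot(x) = grad V(x) * f(x) = 2x * (-x^3) = -2*x^4.
def flow(t, x0):
    return x0 / math.sqrt(1.0 + 2.0 * x0 * x0 * t)

def V(x):
    return x * x

def Vdot(x):
    return -2.0 * x**4

x0, t, h = 1.3, 0.5, 1e-6
# A central difference of t -> V(phi_t(x0)) should match Vdot at phi_t(x0).
num = (V(flow(t + h, x0)) - V(flow(t - h, x0))) / (2 * h)
assert abs(num - Vdot(flow(t, x0))) < 1e-6
```

The same identity is what lets the later lemmas impose conditions on the polynomial ∇V* f instead of on trajectories directly.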


3.1 Stability Background

Using a polynomial as the Lyapunov function V, we will be able to prove global asymptotic stability as well as semi-global exponential stability of the system (3.1) by checking semialgebraic conditions on polynomials. However, we will first explicitly define all of the terminology and prove the conditions that we will later exploit to design Lyapunov functions.

3.1.1 Definitions

Definition 6 (Stability) The system (3.1) is stable about x = 0 if for every ε > 0 there exists δ_ε > 0 such that if ‖x₀‖ < δ_ε, then ‖φ_t(x₀)‖ < ε for all t ≥ 0.

Definition 7 (Asymptotic Stability) The system (3.1) is asymptotically stable about x = 0 if it is stable about x = 0 and, additionally, there exists h > 0 such that if ‖x₀‖ < h, then lim_{t→∞} ‖φ_t(x₀)‖ = 0. Furthermore, if lim_{t→∞} ‖φ_t(x₀)‖ = 0 for all x₀ ∈ Rⁿ, then the system is globally asymptotically stable.

Definition 8 (Exponential Stability) The system (3.1) is exponentially stable about x = 0 if there exist m, c, h > 0 such that

‖φ_t(x₀)‖ ≤ m e^{−ct} ‖x₀‖

for all ‖x₀‖ < h and t ≥ 0; c is also referred to as a convergence rate for the system.

Furthermore, if for every h > 0 there exist m and c that depend on h and validate the inequality, then the system is semi-globally exponentially stable. If one fixed pair of m and c validates the inequality for all h > 0, then the system is globally exponentially stable.


Note that both semi-global and global exponential stability imply global asymptotic stability. Additionally, we need to define the following classes of functions.

Definition 9 (K∞ Functions) A function σ : R → R is called a K∞ function if it is continuous, strictly increasing, and has the properties σ(0) = 0 and σ(ξ) → ∞ as ξ → ∞.

Definition 10 (Positive Definite Functions) A function ρ : Rⁿ → R is called positive definite if it is continuous, has the property ρ(0) = 0, and there exists some K∞ function σ such that

σ(‖x‖) ≤ ρ(x)

for all x ∈ Rⁿ.

3.1.2 Stability Theorems

With the definitions above we can state and prove the standard Lyapunov theorem for global asymptotic stability as well as an extension for semi-global exponential stability.

Theorem 8 (Lyapunov) The system (3.1) is globally asymptotically stable about its equilibrium point if there exists a positive definite function V : Rⁿ → R₊ such that −V̇ is also positive definite.

Proof:
The proof below follows along the lines of the Lyapunov theorem proof in [16, §3.1]. By definition there exist K∞ functions α, β such that α(‖x‖) ≤ V(x) for all x ∈ Rⁿ and β(‖x‖) ≤ −V̇(x) for all x ∈ Rⁿ. We will first prove stability, followed by asymptotic convergence to the fixed point.


Given any ε > 0, pick δ_ε such that

sup_{‖x‖<δ_ε} V(x) < α(ε),

which is possible since V is continuous and V(0) = 0. Since −V̇ is positive definite, V is nonincreasing along trajectories, so for any ‖x₀‖ < δ_ε and all t ≥ 0,

α(‖φ_t(x₀)‖) ≤ V(φ_t(x₀)) ≤ V(x₀) < α(ε),

and since α is strictly increasing, ‖φ_t(x₀)‖ < ε for all t ≥ 0, which proves stability.

To show asymptotic convergence, given ε > 0 we need to find a T > 0 such that ‖φ_t(x₀)‖ < ε for all t ≥ T. If x₀ = 0, then there is nothing to prove, so we can assume that x₀ ≠ 0. Assume that there exist an x₀ ≠ 0 and ε > 0 such that ‖φ_t(x₀)‖ ≥ ε for all t > 0. By integration we know that

V(φ_t(x₀)) = V(x₀) + ∫₀ᵗ V̇(φ_τ(x₀)) dτ.

Using the positive definiteness of V as well as the fact that ‖φ_t(x₀)‖ ≥ ε, we can bound the expression above from below:

α(ε) ≤ α(‖φ_t(x₀)‖) ≤ V(φ_t(x₀)) = V(x₀) + ∫₀ᵗ V̇(φ_τ(x₀)) dτ.


Additionally, using the positive definiteness of −V̇ and the fact that ‖φ_t(x₀)‖ ≥ ε, we can bound V(φ_t(x₀)) from above:

V(φ_t(x₀)) = V(x₀) + ∫₀ᵗ V̇(φ_τ(x₀)) dτ ≤ V(x₀) − ∫₀ᵗ β(‖φ_τ(x₀)‖) dτ ≤ V(x₀) − t β(ε).

End to end, we now have

α(ε) ≤ V(x₀) − t β(ε)

for all t > 0. However, if t ≥ V(x₀)/β(ε), this implies that α(ε) ≤ 0, which contradicts ε > 0. ✷

Building from Theorem 8, we can now prove the following theorem for semi-global exponential stability.

Theorem 9 If there exists a function V such that V(x) ≥ α‖x‖_d^d for all x ∈ Rⁿ, where α > 0 and d is an integer greater than one, as well as a γ > 0 such that

V̇(x) ≤ −γ V(x)

for all x ∈ Rⁿ, then the system (3.1) is semi-globally exponentially stable about its fixed point, x = 0, with a convergence rate of γ/d.

Proof:
By definition, α‖x‖_d^d ≤ V(x) for all x ∈ Rⁿ. Clearly the function α(·)^d is in class K∞, and γα‖x‖_d^d ≤ γV(x) ≤ −V̇(x) for all x ∈ Rⁿ, which shows that −V̇ is positive definite and therefore the system (3.1) is globally asymptotically stable about x = 0.

For x ≠ 0, V(x) > 0, which allows us to write one of the assumptions as

V̇(x)/V(x) ≤ −γ


or

(d/dt) log(V(x)) ≤ −γ,

which can be integrated over [0, t] starting from x₀ to give

log(V(φ_t(x₀))) ≤ log(V(x₀)) − γt,

which gives the exponential bound

V(φ_t(x₀)) ≤ V(x₀) e^{−γt}

that proves that V(φ_t(x₀)) decays exponentially with rate γ. Since α‖φ_t(x₀)‖_d^d ≤ V(φ_t(x₀)) for all t > 0, we can make the following bound:

‖φ_t(x₀)‖_d^d ≤ ( V(x₀) / (α‖x₀‖_d^d) ) e^{−γt} ‖x₀‖_d^d

or

‖φ_t(x₀)‖_d ≤ m e^{−ct} ‖x₀‖_d

with m = ( V(x₀) / (α‖x₀‖_d^d) )^{1/d} and c = γ/d. Since m depends on x₀ and the inequality holds for all x₀ ∈ Rⁿ, the system is semi-globally exponentially stable. Interestingly, the convergence rate can be chosen to be γ/d for all x₀. ✷

Remark 1 If the system (3.1) has equilibrium points away from x = 0, then the system can be neither semi-globally exponentially stable nor globally asymptotically stable.
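Theorem 9's bounds can be checked on the simplest possible instance. For the hypothetical linear system ẋ = −x with V(x) = x², we have V̇ = −2x² = −γV with γ = 2, and V(x) = α‖x‖_d^d with α = 1 and d = 2, so the predicted state convergence rate is c = γ/d = 1. The exact flow φ_t(x₀) = x₀e^{−t} lets us verify both the V bound and the state bound directly.

```python
import math

# Linear example xdot = -x with V(x) = x^2: Vdot = -2x^2 = -gamma*V for
# gamma = 2, alpha = 1, d = 2, so Theorem 9 predicts rate c = gamma/d = 1.
gamma, d = 2.0, 2
x0 = 3.0
for k in range(1, 50):
    t = 0.1 * k
    xt = x0 * math.exp(-t)                 # exact flow of xdot = -x
    # V(phi_t(x0)) <= V(x0) * e^{-gamma t}
    assert xt * xt <= x0 * x0 * math.exp(-gamma * t) + 1e-12
    # |phi_t(x0)| <= m * e^{-(gamma/d) t} * |x0|, here with m = 1
    assert abs(xt) <= math.exp(-(gamma / d) * t) * abs(x0) + 1e-12
```

For this example both inequalities hold with equality (m = 1), which is the best case; for polynomial V of higher degree d, m depends on x₀, which is precisely why the conclusion is semi-global rather than global.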


3.2 Convex Stability Tests

With the previous section's Lyapunov background, we can now look to designing Lyapunov functions for given dynamic systems. Our approach is to design Lyapunov functions using the assumptions of Theorems 8 and 9 as constraints of an optimization that searches over a class of candidate Lyapunov functions. This optimization approach is extensively used and is usually designed so that the resulting optimization is convex. [5] presents a survey of convex optimization results relating to analysis of linear systems with quadratic Lyapunov functions. These ideas are extended to smooth nonlinear systems with linearly parameterized non-quadratic Lyapunov functions in [15]. Additionally, a convex optimization approach using the concept of a dual to the Lyapunov function is provided in [27].

Our approach is to consider polynomial systems and restrict the set of candidate Lyapunov functions to be polynomials. By doing so, we can formulate the following convex stability tests.

Lemma 2 Given the system (3.1) and fixed positive definite polynomials l₁, l₂ ∈ R_n, the system is globally asymptotically stable if there exists V ∈ R_n with V(0) = 0 such that

V − l₁ ∈ Σ_n
−(∇V* f + l₂) ∈ Σ_n.

Proof:
From Theorem 3, it is clear that the conditions that V − l₁ and −(∇V* f + l₂) are SOS polynomials can be checked as LMIs. If a V is found that meets these conditions, the positive definiteness of l₁ and l₂ ensures that both V and −V̇ are positive definite, which


meets the assumptions of Theorem 8, making the system globally asymptotically stable. ✷

Note that if the dynamics were chosen to be linear, f(x) = Ax, and the Lyapunov function to be quadratic, V(x) = x*Px, then the lemma's SOS conditions can be simplified to the LMI conditions P ≻ 0 and A*P + PA ≺ 0.

A set of sufficient conditions very similar to Lemma 2 appears in [20], where it is used to prove non-asymptotic stability for systems like (3.1) with additional state and control equality and inequality constraints.

Looking to Theorem 9, we can formulate a similar set of polynomial conditions; however, due to a term that is bilinear in γ and V, we cannot check the assumptions with a single LMI.

Lemma 3 Given the system (3.1) and the fixed positive definite function l(x) = ‖x‖_d^d with d an integer greater than one, the system is semi-globally exponentially stable if there exist γ > 0 and V ∈ R_n with V(0) = 0 such that

V − l ∈ Σ_n
−(γV + ∇V* f) ∈ Σ_n,

which, when V has fixed degree, can be checked by performing a linesearch on γ and solving the resulting LMI at each point. Additionally, γ/d is a rate of convergence for the system.

Proof:
The proof follows along the same lines as the proof of Lemma 2, by establishing that the assumptions of Theorem 9 are met.

In general, the value of d in this and other related lemmas will be picked to be


the highest even degree in V. ✷

Again, if the dynamics are linear and the Lyapunov function is quadratic, the SOS conditions of Lemma 3 collapse into simple matrix conditions, which remain bilinear in γ: P ≻ 0 and A*P + PA ≼ −γP.

3.2.1 Stability Examples

Consider the following system

\[
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} =
\underbrace{\begin{bmatrix} -x_2 - x_1^3 \\ x_1 - x_2^3 \end{bmatrix}}_{f(x)}
\]   (3.2)

where it is clear that f(0) = 0. The linearization about the origin has eigenvalues of ±j, so from the linearization alone it is not even possible to verify that the nonlinear system (3.2) is stable.

If we look to construct a Lyapunov function to demonstrate stability, a simple quadratic Lyapunov function V(x) = ‖x‖₂² will work. This V is clearly positive definite, and if we compute its time derivative we find

V̇ = ∇V* f
  = 2x₁(−x₂ − x₁³) + 2x₂(x₁ − x₂³)
  = −2(x₁⁴ + x₂⁴),

which clearly makes −V̇ positive definite. The definiteness of V and −V̇ satisfies the assumptions of Theorem 8, so it is clear that the system (3.2) is globally asymptotically stable.
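The hand computation above can be corroborated numerically: simulating (3.2) with a fixed-step RK4 integrator (step size and initial condition are illustrative choices) confirms that V(x) = ‖x‖₂² is nonincreasing along trajectories and that the chain rule reproduces V̇ = −2(x₁⁴ + x₂⁴).

```python
# System (3.2): x1dot = -x2 - x1^3, x2dot = x1 - x2^3.
def f(x):
    x1, x2 = x
    return (-x2 - x1**3, x1 - x2**3)

def rk4_step(x, h):
    def add(a, b, s):
        return (a[0] + s * b[0], a[1] + s * b[1])
    k1 = f(x)
    k2 = f(add(x, k1, h / 2))
    k3 = f(add(x, k2, h / 2))
    k4 = f(add(x, k3, h))
    return (x[0] + h / 6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x[1] + h / 6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

V = lambda x: x[0]**2 + x[1]**2
Vdot = lambda x: -2.0 * (x[0]**4 + x[1]**4)

x, h = (1.0, -1.5), 1e-3
prev = V(x)
for _ in range(5000):
    x = rk4_step(x, h)
    assert V(x) <= prev + 1e-12      # V never increases along the trajectory
    prev = V(x)

# Chain-rule check at the final state: grad V * f = -2*(x1^4 + x2^4).
g = f(x)
assert abs(2*x[0]*g[0] + 2*x[1]*g[1] - Vdot(x)) < 1e-9
```

Note that a simulation from one initial condition is only a spot check; the SOS conditions of Lemma 2 are what certify the decrease for every trajectory at once.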


d     γ_max    γ/d
2     0        0
4     .0052    .0013
6     .0431    .0072
8     .1094    .0137
10    .1472    .0147

Table 3.1: Results of applying Lemma 3 to system (3.2)

However, it is unclear whether the system is semi-globally exponentially stable, so we will use Lemma 3 to set up a linesearch on a sum-of-squares optimization problem to construct a Lyapunov function showing that it is.

If we follow the approach given in Lemma 3 for d = {2, 4, 6, 8, 10}, we can construct the table of maximal γ values given in Table 3.1. The table shows that for d = 4 we can demonstrate semi-global exponential stability, since γ_max > 0, and that the state decays with rate γ/d = .0013. As the degree of the Lyapunov function is increased, the maximum feasible value of γ increases, with γ/d increasing as well. For d > 10, the numerical errors in solving the resulting LMIs cause the linesearch to become erratic and return lower values for γ_max.

3.3 Disturbance Analysis

In the previous section, we considered two algorithmic approaches to constructing Lyapunov functions that demonstrate specific types of stability. Now, we will consider the


effect of an external disturbance w(t) on the following system:

ẋ = f(x) + g_w(x) w
y = h(x)   (3.3)

with x(t) ∈ Rⁿ, w(t) ∈ R^{n_w}, y(t) ∈ R^p, f ∈ R_n^n, f(0) = 0, g_w ∈ R_n^{n×n_w}, and h ∈ R_n^p with h(0) = 0. We will follow the Lyapunov approach used in [5, §6.3.2], presented in the following lemma, to find the induced L₂ → L₂ gain from w(t) to y(t).

Lemma 4 For a system whose dynamics are given by (3.3) with initial condition x₀ = 0, if there exists a positive definite function V : Rⁿ → R₊ such that

V̇(x, w) + h(x)*h(x) − α w*w ≤ 0   (3.4)

for all x ∈ Rⁿ and w ∈ R^{n_w}, then the induced L₂ → L₂ gain from w(t) to y(t) is less than or equal to √α.

Proof:
If we integrate the inequality (3.4) from 0 to T ≥ 0, we have

V(φ_T(x₀)) − V(x₀) + ∫₀ᵀ ( h(x(t))*h(x(t)) − α w(t)*w(t) ) dt ≤ 0,

and since V(x₀) = 0 and V(φ_T(x₀)) ≥ 0, we get

‖y(t)‖₂ / ‖w(t)‖₂ ≤ √α,

which completes the proof. ✷

Note that if we assume w(t) := 0, then the inequality (3.4) shows that −V̇ is positive semidefinite, and thus that the system is stable.
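Lemma 4 can be exercised on a hypothetical scalar instance: ẋ = −x + w, y = x has induced L₂ → L₂ gain 1, and V(x) = x² with α = 1 satisfies (3.4), since V̇ + y² − w² = −(x − w)² ≤ 0. The sketch below checks the dissipation inequality pointwise along a forward-Euler simulation from x₀ = 0 and confirms that the finite-horizon norm ratio stays below √α = 1.

```python
import math

# Scalar illustration of Lemma 4: xdot = -x + w, y = x, alpha = 1.
h, T = 1e-3, 20.0
x = 0.0
y2 = w2 = 0.0
for k in range(int(T / h)):
    w = math.sin(k * h)                       # a sample L2 disturbance on [0, T]
    # Dissipation inequality (3.4): Vdot + y^2 - alpha*w^2 = -(x - w)^2 <= 0.
    assert 2*x*(-x + w) + x*x - w*w <= 1e-12
    y2 += x * x * h                           # accumulate the integral of y^2
    w2 += w * w * h                           # accumulate the integral of w^2
    x += h * (-x + w)                         # forward Euler step
ratio = math.sqrt(y2 / w2)
assert ratio <= 1.0
```

For this input the ratio is well below the gain bound (around 1/√2, the transfer function magnitude of 1/(s+1) at the input frequency); the bound √α is over all L₂ inputs, so no single simulation can attain more than a lower estimate of it.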


We can now use the SOS relaxations from the stability lemmas to pose the following lemma, which provides our best estimate of the induced gain from w(t) to y(t).

Lemma 5 The best estimate of the induced L₂ → L₂ gain from w(t) to y(t) for the system (3.3) is √α, where α comes from the solution of the following optimization. Fix the degree of the Lyapunov function to be d_V. Let l ∈ R_n be a fixed positive definite polynomial and V ∈ R_{n,d_V} with V(0) = 0:

min_V α
s.t. V − l ∈ Σ_n
     −( ∇V(x)*(f(x) + g_w(x)w) + h(x)*h(x) − α w*w ) ∈ Σ_{n+n_w}

Proof:
The proof follows from the proof of Lemma 4. ✷

Using this lemma, we can apply SOS programming to quantify a system's ability to reject disturbances. However, this approach can run into feasibility problems, since it requires that −V̇(x, w) have terms to counteract the −h(x)*h(x) in the second SOS constraint. Examples of the use of this lemma are provided to analyze the disturbance rejection capabilities of the controllers that are designed as examples in §3.4.3 and §3.5.2.


3.4 State Feedback Controller Design

We will now move from analysis to controller synthesis using Lyapunov techniques. The Lyapunov-based approach to controller synthesis has produced many notable results and design procedures, and much of their history is given in [17]. Most of these results center on designing a controller from a provided control Lyapunov function, while in our approach we use optimization to find both the controller and the Lyapunov function. A related approach using convex optimization to design a controller with a dual to the standard Lyapunov theorem is shown in [26].

As shown in Section 3.2, sufficient conditions for the assumptions of Theorems 8 and 9 can be checked as LMIs or linesearches on LMIs. The existence of efficient tests for stability analysis leads us to apply similar techniques for controller synthesis. Consider now the system

ẋ = f(x) + g(x)u   (3.5)

for x ∈ Rⁿ with f ∈ R_n^n, f(0) = 0, and u ∈ R^m with g ∈ R_n^{n×m}. If we allow u to be generated by a state feedback controller K ∈ R_n^m with K(0) = 0, we get the following closed-loop system:

ẋ = f(x) + g(x)K(x)   (3.6)

where K is still unknown.

Now we can look for conditions on K such that we can find a Lyapunov function that meets the assumptions of Theorems 8 and 9. The analogs of Lemmas 2 and 3 for the system (3.6) are as follows.


Lemma 6 Given fixed positive definite polynomials l₁, l₂ ∈ R_n, if there exist V ∈ R_n with V(0) = 0 and K ∈ R_n^m with K(0) = 0 such that

V − l₁ ∈ Σ_n
−(∇V*(f + gK) + l₂) ∈ Σ_n

then the system (3.5) is globally asymptotically stabilized by the control law u = K(x).

Lemma 7 Given the fixed positive definite function l(x) = ‖x‖_d^d with d an integer greater than one, if there exist γ > 0, K ∈ R_n^m with K(0) = 0, and V ∈ R_n with V(0) = 0 such that

V − l ∈ Σ_n
−(γV + ∇V*(f + gK)) ∈ Σ_n

then the system (3.5) is semi-globally exponentially stabilized by the control law u = K(x). Additionally, γ/d is a rate of convergence for the closed-loop system.

The proofs of these lemmas follow exactly along the lines of the proofs of Lemmas 2 and 3. However, since both lemmas have conditions that are bilinear in the monomials of V and K, neither the SOS conditions of Lemma 6 nor those of Lemma 7 can be checked directly; they will have to be checked iteratively.

3.4.1 Iterative State Feedback Design Algorithms

Since, in general, Lemmas 6 and 7 are not amenable to the semidefinite programming based approach of Theorem 3, we will need to employ an iterative approach that solves the lemmas' SOS conditions in V and K by holding one of these polynomials fixed while adjusting the other. Even though each step in the iteration will be convex, the overall problem remains non-convex. In some cases iterative design procedures of this type can be shown to converge to answers away from the global optimum, as the example in [8] shows.


However, there is one special case when the SOS conditions in Lemmas 6 and 7 can be checked directly with semidefinite programming. This case is when the dynamics and controller are linear and the Lyapunov function is quadratic. In this case we can use a nonlinear change of coordinates from [2], often referred to as the "feedback trick," to yield LMI conditions for Lemma 6 and a linesearch on LMI conditions for Lemma 7.

Consider the general case of the SOS conditions of Lemma 6; the second SOS condition is bilinear in V and K, as noted above, which indicates that if either is fixed, it is an LMI feasibility problem to find the other. The problem with this approach is that if V is fixed and the resulting convex search for K fails, then there is no way to redesign V from the search on K. The same pitfall occurs if K is held fixed and the search is for V. This approach gives a single shot at finding a controller for a given Lyapunov function, or a Lyapunov function for a fixed controller, but it cannot be extended to search for both.

The conditions of Lemma 7 have better feasibility properties, which allow us to propose a pair of iterative design procedures to establish semi-global exponential stability. The procedures require either a candidate controller or a candidate Lyapunov function to begin their iterations. First we will consider an algorithm that starts the iterative search from a candidate controller polynomial, the K variant, and then we will look at starting the iterative search from the candidate Lyapunov function, the V variant.
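The control flow of the K variant can be sketched independently of any SOS machinery. In the skeleton below, `v_step` and `k_step` are stand-ins for the solver calls of steps 1 and 2 (the γ-linesearch over V with K fixed, and the SDP over K with V fixed); the stubs used to exercise it are purely illustrative and simply nudge γ upward so the loop terminates.

```python
INF = float("inf")

def algorithm1_K_variant(K0, v_step, k_step, max_iter=20):
    """Skeleton of the candidate-K iteration: alternate a V search with
    K fixed and a K search with V fixed until gamma_max > 0 or the
    iteration budget runs out.  v_step/k_step must return
    (gamma_max, new_polynomial)."""
    K = K0
    V = None
    gamma = -INF
    for _ in range(max_iter):
        gamma, V = v_step(K)                # step 1: V search, K fixed
        if gamma > 0:
            return ("stable", V, K, gamma)
        if gamma == -INF:
            return ("infeasible", V, K, gamma)
        gamma, K = k_step(V)                # step 2: K search, V fixed
        if gamma > 0:
            return ("stable", V, K, gamma)
    return ("undecided", V, K, gamma)

# Toy stand-ins (not real SOS programs): each call improves gamma by 0.2.
state = {"g": -0.5}
def fake_v_step(K):
    state["g"] += 0.2
    return (state["g"], "V(i)")
def fake_k_step(V):
    state["g"] += 0.2
    return (state["g"], "K(i)")

result = algorithm1_K_variant("K(0)", fake_v_step, fake_k_step)
print(result[0])   # stable
```

Note that the skeleton, like the algorithm itself, has three distinct exits: a stability certificate (γ_max > 0), provable infeasibility from the current iterate (γ_max = −∞ in step 1), and an inconclusive budget exhaustion; only the first says anything about the system.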
Since we cannot guarantee a feasible starting point for either algorithm, the K and V variants can be considered two different algorithms.

Algorithm 1 (State Feedback: Candidate K Variant) An iterative search to satisfy the SOS conditions of Lemma 7, starting from a candidate controller.


Let i be the iteration index and set i = 1. Denote the candidate controller K^(i=0), and pick the maximum degrees of the controller and Lyapunov polynomials, d_K and d_V respectively.

1. Fix the controller polynomial K = K^(i−1), set l(x) = ‖x‖_{d_V}^{d_V}, and solve the following linesearch on γ, where V ∈ R_{n,d_V} with V(0) = 0:

    max_V  γ
    s.t.   V − l ∈ Σ_n
           −(γV + ∇V∗(f + gK)) ∈ Σ_n        (3.7)

Set V^(i) = V. If γ_max > 0, then the system (3.6) is semi-globally exponentially stable with controller K^(i−1); else if −∞ < γ_max ≤ 0, go to step 2. If γ_max = −∞, the iteration is infeasible from the candidate controller K^(i−1), and no stability properties of the system (3.6) can be inferred.

2. Fix the Lyapunov function V = V^(i) and solve the semidefinite programming problem, where K ∈ R_{n,d_K}^m with K(0) = 0:

    max_K  γ
    s.t.   −(γV + ∇V∗(f + gK)) ∈ Σ_n        (3.8)

Set K^(i) = K. If γ_max > 0, then the system (3.6) is semi-globally exponentially stable with controller K^(i). If γ_max ≤ 0, increment i and loop back to step 1.

If we desire to start from a candidate Lyapunov function instead of a candidate


controller, we can follow the V variant algorithm, which is a trivial reordering of the steps above but, as noted in the remark after the algorithm, subtly different.

Algorithm 2 (State Feedback: Candidate V Variant) An iterative search to satisfy the SOS conditions of Lemma 7, starting from a positive definite candidate Lyapunov function.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0), and pick the maximum degrees of the controller and Lyapunov polynomials, d_K and d_V respectively.

1. Fix the Lyapunov function V = V^(i−1) and solve the semidefinite programming problem, where K ∈ R_{n,d_K}^m with K(0) = 0:

    max_K  γ
    s.t.   −(γV + ∇V∗(f + gK)) ∈ Σ_n        (3.9)

Set K^(i) = K. If γ_max > 0, then the system (3.6) is semi-globally exponentially stable with controller K^(i); if −∞ < γ_max ≤ 0, go to step 2. If γ_max = −∞, the iteration is infeasible starting from the candidate Lyapunov function V^(i−1), and no stability properties of the system (3.6) can be inferred.

2. Fix the controller polynomial K = K^(i) and set l(x) = ‖x‖_{d_V}^{d_V}. Solve the following


linesearch on γ, where V ∈ R_{n,d_V} with V(0) = 0:

    max_V  γ
    s.t.   V − l ∈ Σ_n
           −(γV + ∇V∗(f + gK)) ∈ Σ_n        (3.10)

Set V^(i) = V. If γ_max > 0, then the system (3.6) is semi-globally exponentially stable with controller K^(i); else if γ_max ≤ 0, increment i and loop back to step 1.

Remark 2 (Properties of the State Feedback Algorithms):

• If −∞ < γ_max in step 1 of either algorithm, then the rest of the iteration's searches will be feasible; however, this does not mean that a γ_max > 0 will necessarily be found.

• The degree of the Lyapunov function, d_V, should always be picked to be even, since V needs to be positive definite.

• Since deg f ≥ 1, deg(∇V∗(f + gK)) ≥ deg V. This implies that the value of d_K needs to be picked so that the degree of ∇V∗(f + gK) is even, to ensure that −(γV + ∇V∗(f + gK)) can be SOS.

• In general the V variant algorithm tends to be feasible when the K variant is not. A theoretical rationale for this heuristic is as follows. Given a candidate controller, the "likelihood" that it semi-globally exponentially stabilizes the nonlinear system is low, and unless it does, it is impossible to find a Lyapunov function. On the other hand, given a positive definite candidate Lyapunov function, if the system can be


semi-globally exponentially stabilized, then finding a controller that does so and works with the candidate Lyapunov function is more likely.

The polynomial controller design step of the V variant algorithm can also be viewed as treating the candidate Lyapunov function as a form of control Lyapunov function (CLF); see [17] for background. However, our controller design approach contrasts with the standard CLF back-stepping procedure by using convex optimization to search for any polynomial controller of fixed degree, instead of matching the controller to counteract the system's dynamics.

3.4.2 State Feedback Design Example: Non-Holonomic System

Consider the bilinear system from example 2 in [34],

    ẋ_1 = (3x_1 + 4x_2) u
    ẋ_2 = (−20x_1 + 10x_2) u

Clearly this system fits the format of (3.5) with f = 0, so we can use the algorithms from the previous section to design a semi-globally exponentially stabilizing state feedback controller.

Since the system has f = 0, we cannot find our candidate V and K by solving the control design problem on the system's linearization. We will instead design a controller by starting Algorithm 2 with

    V(x) = x_1² + x_2²

and setting d_V = d_K = 2. From this candidate Lyapunov function, we find a controller and Lyapunov function pair that achieves γ_max = 0.0078 after 2 iterations. The controller that


the algorithm designs has both quadratic and linear terms, but the linear terms are both smaller in magnitude than 10⁻⁶. If we zero out these tiny terms, we find that we still meet the required SOS conditions, and we are left with the reduced controller

    K(x) = (1/100)(−0.426x_1² + 2.275x_1x_2 − 1.404x_2²)

With this controller we cannot use the SOS conditions in Lemma 5 to find the induced L_2 gain from disturbances to the system's states, h(x) = x, since the closed loop system is a homogeneous cubic polynomial. This makes V̇ a homogeneous quartic polynomial, which lacks the necessary quadratic terms to counteract the −h(x)∗h(x) = −x∗x term, and thus makes the SOS conditions always infeasible.

3.4.3 State Feedback Design Example: Nonlinear Spring-Mass System

We will design a state feedback controller following Algorithms 1 and 2 for the spring-mass system described below, where x_1, x_3 represent the displacements of m_1, m_2 respectively, the k's identify the springs, d is an anti-damper, and u is a forcing input.

[Figure: two masses m_1 and m_2 coupled in series by springs k_1 and k_2, with the anti-damper d attached to m_2; the input u forces m_1, and x_1, x_3 are the displacements of m_1 and m_2.]

For simplicity we will consider only unit masses, m_1 = m_2 = 1. Let k_1 be a stiffening spring that generates the displacement-dependent force

    F_k1(x) = x_1 + (1/10)x_1³

and let the forces generated by k_2 and d be linear in the relative displacement and velocity


respectively, with d chosen to provide negative damping and make the system unstable:

    F_k2 = x_3 − x_1
    F_d = −(1/10)(x_4 − x_2).

We can now write the system's dynamics as

    ẋ_1 = x_2
    ẋ_2 = −(x_1 + (1/10)x_1³) + (x_3 − x_1) − (1/10)(x_4 − x_2) + u        (3.11)
    ẋ_3 = x_4
    ẋ_4 = −(x_3 − x_1) + (1/10)(x_4 − x_2)

where f(x) collects the drift terms and g(x) = [0; 1; 0; 0] is the input vector field. The linearization of the system (3.11) has eigenvalues with positive real part, so the uncontrolled system is unstable.

We want to design a state feedback controller, u = K(x), to semi-globally exponentially stabilize the system

    ẋ = f(x) + g(x)u

where f, g are defined in (3.11). However, we need to start with either a candidate controller or a candidate Lyapunov function, and we will find both by solving the linearized version of the problem for a linear controller and a quadratic Lyapunov function.

Letting A_f and B_g be the linearizations of f and g about the point x = 0, the linearized system is

    ẋ = A_f x + B_g u        (3.12)

It is possible to use the "feedback trick" from [2] to pose the problem of finding a linear static state feedback controller, u = K_lin(x) = Kx, for system (3.12) with a quadratic


48<strong>Lyapunov</strong> function that demonstrates semi-global exponential stability, V quad (x) = x ∗ P x,as the following linesearch with LMI constraintsmaxγs.t.Q ≻ 0(3.13)−γQ≽ QA ∗ f + A f Q + L ∗ B ∗ g + B g Lwhere P = Q −1 <strong>and</strong> K = LP .Using K lin as found by (3.13) as the c<strong>and</strong>idate controller to start Algorithm 1 withd V = 2 or d V = 4 makes the SOS conditions (3.7) infeasible, thus γ max = −∞. This impliesthat the K variant of the algorithm fails to tell us anything about the possibility of semigloballyexponentially stabilizing the unstable system (3.11). However if we start Algorithm2 with V quad as the c<strong>and</strong>idate <strong>Lyapunov</strong> function <strong>and</strong> fix d K = 3, then a controller is foundin the first optimization (3.9) that makes γ max = 1.2311. However, if we start Algorithm2 with d K = 1, it fails which shows that we need to go to d K = 3 to achieve semi-globalexponential stability.The success of the V variant while the K variant fails is not that strange when weconsider what the SOS optimizations were trying to find. In the K case, we were looking <strong>for</strong>a <strong>Lyapunov</strong> function that demonstrates that the nonlinear system with the linear controllerwrapped around it is semi-global exponentially stable. Whereas, in the V case we wereusing the quadratic V in the manner of a control <strong>Lyapunov</strong> function <strong>and</strong> designing a thirdorder controller to make the necessary SOS condition hold.The controller found in the V case is a degree three polynomial in 4 variables, so ithas 34 terms. Since no part of the optimization tries to reduce the number of nonzero terms,


all of them are used. If all the terms whose coefficients are less than 10⁻⁶ in absolute value are set to zero, the number of terms used drops to 24, and the reduced controller continues to make the system semi-globally exponentially stable with the same value of γ_max.

Performance of the State Feedback Controller

Now that we have designed a polynomial state feedback controller for the example system, we can analyze its performance by using Lemma 5 to compute a bound on the induced L_2 gain from disturbances to the states, selecting h(x) = x. If we allow a disturbance to enter the system through the control channel, g_w = g, and keep d_V = 2, then following Lemma 5 we get the performance bound

    ‖x(t)‖_2 / ‖w(t)‖_2 ≤ 0.686

which shows that the system is non-expansive.

3.5 Output Feedback Controller Design

We can expand the results for state feedback controllers by allowing the controller to be a dynamic system and by limiting the information that it receives. In many cases, existing output feedback controller design schemes (e.g., back-stepping [17]) formulate the problem by making the Lyapunov function depend only on the system's outputs; however, we will continue to require that the Lyapunov function depend on all of the system's and the controller's states.

Again, we will be searching for ways to validate the assumptions of Theorem 9 using an iterative procedure to check SOS conditions. As in the state feedback case, we will


not consider the global asymptotic stability case, since its setup will again suffer from the infeasibility properties that were illustrated in the previous section.

Define the system to be controlled as

    ẋ = f(x) + g(x)u
    y = h(x)        (3.14)

for x ∈ R^n with f ∈ R_n^n, f(0) = 0, u ∈ R^m, g ∈ R_n^{n×m}, y ∈ R^p, h ∈ R_n^p and h(0) = 0. We allow u to be generated by an unknown n_ξ-state dynamic output feedback controller of the form

    ξ̇ = A(ξ) + B(ξ)y
    u = C(ξ) + D(ξ)y        (3.15)

for ξ ∈ R^{n_ξ} with A ∈ R_{n_ξ}^{n_ξ}, A(0) = 0, B ∈ R_{n_ξ}^{n_ξ×p}, C ∈ R_{n_ξ}^m, C(0) = 0 and D ∈ R_{n_ξ}^{m×p}. With this controller structure, the closed loop system becomes

    ẋ = f(x) + g(x)(C(ξ) + D(ξ)h(x))
    ξ̇ = A(ξ) + B(ξ)h(x).        (3.16)

By the assumptions on f, h, A, C, we know that the combined system has a fixed point at [x; ξ] = [0; 0], and we can pose the following analog of Lemma 3 to test the closed loop system (3.16) for semi-global exponential stability.

Lemma 8 Let n_ξ be a fixed positive integer, and l([x; ξ]) = ‖[x; ξ]‖_d^d with d some integer greater than one. If there exist γ > 0, A ∈ R_{n_ξ}^{n_ξ}, A(0) = 0, B ∈ R_{n_ξ}^{n_ξ×p}, C ∈ R_{n_ξ}^m, C(0) = 0, D ∈ R_{n_ξ}^{m×p} and V ∈ R_{n+n_ξ} with V(0) = 0 such that

    V − l ∈ Σ_{n+n_ξ}
    −(γV + ∇V∗ [f + gC + gDh; A + Bh]) ∈ Σ_{n+n_ξ}


then the system (3.14) is semi-globally exponentially stabilized by the controller (3.15). Additionally, γ/d is a rate of convergence for the closed loop system.

As before, this lemma can be proved along the lines of Lemma 2, and again its conditions are bilinear in the elements of the controller system and the Lyapunov function. However, unlike the earlier lemmas, when the systems are linear and the Lyapunov function is quadratic we cannot use the "feedback trick" to check the conditions in Lemma 8 with a single semidefinite program, although, by the separation theorem, we could separately design an observer and a state feedback controller.

3.5.1 Iterative Output Feedback Design Algorithms

In their stated forms, the SOS conditions of Lemma 8 do not fit into the setup of Theorem 3, since they are bilinear in the controller system and the Lyapunov function. To get around this problem we propose two iterative approaches, rather similar to Algorithms 1 and 2, that start from either a candidate controller, the A, B, C, D variant, or a candidate Lyapunov function, the V variant. As in the state feedback case, since neither algorithm can be guaranteed to be feasible, the V and A, B, C, D variants can be considered two different algorithms.

Algorithm 3 (Output Feedback: Candidate A, B, C, D Variant) An iterative search to satisfy the SOS conditions of Lemma 8, starting from a candidate controller.

Let i be the iteration index and set i = 1. 
Denote the candidate controller A^(i=0), B^(i=0), C^(i=0), D^(i=0), and pick the maximum degrees of the controller and Lyapunov polynomials, d_A, d_B, d_C, d_D and d_V respectively.


1. Fix the controller polynomials A = A^(i−1), B = B^(i−1), C = C^(i−1), D = D^(i−1), and set l([x; ξ]) = ‖[x; ξ]‖_{d_V}^{d_V}. Solve the following linesearch on γ, where V ∈ R_{n+n_ξ,d_V} with V(0) = 0:

    max_V  γ
    s.t.   V − l ∈ Σ_{n+n_ξ}
           −(γV + ∇V∗ [f + gC + gDh; A + Bh]) ∈ Σ_{n+n_ξ}        (3.17)

Set V^(i) = V. If γ_max > 0, then the system (3.14) is semi-globally exponentially stable with controller A^(i−1), B^(i−1), C^(i−1), D^(i−1); else if −∞ < γ_max ≤ 0, go to step 2. If γ_max = −∞, the iteration is infeasible starting from the candidate controller A^(i−1), B^(i−1), C^(i−1), D^(i−1), and no stability properties of the system (3.14) can be inferred.

2. Fix the Lyapunov function V = V^(i), and solve the semidefinite programming problem, where A ∈ R_{n_ξ,d_A}^{n_ξ} with A(0) = 0, B ∈ R_{n_ξ,d_B}^{n_ξ×p}, C ∈ R_{n_ξ,d_C}^m with C(0) = 0, and D ∈ R_{n_ξ,d_D}^{m×p}:

    max_{A,B,C,D}  γ
    s.t.           −(γV + ∇V∗ [f + gC + gDh; A + Bh]) ∈ Σ_{n+n_ξ}        (3.18)

Set A^(i) = A, B^(i) = B, C^(i) = C, D^(i) = D. If γ_max > 0, then the system (3.14) is semi-globally exponentially stable with controller A^(i), B^(i), C^(i), D^(i); if γ_max ≤ 0, increment i and loop back to step 1.
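Evaluating the closed-loop vector field (3.16) is mechanical once f, g, h and the controller maps are available as functions. A sketch in Python, using the spring-mass plant (3.11) with h = [x_1; x_2]; the linear controller values here are illustrative placeholders, not the designed ones. The check at the end confirms the fixed point at the origin and the open-loop instability of the linearization noted in §3.4.3:

```python
import numpy as np

def closed_loop(n, f, g, h, A, B, C, D):
    """Assemble the augmented vector field (3.16) for z = [x; xi],
    where n is the plant state dimension and A, B, C, D are callables."""
    def F(z):
        x, xi = z[:n], z[n:]
        u = C(xi) + D(xi) @ h(x)        # controller output (3.15)
        dx = f(x) + g(x) * u            # scalar input channel: (4,) * (1,)
        dxi = A(xi) + B(xi) @ h(x)      # controller dynamics
        return np.concatenate([dx, dxi])
    return F

# Spring-mass plant (3.11) with output h = [x1; x2]:
def f(x):
    return np.array([x[1],
                     -(x[0] + 0.1 * x[0]**3) + (x[2] - x[0]) - 0.1 * (x[3] - x[1]),
                     x[3],
                     -(x[2] - x[0]) + 0.1 * (x[3] - x[1])])
g = lambda x: np.array([0.0, 1.0, 0.0, 0.0])
h = lambda x: x[:2]

# Placeholder two-state linear controller maps (assumed values):
Ak, Bk = -np.eye(2), np.ones((2, 2))
Ck, Dk = np.array([[1.0, 0.0]]), np.zeros((1, 2))
F = closed_loop(4, f, g, h,
                lambda xi: Ak @ xi, lambda xi: Bk,
                lambda xi: Ck @ xi, lambda xi: Dk)

# The combined system has a fixed point at [x; xi] = 0 by construction.
assert np.allclose(F(np.zeros(6)), 0.0)

# Open-loop linearization A_f of (3.11): trace = 0.2 > 0, so it is unstable.
Af = np.array([[0.0, 1.0, 0.0, 0.0],
               [-2.0, 0.1, 1.0, -0.1],
               [0.0, 0.0, 0.0, 1.0],
               [1.0, -0.1, -1.0, 0.1]])
assert np.linalg.eigvals(Af).real.max() > 0
```

Wrapping (3.16) this way also gives the simulation model used to spot-check any controller the iterations return.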


As with the state feedback case, if we want to start from a candidate Lyapunov function instead of a candidate controller, we can reorder the steps above to follow the V variant of the algorithm.

Algorithm 4 (Output Feedback: Candidate V Variant) An iterative search to satisfy the SOS conditions of Lemma 8, starting from a positive definite candidate Lyapunov function.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0), and pick the maximum degrees of the controller and Lyapunov polynomials, d_A, d_B, d_C, d_D and d_V respectively.

1. Fix the Lyapunov function V = V^(i−1), and solve the semidefinite programming problem, where A ∈ R_{n_ξ,d_A}^{n_ξ} with A(0) = 0, B ∈ R_{n_ξ,d_B}^{n_ξ×p}, C ∈ R_{n_ξ,d_C}^m with C(0) = 0, and D ∈ R_{n_ξ,d_D}^{m×p}:

    max_{A,B,C,D}  γ
    s.t.           −(γV + ∇V∗ [f + gC + gDh; A + Bh]) ∈ Σ_{n+n_ξ}        (3.19)

Set A^(i) = A, B^(i) = B, C^(i) = C, D^(i) = D. If γ_max > 0, then the system (3.14) is semi-globally exponentially stable with controller A^(i), B^(i), C^(i), D^(i); if −∞ < γ_max ≤ 0, go to step 2. If γ_max = −∞, the iteration is infeasible starting from the candidate Lyapunov function V^(i−1), and no stability properties of the system (3.14) can be inferred.

2. Fix the controller polynomials A = A^(i), B = B^(i), C = C^(i), D = D^(i), and set l([x; ξ]) = ‖[x; ξ]‖_{d_V}^{d_V}. Solve the following linesearch on γ, where V ∈ R_{n+n_ξ,d_V} with


V(0) = 0:

    max_V  γ
    s.t.   V − l ∈ Σ_{n+n_ξ}
           −(γV + ∇V∗ [f + gC + gDh; A + Bh]) ∈ Σ_{n+n_ξ}        (3.20)

Set V^(i) = V. If γ_max > 0, then the system (3.14) is semi-globally exponentially stable with controller A^(i), B^(i), C^(i), D^(i); else if γ_max ≤ 0, increment i and loop back to step 1.

Remark 3 (Properties of the Output Feedback Algorithms):

• If −∞ < γ_max in step 1 of either algorithm, then the rest of the iteration's searches will be feasible; however, this does not mean that a γ_max > 0 will necessarily be found.

• The degree of the Lyapunov function, d_V, should always be picked to be even, since V needs to be positive definite.

• Since deg f ≥ 1,

    deg(∇V∗ [f + gC + gDh; A + Bh]) ≥ deg V.

This implies that the values of d_A, d_B, d_C, d_D need to be chosen so that the degree of ∇V∗ [f + gC + gDh; A + Bh] is even. If it were odd, then the SOS conditions required for stability could never be satisfied.

• In the output feedback case we cannot employ the "feedback trick," so we cannot start the algorithm from its linear analog. We must instead find other candidate


controller and Lyapunov polynomials, and the selection of these greatly influences the feasibility of the two algorithm variants. If a good controller starting point can be found, then Algorithm 3 will tend to be feasible more often than Algorithm 4. Conversely, if a good candidate Lyapunov function can be found, then the V variant will tend to be feasible more often.

3.5.2 Output Feedback Design Example

Consider again the spring-mass system used in §3.4.3, whose dynamics are given by (3.11). If we take the system's output to be

    y = h(x) = [x_1; x_2]

the problem fits the form of (3.14), so we can try to design an output feedback controller for the system using Algorithms 3 and 4.

First, we need to find a candidate controller and a candidate Lyapunov function so that we can compare the A, B, C, D and V variants. Unlike the state feedback case, we cannot just linearize the problem and solve the resulting LMIs, so we will first construct a robust linear controller and its Lyapunov function to start the algorithms.

We choose to start the algorithms with a robust linear controller in the hope that its robustness will compensate for the system's nonlinearities, which can be viewed as uncertainties. In this light, we will maximize our robustness against plant uncertainty by picking the candidate controller (A, B, C, D) to minimize the H_∞ gain from d to e for the block diagram shown in Figure 3.1, which gives optimal robustness as measured by the gap metric [10].
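The candidate Lyapunov function in this construction is certified by the bounded-real-lemma LMI stated as (3.21) below: any strictly feasible X simultaneously certifies the gain bound β and supplies the quadratic function [x; ξ]∗X[x; ξ]. A minimal numeric instance of that feasibility test, on an assumed scalar closed loop (not the example's data):

```python
import numpy as np

# Assumed scalar closed loop: x' = -x + w, e = x, whose true L2 gain is
# sup_w |1/(jw + 1)| = 1, so the bound beta = 2 should be certifiable.
a_c, b_c, c_c, d_c, beta = -1.0, 1.0, 1.0, 0.0, 2.0
X = 1.0  # candidate Lyapunov matrix (a scalar here)

# The block matrix of (3.21): [[A'X + XA, XB, C'], [B'X, -beta, D'], [C, D, -beta]].
M = np.array([[a_c * X + X * a_c, X * b_c, c_c],
              [b_c * X,           -beta,   d_c],
              [c_c,               d_c,     -beta]])

# Strict feasibility: every eigenvalue of the symmetric matrix M is negative.
assert np.linalg.eigvalsh(M).max() < 0
```

The same eigenvalue check, applied to the full-size block matrix, is how a candidate X produced by an SDP solver can be validated independently of the solver.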


[Figure 3.1: The Gap Metric Block Diagram — the linearized plant (A_f, B_g, C_h, 0) in feedback with the controller (A, B, C, D); disturbances d_1, d_2 enter at the plant input and output, and the errors e_1, e_2 are measured at the same points.]

If n_ξ = n, we can use standard software to find the H_∞ controller (A, B, C, D) that minimizes the gain from d to e in the block diagram above. If this controller achieves a gain from d to e of β, then, denoting the closed loop system A_c, B_c, C_c, D_c, any positive definite matrix X that makes

    [ A_c∗X + XA_c   XB_c   C_c∗ ]
    [ B_c∗X          −βI    D_c∗ ]  ≺ 0        (3.21)
    [ C_c            D_c    −βI  ]

provides a quadratic Lyapunov function [x; ξ]∗X[x; ξ] to use as a candidate Lyapunov function. If n_ξ ≠ n, then we can devise a candidate controller and Lyapunov function as follows. First find a random n_ξ-state controller that stabilizes the linearized system (A_f, B_g, C_h, 0). Since the closed loop system (A_c, B_c, C_c, D_c) is linear in the controller (A, B, C, D), we can start with the random stabilizing controller and iterate between adjusting the controller and the Lyapunov function to minimize β subject to X ≻ 0 and the LMI constraint (3.21).

With the candidate controller and Lyapunov function found by the above method, we can use Algorithms 3 and 4. If we first search for a full order controller, n_ξ = 4, we


can find a linear controller, d_A = d_C = 1 and d_B = d_D = 0, that semi-globally exponentially stabilizes the nonlinear system using both the V and the A, B, C, D variants of the algorithm, as shown by Table 3.2.

A, B, C, D variant:
  V^(1), A^(0), B^(0), C^(0), D^(0)   γ = −8.88
  V^(1), A^(1), B^(1), C^(1), D^(1)   γ = −6.41
  V^(2), A^(1), B^(1), C^(1), D^(1)   γ = −0.09
  V^(2), A^(2), B^(2), C^(2), D^(2)   γ = +0.12

V variant:
  V^(0), A^(1), B^(1), C^(1), D^(1)   γ = −28.63
  V^(1), A^(1), B^(1), C^(1), D^(1)   γ = −12.46
  V^(1), A^(2), B^(2), C^(2), D^(2)   γ = −3.08
  V^(2), A^(2), B^(2), C^(2), D^(2)   γ = −0.09
  V^(2), A^(3), B^(3), C^(3), D^(3)   γ = +0.06

Table 3.2: The results of Algorithms 3 & 4 on (3.11) with h = [x_1; x_2] and n_ξ = 4.

If we reduce the number of controller states so that n_ξ = 2, we can still find a linear controller that semi-globally exponentially stabilizes the system. However, only Algorithm 3, the A, B, C, D variant, works, while the V variant is infeasible. The final two-state linear controller and closed loop Lyapunov function are

    [A B; C D] =
      [   1.37   −225.10   −240.10    −13.94
        190.19  −2040.73  −2119.78   −122.79
        715.60   3009.76  −3654.87  −3778.65 ]

and

    X =
      [  2.67  −0.17  −0.69   0.21   0.65  −0.59
        −0.17   1.39  −0.34   0.39   1.45   2.56
        −0.69  −0.34   1.17  −0.23   0.07  −0.48
         0.21   0.39  −0.23   1.35  −0.09   0.56
         0.65   1.45   0.07  −0.09  93.84  88.98
        −0.59   2.56  −0.48   0.56  88.98  87.15 ]


while the progress of the iteration is shown in Table 3.3.

A, B, C, D variant, n_ξ = 2:
  V^(1), A^(0), B^(0), C^(0), D^(0)   γ = −9.57
  V^(1), A^(1), B^(1), C^(1), D^(1)   γ = −7.62
  V^(2), A^(1), B^(1), C^(1), D^(1)   γ = −0.10
  V^(2), A^(2), B^(2), C^(2), D^(2)   γ = −0.10
  V^(3), A^(2), B^(2), C^(2), D^(2)   γ = −0.09
  V^(3), A^(3), B^(3), C^(3), D^(3)   γ = +1.45

Table 3.3: Progress of Algorithm 3 on (3.11) with h = [x_1; x_2] and n_ξ = 2.

However, if we reduce the number of controller states again by setting n_ξ = 1, we cannot find a linear controller to semi-globally exponentially stabilize the system.

Also, if we set h = [x_3; x_4], which separates the control and the observations, both algorithms turn out to be infeasible in the full order controller case, whether the controller is allowed to be linear or cubic, with a quadratic or quartic Lyapunov function.

Performance of the Output Feedback Controllers

As with the state feedback design example, we can now look at the performance of the controllers derived above by bounding the system's induced L_2 gain. We will again allow the disturbance to enter in the control channel, making g_w = g. We found that, in these examples, using the Lyapunov function returned by the controller design algorithms gave the smallest values for √α, the bound on the induced L_2 gain from w to y.

The results are presented in Table 3.4, which shows that the performance of the controller designed by the V variant algorithm is vastly worse in this metric than that of either of the A, B, C, D variant controllers. It is interesting to note that the achieved values for γ_max


seem to have little or no relation to the performance bound. However, as expected, the full order controller exhibits better performance than the reduced order controller.

  Algorithm   n_ξ   √α
      4        4    45.606
      3        4     0.043
      3        2     0.111

Table 3.4: Performance of the controllers designed in the example.

3.6 Chapter Summary

In this chapter we investigated convex optimization algorithms to construct Lyapunov functions that demonstrate either global asymptotic or semi-global exponential stability. As a measure of a system's performance, we provided a convex optimization procedure to bound the induced L_2 gain from disturbances to output signals.

We then applied the semi-global exponential stability results to derive iterative algorithms to design both state and output feedback controllers. These synthesis approaches were tested on examples, and the resulting controllers' performance was bounded by computing a bound on the system's induced L_2 gain from a disturbance in the control channel to the system's output.


Chapter 4

Local Stability and Controller Synthesis

If we again consider the system (3.1)

    ẋ = f(x)

for x ∈ R^n with f ∈ R_n^n and f(0) = 0, we can pose local system-theoretic questions as searches for SOS polynomials, in addition to the global ones considered in the previous chapter. Again, we will first make Lyapunov stability arguments and adapt them into SOS programming problems. Additionally, we will continue to use the shorthand V̇ := ∇V∗f.

4.1 Local Stability Background

Using the stability definitions from §3.1.1 we can state the basic local stability Lyapunov function result, based on [16, Theorem 3.1].


Theorem 10 Let D ⊂ R^n be a domain containing the equilibrium point x = 0 of the system (3.1). Let V : D → R be a continuously differentiable function such that

    V(0) = 0
    V(x) > 0 on D \ {0}

and

    V̇ := ∇V∗f < 0 on D \ {0}

Then the system (3.1) is asymptotically stable about x = 0. Moreover, any region Ω_β := {x ∈ R^n | V(x) ≤ β} such that Ω_β ⊆ D is a positively invariant region contained in the equilibrium point's domain of attraction.

Proof: Given ε > 0, choose r ∈ (0, ε) such that

    B_r := {x ∈ R^n | ‖x‖ ≤ r} ⊂ D

Let a = min_{‖x‖=r} V(x). Then, since V(x) > 0 for x ≠ 0, we have a > 0. Take b ∈ (0, a) and let Ω_b := {x ∈ B_r | V(x) ≤ b}. Since 0 < b < a, the containment Ω_b ⊂ B_r holds. Since Ω_b ⊂ D, we know that V̇(x) ≤ 0 on all of Ω_b. Thus, for x_0 ∈ Ω_b,

    V(φ_T(x_0)) ≤ V(x_0) ≤ b

for all T ≥ 0. Since V is continuous and V(0) = 0, there must exist δ_ε > 0 such that

    B_{δ_ε} := {x ∈ R^n | ‖x‖ ≤ δ_ε} ⊂ Ω_b ⊂ B_r

End to end, we now have

    x_0 ∈ B_{δ_ε} ⇒ x_0 ∈ Ω_b ⇒ φ_T(x_0) ∈ Ω_b ⇒ φ_T(x_0) ∈ B_r


thus ‖x_0‖ ≤ δ_ε implies ‖φ_T(x_0)‖ < ε for all T ≥ 0, proving stability.

To prove asymptotic stability, we need to show that every initial condition x_0 ∈ Ω_b converges to the origin; that is, for every c > 0 there exists a T_c > 0 such that ‖φ_t(x_0)‖ < c for all t > T_c. From the above we know that for every c > 0 there exists d > 0 such that Ω_d ⊂ B_c, which makes it sufficient to show that V(φ_t(x_0)) → 0 as t → ∞. Since V(φ_t(x_0)) is monotonically decreasing and bounded below by zero,

    V(φ_t(x_0)) → v ≥ 0 as t → ∞.

To show that v = 0, suppose that v > 0. By continuity of V there exists w > 0 such that B_w ⊂ Ω_v. The limit V(φ_t(x_0)) → v > 0 implies that the state trajectory must remain outside of B_w for all time. Let −α = max_{w≤‖x‖≤r} V̇(x), which exists because V̇ is continuous over this compact set. Clearly −α < 0. Integrating V̇ from any x_0 ∈ Ω_b we have

    V(φ_t(x_0)) = V(x_0) + ∫_0^t V̇(φ_τ(x_0)) dτ ≤ V(x_0) − αt

The right hand side eventually becomes negative, contradicting the assumption that v > 0.

Consider now the set Ω_β. Since Ω_β ⊂ D, if x ∈ Ω_β \ {0} then V(x) > 0 and V̇(x) < 0. Following the arguments above, this shows that V(φ_t(x)) → 0 and thus ‖φ_t(x)‖ → 0 as t → ∞ for any x ∈ Ω_β \ {0}. It follows that Ω_β is contained in the equilibrium point's domain of attraction. ✷
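A one-dimensional illustration of the sublevel-set conclusion (an assumed toy system, not from the text): for ẋ = −x + x³ with V(x) = x², we have V̇ = 2x²(x² − 1) < 0 on D = {|x| < 1} \ {0}, so every Ω_β with β < 1 lies in the origin's domain of attraction, while initial conditions outside |x| = 1 diverge:

```python
def simulate(x0, dt=1e-3, T=20.0):
    """Forward-Euler integration of the toy system x' = -x + x**3."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-x + x**3)
        if abs(x) > 10.0:   # stop once divergence is evident
            break
    return x

# x0 = 0.9 lies in Omega_{0.81} subset D for V = x**2: the trajectory converges.
assert abs(simulate(0.9)) < 1e-3
# x0 = 1.1 lies outside D: the trajectory escapes.
assert abs(simulate(1.1)) > 10.0
```

The boundary |x| = 1 is exactly where V̇ changes sign, matching the theorem's requirement that the certified invariant set stay inside D.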


4.2 Convex Stability Tests

Using the Lyapunov conditions for local asymptotic stability in Theorem 10, we can construct two SOS-programming-based algorithms to prove a fixed point's stability. Additionally, these algorithms can be used to find and maximize the size of certain invariant subsets of its region of attraction. Unlike Zubov's theorem [11], which finds the exact region of attraction, our approach allows us to use convex optimization to begin to understand the size and shape of the region of attraction by finding invariant subsets.

The first algorithm presented below works to find the largest estimate of the fixed point's region of attraction by expanding the set D and then finding the largest level set of the resulting Lyapunov function that is contained in D, which, in line with Theorem 10, is invariant and contained in the region of attraction. The second algorithm approaches the problem from somewhat of a converse point of view: it expands a region that is contained in a level set of the Lyapunov function on which all of the Lyapunov conditions hold.
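The core containment in the first algorithm, {x | p(x) ≤ β} \ {0} ⊆ {x | V̇(x) < 0}, can be sanity-checked by dense sampling before any SOS program is formed; a violated sample proves the containment fails, while passing samples are only a necessary check. A sketch on the same assumed toy system ẋ = −x + x³, with V(x) = x² and shape polynomial p(x) = x²:

```python
import numpy as np

# Toy data (assumed): x' = -x + x**3, V = x**2, shape p = x**2.
Vdot = lambda x: 2 * x * (-x + x**3)   # grad V * f = 2x^2 (x^2 - 1)
p = lambda x: x**2

xs = np.linspace(-2.0, 2.0, 4001)

# beta = 0.81 keeps {p <= beta} inside |x| < 1: the containment holds.
mask = (p(xs) <= 0.81) & (np.abs(xs) > 1e-6)
assert (Vdot(xs[mask]) < 0).all()

# beta = 1.5 reaches past |x| = 1, where Vdot turns positive: it fails,
# consistent with |x| < 1 being the largest region of this shape that works.
mask = (p(xs) <= 1.5) & (np.abs(xs) > 1e-6)
assert not (Vdot(xs[mask]) < 0).all()
```

In the SOS formulation below, the same β threshold is what the Positivstellensatz certificates bound from the feasible side rather than by sampling.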


with p ∈ Σ_n positive definite and β ≥ 0, to ensure that D is connected and contains the origin. Now the requirements of Theorem 10 for asymptotic stability become

{x ∈ R^n | p(x) ≤ β} \ {0} ⊆ {x ∈ R^n | V̇(x) < 0}
{x ∈ R^n | p(x) ≤ β} \ {0} ⊆ {x ∈ R^n | V(x) > 0}

If we can find a V to satisfy these conditions for a fixed p and any value of β > 0, then the system (3.1) is asymptotically stable about the fixed point x = 0. However, using this setup, the largest invariant set that we can demonstrate converges to the origin is the largest level set of V contained in D. In order to find this largest estimate of the region of attraction, we will satisfy the Lyapunov conditions above on the largest possible D by fixing p and maximizing β subject to the Lyapunov conditions. In this approach, the level sets of p give the shape of the regions over which we check the Lyapunov conditions.

We pose the following optimization to search for V, using the set emptiness form of the set containment constraints above:

max_{V ∈ R_n, V(0)=0} β
s.t.
{x ∈ R^n | p(x) ≤ β, x ≠ 0, V(x) ≤ 0} = ∅
{x ∈ R^n | p(x) ≤ β, x ≠ 0, V̇(x) ≥ 0} = ∅

These conditions are not yet semi-algebraic, as they contain the non-polynomial constraint x ≠ 0. We can get around this problem by using l_1, l_2 ∈ Σ_n, positive definite, and replacing


x ≠ 0 with l_k(x) ≠ 0 to get

max_{V ∈ R_n, V(0)=0} β
s.t.
{x ∈ R^n | p(x) ≤ β, l_1(x) ≠ 0, V(x) ≤ 0} = ∅
{x ∈ R^n | p(x) ≤ β, l_2(x) ≠ 0, V̇(x) ≥ 0} = ∅

Invoking the Positivstellensatz (Theorem 4), we can rewrite the constraints to form the equivalent optimization

max_{V ∈ R_n, V(0)=0} β
s.t.
s_1 + (β − p)s_2 − V s_3 − V(β − p)s_4 + l_1^(2k_1) = 0
s_5 + (β − p)s_6 + V̇ s_7 + V̇(β − p)s_8 + l_2^(2k_2) = 0

where s_1, …, s_8 ∈ Σ_n, k_1, k_2 ∈ Z_+ and f, p are given. These constraints cannot be checked with SOS programming in this general form. However, if we specify convenient values for the k's and the s's, and fix p, we can use an iterative SOS programming approach to find the Lyapunov function that maximizes the size of D.

To limit the degree of the problem, we pick k_1 = k_2 = 1. Additionally, since the product of two SOS polynomials is SOS, we can further simplify the problem by replacing s_1, …, s_4 with s_1 l_1, …, s_4 l_1 as well as s_5, …, s_8 with s_5 l_2, …, s_8 l_2. We can then


factor out the l terms to get the following optimization with SOS constraints

max_{V ∈ R_n, V(0)=0, s_i ∈ Σ_n} β
s.t.
−(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n        (4.1)
−(β − p)s_6 − V̇ s_7 − V̇(β − p)s_8 − l_2 ∈ Σ_n

If this optimization is feasible, then V shows the system to be asymptotically stable with the Lyapunov requirements holding on all of D. However, these constraints are bilinear in V and the s's, so we will have to solve the problem iteratively. An algorithm to solve the optimization (4.1) and find the largest estimate of the fixed point's region of attraction within D is given in Algorithm 5, which follows the discussion of finding the largest invariant set contained in D.

Having found the largest D = {x ∈ R^n | p(x) ≤ β} over which the Lyapunov conditions hold, we can now formulate a search for the largest level set of V which it contains as

max c
s.t.
{x ∈ R^n | V(x) ≤ c} ⊆ {x ∈ R^n | p(x) ≤ β}

where, as opposed to (4.1), both β and V are fixed. We begin the transformation to an SOS programming problem by forming the empty-set constraint version

max c
s.t.
{x ∈ R^n | V(x) ≤ c, p(x) > β} = ∅


which is equivalent to

max c
s.t.
{x ∈ R^n | V(x) ≤ c, p(x) ≥ β, p(x) ≠ β} = ∅

Now we use the Positivstellensatz to transform the constraint, and pick k = 1 for the monoid to get the SOS programming problem

max_{ŝ_2, ŝ_3, ŝ_4 ∈ Σ_n} c
s.t.
−ŝ_2(c − V) − ŝ_3(p − β) − ŝ_4(p − β)(c − V) − (p − β)^2 ∈ Σ_n        (4.2)

which can be solved with a bisection on c.

We can now combine the optimization to maximize the size of D (4.1) with the level set maximization (4.2) to form the following algorithm for estimating the asymptotically stable fixed point's domain of attraction.

Algorithm 5 (Expanding D) An iterative search to expand the region D in Theorem 10, starting from a candidate Lyapunov function.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0) and pick the maximum degrees of the Lyapunov function, the SOS multipliers and the l polynomials to be d_V, d_{s2}, d_{s3}, d_{s4}, d_{s6}, d_{s7}, d_{s8} and d_{l1}, d_{l2} respectively. Pick the maximum degrees for the level set maximization problem (4.2) and denote them d_{ŝ2}, d_{ŝ3} and d_{ŝ4}. Fix l_k(x) = ε ∑_{j=1}^n x_j^(d_{lk}) for k = 1, 2 and some ε > 0. Additionally set β^(i=0) = 0.

1. Set V = V^(i−1) and solve the linesearch on β where s_2 ∈ Σ_{n,d_{s2}}, s_3 ∈ Σ_{n,d_{s3}}, s_4 ∈


Σ_{n,d_{s4}}, s_6 ∈ Σ_{n,d_{s6}}, s_7 ∈ Σ_{n,d_{s7}} and s_8 ∈ Σ_{n,d_{s8}}

max_{s_2,s_3,s_4,s_6,s_7,s_8} β
s.t.
−(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n        (4.3)
−(β − p)s_6 − V̇ s_7 − V̇(β − p)s_8 − l_2 ∈ Σ_n

Set s_3^(i) = s_3, s_4^(i) = s_4, s_7^(i) = s_7 and s_8^(i) = s_8. Continue to step 2.

2. Set s_3 = s_3^(i), s_4 = s_4^(i), s_7 = s_7^(i) and s_8 = s_8^(i). Solve the linesearch on β where V ∈ R_{n,d_V} with V(0) = 0, s_2 ∈ Σ_{n,d_{s2}} and s_6 ∈ Σ_{n,d_{s6}}

max_{V,s_2,s_6} β
s.t.
−(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n        (4.4)
−(β − p)s_6 − V̇ s_7 − V̇(β − p)s_8 − l_2 ∈ Σ_n

Set β^(i) = β and V^(i) = V. If β^(i) − β^(i−1) is less than a specified tolerance go to step 3, else increment i and go to step 1.

3. Set V = V^(i) and β = β^(i). Solve the linesearch on c where ŝ_2 ∈ Σ_{n,d_{ŝ2}}, ŝ_3 ∈ Σ_{n,d_{ŝ3}} and ŝ_4 ∈ Σ_{n,d_{ŝ4}}

max_{ŝ_2,ŝ_3,ŝ_4} c
s.t.
−ŝ_2(c − V) − ŝ_3(p − β) − ŝ_4(p − β)(c − V) − (p − β)^2 ∈ Σ_n

Set c_max = c. The set {x ∈ R^n | V(x) ≤ c_max} is the resulting estimate of the region of attraction, for it is positively invariant, contained within D, and all of its points converge to the fixed point x = 0.
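The alternation in Algorithm 5 can be sketched as generic scaffolding. In the sketch below, which is not part of the thesis, `feasible(beta)` stands in for an SOS/SDP feasibility solve of (4.3) or (4.4) at a fixed β; it is mocked with simple thresholds so the control flow (bisection linesearch inside a coordinate-wise outer loop) can actually be run.

```python
# Skeleton of the two-block iteration in Algorithm 5.  The oracles below mock
# the SOS feasibility problems: the multiplier step certifies beta up to 2.0
# and the Lyapunov step up to 3.0, mimicking the alternation enlarging the
# certified region.  A real implementation would call an SDP solver instead.
def bisect_beta(feasible, lo=0.0, hi=10.0, tol=1e-6):
    """Largest beta in [lo, hi] with feasible(beta), assuming monotone feasibility."""
    if not feasible(lo):
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

def expanding_d(step1_feasible, step2_feasible, tol=1e-3, max_iter=50):
    beta_prev = 0.0
    for _ in range(max_iter):
        b1 = bisect_beta(step1_feasible)   # step 1: fix V, search multipliers
        b2 = bisect_beta(step2_feasible)   # step 2: fix multipliers, search V
        beta = max(b1, b2)                 # certified beta never decreases
        if beta - beta_prev < tol:
            return beta
        beta_prev = beta
    return beta_prev

beta_star = expanding_d(lambda b: b <= 2.0, lambda b: b <= 3.0)
```

With these mocked oracles the iteration settles at the larger threshold, just as the real algorithm stops once β stops growing between sweeps.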


Remark 4 (Properties of the expanding D algorithm):

• If the algorithm is started from a feasible point, that is, a candidate Lyapunov function for which there exist s_i's that make (4.1) feasible, then the algorithm will always remain feasible. If the system's linearization is stable, then the linearization's quadratic Lyapunov function will always work, since it will stabilize the nonlinear system near the origin.

• Note that V need not be positive definite, since it is required to be positive only on D \ {0}, which is the first constraint in (4.1). However, to be positive over a region about the origin, V must have no linear terms and d_V should be even. Clearly, if V were chosen to be positive definite, then this constraint could simply become V − l_1 ∈ Σ_n.

• Since the s's, l's and ŝ's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:

For the first SOS constraint
max{deg(p s_2), deg(V s_3)} ≥ d_{l1}
max{deg(p s_2), deg(V s_3)} ≥ deg(V p s_4)

For the second SOS constraint
deg(p s_6) ≥ d_{l2}
deg(p s_6) ≥ max{deg(V̇ s_7), deg(V̇ p s_8)}

For the c maximization constraint
max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(p^2)
max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(ŝ_3 p)
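The degree bookkeeping above is mechanical, so it can be encoded directly. The helper below, a small convenience not from the thesis, takes the degrees of p, V, V̇ and the chosen multipliers (using deg(ab) = deg(a) + deg(b) for products) and reports whether all of the stated relations hold; the numbers in the call are hypothetical, chosen only to exercise the check.

```python
# Checker for the degree relations in Remark 4 (expanding D case).  Returns True
# when all of the listed inequalities hold for the supplied maximum degrees.
def expanding_d_degrees_ok(dp, dV, dVdot, ds2, ds3, ds4, ds6, ds7, ds8,
                           dl1, dl2, ds2h, ds3h, ds4h):
    first = (max(dp + ds2, dV + ds3) >= dl1 and
             max(dp + ds2, dV + ds3) >= dV + dp + ds4)
    second = (dp + ds6 >= dl2 and
              dp + ds6 >= max(dVdot + ds7, dVdot + dp + ds8))
    cmax = (max(ds2h + dV, ds4h + dp + dV) >= 2 * dp and
            max(ds2h + dV, ds4h + dp + dV) >= ds3h + dp)
    return first and second and cmax

# Hypothetical degree choices for a system with deg(Vdot) = 4:
ok = expanding_d_degrees_ok(dp=2, dV=2, dVdot=4, ds2=2, ds3=2, ds4=0,
                            ds6=4, ds7=0, ds8=0, dl1=4, dl2=4,
                            ds2h=2, ds3h=2, ds4h=2)
```

Running such a check before assembling the SDP catches inconsistent degree choices that would make the SOS constraints trivially infeasible.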


4.2.2 Expanding Interior Algorithm

The previous algorithm looks to Theorem 10 and searches for estimates of the region of attraction by enlarging D. However, since D is described by the level sets of p and the estimate of the fixed point's region of attraction is the largest level set of V that is contained in D, it is possible to have a large D that contains a much smaller largest level set of V. This algorithm takes the other approach, requiring that the set whose size is maximized be contained within an invariant region. It originally appeared in the context of state feedback controller design in [13], and appears in a modified version in §4.4.2.

Consider again the system (3.1). If we define a variable sized region

P_β := {x ∈ R^n | p(x) ≤ β}

for p ∈ Σ_n positive definite, we can estimate the region of attraction by maximizing β subject to the constraint that all of the points in P_β converge to the origin under the flow of the system. Following Theorem 10, if we define

D := {x ∈ R^n | V(x) ≤ 1}

for some as yet unknown candidate Lyapunov function, then P_β must be contained in D. Additionally, for the theorem's other assumptions,

{x ∈ R^n | V(x) ≤ 1} \ {0} ⊆ {x ∈ R^n | V̇(x) < 0}

and, since V and thus D is unknown, the only effective way to ensure that V is positive on D \ {0} is to require that V be positive everywhere away from x = 0.
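The containment P_β ⊆ {V ≤ 1} that this formulation enforces can be sanity-checked numerically before any SOS machinery is invoked, by sampling the boundary of P_β. The sketch below uses a hypothetical quadratic V, not one from the text; sampling can refute a claimed containment but, unlike an SOS certificate, cannot prove it.

```python
import math

# Numerical spot-check of P_beta ⊆ {V ≤ 1} for p(x) = x1^2 + x2^2, by sampling
# the boundary circle p(x) = beta.  V here is a hypothetical positive definite
# quadratic, V(x) = (x1^2 + x1*x2 + x2^2)/4.  Since this V is a homogeneous
# quadratic, its maximum over the disk P_beta is attained on the boundary, so
# sampling the circle suffices for this example.
def V(x1, x2):
    return (x1 * x1 + x1 * x2 + x2 * x2) / 4.0

def pbeta_inside_unit_level_set(beta, samples=720):
    r = math.sqrt(beta)
    for k in range(samples):
        th = 2.0 * math.pi * k / samples
        if V(r * math.cos(th), r * math.sin(th)) > 1.0:
            return False
    return True

ok_small = pbeta_inside_unit_level_set(2.0)   # V ≤ 3*beta/8 = 0.75 on the circle
ok_large = pbeta_inside_unit_level_set(4.0)   # V reaches 1.5 at 45 degrees
```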


The problem of finding the best estimate of the region of attraction in this framework can be written with set emptiness constraints as

max_{V ∈ R_n, V(0)=0} β
s.t.
{x ∈ R^n | V(x) ≤ 0, x ≠ 0} = ∅
{x ∈ R^n | p(x) ≤ β, V(x) ≥ 1, V(x) ≠ 1} = ∅        (4.5)
{x ∈ R^n | V(x) ≤ 1, V̇(x) ≥ 0, x ≠ 0} = ∅

which, as with the expanding D version, is not semi-algebraic due to the x ≠ 0 constraints. If we use the same workaround as before, replacing these constraints with l_1(x) ≠ 0 and l_2(x) ≠ 0 for l_1, l_2 ∈ Σ_n positive definite, we have

max_{V ∈ R_n, V(0)=0} β
s.t.
{x ∈ R^n | V(x) ≤ 0, l_1(x) ≠ 0} = ∅
{x ∈ R^n | p(x) ≤ β, V(x) ≥ 1, V(x) ≠ 1} = ∅        (4.6)
{x ∈ R^n | V(x) ≤ 1, V̇(x) ≥ 0, l_2(x) ≠ 0} = ∅

Applying the Positivstellensatz (Theorem 4), the optimization becomes

max β over V ∈ R_n with V(0) = 0, k_1, k_2, k_3 ∈ Z_+ and s_1, …, s_10 ∈ Σ_n
s.t.
s_1 − V s_2 + l_1^(2k_1) = 0
s_3 + (β − p)s_4 + (V − 1)s_5 + (β − p)(V − 1)s_6 + (V − 1)^(2k_2) = 0        (4.7)
s_7 + (1 − V)s_8 + V̇ s_9 + (1 − V)V̇ s_10 + l_2^(2k_3) = 0


which is not amenable to SOS programming unless we pick convenient values for some of the s's and the k's. To keep the degree of the problem down, we pick k_1 = k_2 = k_3 = 1. To simplify the first constraint, we pick s_2 = l_1 and factor l_1 out of s_1. The second constraint has the term (V − 1)^2, which is quadratic in the coefficients of V and thus inhibits optimization over V, so we set s_3 = s_4 = 0 and factor out a (V − 1) term. The third constraint has the term (1 − V)V̇ s_10, which is quadratic in the coefficients of V, so we set s_10 = 0 and factor out l_2. These selections allow us to write the optimization in a form that is suitable for an iterative algorithm:

max_{V ∈ R_n, V(0)=0; s_6, s_8, s_9 ∈ Σ_n} β
s.t.
V − l_1 ∈ Σ_n
−((β − p)s_6 + (V − 1)) ∈ Σ_n        (4.8)
−((1 − V)s_8 + V̇ s_9 + l_2) ∈ Σ_n

We can now propose an algorithm to find an estimate of the region of attraction for the fixed point x = 0 of the system (3.1) by iteratively solving the SOS constrained optimization (4.8).

Algorithm 6 (Expanding Interior) An iterative search to expand the region P_β, and thus the region D := {x ∈ R^n | V(x) ≤ 1} in Theorem 10, starting from a positive definite candidate Lyapunov function.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0) and pick the maximum degrees of the Lyapunov function, the SOS multipliers and


the l polynomials to be d_V, d_{s6}, d_{s8}, d_{s9} and d_{l1}, d_{l2} respectively. Fix l_k(x) = ε ∑_{j=1}^n x_j^(d_{lk}) for k = 1, 2 and some ε > 0. Additionally set β^(i=0) = 0.

1. Set V = V^(i−1) and solve the linesearch on β where s_6 ∈ Σ_{n,d_{s6}}, s_8 ∈ Σ_{n,d_{s8}} and s_9 ∈ Σ_{n,d_{s9}}

max_{s_6,s_8,s_9} β
s.t.
−((β − p)s_6 + (V − 1)) ∈ Σ_n        (4.9)
−((1 − V)s_8 + V̇ s_9 + l_2) ∈ Σ_n

Set s_8^(i) = s_8 and s_9^(i) = s_9. Continue to step 2.

2. Set s_8 = s_8^(i) and s_9 = s_9^(i). Solve the linesearch on β where V ∈ R_{n,d_V} with V(0) = 0 and s_6 ∈ Σ_{n,d_{s6}}

max_{V,s_6} β
s.t.
V − l_1 ∈ Σ_n
−((β − p)s_6 + (V − 1)) ∈ Σ_n        (4.10)
−((1 − V)s_8 + V̇ s_9 + l_2) ∈ Σ_n

Set β^(i) = β and V^(i) = V. If β^(i) − β^(i−1) is less than a specified tolerance go to step 3, else increment i and go to step 1.

3. The set {x ∈ R^n | V^(i)(x) ≤ 1} contains P_{β^(i)} and is the largest estimate of the fixed point's region of attraction.

Remark 5 (Properties of the expanding interior algorithm):


• If the algorithm is started from a feasible point, that is, a Lyapunov function for which there exist s_i's that make (4.8) feasible, then the algorithm will always remain feasible. If the system's linearization is stable, then the linearization's quadratic Lyapunov function will always work, since it will stabilize the nonlinear system near the origin.

• Note that, unlike in Algorithm 5, V must be positive definite, so it should have no linear terms and d_V should be even.

• Since the s's and the l's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:

For the first SOS constraint
d_V = d_{l1}

For the second SOS constraint
deg(p s_6) ≥ d_V

For the third SOS constraint
deg(V s_8) ≥ deg(V̇ s_9)
deg(V s_8) = d_{l2}

4.2.3 Estimating the Region of Attraction Example

To compare the effectiveness of Algorithm 5 (Expanding D) and Algorithm 6 (Expanding Interior), consider estimating the region of attraction for the following system, a damped pendulum where the sine term has been replaced with the first two terms of its


Taylor series expansion

ẋ_1 = x_2
ẋ_2 = −(8/10) x_2 − (x_1 − x_1^3/6)        (4.11)

The system has three equilibrium points: (0, 0) and (±√6, 0). Looking at the linearizations of the system about each equilibrium point we find, as expected, that the first is a stable sink while the others are saddles.

To use either Algorithm 5 or 6 we need to pick p and find a suitable starting candidate Lyapunov function. In order to demonstrate the way the two algorithms work, we will pick a completely uninspired polynomial whose level sets we will expand:

p(x) = x_1^2 + x_2^2

Additionally we will pick the uninspired candidate Lyapunov function V(x) = x∗Px, where P comes from the LMI feasibility problem

P ≻ 0
A_f∗ P + P A_f ≺ 0

with A_f the linearization of the system (4.11). Solving this feasibility problem, we find a positive definite matrix P, with entries including 25.33 and 81.98, which we will use to form the candidate Lyapunov function.

Setting both algorithms' stopping tolerance to β^(i) − β^(i−1) = 0.01 and setting the degrees as follows, we can compare the two algorithms. For the expanding D algorithm, set d_V = 2, d_{s2} = d_{s3} = d_{s6} = 2, d_{s4} = d_{s7} = d_{s8} = 0 and d_{l1} = d_{l2} = 4. For the expanding interior algorithm, set d_V = 2, d_{s8} = 2, d_{s6} = d_{s9} = 0, d_{l1} = 2 and d_{l2} = 4.
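The candidate V can be built exactly as described: linearize (4.11) at the origin and pick any P certifying stability of the linearization. The sketch below solves the Lyapunov equation A∗P + PA = −I by hand (a particular solution of the strict LMI above; the thesis solves an LMI feasibility problem instead, so its numbers differ) and also confirms the saddle classification at (±√6, 0) via the sign of the Jacobian determinant.

```python
# Linearization of (4.11) at the origin: d/dx1 of -(x1 - x1^3/6) at 0 is -1.
A = [[0.0, 1.0],
     [-1.0, -0.8]]
a11, a12 = A[0]
a21, a22 = A[1]

# Symmetric P = [[p11, p12], [p12, p22]] solving A'P + PA = -I gives three
# linear equations; solved here by substitution for this 2x2 case.
p12 = -1.0 / (2.0 * a21)                       # from 2*(a11*p11 + a21*p12) = -1
p22 = (1.0 + 2.0 * a12 * p12) / (-2.0 * a22)   # from 2*(a12*p12 + a22*p22) = -1
p11 = -((a11 + a22) * p12 + a21 * p22) / a12   # from the off-diagonal equation

# P is positive definite iff its leading principal minors are positive.
pos_def = p11 > 0 and p11 * p22 - p12 * p12 > 0

# At (+/-sqrt(6), 0) the Jacobian is [[0, 1], [2, -0.8]]; a negative
# determinant means one positive and one negative eigenvalue, i.e. a saddle.
saddle_det = 0.0 * (-0.8) - 1.0 * 2.0
```

Here P = [[1.65, 0.5], [0.5, 1.25]], so V(x) = 1.65 x₁² + x₁x₂ + 1.25 x₂² is one valid (if uninspired) starting candidate.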


We can see the progress of the two algorithms in Table 4.1, which shows the radius of the region being expanded (√β in both cases). Note that D contains the estimate of the region of attraction, while the estimate contains P_β.

Iteration index   D radius   P_β radius
1                 2.42       0.85
2                 2.42       1.16
3                 -          1.59
4                 -          1.84
5                 -          2.05
6                 -          2.05

Table 4.1: Results of applying Algorithms 5 and 6 to (4.11).

The results of running both algorithms are shown in Figure 4.1. The large dots indicate the system's fixed points, and the thin lines connecting them are the system's stable and unstable manifolds. The dashed lines show the maximal D from the expanding D algorithm and the maximal P_β from the expanding interior algorithm. The thick-lined ellipses show the two estimates of the region of attraction; the smaller ellipse, contained in D, comes from Algorithm 5, while the larger ellipse, which contains P_β, comes from Algorithm 6.

Even though the Lyapunov function is quadratic and the SOS multipliers are of low degree, we can look at Figure 4.1 and tell that, for both algorithms, the largest value of β possible has been found for the given p. In the expanding D case, the region D borders the system's saddle points, where f, and thus V̇, is zero. In the expanding interior case, by contrast, P_β comes right up to two of the system's stable manifolds, by which the region of attraction must be bounded.
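The final expanding-interior result in Table 4.1 can be spot-checked by simulation: β = 2.05², so any initial condition with x₁² + x₂² ≤ 2.05² should flow to the origin. The sketch below Euler-integrates (4.11) from one such point; this checks a single trajectory, not the whole set, which is what the SOS certificate establishes.

```python
import math

# Euler integration of (4.11) from a point inside the certified region P_beta.
def step(x1, x2, dt):
    return x1 + dt * x2, x2 + dt * (-0.8 * x2 - (x1 - x1**3 / 6.0))

x1, x2 = 1.8, 0.0                       # 1.8 < 2.05, so (1.8, 0) lies in P_beta
inside = x1 * x1 + x2 * x2 <= 2.05**2

dt = 1e-3
for _ in range(40000):                  # integrate for 40 time units
    x1, x2 = step(x1, x2, dt)

final_norm = math.hypot(x1, x2)         # should be essentially zero
```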


Figure 4.1: Phase plot of the system (4.11) with the largest D and P_β as well as their ellipsoidal estimates of the region of attraction, which they respectively circumscribe and inscribe.

Note that in Figure 4.1 the region D crosses two of the system's stable manifolds. Clearly the region of attraction cannot cross any of the system's stable or unstable manifolds, so all of the area of D outside the manifolds cannot be part of the estimate of the region of attraction. This skews the Lyapunov function that Algorithm 5 designs, since it is expanding D over areas where the level sets cannot follow. Additionally, it seems unrealistic to assume that, for a general polynomial system, the region where V̇ < 0 is described by a level set of a low degree polynomial.

Clearly both of these algorithms would perform better if p were chosen to have level sets that more closely align with the shape of the region of attraction, but the uninspired choice of p here indicates what will happen in higher dimensions, where we are unable to visualize the region of attraction.


4.3 Disturbance Analysis

The previous local SOS applications were centered on proving stability; we will now shift gears to consider the local effect of external disturbances on a polynomial system. A wealth of interesting results of this type for linear systems is posed in an LMI framework in [5]. Using SOS programming we can, in general, formulate parallel versions for polynomial systems.

In most cases the generalization from linear systems and LMIs to polynomial systems and SOS programs is straightforward, as is shown in the example below in §4.3.1. However, it is very important to point out that the LMI tests given in [5] provide conditions for the system to have a given characteristic globally, which, as we have seen in the stability and controller synthesis cases, may be impossible or just undesired. So, in general, it will make sense to add a few set containment constraints to make the result hold only on some subset of the whole space.

4.3.1 Reachable Set Bounds under Unit Energy Disturbances

The original problem in [5, §6.1.1] is, given the linear system

ẋ = Ax + B_w w

with x ∈ R^n and w ∈ R^{n_w}, to bound the set of points that are reachable from x_0 = 0 in T time units if the disturbance is constrained to have unit energy, ∫_0^T w(t)∗w(t) dt ≤ 1. The bounding approach used is Lyapunov function based. Let the Lyapunov function be quadratic,

V(x) = x∗Px


with P an as yet unknown positive definite matrix. Then, if

V̇(x) = ∇V∗(Ax + B_w w) ≤ w∗w

for all x, w, we can integrate both sides from 0 to T to find

V(x(T)) − V(x_0) ≤ ∫_0^T w(t)∗w(t) dt ≤ 1

which, since x_0 = 0, implies that the set {x ∈ R^n | x∗Px ≤ 1} contains x(T). The positive definiteness of V and the inequality bound on V̇ can be written in an LMI setup as P ≻ 0 and

[ A∗P + PA    P B_w ]
[ B_w∗ P       −I   ]  ≼ 0

These conditions provide a set that contains the reachable set from x_0 = 0 with unit energy disturbances, and, with a cost function dependent on the eigenvalues of P, we could optimize the bound for tightness.

Approach for Polynomial Systems

We can now follow the reasoning above, as originally done in [13], to mimic the linear system's reachable set bound for the following polynomial system,

ẋ = f(x) + g_w(x)w        (4.12)

with x(t) ∈ R^n, w(t) ∈ R^{n_w}, f ∈ R_n^n, f(0) = 0 and g_w ∈ R_n^{n×n_w}. The two central constraints of the Lyapunov based approach are that V ∈ R_n must be positive definite, and that

∇V(x)∗(f(x) + g_w(x)w) ≤ w∗w


for all x, w. If we can find a V that satisfies these constraints, we know that the set Ω_1 := {x ∈ R^n | V(x) ≤ 1} bounds the unit energy reachable set. At this point, the transition of the problem from linear to polynomial system is complete, and we would just need to write the constraints above as SOS conditions.

However, in this setup the V̇ constraint is required to hold for all x, w, which seems extreme since φ_t(x_0) remains in Ω_1 for all t ∈ [0, T]. Additionally, we have no easy way to minimize the size of the bounding set.

To get around the problems presented above, we will introduce a fixed, user chosen function p ∈ Σ_n that is positive definite, and make the following definition

P_β := {x ∈ R^n | p(x) ≤ β}

of a set that will contain Ω_1. Additionally, since the state trajectory always remains in Ω_1, we will only require that the V̇ inequality hold for (x, w) ∈ Ω_1 × R^{n_w}. Now we can pose the following optimization to minimize the size of the set that bounds the unit energy reachable set, Ω_1, where the constraints mentioned above have been turned into their set emptiness forms

min_V β
s.t.
{x ∈ R^n | V(x) ≤ 0, l(x) ≠ 0} = ∅
{x ∈ R^n | V(x) ≤ 1, p(x) ≥ β, p(x) ≠ β} = ∅
{x ∈ R^n, w ∈ R^{n_w} | V(x) ≤ 1, ∇V(x)∗(f(x) + g_w(x)w) ≥ w∗w, ∇V(x)∗(f(x) + g_w(x)w) ≠ w∗w} = ∅


with l ∈ Σ_n, positive definite. Using the Positivstellensatz, this becomes

min_V β
s.t.
s_1 − V s_2 + l^(2k_1) = 0
s_3 + (1 − V)s_4 + (p − β)s_5 + (1 − V)(p − β)s_6 + (p − β)^(2k_2) = 0
s_7 + (∇V(x)∗(f(x) + g_w(x)w) − w∗w)s_8 + (1 − V)s_9 + (1 − V)(∇V(x)∗(f(x) + g_w(x)w) − w∗w)s_10 + (∇V(x)∗(f(x) + g_w(x)w) − w∗w)^(2k_3) = 0

with s_1, …, s_6 ∈ Σ_n, s_7, …, s_10 ∈ Σ_{n+n_w} and k_1, k_2, k_3 ∈ Z_+. To begin the transition from set emptiness constraints into SOS constraints, pick k_1 = k_2 = k_3 = 1. For the first constraint, set s_2 = l and factor out an l. For the second constraint, we can just rearrange it. And for the third constraint, we pick s_7 = s_9 = 0 and factor out a (∇V∗(f + g_w w) − w∗w). In all this gives us the optimization

min_V β
s.t.
V − l ∈ Σ_n
−((1 − V)s_10 + ∇V∗(f + g_w w) − w∗w) ∈ Σ_{n+n_w}        (4.13)
−((1 − V)s_4 + (p − β)s_5 + (1 − V)(p − β)s_6 + (p − β)^2) ∈ Σ_n

which we can solve iteratively by alternating linesearches on the s_i's and V. For further details and a numeric example see [13, §III].
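In the linear special case the whole construction can be carried out by hand in one dimension. The sketch below is a hypothetical scalar illustration, not an example from the text: for ẋ = −x + w with V(x) = p x², the condition V̇ = 2px(−x + w) ≤ w∗w for all x, w is the 2×2 matrix inequality [[−2p, p], [p, −1]] ≼ 0, which holds iff 0 < p ≤ 2; p = 2 gives the tightest bounding level set {2x² ≤ 1}, i.e. |x(T)| ≤ 1/√2, which for this system is exactly the unit-energy reachable bound.

```python
import math

p = 2.0
# Negative semidefiniteness of the 2x2 matrix [[-2p, p], [p, -1]]:
# nonpositive trace and nonnegative determinant.
nsd = (-2.0 * p - 1.0 <= 0.0) and ((-2.0 * p) * (-1.0) - p * p >= 0.0)

# Drive xdot = -x + w with the (near) worst-case unit-energy input
# w(t) = sqrt(2) * exp(t - T) on [0, T]; its energy is 1 - exp(-2T) <= 1.
T, nsteps = 8.0, 80000
dt, x = T / nsteps, 0.0
for k in range(nsteps):
    t = k * dt
    w = math.sqrt(2.0) * math.exp(t - T)
    x += dt * (-x + w)

V_final = p * x * x      # should approach, but not exceed, the level V = 1
```

The simulated endpoint sits essentially on the boundary of Ω_1 = {2x² ≤ 1}, showing that the bound is tight for this scalar system.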


4.3.2 Set Invariance under Peak Bounded Disturbances

Considering again a polynomial system subject to disturbances as in (4.12),

ẋ = f(x) + g_w(x)w

we can now look at finding the maximum peak disturbance value such that a given set remains invariant under these bounded disturbances and the action of the system's dynamics. Let the peak of w be bounded by

‖w(t)‖_∞ ≤ √γ

and define the invariant set as

Ω_1 := {x ∈ R^n | V(x) ≤ 1}

for some fixed V ∈ R_n, positive definite. We know that if V̇(x, w) ≤ 0 on the boundary of Ω_1 for all w meeting the peak bound, then the flow of the system from any point in Ω_1 can never leave Ω_1, which makes it invariant. In set containment terms we can write this relationship as

{x ∈ R^n, w ∈ R^{n_w} | V(x) = 1} ∩ {x ∈ R^n, w ∈ R^{n_w} | w∗w ≤ γ}
⊆ {x ∈ R^n, w ∈ R^{n_w} | V̇(x, w) ≤ 0}        (4.14)

which can be rewritten in set emptiness form as

{x ∈ R^n, w ∈ R^{n_w} | V(x) − 1 = 0, γ − w∗w ≥ 0, V̇(x, w) ≥ 0, V̇(x, w) ≠ 0} = ∅

Employing the Positivstellensatz, this becomes

s_0 + s_1(γ − w∗w) + s_2 V̇ + s_3(γ − w∗w)V̇ + V̇^(2k) + q(V − 1) = 0


with k ∈ Z_+, q ∈ R_{n+n_w} and s_0, s_1, s_2, s_3 ∈ Σ_{n+n_w}.

Using our standard approach of k = 1, we can write the following SOS constraint that guarantees invariance under bounded w,

−s_1(γ − w∗w) − s_2 V̇ − s_3(γ − w∗w)V̇ − V̇^2 − q(V − 1) ∈ Σ_{n+n_w}        (4.15)

Notice that this SOS condition has terms that are not linear in the monomials of V, and thus there is no way to use our convex optimization approach to adjust V while checking this condition. Since (4.15) is linear in γ, we can search for the maximum peak disturbance for which the set is invariant by searching over q and the s_i's to maximize γ subject to (4.15). We will need the following degree relationship to hold to make (4.15) possibly feasible:

max( deg(s_1) + 2, deg(s_2 V̇), deg(qV) ) ≥ max( deg(s_3 V̇) + 2, 2 deg(V̇) )

If we set x_0 = 0, then the invariant set Ω_1 bounds the system's reachable set under disturbances with peak less than √γ. This bound is similar to, but less stringent than, the bound for linear systems given in [5].

In §4.4.3 we design two state feedback controllers to make the largest region possible attracted to the origin, and then we provide an example of the technique above by finding the largest peak disturbance that these controllers can reject.

Effect of ‖w(t)‖_∞ on ‖x(t)‖_∞

Using the bounded peak disturbance techniques above to find the largest disturbance peak value for which Ω_1 is invariant, we can then bound the peak size of the system's


state to get a relationship that is similar to the induced L_∞ → L_∞ norm from disturbance to state for this invariant set.

For a given V, we solve the optimization to find the largest γ such that (4.15) is feasible. Then we can bound the size of the state by optimizing to find the smallest α such that

Ω_1 = {x ∈ R^n | V(x) ≤ 1} ⊆ {x ∈ R^n | x∗x ≤ α}

This containment constraint is easily solved with a generalized S-procedure following from §2.2.1. From this point we know that the following implication holds

‖w(t)‖_∞ ≤ √γ ⇒ ‖x(t)‖_∞ ≤ √α

as long as x_0 ∈ Ω_1, which provides our induced norm-like bound.

As with the previous disturbance analysis technique, we demonstrate this one in §4.4.3 on the designed state feedback controllers.

4.3.3 Induced L_2 → L_2 Gain

Consider the disturbance driven system with outputs (3.3),

ẋ = f(x) + g_w(x)w
y = h(x)

with x(t) ∈ R^n, w(t) ∈ R^{n_w}, y(t) ∈ R^p, f ∈ R_n^n, f(0) = 0, g_w ∈ R_n^{n×n_w}, and h ∈ R_n^p with h(0) = 0. In §3.3 we studied this system's global induced L_2 → L_2 gain from w to y, and now we look to bound this gain on some invariant region.

For a region Ω_1 = {x ∈ R^n | V(x) ≤ 1} as in §4.3.2, that is invariant under disturbances with ‖w(t)‖_∞ ≤ √γ, we can bound the induced L_2 → L_2 gain from w to y on


this invariant set by finding a positive definite H ∈ R_n and β ≥ 0 such that the following set containment holds

{x ∈ R^n, w ∈ R^{n_w} | w∗w ≤ γ} ∩ {x ∈ R^n, w ∈ R^{n_w} | V(x) ≤ 1}
⊆ {x ∈ R^n, w ∈ R^{n_w} | Ḣ(x, w) + h(x)∗h(x) − βw∗w ≤ 0}        (4.16)

If we can find a β, H pair to make (4.16) hold, then we can follow the steps from §3.3 to show that

‖y(t)‖_2 / ‖w(t)‖_2 ≤ √β

provided that x_0 = 0 and ‖w(t)‖_∞ ≤ √γ.

We can search for the tightest bound on the induced norm by employing a generalized S-procedure to satisfy (4.16) and solving the following optimization

min_{H ∈ R_n} β
s.t.
H − l ∈ Σ_n        (4.17)
−(Ḣ(x, w) + h(x)∗h(x) − βw∗w) − s_1(γ − w∗w) − s_2(1 − V) ∈ Σ_{n+n_w}

with s_1, s_2 ∈ Σ_{n+n_w} and l ∈ Σ_n, positive definite.

In an effort to make the optimization (4.17) feasible we will pick the degrees of s_1 and s_2 so that

deg(s_1) + 2 ≥ deg(Ḣ + h∗h)

and

deg(s_2) + deg(V) ≥ deg(Ḣ + h∗h)

Once more, we will exhibit this technique to study the performance of the state feedback controllers that we design in §4.4.3.
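The storage-function search in (4.16)-(4.17) collapses to a one-parameter problem in the linear, global special case. The sketch below is a hypothetical scalar illustration, not from the text: for ẋ = −x + w, y = x and H(x) = p x², the condition Ḣ + y² − βw² ≤ 0 for all x, w reduces to [[1 − 2p, p], [p, −β]] ≼ 0, which holds iff p ≥ 1/2 and β ≥ p²/(2p − 1); minimizing over p gives p = 1 and β = 1, matching the true H∞ norm of the transfer function 1/(s + 1).

```python
import math

# Tightest beta as a function of the storage-function parameter p, from the
# 2x2 matrix inequality [[1 - 2p, p], [p, -beta]] <= 0 (valid for p > 1/2).
def beta_required(p):
    return p * p / (2.0 * p - 1.0)

# Crude grid search over p in (0.5, 3.5]; a real implementation would bisect
# on beta with an SDP feasibility check, as in the SOS formulation.
best_p, best_beta = None, float("inf")
steps = 4000
for k in range(1, steps + 1):
    p = 0.5 + 3.0 * k / steps
    b = beta_required(p)
    if b < best_beta:
        best_p, best_beta = p, b

gain_bound = math.sqrt(best_beta)     # induced L2 -> L2 gain bound, sqrt(beta)
```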


4.4 State Feedback Controller Design

In §4.2 we found two SOS programming based approaches to prove asymptotic stability for a polynomial system and estimate the domain of attraction of the fixed point at the origin. We now expand these approaches for state feedback controller design.

Consider again the system (3.5),

ẋ = f(x) + g(x)u

for x ∈ R^n with f ∈ R_n^n, f(0) = 0, and u ∈ R^m with g ∈ R_n^{n×m}. If we allow u to be generated by a state feedback controller K ∈ R_n^m with K(0) = 0, we get the closed loop system (3.6)

ẋ = f(x) + g(x)K(x)

where K is still unknown. We can now look to find state feedback controller design algorithms that parallel Algorithms 5 and 6.

4.4.1 Expanding D Algorithm for State Feedback Design

We will follow the steps of the development of the expanding D algorithm in §4.2.1 to develop a state feedback controller design algorithm. As before, we restrict V ∈ R_n with V(0) = 0 and describe D with the semi-algebraic set

D := {x ∈ R^n | p(x) ≤ β}


with p ∈ Σ_n positive definite and β ≥ 0. Now the requirements of Theorem 10 for asymptotic stability of the closed loop system (3.6) become

{x ∈ R^n | p(x) ≤ β} \ {0} ⊆ {x ∈ R^n | V(x) > 0}
{x ∈ R^n | p(x) ≤ β} \ {0} ⊆ {x ∈ R^n | ∇V(x)∗(f(x) + g(x)K(x)) < 0}

If we can find V, K to satisfy these conditions for a fixed p and any value of β > 0, then the closed loop system (3.6) is asymptotically stable about the fixed point x = 0.

As in Algorithm 5, the largest invariant set that we can demonstrate converges to the origin is the largest level set of V that is contained in D. Following the manipulations that led to the SOS constraints in the stability case, we find that maximizing D under the set containments above is equivalent to

max_{V ∈ R_n, V(0)=0; K ∈ R_n^m, K(0)=0; s_i ∈ Σ_n} β
s.t.
−(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n        (4.18)
−(β − p)s_6 − ∇V∗(f + gK)s_7 − ∇V∗(f + gK)(β − p)s_8 − l_2 ∈ Σ_n

Again, since the SOS conditions are trilinear in V, K and the s's, they will have to be checked iteratively. Also, as before, the largest estimate of the region of attraction is the largest invariant set contained in D and is found using the same optimization as in §4.2.1. The following algorithm uses a nested iterative approach to maximize the size of D by adjusting V, K and the s's, and then finds the largest level set of V contained in D to estimate the region of attraction.
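The closed-loop decrease condition underlying (4.18) can be checked by hand in a minimal scalar example. The one below is hypothetical, not from the thesis: for the unstable plant ẋ = x + u with controller K(x) = −2x and V(x) = x², the quantity −∇V∗(f + gK) = 2x² is itself a sum of squares, so V certifies global asymptotic stability of the closed loop; the code simply evaluates the polynomial on a grid as a cheap numerical surrogate for that certificate.

```python
# Closed-loop Lyapunov decrease for a hypothetical scalar plant xdot = x + u
# with state feedback K(x) = -2x and V(x) = x^2.
def Vdot(x):
    f, g, K = x, 1.0, -2.0 * x
    return 2.0 * x * (f + g * K)       # = 2x(x - 2x) = -2x^2

# grad(V)*(f + gK) <= 0 everywhere (and < 0 away from the origin).
decrease_ok = all(Vdot(x / 100.0) <= 0.0 for x in range(-500, 501))
```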


Algorithm 7 (Expanding D State Feedback Design) An iterative search to expand the region D in Theorem 10 starting from a candidate Lyapunov function and a candidate controller.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0) and the candidate controller K^(i=0). Pick the maximum degrees of the Lyapunov function, the controller, the SOS multipliers and the l polynomials to be d_V, d_K, d_s2, d_s3, d_s4, d_s6, d_s7, d_s8 and d_l1, d_l2 respectively. Pick the maximum degrees for the level set maximization problem (4.2) and denote them d_ŝ2, d_ŝ3 and d_ŝ4. Fix l_k(x) = ε Σ_{j=1}^n x_j^{d_lk} for k = 1, 2 and some ε > 0. Additionally set β^(i=0) = 0.

1. SOS Weight Iteration:
   Set V = V^(i−1) and K = K^(i−1) and solve the linesearch on β where s_2 ∈ Σ_{n,d_s2}, s_3 ∈ Σ_{n,d_s3}, s_4 ∈ Σ_{n,d_s4}, s_6 ∈ Σ_{n,d_s6}, s_7 ∈ Σ_{n,d_s7} and s_8 ∈ Σ_{n,d_s8}

       max_{s_2,s_3,s_4,s_6,s_7,s_8} β
       s.t.
           −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n
           −(β − p)s_6 − ∇V*(f + gK)s_7 − ∇V*(f + gK)(β − p)s_8 − l_2 ∈ Σ_n

   Set s_7^(i) = s_7, s_8^(i) = s_8 and continue to step 2.

2. Controller Iteration:
   Set V = V^(i−1), s_7 = s_7^(i) and s_8 = s_8^(i). Solve the linesearch on β where s_2 ∈ Σ_{n,d_s2},


s_3 ∈ Σ_{n,d_s3}, s_4 ∈ Σ_{n,d_s4}, s_6 ∈ Σ_{n,d_s6} and K ∈ R_{n,d_K}^m with K(0) = 0

       max_{K,s_2,s_3,s_4,s_6} β
       s.t.
           −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n
           −(β − p)s_6 − ∇V*(f + gK)s_7 − ∇V*(f + gK)(β − p)s_8 − l_2 ∈ Σ_n

   Set K^(i) = K, s_3^(i) = s_3 and s_4^(i) = s_4. Continue to step 3.

3. Lyapunov Function Iteration:
   Set K = K^(i), s_3 = s_3^(i), s_4 = s_4^(i), s_7 = s_7^(i) and s_8 = s_8^(i). Solve the linesearch on β where s_2 ∈ Σ_{n,d_s2}, s_6 ∈ Σ_{n,d_s6} and V ∈ R_{n,d_V} with V(0) = 0

       max_{V,s_2,s_6} β
       s.t.
           −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n
           −(β − p)s_6 − ∇V*(f + gK)s_7 − ∇V*(f + gK)(β − p)s_8 − l_2 ∈ Σ_n

   Set V^(i) = V, β^(i) = β and continue to step 4.

4. If β^(i) − β^(i−1) is less than a specified tolerance go to step 5, else increment i and go to step 1.

5. Set V = V^(i) and β = β^(i). Solve the linesearch on c where ŝ_2 ∈ Σ_{n,d_ŝ2}, ŝ_3 ∈ Σ_{n,d_ŝ3} and ŝ_4 ∈ Σ_{n,d_ŝ4}

       max_{ŝ_2,ŝ_3,ŝ_4} c
       s.t.
           −ŝ_2(c − V) − ŝ_3(p − β) − ŝ_4(p − β)(c − V) − (p − β)^2 ∈ Σ_n


Set c_max = c. The set {x ∈ R^n | V(x) ≤ c_max} is the resulting estimate of the region of attraction, for it is positively invariant, contained within D, and all of its points converge to the fixed point x = 0.

Remark 6 (Properties of the expanding D state feedback algorithm):

• As with the stability version of the algorithm, if the algorithm is started from a feasible point it will always remain feasible. If the system's linearization is controllable, then a linear controller that stabilizes the linearized system and the resulting quadratic Lyapunov function will always work, since they will stabilize the nonlinear system near the origin. Since we can be guaranteed a feasible starting point, we do not need to consider separate K and V variants of the algorithm.

• As with the stability version, V need not be positive definite, since it is required to be positive only on D \ {0}. However, to be positive over a region about the origin, V must have no linear terms and d_V should be even. Clearly, if V were chosen to be positive definite, then the first constraint in (4.18) can just become V − l_1 ∈ Σ_n.

• Since the s's, l's and the ŝ's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:

  For the first SOS constraint
      max{deg(p s_2), deg(V s_3)} ≥ d_l1
      max{deg(p s_2), deg(V s_3)} ≥ deg(V p s_4)

  For the second SOS constraint
      deg(p s_6) ≥ d_l2
      deg(p s_6) ≥ max{deg(∇V*(f + gK)s_7), deg(∇V*(f + gK)p s_8)}


For the c maximization constraint
    max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(p^2)
    max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(ŝ_3 p)

4.4.2 Expanding Interior Algorithm for State Feedback Design

As with the expanding D algorithm, we can now adapt the expanding interior algorithm to search for state feedback controllers to stabilize (3.5), as was proposed in [13]. As with the stability version, we define a variable sized region

    P_β := {x ∈ R^n | p(x) ≤ β}

for p ∈ Σ_n and positive definite. We then define

    D := {x ∈ R^n | V(x) ≤ 1}

for some as of yet unknown candidate Lyapunov function. Following the developments for the stability case we know that we need

    {x ∈ R^n | p(x) ≤ β} ⊆ {x ∈ R^n | V(x) ≤ 1}
    {x ∈ R^n | V(x) ≤ 1} \ {0} ⊆ {x ∈ R^n | ∇V(x)*(f(x) + g(x)K(x)) < 0}
    {x ∈ R^n | V(x) ≤ 0, x ≠ 0} = φ


with V(0) = 0 to satisfy the assumptions of Theorem 10. Maximizing the size of P_β subject to these set emptiness and containment constraints can be reformulated as

    max β over V ∈ R_n with V(0) = 0, K ∈ R_n^m with K(0) = 0, s_6, s_8, s_9 ∈ Σ_n
    s.t.
        V − l_1 ∈ Σ_n                                                      (4.19)
        −((β − p)s_6 + (V − 1)) ∈ Σ_n
        −((1 − V)s_8 + ∇V*(f + gK)s_9 + l_2) ∈ Σ_n

We can now propose an algorithm to design a controller to enlarge our estimate of the region of attraction of the fixed point, x = 0, of (3.5) by solving the SOS constrained optimization (4.19) with a nested iteration. For the controller synthesis part of the iteration, a slight modification is made to the third constraint of (4.19) by introducing an intermediate variable α and expanding the Lyapunov level set {x ∈ R^n | V(x) < α}. After the controller is designed, the Lyapunov function constraint is rescaled by α.

Algorithm 8 (Expanding Interior State Feedback Design) An iterative search to expand the region P_β and thus the region D := {x ∈ R^n | V(x) ≤ 1} in Theorem 10, starting from a positive definite candidate Lyapunov function and a candidate state feedback controller.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0) and the candidate state feedback controller K^(i=0). Pick the maximum degrees of the Lyapunov function, the controller, the SOS multipliers and the l polynomials to be d_V, d_K, d_s6, d_s8, d_s9 and d_l1, d_l2 respectively. Fix l_k(x) = ε Σ_{j=1}^n x_j^{d_lk} for k = 1, 2 and some ε > 0.


Additionally set l_2^(i=0) = l_2 and β^(i=0) = 0.

1. SOS Weight Iteration:
   Set V = V^(i−1), K = K^(i−1) and l_2 = l_2^(i−1). Solve the linesearch on α where s_8 ∈ Σ_{n,d_s8} and s_9 ∈ Σ_{n,d_s9}

       max_{s_8,s_9} α
       s.t.
           −((α − V)s_8 + ∇V*(f + gK)s_9 + l_2) ∈ Σ_n

   Set s_9^(i) = s_9 and continue to step 2.

2. Controller Iteration:
   Set V = V^(i−1), l_2 = l_2^(i−1) and s_9 = s_9^(i). Solve the linesearch on α where s_8 ∈ Σ_{n,d_s8} and K ∈ R_{n,d_K}^m with K(0) = 0

       max_{K,s_8} α
       s.t.
           −((α − V)s_8 + ∇V*(f + gK)s_9 + l_2) ∈ Σ_n

   Set K^(i) = K, s_8^(i) = s_8 and l_2^(i) = l_2^(i−1)/α. Continue to step 3.

3. Lyapunov Function Iteration:
   Set K = K^(i), s_8 = s_8^(i), s_9 = s_9^(i) and l_2 = l_2^(i). Solve the linesearch on β where


s_6 ∈ Σ_{n,d_s6} and V ∈ R_{n,d_V} with V(0) = 0

       max_{V,s_6} β
       s.t.
           V − l_1 ∈ Σ_n
           −((β − p)s_6 + (V − 1)) ∈ Σ_n
           −((1 − V)s_8 + ∇V*(f + gK)s_9 + l_2) ∈ Σ_n

   Set V^(i) = V and β^(i) = β. Continue to step 4.

4. If β^(i) − β^(i−1) is less than a specified tolerance, the set {x ∈ R^n | V^(i)(x) ≤ 1} contains P_{β^(i)} and is the largest estimate of the fixed point's region of attraction; else increment i and go to step 1.

Remark 7 (Properties of the expanding interior algorithm):

• If the algorithm is started from a feasible point, then it will always remain feasible. If the system's linearization is controllable, then any linear controller that stabilizes the linearized system and the corresponding quadratic Lyapunov function will always work, since they will stabilize the nonlinear system near the origin. Again, this feasibility property lets us not consider both a V and a K variant.

• Note that, unlike Algorithm 5, V must be positive definite, so it should have no linear terms and d_V should be even.

• Since the s's and the l's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:


For the first SOS constraint
    d_V = d_l1

For the second SOS constraint
    deg(p s_6) ≥ d_V

For the third SOS constraint
    deg(V s_8) ≥ deg(∇V*(f + gK)s_9)
    deg(V s_8) = d_l2

4.4.3 State Feedback Design Example

Consider the spring-mass-damper system from §3.4.3, and recall that it was possible to design a degree three polynomial state feedback controller to make the system semi-globally exponentially stable. However, no lower degree controller could be found that met the stability requirements. Also, if the linear spring were replaced with a nonlinear spring that behaved the same as the one fixing m_1 to the wall, then no polynomial controller could be found to make the system semi-globally exponentially stable for d_K ≤ 5 and d_V ≤ 6.

We now look to find a local controller to demonstrate stability of the spring-mass-damper system with two nonlinear springs

    ẋ_1 = x_2
    ẋ_2 = −(x_1 + (1/10)x_1^3) + ((x_3 − x_1) + (1/10)(x_3 − x_1)^3) − (1/10)(x_4 − x_2) + u     (4.20)
    ẋ_3 = x_4
    ẋ_4 = −((x_3 − x_1) + (1/10)(x_3 − x_1)^3) + (1/10)(x_4 − x_2)

where f(x) collects the drift terms and g(x) = [0; 1; 0; 0].
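To make the example concrete, the dynamics (4.20) can be sketched in code; this is a minimal transcription assuming the sign conventions as displayed above, together with the hand-computed Jacobian at the origin, whose positive trace already implies the open-loop linearization is unstable.

```python
import numpy as np

def f(x):
    """Drift f(x) of (4.20); both springs have cubic hardening terms."""
    x1, x2, x3, x4 = x
    s12 = (x3 - x1) + 0.1 * (x3 - x1) ** 3   # coupling spring force
    return np.array([
        x2,
        -(x1 + 0.1 * x1 ** 3) + s12 - 0.1 * (x4 - x2),
        x4,
        -s12 + 0.1 * (x4 - x2),
    ])

g = np.array([0.0, 1.0, 0.0, 0.0])  # constant input vector field g(x)

# Jacobian of f at the origin (the linearization); trace(A) = 0.2 > 0,
# so at least one eigenvalue has positive real part.
A = np.array([[ 0.0,  1.0,  0.0,  0.0],
              [-2.0,  0.1,  1.0, -0.1],
              [ 0.0,  0.0,  0.0,  1.0],
              [ 1.0, -0.1, -1.0,  0.1]])
```

A finite-difference check of A against f confirms the transcription, and the eigenvalues of A verify the instability claim made in the text below.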


As before, the linearization of the system is unstable, and we will use the linear controller and quadratic Lyapunov function from the "feedback trick" LMI (3.13) as the candidate controller and Lyapunov function. To show an additional use for Algorithms 7 & 8, we will also tack on the constraint that the system model (4.20) is only valid when ‖x‖ ≤ 10.

From the perspective of the expanding D algorithm, this model validity constraint can be handled by making p(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 and setting the upper bound on the β linesearches to be 100. With this choice of D, we know that the largest level set of the resulting Lyapunov function will be within the region where the model is valid, so any initial condition within this level set will converge to the origin and always remain within the region where the model is valid.

To handle the model validity constraint within the context of the expanding interior algorithm, set p = x_1^2 + x_2^2 + x_3^2 + x_4^2 and the upper bound on β in all the linesearches to 100. Unlike the expanding D case, this does not yet take care of the constraint. When the algorithm finishes, we can do a bisection on a generalized S-procedure to find the largest value r such that

    {x ∈ R^n | V(x) ≤ r} ⊆ {x ∈ R^n | x_1^2 + x_2^2 + x_3^2 + x_4^2 ≤ 100}

Then all initial conditions in the set where V is less than or equal to r converge to the origin and always remain in the region where the model is valid.

Using these approaches we can solve for a state feedback controller for (4.20).
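For a quadratic Lyapunov function V(x) = x*Px with P positive definite, the bisection for r has a closed form that is useful as a sanity check on the S-procedure result: the largest r with {x : x*Px ≤ r} ⊆ {x : ‖x‖ ≤ R} is r = R² λ_min(P), since the ellipsoid's farthest point from the origin lies along the eigenvector of the smallest eigenvalue. A short sketch, with a hypothetical diagonal P for illustration:

```python
import numpy as np

def max_level_in_ball(P, R):
    """Largest r such that {x : x' P x <= r} is contained in {x : ||x|| <= R},
    for symmetric positive definite P: r = R^2 * lambda_min(P)."""
    lam_min = np.linalg.eigvalsh(P).min()
    assert lam_min > 0, "P must be positive definite"
    return R ** 2 * lam_min

# Hypothetical quadratic Lyapunov matrix for illustration only.
P = np.diag([1.0, 2.0, 4.0, 0.5])
r = max_level_in_ball(P, 10.0)   # = 100 * 0.5 = 50
```

This closed form applies only to the quadratic case; for higher degree V the generalized S-procedure bisection described above is still needed.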
We use Algorithm 7 with d_V = 2, d_K = 1, d_s2 = d_s3 = 2, d_s4 = 0, d_s6 = 4, d_s7 = d_s8 = 0, d_l1 = d_l2 = 4, with d_ŝ2 = d_ŝ3 = 2 and d_ŝ4 = 0. For Algorithm 8 we use d_V = 2, d_K = 1, d_s6 = d_s8 = 2, d_s9 = 0, with d_l1 = 2 and d_l2 = 4. For both algorithms we set the tolerance


to 0.01.

After 4 iterations Algorithm 7 converges to β = 100 with c_max = 15.10, while Algorithm 8 reaches the β tolerance limit after 24 iterations at β = 14.96 with r = 10.01.

For clarity, let the controller and Lyapunov function found by the expanding D algorithm be K_d and V_d. Likewise, let the results from the expanding interior algorithm be K_i and V_i. To summarize the results, under the linear controller

    K_d(x) = [−62.58  −26.31  36.08  −27.06] x

the system (4.20) is asymptotically stable and the region Ω_d := {x ∈ R^n | V_d(x) ≤ c_max} with

    V_d(x) = x* [  4.82   0.72  −1.83   2.47
                   0.72   0.27  −0.40   0.34
                  −1.83  −0.40   2.54  −0.31
                   2.47   0.34  −0.31   3.02 ] x

is an invariant set contained in the fixed point's region of attraction as well as in the set where the model is valid, {x ∈ R^n | ‖x‖ ≤ 10}. Additionally, under the controller

    K_i(x) = [−35.57  −7.70  22.40  −14.56] x


the system (4.20) is asymptotically stable and the region Ω_i := {x ∈ R^n | V_i(x) ≤ r} with

    V_i(x) = x* [  4.33   0.35  −2.19   1.89
                   0.35   0.14  −0.25   0.19
                  −2.19  −0.25   2.45  −0.63
                   1.89   0.19  −0.63   2.36 ] x

is an invariant set contained in the fixed point's region of attraction as well as in the set where the model is valid, {x ∈ R^n | ‖x‖ ≤ 10}.

Comparing the two controllers, we find that the volume of Ω_d is 779.74, while Ω_i has a smaller volume of 535.75. Interestingly, Ω_d does not contain Ω_i, and r would have to be scaled back by 30% to make Ω_i fit inside of Ω_d. Since Ω_d has the larger volume, K_d is the preferred controller, unless there is specific information about the initial condition showing that it lies in Ω_i \ Ω_d.

Disturbance Rejection Qualities of the Controllers

Using the disturbance analysis techniques of §4.3, we can now quantify the performance of the controllers K_d and K_i designed above. The approaches of §4.3.2 and §4.3.3 allow us to find the largest peak disturbance under which Ω_d and Ω_i are invariant, the peak norm of the state that this implies, and the L_2 → L_2 gains from a disturbance to the system's states on these invariant regions.

For all these analyses we will assume that the disturbance enters additively in the control channel, which makes g_w(x) = g(x). Additionally, since we are considering a state feedback system, we will set h(x) = x for the induced L_2 → L_2 gain.

To compute √γ, the bound on ‖w(t)‖ under which Ω_d and Ω_i are invariant, the


condition (4.15) becomes, at minimum, a search for SOS polynomials of degree 8 in 5 variables, since n = 4, n_w = 1 and deg(V̇) = 4. Polynomials in Σ_{5,8} have 1287 coefficients. To work around searching for polynomials with so many coefficients, we will set q = V̇^2 q̂ and s_i = V̇^2 ŝ_i for i = 1, 2, 3. This allows us to factor out a V̇^2 term to get the following condition

    −ŝ_1(γ − w*w) − ŝ_2(V̇) − ŝ_3(γ − w*w)(V̇) − 1 − q̂(V − 1) ∈ Σ_{n+n_w}

which, if we pick deg(ŝ_1) = 4, deg(ŝ_2) = 2, deg(ŝ_3) = 0 and deg(q̂) = 4, becomes a search for a polynomial of degree 6 in 5 variables. Polynomials in Σ_{5,6} have 462 coefficients, a substantial reduction in problem size.

Using this approach to reduce the problem size of (4.15), and picking deg H = 2 and deg s_1 = deg s_2 = 0 in (4.17), we get the results of the three disturbance rejection analyses shown in Table 4.2.

            K_d      K_i
    √γ     18.57     4.42
    √α     10.07    10.00
    √β      0.06     0.27

    Table 4.2: State Feedback Controller Disturbance Rejection Results.

Note that the bounds are as follows: the respective invariant set is invariant under

    ‖w(t)‖_∞ ≤ √γ

and

    ‖w(t)‖_∞ ≤ √γ ⇒ ‖x(t)‖_∞ ≤ √α


as well as, if ‖w(t)‖_∞ ≤ √γ, then

    ‖x(t)‖_2 / ‖w(t)‖_2 ≤ √β

The results shown in the table illustrate that under all three criteria considered, the controller designed by the expanding D algorithm outperforms the controller designed by the expanding interior algorithm. The superior disturbance rejection, together with the fact that Ω_d is much larger than Ω_i, points to K_d being the preferred controller for this example.

4.5 Output Feedback Controller Design

As in the global case, we can now extend the state feedback results by allowing the controller to be dynamic. As with the local asymptotic stability and state feedback controller design problems, we will provide an expanding D approach as well as an expanding interior approach to design a controller that demonstrates the largest estimate of the region of attraction.

Following the global case, define the same system to be controlled (3.14)

    ẋ = f(x) + g(x)u
    y = h(x)

for x ∈ R^n with f ∈ R_n^n, f(0) = 0, u ∈ R^m, g ∈ R_n^{n×m}, and y ∈ R^p with h ∈ R_n^p and h(0) = 0. u is generated by an unknown n_ξ-state dynamic output feedback controller (3.15)

    ξ̇ = A(ξ) + B(ξ)y
    u = C(ξ) + D(ξ)y

for ξ ∈ R^{n_ξ} with A ∈ R_{n_ξ}^{n_ξ}, A(0) = 0, B ∈ R_{n_ξ}^{n_ξ×p}, C ∈ R_{n_ξ}^m, C(0) = 0 and D ∈ R_{n_ξ}^{m×p}. With


this controller structure, the closed loop system becomes (3.16)

    ẋ = f(x) + g(x)(C(ξ) + D(ξ)h(x))
    ξ̇ = A(ξ) + B(ξ)h(x).

By the assumptions on f, h, A and C, we know that the combined system has a fixed point at [x; ξ] = [0; 0].

From this point we can adapt Algorithms 7 and 8 to solve for a stabilizing output feedback controller. Again, as in the global output feedback case, when the systems are linear and the Lyapunov function is quadratic we cannot use the "feedback trick" to solve a single LMI for a feasible candidate controller and Lyapunov function pair.

4.5.1 Expanding D Algorithm for Output Feedback Design

Following the development of the state feedback expanding D algorithm, we can develop an output feedback controller design algorithm. First we fix the number of states in the output feedback controller, n_ξ ∈ Z_+, restrict the Lyapunov function V ∈ R_{n+n_ξ} with V(0) = 0 and define D with a semi-algebraic set

    D := {[x; ξ] ∈ R^{n+n_ξ} | p([x; ξ]) ≤ β}

with p ∈ Σ_{n+n_ξ} positive definite and β ≥ 0.

The requirements of Theorem 10 for asymptotic stability of the closed loop system


(3.16) become

    {[x; ξ] ∈ R^{n+n_ξ} | p([x; ξ]) ≤ β} \ {0} ⊆ {[x; ξ] ∈ R^{n+n_ξ} | V([x; ξ]) > 0}
    {[x; ξ] ∈ R^{n+n_ξ} | p([x; ξ]) ≤ β} \ {0} ⊆
        {[x; ξ] ∈ R^{n+n_ξ} | ∇V([x; ξ])* [f(x) + g(x)(C(ξ) + D(ξ)h(x)); A(ξ) + B(ξ)h(x)] < 0}

The system is asymptotically stable about the fixed point [x; ξ] = 0 if any V, A, B, C, D can be found to satisfy these requirements for a fixed p and any β > 0.

Following Algorithm 5, the largest estimate of the region of attraction that we can demonstrate is the largest level set of V that is contained in D. Using the same manipulations as in the state feedback case, maximizing the size of the region D under the set containments above is equivalent to

    max β over V ∈ R_{n+n_ξ} with V(0) = 0, s_i ∈ Σ_{n+n_ξ},
        A ∈ R_{n_ξ}^{n_ξ} with A(0) = 0, B ∈ R_{n_ξ}^{n_ξ×p},
        C ∈ R_{n_ξ}^m with C(0) = 0, D ∈ R_{n_ξ}^{m×p}
    s.t.
        −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_{n+n_ξ}                      (4.21)
        −(β − p)s_6 − ∇V*[f + gC + gDh; A + Bh]s_7
            − ∇V*[f + gC + gDh; A + Bh](β − p)s_8 − l_2 ∈ Σ_{n+n_ξ}

Since the SOS conditions are not linear in the decision variables, we will have to solve the optimization iteratively.
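The closed-loop vector field (3.16) on the augmented state [x; ξ] is straightforward to assemble in code. The sketch below does this for the example plant (4.20) with n_ξ = 1 and the output h_1(x) = [x_3; x_4], using the linear controller values reported later in §4.5.3 for the expanding D design; the plant dynamics are repeated here only to keep the block self-contained.

```python
import numpy as np

def f(x):
    """Drift of the spring-mass-damper example (4.20)."""
    x1, x2, x3, x4 = x
    s12 = (x3 - x1) + 0.1 * (x3 - x1) ** 3
    return np.array([x2,
                     -(x1 + 0.1 * x1 ** 3) + s12 - 0.1 * (x4 - x2),
                     x4,
                     -s12 + 0.1 * (x4 - x2)])

g = np.array([0.0, 1.0, 0.0, 0.0])
h1 = lambda x: np.array([x[2], x[3]])    # measurements from the second mass

# Linear 1-state controller (values from the expanding D design of §4.5.3).
A, B = -1.50, np.array([-0.22, -1.55])
C, D = 3.36, np.array([1.37, 2.20])

def F(z):
    """Closed-loop vector field (3.16) on [x; xi] for n_xi = 1, m = 1."""
    x, xi = z[:4], z[4]
    y = h1(x)
    u = C * xi + D @ y                   # u = C(xi) + D(xi) y
    return np.concatenate([f(x) + g * u, [A * xi + B @ y]])
```

Since f(0) = 0, h_1(0) = 0, and A, C vanish at ξ = 0, the assembled field has the required fixed point at [x; ξ] = [0; 0].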


Algorithm 9 (Expanding D Output Feedback Design) An iterative search to expand the region D in Theorem 10 starting from a candidate Lyapunov function and a candidate output feedback controller with n_ξ states.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0) and the candidate controller A^(i=0), B^(i=0), C^(i=0), D^(i=0). Pick the maximum degrees of the Lyapunov function, the controller, the SOS multipliers and the l polynomials to be d_V, d_A, d_B, d_C, d_D, d_s2, d_s3, d_s4, d_s6, d_s7, d_s8 and d_l1, d_l2 respectively. Pick the maximum degrees for the level set maximization problem (4.2) and denote them d_ŝ2, d_ŝ3 and d_ŝ4. Fix l_k([x; ξ]) = ε(Σ_{j=1}^n x_j^{d_lk} + Σ_{j=1}^{n_ξ} ξ_j^{d_lk}) for k = 1, 2 and some ε > 0. Additionally set β^(i=0) = 0.

1. SOS Weight Iteration:
   Set V = V^(i−1), A = A^(i−1), B = B^(i−1), C = C^(i−1) and D = D^(i−1). Solve the linesearch on β where s_2 ∈ Σ_{n+n_ξ,d_s2}, s_3 ∈ Σ_{n+n_ξ,d_s3}, s_4 ∈ Σ_{n+n_ξ,d_s4}, s_6 ∈ Σ_{n+n_ξ,d_s6}, s_7 ∈ Σ_{n+n_ξ,d_s7} and s_8 ∈ Σ_{n+n_ξ,d_s8}

       max_{s_2,s_3,s_4,s_6,s_7,s_8} β
       s.t.
           −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_{n+n_ξ}
           −(β − p)s_6 − ∇V*[f + gC + gDh; A + Bh]s_7
               − ∇V*[f + gC + gDh; A + Bh](β − p)s_8 − l_2 ∈ Σ_{n+n_ξ}

   Set s_7^(i) = s_7 and s_8^(i) = s_8. Continue to step 2.


2. Controller Iteration:
   Set V = V^(i−1), s_7 = s_7^(i) and s_8 = s_8^(i). Solve the linesearch on β where s_2 ∈ Σ_{n+n_ξ,d_s2}, s_3 ∈ Σ_{n+n_ξ,d_s3}, s_4 ∈ Σ_{n+n_ξ,d_s4}, s_6 ∈ Σ_{n+n_ξ,d_s6}, A ∈ R_{n_ξ,d_A}^{n_ξ} with A(0) = 0, B ∈ R_{n_ξ,d_B}^{n_ξ×p}, C ∈ R_{n_ξ,d_C}^m with C(0) = 0 and D ∈ R_{n_ξ,d_D}^{m×p}

       max_{A,B,C,D,s_2,s_3,s_4,s_6} β
       s.t.
           −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_{n+n_ξ}
           −(β − p)s_6 − ∇V*[f + gC + gDh; A + Bh]s_7
               − ∇V*[f + gC + gDh; A + Bh](β − p)s_8 − l_2 ∈ Σ_{n+n_ξ}

   Set A^(i) = A, B^(i) = B, C^(i) = C, D^(i) = D, s_3^(i) = s_3 and s_4^(i) = s_4. Continue to step 3.

3. Lyapunov Function Iteration:
   Set A = A^(i), B = B^(i), C = C^(i), D = D^(i), s_3 = s_3^(i), s_4 = s_4^(i), s_7 = s_7^(i) and s_8 = s_8^(i). Solve the linesearch on β where s_2 ∈ Σ_{n+n_ξ,d_s2}, s_6 ∈ Σ_{n+n_ξ,d_s6} and


V ∈ R_{n+n_ξ,d_V} with V(0) = 0

       max_{V,s_2,s_6} β
       s.t.
           −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_{n+n_ξ}
           −(β − p)s_6 − ∇V*[f + gC + gDh; A + Bh]s_7
               − ∇V*[f + gC + gDh; A + Bh](β − p)s_8 − l_2 ∈ Σ_{n+n_ξ}

   Set β^(i) = β and V^(i) = V. Continue to step 4.

4. If β^(i) − β^(i−1) is less than a specified tolerance go to step 5, else increment i and go to step 1.

5. Set V = V^(i) and β = β^(i). Solve the linesearch on c where ŝ_2 ∈ Σ_{n+n_ξ,d_ŝ2}, ŝ_3 ∈ Σ_{n+n_ξ,d_ŝ3} and ŝ_4 ∈ Σ_{n+n_ξ,d_ŝ4}

       max_{ŝ_2,ŝ_3,ŝ_4} c
       s.t.
           −ŝ_2(c − V) − ŝ_3(p − β) − ŝ_4(p − β)(c − V) − (p − β)^2 ∈ Σ_{n+n_ξ}

   Set c_max = c. The set {[x; ξ] ∈ R^{n+n_ξ} | V([x; ξ]) ≤ c_max} is the resulting estimate of the region of attraction, for it is positively invariant, contained within D, and all of its points converge to the fixed point [x; ξ] = 0.

Remark 8 (Properties of the expanding D output feedback algorithm):


• As with the stability and state feedback versions, if the algorithm is started from a feasible point it will always remain feasible. If the system's linearization is controllable, then a linear output feedback controller that stabilizes the linearized system and the resulting quadratic Lyapunov function will always work, since they will stabilize the nonlinear system near the origin. Again, this allows us to not consider separate algorithms starting from the candidate controller and from the candidate Lyapunov function.

• As with the previous versions, V need not be positive definite, since it is required to be positive only on D \ {0}. However, to be positive over a region about the origin, V must have no linear terms and d_V should be even. Clearly, if V were chosen to be positive definite, then the first constraint in (4.21) can just become V − l_1 ∈ Σ_{n+n_ξ}.

• Since the s's, l's and the ŝ's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:

  For the first SOS constraint
      max{deg(p s_2), deg(V s_3)} ≥ d_l1
      max{deg(p s_2), deg(V s_3)} ≥ deg(V p s_4)

  For the second SOS constraint
      deg(p s_6) ≥ d_l2
      deg(p s_6) ≥ deg(∇V*[f + gC + gDh; A + Bh]s_7)
      deg(p s_6) ≥ deg(∇V*[f + gC + gDh; A + Bh]p s_8)


For the c maximization constraint
    max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(p^2)
    max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(ŝ_3 p)

4.5.2 Expanding Interior Algorithm for Output Feedback Design

We now adapt the expanding interior algorithm proposed in [13] to search for dynamic output feedback controllers to stabilize (3.14). First we fix the dynamic controller's number of states, n_ξ ∈ Z_+, and we define a variable sized region

    P_β := {[x; ξ] ∈ R^{n+n_ξ} | p([x; ξ]) ≤ β}

for p ∈ Σ_{n+n_ξ} and positive definite. We then define

    D := {[x; ξ] ∈ R^{n+n_ξ} | V([x; ξ]) ≤ 1}

for some as of yet unknown candidate Lyapunov function. Following the developments for the state feedback case

    {[x; ξ] ∈ R^{n+n_ξ} | p([x; ξ]) ≤ β} ⊆ {[x; ξ] ∈ R^{n+n_ξ} | V([x; ξ]) ≤ 1}
    {[x; ξ] ∈ R^{n+n_ξ} | V([x; ξ]) ≤ 1} \ {0} ⊆
        {[x; ξ] ∈ R^{n+n_ξ} | ∇V([x; ξ])*[f(x) + g(x)(C(ξ) + D(ξ)h(x)); A(ξ) + B(ξ)h(x)] < 0}
    {[x; ξ] ∈ R^{n+n_ξ} | V([x; ξ]) ≤ 0, [x; ξ] ≠ 0} = φ


with V(0) = 0. Via the Positivstellensatz, maximizing the size of P_β subject to the constraints above becomes

    max β over V ∈ R_{n+n_ξ} with V(0) = 0, s_i ∈ Σ_{n+n_ξ},
        A ∈ R_{n_ξ}^{n_ξ} with A(0) = 0, B ∈ R_{n_ξ}^{n_ξ×p},
        C ∈ R_{n_ξ}^m with C(0) = 0, D ∈ R_{n_ξ}^{m×p}
    s.t.
        V − l_1 ∈ Σ_{n+n_ξ}                                                      (4.22)
        −((β − p)s_6 + (V − 1)) ∈ Σ_{n+n_ξ}
        −((1 − V)s_8 + ∇V*[f + gC + gDh; A + Bh]s_9 + l_2) ∈ Σ_{n+n_ξ}

We can now propose an algorithm to prove that the fixed point [x; ξ] = 0 is asymptotically stable and estimate its region of attraction by iteratively solving (4.22). Again, for the controller synthesis part of the iteration we will have to introduce the intermediate variable α.

Algorithm 10 (Expanding Interior Output Feedback Design) An iterative search to expand the region P_β and thus the region D := {[x; ξ] ∈ R^{n+n_ξ} | V([x; ξ]) ≤ 1} in Theorem 10, starting from a positive definite candidate Lyapunov function and an n_ξ-state candidate dynamic output feedback controller.

Let i be the iteration index and set i = 1. Denote the candidate Lyapunov function V^(i=0) and the candidate output feedback controller A^(i=0), B^(i=0), C^(i=0), D^(i=0). Pick the maximum degrees of the Lyapunov function, the controller, the SOS multipliers and the l polynomials to be d_V, d_A, d_B, d_C, d_D, d_s6, d_s8, d_s9 and d_l1, d_l2 respectively. Fix l_k([x; ξ]) = ε(Σ_{j=1}^n x_j^{d_lk} + Σ_{j=1}^{n_ξ} ξ_j^{d_lk}) for k = 1, 2 and some ε > 0. Additionally set l_2^(i=0) = l_2 and


β^(i=0) = 0.

1. SOS Weight Iteration:
   Set V = V^(i−1), A = A^(i−1), B = B^(i−1), C = C^(i−1), D = D^(i−1) and l_2 = l_2^(i−1). Solve the linesearch on α where s_8 ∈ Σ_{n+n_ξ,d_s8} and s_9 ∈ Σ_{n+n_ξ,d_s9}

       max_{s_8,s_9} α
       s.t.
           −((α − V)s_8 + ∇V*[f + gC + gDh; A + Bh]s_9 + l_2) ∈ Σ_{n+n_ξ}

   Set s_9^(i) = s_9. Continue to step 2.

2. Controller Iteration:
   Set V = V^(i−1), l_2 = l_2^(i−1) and s_9 = s_9^(i). Solve the linesearch on α where s_8 ∈ Σ_{n+n_ξ,d_s8}, A ∈ R_{n_ξ,d_A}^{n_ξ} with A(0) = 0, B ∈ R_{n_ξ,d_B}^{n_ξ×p}, C ∈ R_{n_ξ,d_C}^m with C(0) = 0 and D ∈ R_{n_ξ,d_D}^{m×p}

       max_{A,B,C,D,s_8} α
       s.t.
           −((α − V)s_8 + ∇V*[f + gC + gDh; A + Bh]s_9 + l_2) ∈ Σ_{n+n_ξ}

   Set A^(i) = A, B^(i) = B, C^(i) = C, D^(i) = D, s_8^(i) = s_8 and l_2^(i) = l_2^(i−1)/α. Continue to step 3.

3. Lyapunov Function Iteration:
   Set A = A^(i), B = B^(i), C = C^(i), D = D^(i), s_8 = s_8^(i), s_9 = s_9^(i) and l_2 = l_2^(i). Solve


the linesearch on β where s_6 ∈ Σ_{n+n_ξ,d_s6} and V ∈ R_{n+n_ξ,d_V} with V(0) = 0

       max_{V,s_6} β
       s.t.
           V − l_1 ∈ Σ_{n+n_ξ}
           −((β − p)s_6 + (V − 1)) ∈ Σ_{n+n_ξ}
           −((1 − V)s_8 + ∇V*[f + gC + gDh; A + Bh]s_9 + l_2) ∈ Σ_{n+n_ξ}

   Set V^(i) = V and β^(i) = β. Continue to step 4.

4. If β^(i) − β^(i−1) is less than a specified tolerance, the set {[x; ξ] ∈ R^{n+n_ξ} | V^(i)([x; ξ]) ≤ 1} contains P_{β^(i)} and is the largest estimate of the fixed point's region of attraction; else increment i and go to step 1.

Remark 9 (Properties of the expanding interior algorithm):

• If the algorithm is started from a feasible point it will always remain feasible. If the system's linearization is controllable, then any linear controller that stabilizes the linearized system and the corresponding quadratic Lyapunov function will always work, since they will stabilize the nonlinear system near the origin. Once more, this feasibility property allows us to consider only a V variant of the algorithm.

• Note that, unlike Algorithm 5, V must be positive definite, so it should have no linear terms and d_V should be even.

• Since the s's and the l's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:


For the first SOS constraint
    d_V = d_l1

For the second SOS constraint
    deg(p s_6) ≥ d_V

For the third SOS constraint
    deg(V s_8) ≥ deg(∇V*[f + gC + gDh; A + Bh]s_9)
    deg(V s_8) = d_l2

4.5.3 Output Feedback Design Example

We can demonstrate Algorithms 9 and 10 with the double nonlinear spring-mass-damper system from §4.4.3 if we introduce an output function, h. We will consider two output functions

    h_1(x) = [x_3; x_4]    and    h_2(x) = x_4

Both of these output functions provide measurements from the second mass, m_2, which is not directly actuated by u. In the example for the semi-global exponential stability output feedback algorithms, we had to use h(x) = [x_1; x_2], which makes the observations and actuation colocated, to make the algorithms feasible.

Using the system dynamics given by (4.20) with either h_1 or h_2 above, we can find a linear controller and quadratic Lyapunov function candidate pair by employing the linear


gap metric design steps shown in §3.5.2.

In order to keep the size of the computations down, as well as to pose a challenging control problem, we will fix n_ξ = 1. For both algorithms and both output functions we pick

    p([x; ξ]) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + ξ_1^2

We can now solve for an output feedback controller for (4.20) with each of the output functions. For Algorithm 9 set d_V = 2, d_A = d_C = 1, d_B = d_D = 0, d_s2 = d_s3 = 2, d_s4 = 0, d_s6 = 4, d_s7 = 2, d_s8 = 0, d_l1 = d_l2 = 4, with d_ŝ2 = d_ŝ3 = 2 and d_ŝ4 = 0. For Algorithm 10 we use d_V = 2, d_A = d_C = 1, d_B = d_D = 0, d_s6 = d_s8 = 2, d_s9 = 0, with d_l1 = 2 and d_l2 = 4.

In this setup Algorithm 9 involves searches for SOS polynomials in 5 variables of degree 4, so we reduce the computational load by setting its tolerance to 0.1; since Algorithm 10 has SOS weights in 5 variables of only degree 2, we set its tolerance to 0.01.

Using h_1

After 11 iterations Algorithm 9 converges to β = 11.9 with c_max = 25.9, while Algorithm 10 reaches the β tolerance limit after 4 iterations at β = 0.22.

For clarity, let the controller and Lyapunov function found by the expanding D algorithm be A_d, B_d, C_d, D_d and V_d. Likewise, let the results from the expanding interior algorithm be A_i, B_i, C_i, D_i and V_i. To summarize the results, under the linear controller

    [A_d B_d; C_d D_d] = [ −1.50  −0.22  −1.55
                            3.36   1.37   2.20 ]


the system (4.20) is asymptotically stable and the region Ω_d := {[x; ξ] ∈ R^{n+n_ξ} | V_d([x; ξ]) ≤ c_max} with

    V_d([x; ξ]) = z^T [  59.64   −8.65  −60.76  −31.19  −70.42
                         −8.65   38.02   28.73   34.70   65.70
                        −60.76   28.73   81.82   59.34  112.81
                        −31.19   34.70   59.34   67.29  104.82
                        −70.42   65.70  112.81  104.82  194.25 ] z,    z = [x_1; x_2; x_3; x_4; ξ_1]

is an invariant set contained in the fixed point's region of attraction. Additionally, under the controller

    [A_i B_i; C_i D_i] = [ −0.88  0.70  −1.19
                            0.99  0.10   1.06 ]

the system (4.20) is asymptotically stable and the region Ω_i := {[x; ξ] ∈ R^{n+n_ξ} | V_i([x; ξ]) ≤ 1} with

    V_i([x; ξ]) = (1/100) z^T [  1.27  −0.03  −0.70  −0.09  −0.41
                                −0.03   1.27  −0.03   1.31   0.90
                                −0.70  −0.03   0.70   0.19   0.21
                                −0.09   1.31   0.19   2.86   1.48
                                −0.41   0.90   0.21   1.48   1.03 ] z,    z = [x_1; x_2; x_3; x_4; ξ_1]

is an invariant set contained in the fixed point's region of attraction.

Comparing the two controllers using h_1, we find the volume of Ω_d is 13.06, while Ω_i has a volume of 22.34 with no containment of Ω_d. From this information, it would seem that the controller (A_i, B_i, C_i, D_i) would be preferable; however, this comparison is based on the size of the entire region, including the contribution from the controller's state. In most control implementations we would pick the initial condition ξ_1 = 0, and if we look


at the sizes of the invariant regions with this restriction on ξ_1's initial condition, we find something very interesting. The volume of Ω_d with ξ_1 = 0 is 6.65, while the volume of Ω_i with ξ_1 = 0 is only 5.80. This reversal in volumes shows that, as long as we start the controller with initial condition ξ_1 = 0, we should use the controller (A_d, B_d, C_d, D_d).

Using h_2

After 4 iterations Algorithm 9 converges to β = 1.1 with c_max = 0.3, while Algorithm 10 reaches the β tolerance limit after 4 iterations at β = 0.08.

As before, let the controller and Lyapunov function found by the expanding D algorithm be A_d, B_d, C_d, D_d and V_d, and let the results from the expanding interior algorithm be A_i, B_i, C_i, D_i and V_i. To summarize the results, under the linear controller

    [A_d B_d; C_d D_d] = [ −0.66  1.13
                           −1.09  0.60 ]

the system (4.20) is asymptotically stable and the region Ω_d := {[x; ξ] ∈ R^{n+n_ξ} | V_d([x; ξ]) ≤ c_max} with

    V_d([x; ξ]) = z^T [  4.88  −0.23  −1.85  −0.58   2.61
                        −0.23   3.05   0.35   1.33  −1.29
                        −1.85   0.35   1.37   0.50  −1.17
                        −0.58   1.33   0.50   1.75  −2.48
                         2.61  −1.29  −1.17  −2.48   2.46 ] z,    z = [x_1; x_2; x_3; x_4; ξ_1]

is an invariant set contained in the fixed point's region of attraction. Additionally, under


the controller

    [A_i B_i; C_i D_i] = [ −0.92  −2.23
                            0.82   0.88 ]

the system (4.20) is asymptotically stable and the region Ω_i := {[x; ξ] ∈ R^{n+n_ξ} | V_i([x; ξ]) ≤ 1} with

    V_i([x; ξ]) = z^T [  6.67  −0.31  −1.35  −0.73  −2.81
                        −0.31   5.63   0.70   4.72   2.36
                        −1.35   0.70   1.78   2.61   2.74
                        −0.73   4.72   2.61   5.16   2.70
                        −2.81   2.36   2.74   2.70   2.60 ] z,    z = [x_1; x_2; x_3; x_4; ξ_1]

is an invariant set contained in the fixed point's region of attraction.

Comparing the two controllers using h_2, we find the volume of Ω_d is 0.11, while Ω_i has a volume of 1.58 with no containment of Ω_d. Again this would seem to imply that the controller (A_i, B_i, C_i, D_i) is preferable. When we set ξ_1 = 0 to reflect the choice of controller initial condition, the volume of Ω_d is 0.13, while the volume of Ω_i is still larger at 0.72 and, moreover, contains the restricted version of Ω_d. Note that these restricted volumes are measured in R^4 while the original volumes were in R^5, so the apparent volume increase for Ω_d is legitimate.

Since Ω_i is larger in both the restricted and unrestricted cases, (A_i, B_i, C_i, D_i) is the preferred controller. Note, though, that the volumes of the invariant regions using h_2 are smaller than those achievable with h_1, which in turn are smaller than those in the state feedback case. Clearly, the more information that is available, the larger the set of initial conditions for which the controller will asymptotically stabilize the system.
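The volume comparisons above reduce to computing the volume of an ellipsoid {z ∈ R^n : z^T M z ≤ c}, which is (volume of the unit n-ball) · c^{n/2} / √det M; restricting ξ_1 = 0 corresponds to deleting the row and column of M associated with ξ_1 and measuring in one dimension lower. A sketch of this computation, assuming numpy is available (this is not the thesis's code, just an illustration of the formula):

```python
import numpy as np
from math import pi, gamma

def ellipsoid_volume(M, c):
    """Volume of {z in R^n : z' M z <= c} for M positive definite."""
    n = M.shape[0]
    unit_ball = pi ** (n / 2) / gamma(n / 2 + 1)  # volume of the unit n-ball
    return unit_ball * c ** (n / 2) / np.sqrt(np.linalg.det(M))

def restricted_volume(M, c, k):
    """Volume of the slice with coordinate k fixed to zero:
    drop row/column k of M and measure in one dimension lower."""
    Mr = np.delete(np.delete(M, k, axis=0), k, axis=1)
    return ellipsoid_volume(Mr, c)

# sanity check: {z in R^2 : z' I z <= 1} is the unit disk, area pi
assert abs(ellipsoid_volume(np.eye(2), 1.0) - pi) < 1e-12
```

Feeding in the V_d and V_i matrices above (with c = c_max and c = 1, respectively) reproduces the kind of 5-dimensional and sliced 4-dimensional volumes quoted in the text, up to the rounding of the printed matrix entries.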


4.6 Chapter Summary

In this chapter we looked at using SOS optimization to do local system analysis and controller synthesis. We derived two algorithms, expanding D and expanding interior, that are at the heart of our approach to both stability analysis and controller synthesis. Additionally, they provide ways to optimize the size of positively invariant regions about a fixed point.

Also, we provided methods to analyze the effects of external disturbances on a given system: bounding the system's unit-energy reachable set, finding the largest peak disturbance for which a given set is invariant, and bounding the induced L_2 → L_2 gain from disturbance to output on an invariant set.

We then used the expanding D and expanding interior algorithms to design both state feedback and output feedback controllers. Using these approaches we designed state feedback controllers for an example system and applied the disturbance analysis techniques to study the controllers' ability to reject disturbances. Additionally, we applied our algorithms to design single-state output feedback controllers for the same example system with difficult output functions. As compared to the semi-global exponential stability designs, these local algorithms can stabilize much more difficult systems, but the results only hold on smaller regions of the state space.


Chapter 5

Discrete Time Containment & Stability

Up to this point, all of the applications of SOS programming that we have considered have been for continuous time polynomial systems; now we will look at similar problems for discrete time polynomial systems. As opposed to the continuous time case, in discrete time we can easily make arguments about set invariance without using Lyapunov functions. However, these set invariance results do not provide any information about the system's stability, so we will use a Lyapunov function approach to prove stability. Unfortunately, due to the way the discrete time Lyapunov approach proves stability, we will not be able to use SOS programming to design controllers for discrete time polynomial systems.


5.1 Set Invariance for Discrete Time Systems

Consider the system

    x_{k+1} = f(x_k)    (5.1)

with f ∈ R_n^n and f(0) = 0. We would like to know if a region of the state space X ⊂ R^n is invariant under the action of f. We know that X is invariant under f if x ∈ X ⇒ f(x) ∈ X. If we define X as a semialgebraic set

    X := {x ∈ R^n | p(x) ≤ β}

we can check invariance with the following set containment constraint

    {x ∈ R^n | p(x) ≤ β} ⊆ {x ∈ R^n | p(f(x)) ≤ β}    (5.2)

We can check this constraint with a generalized S-procedure from §2.2.1, which asks if there exists s ∈ Σ_n such that

    (β − p ∘ f) − s(β − p) ∈ Σ_n    (5.3)

To have a chance of making this constraint feasible we will need to pick s such that

    deg(s p) ≥ deg(p ∘ f)

which ensures that the positive term s p has the highest degree. Additionally, we can look for the largest invariant region by adding a cost function of β to the feasibility problem (5.3).

5.1.1 Set Invariance Example

Consider the system

    x_{k+1} = −x_k^4 + x_k^2
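The fixed points of this map solve x = −x^4 + x^2, i.e. x(x^3 − x + 1) = 0, so they can be located numerically before any SOS program is run. A quick sketch, assuming numpy (which is not part of the thesis's Matlab toolchain):

```python
import numpy as np

# fixed points of x_{k+1} = -x^4 + x^2 satisfy x^4 - x^2 + x = 0,
# i.e. x = 0 or x^3 - x + 1 = 0
roots = np.roots([1.0, 0.0, -1.0, 1.0])          # coefficients of x^3 - x + 1
real_roots = roots[np.abs(roots.imag) < 1e-9].real

print(real_roots)  # one real root, near -1.325
```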


The system has fixed points at x ∈ {0, −1.325}, and its linearization about the origin gives no information about the system. Fixing p(x) = x^2, we would like to find the largest value of β such that the constraint (5.3) holds. With d_s = 6, making both terms in (5.3) of degree 8, we find the largest value of β to be 1.75, which implies that the set

    {x ∈ R | −1.322 ≤ x ≤ 1.322}

is invariant under the system's dynamics. Since this region pushes up against the fixed point, we know that we have indeed expanded the set {x ∈ R | p(x) ≤ β} as much as is possible.

5.1.2 Set Invariance under Disturbances

The check above is useful to determine whether a region is invariant under a system's dynamics; however, it is often more useful to know if the region remains invariant when a disturbance is introduced into the system. For example, in the receding horizon control strategy put forward in [14], one of the requirements for it to work is that there must exist a region of state space that is invariant under the action of the dynamics and some set of disturbances.

We can expand the previous invariance check to include external disturbances by checking if the system

    x_{k+1} = f(x_k, w_k)    (5.4)

with w_k ∈ W ⊂ R^m, f ∈ R_{n+m}^n and f(0, 0) = 0 satisfies

    x ∈ X, w ∈ W ⇒ f(x, w) ∈ X


If we describe W with a semialgebraic set

    W := {w ∈ R^m | q(w) ≤ γ}

then we can write an analog of (5.2):

    {x ∈ R^n, w ∈ R^m | p(x) ≤ β} ∩ {x ∈ R^n, w ∈ R^m | q(w) ≤ γ}
        ⊆ {x ∈ R^n, w ∈ R^m | p(f(x, w)) ≤ β}

which, again, can be checked with a generalized S-procedure from §2.2.1:

    ∃ s_1, s_2 ∈ Σ_{n+m}  s.t.
    (β − p ∘ f) − s_1(β − p) − s_2(γ − q) ∈ Σ_{n+m}    (5.5)

We can also add a cost function to (5.5) to find the maximum allowable disturbance (max γ), the smallest invariant set (min β), or any other combination of these objectives. However, for feasibility we will need the highest degree SOS term to be positive, or

    max{deg(s_1 p), deg(s_2 q)} ≥ deg(p ∘ f)

This inequality again presents us with potentially very large optimization problems, since s_1, s_2 have potentially large degree and are now polynomials in both x and w.

5.1.3 Set Invariance under Disturbances Example

Consider the previous containment example, except now subject it to a disturbance:

    x_{k+1} = −x_k^4 + x_k^2 + w_k


Fixing p(x) = x^2 and q(w) = w^2, we fix values of β ∈ [0, 1.75] and, for each of these values, search for the largest value of γ such that the constraint (5.5) holds.

Additionally, since w enters the dynamics linearly, we can solve for the exact value of γ for a given β. Noting that x^2 ≤ β and w^2 ≤ γ are equivalent to |x| ≤ √β and |w| ≤ √γ, respectively, we know that for |x| ≤ √β the invariance relation becomes

    −√β + √γ ≤ −x^4 + x^2 ≤ √β − √γ

Thus we can solve for the exact value of γ with

    γ(β) = ( max( √β − max_{|x| ≤ √β} |−x^4 + x^2| , 0 ) )^2

We can now compare the SOS programming approach to the exact results. Figure 5.1 gives the exact results as a solid line, while the largest values of γ using the optimization approach of (5.5), with d_s1 = d_s2 = 6, are plotted as stars. From the plot, we can tell that the SOS approach provides a very good lower bound and that we should pick β ≈ 1.2 to maximize the magnitude of the disturbance that the system can reject while remaining in its original invariant set.

5.2 Discrete Time Stability Background

Remembering that the system under consideration is (5.1),

    x_{k+1} = f(x_k)

with f ∈ R_n^n and f(0) = 0, we can form a Lyapunov argument for asymptotic stability with the following theorem. The discrete time version of the asymptotic stability theorem


is identical to the continuous time one, Theorem 10, except that the conditions on V̇ have been replaced with conditions on (V ∘ f − V).

[Figure 5.1 appears here: γ (vertical axis) versus β (horizontal axis).]

Figure 5.1: The maximal γ for which (5.5) is feasible for given β are plotted as stars, while the solid line shows the exact results.

Theorem 11 Let D ⊂ R^n be a domain containing the equilibrium point x = 0 of the system (5.1). Let V : D → R be a continuously differentiable function such that

    V(0) = 0
    V(x) > 0 on D \ {0}

and

    V(f(x)) − V(x) < 0 on D \ {0}

then the system (5.1) is asymptotically stable about x = 0. Moreover, any region Ω_β := {x ∈ R^n | V(x) ≤ β} such that Ω_β ⊆ D describes a positively invariant region contained in the equilibrium point's domain of attraction.
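Theorem 11 can be spot-checked on the scalar example of §5.1.1 with the candidate V(x) = x^2: on the invariant interval found there, V(f(x)) − V(x) is negative away from the origin. A numerical sketch, assuming numpy (sampling only illustrates the conditions; it is not a proof — that is what the SOS certificates provide):

```python
import numpy as np

f = lambda x: -x**4 + x**2          # system map from Section 5.1.1
V = lambda x: x**2                  # candidate Lyapunov function

# sample the punctured domain D \ {0} with D = {|x| <= 1.32}
xs = np.linspace(-1.32, 1.32, 4001)
xs = xs[np.abs(xs) > 1e-6]

assert V(0.0) == 0.0
assert np.all(V(xs) > 0)                    # V positive on D \ {0}
assert np.all(V(f(xs)) - V(xs) < 0)         # V decreases along trajectories
```

The asserts pass because V(f(x)) − V(x) = x^2 ( (x − x^3)^2 − 1 ), and |x − x^3| stays below 1 on the sampled interval.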


Let i be the iteration number, starting at one. Denote the candidate Lyapunov function V^(i=0) and pick the maximum degrees of the Lyapunov function, the SOS multipliers and the l polynomials to be d_V; d_s2, d_s3, d_s4, d_s6, d_s7, d_s8; and d_l1, d_l2 respectively. Pick the maximum degrees for the level set maximization problem (4.2) and denote them d_ŝ2, d_ŝ3 and d_ŝ4. Fix l_i(x) = ε Σ_{j=1}^n x_j^{d_li} for i = 1, 2 and some ε > 0. Additionally set β^(i=0) = 0.

1. Set V = V^(i−1) and solve the linesearch on β where s_2 ∈ Σ_{n,d_s2}, s_3 ∈ Σ_{n,d_s3}, s_4 ∈ Σ_{n,d_s4}, s_6 ∈ Σ_{n,d_s6}, s_7 ∈ Σ_{n,d_s7} and s_8 ∈ Σ_{n,d_s8}:

       max_{s_2,s_3,s_4,s_6,s_7,s_8}  β
       s.t.  −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n                         (5.6)
             −(β − p)s_6 − (V ∘ f − V)s_7 − (V ∘ f − V)(β − p)s_8 − l_2 ∈ Σ_n

   Set s_3^(i) = s_3, s_4^(i) = s_4, s_7^(i) = s_7 and s_8^(i) = s_8. Continue to step 2.

2. Set s_3 = s_3^(i), s_4 = s_4^(i), s_7 = s_7^(i) and s_8 = s_8^(i). Solve the linesearch on β where V ∈ R_{n,d_V} with V(0) = 0, s_2 ∈ Σ_{n,d_s2} and s_6 ∈ Σ_{n,d_s6}:

       max_{V,s_2,s_6}  β
       s.t.  −(β − p)s_2 + V s_3 + V(β − p)s_4 − l_1 ∈ Σ_n                         (5.7)
             −(β − p)s_6 − (V ∘ f − V)s_7 − (V ∘ f − V)(β − p)s_8 − l_2 ∈ Σ_n

   Set β^(i) = β and V^(i) = V. If β^(i) − β^(i−1) is less than a specified tolerance go to step 3, else increment i and go to step 1.

3. Set V = V^(i) and β = β^(i). Solve the linesearch on c where ŝ_2 ∈ Σ_{n,d_ŝ2}, ŝ_3 ∈ Σ_{n,d_ŝ3}


and ŝ_4 ∈ Σ_{n,d_ŝ4}:

       max_{ŝ_2,ŝ_3,ŝ_4}  c
       s.t.  −ŝ_2(c − V) − ŝ_3(p − β) − ŝ_4(p − β)(c − V) − (p − β)^2 ∈ Σ_n

   Set c_max = c. The set {x ∈ R^n | V(x) ≤ c_max} is the resulting estimate of the region of attraction, for it is positively invariant, contained within D, and all of its points converge to the fixed point x = 0.

Remark 10 (Properties of the expanding D algorithm):

• If the algorithm is started from a feasible point it will always remain feasible. If the system's linearization is stable, then the linearization's quadratic Lyapunov function will always work, since it certifies stability of the nonlinear system near the origin.

• Note that V need not be positive definite, since it is required to be positive only on D \ {0}. However, to be positive over a region about the origin, V must have no linear terms and d_V should be even. Clearly, if V were chosen to be positive definite, then the first constraint in (5.6) and (5.7) could simply become V − l_1 ∈ Σ_n.

• Since the s's, l's and the ŝ's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:

  For the first SOS constraint:

      max{deg(p s_2), deg(V s_3)} ≥ d_l1
      max{deg(p s_2), deg(V s_3)} ≥ deg(V p s_4)


  For the second SOS constraint:

      deg(p s_6) ≥ d_l2
      deg(p s_6) ≥ max{deg((V ∘ f − V)s_7), deg((V ∘ f − V)p s_8)}

  For the c maximization constraint:

      max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(p^2)
      max{deg(ŝ_2 V), deg(ŝ_4 p V)} ≥ deg(ŝ_3 p)

5.3.2 Expanding Interior Algorithm

We can also adapt Algorithm 6 for discrete time systems by following the steps that we used to derive it in §4.2.2. Define a variable sized region

    P_β := {x ∈ R^n | p(x) ≤ β}

for p ∈ Σ_n positive definite, and estimate the region of attraction by maximizing β subject to the constraint that all of the points in P_β converge to the origin under the system's dynamics. Following Theorem 11, if we define

    D := {x ∈ R^n | V(x) ≤ 1}

for an unknown candidate Lyapunov function, then P_β must be contained in D. Additionally we need

    {x ∈ R^n | V(x) ≤ 1} \ {0} ⊆ {x ∈ R^n | V(f(x)) − V(x) < 0}

Since V is unknown, to ensure that V is positive on D \ {0} we require that V be positive everywhere away from x = 0. With this set of requirements, we can propose the discrete time version of the expanding interior algorithm.
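For the special case of quadratic data, p(x) = x^T Q x and V(x) = x^T P x with Q and P positive definite, the containment P_β ⊆ D = {V ≤ 1} holds exactly when β · λ_max(Q^{−1/2} P Q^{−1/2}) ≤ 1, so the largest admissible β is available in closed form. A sketch of this special case, assuming numpy (the general polynomial case requires the S-procedure certificates above):

```python
import numpy as np

def largest_beta(Q, P):
    """Largest beta with {x' Q x <= beta} contained in {x' P x <= 1},
    for Q, P symmetric positive definite."""
    # containment holds iff beta * lambda_max(L^{-1} P L^{-T}) <= 1, Q = L L'
    L = np.linalg.cholesky(Q)
    Linv = np.linalg.inv(L)
    lam = np.linalg.eigvalsh(Linv @ P @ Linv.T).max()
    return 1.0 / lam

# example: P = I/4 means {x' P x <= 1} is the ball of radius 2,
# so with Q = I the largest beta is 4 (ball of radius sqrt(beta) = 2)
print(largest_beta(np.eye(2), np.eye(2) / 4))  # -> 4.0
```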


Algorithm 12 (Expanding Interior) An iterative search to expand the region P_β, and thus the region D := {x ∈ R^n | V(x) ≤ 1} in Theorem 11, starting from a positive definite candidate Lyapunov function.

Let i be the iteration number, starting at one. Denote the candidate Lyapunov function V^(i=0) and pick the maximum degrees of the Lyapunov function, the SOS multipliers and the l polynomials to be d_V; d_s6, d_s8, d_s9; and d_l1, d_l2 respectively. Fix l_i(x) = ε Σ_{j=1}^n x_j^{d_li} for i = 1, 2 and some ε > 0. Additionally set β^(i=0) = 0.

1. Set V = V^(i−1) and solve the linesearch on β where s_6 ∈ Σ_{n,d_s6}, s_8 ∈ Σ_{n,d_s8} and s_9 ∈ Σ_{n,d_s9}:

       max_{s_6,s_8,s_9}  β
       s.t.  −( (1 − V)s_8 + (V ∘ f − V)s_9 + l_2 ) ∈ Σ_n                          (5.8)
             −( (β − p)s_6 + (V − 1) ) ∈ Σ_n

   Set s_8^(i) = s_8 and s_9^(i) = s_9. Continue to step 2.

2. Set s_8 = s_8^(i) and s_9 = s_9^(i). Solve the linesearch on β where V ∈ R_{n,d_V} with V(0) = 0 and s_6 ∈ Σ_{n,d_s6}:

       max_{V,s_6}  β
       s.t.  V − l_1 ∈ Σ_n                                                         (5.9)
             −( (1 − V)s_8 + (V ∘ f − V)s_9 + l_2 ) ∈ Σ_n
             −( (β − p)s_6 + (V − 1) ) ∈ Σ_n


   Set β^(i) = β and V^(i) = V. If β^(i) − β^(i−1) is less than a specified tolerance go to step 3, else increment i and go to step 1.

3. The set {x ∈ R^n | V^(i)(x) ≤ 1} contains P_{β^(i)} and is the largest estimate of the fixed point's region of attraction.

Remark 11 (Properties of the expanding interior algorithm):

• If the algorithm is started from a feasible point it will always remain feasible. If the system's linearization is stable, then the linearization's quadratic Lyapunov function will always work, since it certifies stability of the nonlinear system near the origin.

• Note that, unlike Algorithm 11, V must be positive definite, so it should have no linear terms and d_V should be even.

• Since the s's and the l's are SOS, they must be of even degree. Additionally, the degrees need to be chosen so that the following relations hold:

  For the first SOS constraint:

      d_V = d_l1

  For the second SOS constraint:

      deg(p s_6) ≥ d_V

  For the third SOS constraint:

      deg(V s_8) ≥ deg((V ∘ f − V)s_9)
      deg(V s_8) = d_l2
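These degree relations are pure bookkeeping and are easy to mechanize before any SDP is formed. A sketch of such a check, with degrees tracked as plain integers; the values tested below are the expanding interior choices used in the next subsection's example, where deg p = 2 and a quadratic V composed with a cubic f gives deg(V ∘ f − V) = 6 (an assumption derived from the example, not code from the thesis):

```python
def check_expanding_interior_degrees(d_V, d_s6, d_s8, d_s9,
                                     d_l1, d_l2, deg_p, deg_Vf_minus_V):
    """Return True iff the degree relations of Remark 11 hold."""
    return (d_V == d_l1                                   # first constraint
            and deg_p + d_s6 >= d_V                       # second constraint
            and d_V + d_s8 >= deg_Vf_minus_V + d_s9       # third constraint
            and d_V + d_s8 == d_l2)

# degree choices from the pendulum example of Section 5.3.3
assert check_expanding_interior_degrees(d_V=2, d_s6=0, d_s8=4, d_s9=0,
                                        d_l1=2, d_l2=6,
                                        deg_p=2, deg_Vf_minus_V=6)
```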


5.3.3 Estimating the Region of Attraction Example

To compare the effectiveness of Algorithm 11 (Expanding D) and Algorithm 12 (Expanding Interior), consider estimating the region of attraction for the following system, a sampled version of the damped pendulum system (4.11),

    [x_1; x_2]_{k+1} = [x_1; x_2]_k + t_s [ x_2 ; −(8/10)x_2 − (x_1 − x_1^3/6) ]_k    (5.10)

where t_s is the sampling time. As with the continuous time version, the system has three equilibrium points, (0, 0) and (±√6, 0); the linearization about the first is stable while the others are not. Following the continuous time version we pick

    p(x) = x_1^2 + x_2^2

and we use the system's linearization A_f to construct a quadratic Lyapunov function V(x) = x*Px where

    P ≻ 0
    A_f* P A_f − P ≺ 0

Solving this feasibility problem we find

    P = [ 2.07  0.87
          0.87  2.13 ]

which we will use to form the candidate Lyapunov function.

If we set the stopping tolerance to β^(i) − β^(i−1) = 0.01 and t_s = 1/10, and set the degrees as below, we can compare Algorithms 11 and 12 with their continuous time counterparts, Algorithms 5 and 6. For the expanding D algorithm, set d_V = 2, d_s2 = d_s3 = d_s7 = 2,


d_s4 = d_s8 = 0, d_s6 = 6, d_l1 = 4 and d_l2 = 8. For the expanding interior algorithm set d_V = 2, d_s8 = 4, d_s6 = d_s9 = 0, d_l1 = 2 and d_l2 = 6.

The expanding D algorithm converges to a radius of √β = 2.41 after 2 iterations, while the expanding interior algorithm converges so that P_β has a radius of √β = 2.05 after 12 iterations. These numbers are essentially the same as in the continuous time case, which shows that the expanding regions are both as large as possible. Additionally, the invariant regions given by the Lyapunov function level sets that contain and are contained by P_β and D, respectively, are visually identical to their continuous time analogs shown in Figure 4.1.

Since the discrete time and continuous time results line up almost perfectly, we can see that the Lyapunov design techniques work in similar manners for both. In the continuous time case, it was straightforward to extend the Lyapunov design algorithms to include controller design, and we would like to extend this discrete/continuous stability parallel to controller design. However, we cannot extend these techniques to the discrete time controller synthesis problem: we would be searching for K to make (V ∘ (f + gK) − V) negative on some region, which is not linear in the coefficients of K even when V is fixed, so we would not have an SOS problem.

5.4 Chapter Summary

In this chapter we investigated using SOS optimization techniques to answer system theoretic questions for discrete time systems with polynomial system maps. First, we broke with the continuous time approach by studying set invariance without using a Lyapunov function. We illustrated a way to directly show that a given set is invariant under the system's dynamics, as well as under a bounded disturbance. Simple examples were provided for both approaches to demonstrate their utility. In the set invariance under disturbances example, we also found the exact solution to the given problem to illustrate the quality of our approach.

We then extended the local stability algorithms, expanding D and expanding interior, to discrete time systems. We used a sampled version of the continuous time local stability example to show that the algorithms function in essentially the same manner. Additionally, we showed the fundamental limitation of our Lyapunov based SOS optimization approach when it is applied to discrete time systems, which keeps us from being able to move from investigating a system's stability to designing controllers for it.
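As a small computational coda to the pendulum example of §5.3.3: the quadratic Lyapunov candidate there comes from a linear feasibility problem that can be reproduced with a standard discrete Lyapunov solve. A sketch assuming scipy is available; with Q = I this returns one particular feasible P (not necessarily the P reported in the text, which is just one point of the feasible set), and then verifies the two conditions:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

ts = 0.1
# linearization of the sampled pendulum (5.10) about the origin
A = np.eye(2) + ts * np.array([[0.0, 1.0],
                               [-1.0, -0.8]])

# solve A' P A - P = -Q with Q = I
Q = np.eye(2)
P = solve_discrete_lyapunov(A.T, Q)

# verify the feasibility conditions P > 0 and A' P A - P < 0
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < 0)
```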


Chapter 6

Conclusions and Recommendations

This thesis considered the problem of using a Lyapunov based approach to analyze the stability and performance of polynomial systems and to synthesize polynomial state feedback and output feedback controllers. The polynomial Lyapunov approach was made computationally tractable by invoking the Positivstellensatz to form suitable sufficient conditions that could be iteratively solved with convex optimization.

In Chapter 3, we illustrated an iterative approach to finding Lyapunov functions and controllers to prove that a closed loop polynomial system is semi-globally exponentially stable. Additionally, we provided a Lyapunov based approach to bound the system's induced L_2 gain from disturbance to output. However, the global nature of these results somewhat limits their application, since in many cases the system either has fixed points away from the origin or is a model of an underlying system that is only valid on a fixed region of state space.

In light of these restrictions, we developed two different algorithmic approaches


to do localized analysis and controller synthesis in Chapter 4. The expanding D approach works to find the largest estimate of the system's region of attraction by expanding the region where the Lyapunov conditions hold, while the expanding interior approach searches for an invariant set that contains an expanding region. The algorithms both have strengths and weaknesses, and these are often complementary. Also, we expanded our approach to disturbance analysis by providing bounds on the reachable set under a unit energy disturbance, proving set invariance under peak bounded disturbances, and introducing a local induced L_2 gain bound.

We then extended the local continuous time stability results, using the two different approaches, to discrete time systems in Chapter 5. Due to the nature of the Lyapunov theorem in discrete time, we were unable to use similar methods to do controller design. However, due to the discrete time setup, we were able to find non-Lyapunov based conditions to ensure that a region of initial conditions is invariant under the system's dynamics and a given set of disturbances.

The results presented in this thesis leave several topics open for future research. Two of them are discussed below.

• Integrate Performance Measures into Controller Design:

In this thesis, we have constructed a series of algorithms that find Lyapunov functions and controllers to demonstrate either global or local stability, as well as techniques for analyzing a system's performance under disturbances. However, we have not integrated these performance measures into the controller design procedures. If controller design and performance analysis were performed simultaneously, then it


would be possible to use the design algorithm to optimize some aspect of the system's performance, instead of just checking the performance after the controller is designed. With a joint approach to these problems we could provide a valuable tool that would allow us to design robust controllers for polynomial systems with built-in performance guarantees.

• Minimize the Number of Non-Zero Controller Terms:

If we were to design a high degree polynomial controller, we would be confronted with the problem of trying to implement a hugely complex polynomial with a large number of terms. Are all these terms necessary, or can some of them be set to zero? Clearly, we can run the design algorithm once including all of the monomials, after which we can run the algorithm again with all of the monomials that have small coefficients removed. In the global state feedback controller design example this technique worked, but this approach does not guarantee that the resulting controller has the fewest number of non-zero terms. If we could find a way to set up the controller design algorithm to minimize the number of non-zero controller terms, then we could, possibly greatly, reduce the complexity of the controller implementation.


Bibliography

[1] C. Berg. The multidimensional moment problem and semigroups. In Proceedings of Symposia in Applied Mathematics, volume 37, pages 110–124, 1987.

[2] J. Bernussou, P. Peres, and J. Geromel. A linear programming oriented procedure for quadratic stabilization of uncertain systems. Systems & Control Letters, 13:65–72, 1989.

[3] J. Bochnak, M. Coste, and M.-F. Roy. Géométrie Algébrique Réelle. Springer, Berlin, 1986.

[4] N. Bose and C. Li. A quadratic form representation of polynomials of several variables and its applications. IEEE Transactions on Automatic Control, 14:447–448, 1968.

[5] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, 1994.

[6] C. Caramanis. Non-convex optimization via real algebraic geometry. Available from http://web.mit.edu/6.962/www/www_fall_2001/schedule.html, 2002.


[7] M. Choi, T. Lam, and B. Reznick. Sums of squares of real polynomials. In Proceedings of Symposia in Pure Mathematics, volume 58(2), pages 103–126, 1995.

[8] J. Doyle and C. Chu. Matrix interpolation and H∞ performance bounds. In Proceedings of the American Control Conference, pages 129–134, 1985.

[9] K. Gatermann and P. Parrilo. Symmetry groups, semidefinite programs, and sums of squares. Technical Report arXiv:math.AC/0211450, ETH Zurich, 2002.

[10] T. Georgiou and M. Smith. Optimal robustness in the gap metric. IEEE Transactions on Automatic Control, 35(6):673–686, 1990.

[11] W. Hahn. Stability of Motion. Springer-Verlag, 1967.

[12] D. Henrion and J. Lasserre. GloptiPoly: Global optimization over polynomials with Matlab and SeDuMi. In Proceedings of the Conference on Decision and Control, pages 747–752, 2002.

[13] Z. Jarvis-Wloszek, R. Feeley, W. Tan, K. Sun, and A. Packard. Some controls applications of sum of squares programming. In Proceedings of the Conference on Decision and Control, pages 4676–4681, 2003.

[14] Z. Jarvis-Wloszek, D. Philbrick, M. Kaya, A. Packard, and G. Balas. Control with disturbance preview and on-line optimization. IEEE Transactions on Automatic Control, to appear.

[15] T. Johansen. Computation of Lyapunov functions for smooth nonlinear systems using convex optimization. Automatica, 36:1617–1626, 2000.


137[16] H. Khalil. Nonlinear Systems. Prentice Hall, Upper Saddle River, NJ, 2nd edition,1996.[17] P. Kokotović. Constructive nonlinear control: Progress in the 90’s. In Proceedings ofthe IFAC World Congress, pages 49–77, 1999.[18] J. Lasserre. Global optimization with polynomials <strong>and</strong> the problem of moments. SIAMJournal of Optimization, 11(3):796–817, 2000.[19] T. Motzkin. The arithmetic-geometric inequality. In Inequalities, Proceedings of theSymposium at Wright-Patterson AFB, 1965, pages 202–224. Academic Press, 1967.[20] A. Papachristodoulou <strong>and</strong> S. Prajna. On the construction of <strong>Lyapunov</strong> functions usingthe sum of squares decomposition. In Proceedings of the Conference on Decision <strong>and</strong>Control, pages 3482–3487, 2002.[21] P. Parrilo. Semidefinite programming relaxations <strong>for</strong> semialgebraic problems. MathematicalProgramming Ser. B, 96:2:293–320, 2003.[22] V. Powers <strong>and</strong> T. Wörmann. An algorithm <strong>for</strong> sums of squares of real polynomials.Journal of pure <strong>and</strong> applied algebra, 127:99–104, 1998.[23] S. Prajna, A. Papachristodoulou, <strong>and</strong> P. Parrilo. Introducing SOSTOOLS: A generalpurpose sum of squares programming solver.In Proceedings of the Conference onDecision <strong>and</strong> Control, pages 741–746, 2002.[24] S. Prajna, A. Papachristodoulou, <strong>and</strong> P. A. Parrilo. SOSTOOLS:


138Sum of squares optimization toolbox <strong>for</strong> Matlab, 2002. Available fromhttp://www.cds.caltech.edu/sostools.[25] M. Putinar. Positive polynomials on compact semialgebraic sets. Indiana UniversityMathematical Journal, 42:969–984, 1993.[26] A. Rantzer. On convexity in stabilization of nonlinear systems. In Proceedings of theConference on Decision <strong>and</strong> Control, pages 2941–2945, 2000.[27] A. Rantzer. A dual to <strong>Lyapunov</strong>’s stability theorem. Systems & Control Letters,42:161–168, 2001.[28] B. Reznick. Extremal psd <strong>for</strong>ms with few terms. Duke Mathematical Journal, 45:363–374, 1978.[29] B. Reznick. Some concrete aspects of Hilbert’s 17th problem. In Contemporary Mathematics,volume 253, pages 251–272, 2000.[30] K. Schmüdgen. The k-moment problem <strong>for</strong> compact semialgebraic sets. MathematischeAnnalen, 289:203–206, 1991.[31] G. Stengle. A nullstellensatz <strong>and</strong> a positivstellensatz in semialgebraic geometry. MathematischeAnnalen, 207:87–97, 1974.[32] G. Stengle. Complexity estimates <strong>for</strong> the Schmüdgen postivstellensatz. Journal ofComplexity, 12:167–174, 1996.[33] J. Sturm. Using SeDuMi 1.02, a Matlab toolbox <strong>for</strong> optimizaton over symmetric


139cones. Optimization Methods <strong>and</strong> Software, 11-12:625–653, 1999. Available fromhttp://fewcal.cub.nl/sturm/software/sedumi.html.[34] S. Tzafestas <strong>and</strong> K. Anagnostou. Stabilization of singularly perturbed strictly bilinearsystems. IEEE Transactions on Automatic Control, 10:943–946, 1984.[35] L. V<strong>and</strong>enberghe <strong>and</strong> S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95,1996.[36] A. Zelentsovsky. Nonquadratic <strong>Lyapunov</strong> functions <strong>for</strong> robust stability analysis of linearuncertain systems. IEEE Transactions on Automatic Control, 39(1):135–138, 1994.


Appendix A

Semidefinite Programming

Semidefinite programming (SDP) considers the following optimization problem

    min_{x ∈ R^n}  c^* x
    s.t.  F(x) := F_0 + x_1 F_1 + · · · + x_n F_n ≽ 0

with c ∈ R^n, and F_i = F_i^* ∈ R^{p×p} for i = 0, 1, . . . , n. The most important theoretical property of SDP is its convexity, which allows it to be solved numerically with great efficiency [33]. SDP has many other useful theoretical properties, which are covered in detail in the survey [35].

Often SDP is used to solve the feasibility problem: does there exist x ∈ R^n such that F(x) ≽ 0? The generalized inequality F(x) ≽ 0 is linear, or strictly speaking affine, in x, so the feasibility problem is often referred to as a linear matrix inequality (LMI). One property of LMIs is that a set of them can be turned into a single larger block diagonal LMI.
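The affine structure of F(x) can be illustrated numerically. The sketch below (plain NumPy, not part of the thesis; the helper names `F` and `is_psd` are illustrative) evaluates the affine map and tests membership in the positive semidefinite cone by an eigenvalue check, rather than by calling an SDP solver:

```python
import numpy as np

def F(x, F0, Fs):
    """Evaluate the affine matrix map F(x) = F0 + x_1 F_1 + ... + x_n F_n."""
    M = F0.copy()
    for xi, Fi in zip(x, Fs):
        M = M + xi * Fi
    return M

def is_psd(M, tol=1e-9):
    """Numerically check M >= 0: smallest eigenvalue of the symmetric part."""
    return bool(np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol)

# Toy feasibility problem: F(x) = diag(x_1, 1 - x_1) >= 0, feasible iff 0 <= x_1 <= 1.
F0 = np.diag([0.0, 1.0])
F1 = np.diag([1.0, -1.0])
print(is_psd(F([0.5], F0, [F1])))  # feasible point
print(is_psd(F([2.0], F0, [F1])))  # infeasible point
```

An eigenvalue check can only verify a candidate x; deciding feasibility over all x is what requires a solver such as SeDuMi [33].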


Example 4 Consider the LMIs F(x) ≽ 0 and G(x) ≽ 0. They can be written as the single block diagonal LMI

    diag(F(x), G(x)) = diag(F_0, G_0) + x_1 diag(F_1, G_1) + · · · + x_n diag(F_n, G_n) ≽ 0

To illustrate a less obvious use of LMIs, consider the following Lyapunov stability argument.

Example 5 Consider the linear system ẋ = Ax with x ∈ R^n and the Lyapunov function V(x) = x^* P x, with P unknown. If a P can be picked such that V(x) is positive definite and −V̇(x) is positive semidefinite, then the system is stable. Noting that V̇(x) = x^*(A^* P + P A)x, the problem can be posed as: does there exist a P such that

    P ≻ 0
    −(A^* P + P A) ≽ 0

If P is represented on the standard basis as Σ_{i=1}^m p_i E_i, with m := (n + 1)n/2, then the LMIs above become

    diag( Σ_{i=1}^m p_i E_i − εI,  −(A^* Σ_{i=1}^m p_i E_i + Σ_{i=1}^m p_i E_i A) ) ≽ 0

for any ε > 0. The question of what value to use for ε can also be incorporated to give the following SDP in the variables ε, p_1, . . . , p_m

    max_{ε, p_1,...,p_m}  ε
    s.t.  p_1 diag(E_1, −(A^* E_1 + E_1 A)) + · · · + p_m diag(E_m, −(A^* E_m + E_m A)) + ε diag(−I, 0) ≽ 0

which, if the maximum value for ε is strictly positive, gives a matrix P such that the Lyapunov function V demonstrates the stability of A.
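For a concrete stable A, a certificate P of the kind Example 5 searches for can be exhibited without an SDP solver. The NumPy sketch below (illustrative, not from the thesis) solves the Lyapunov equation A^T P + P A = −I directly by vectorization and then verifies both LMIs of Example 5 by eigenvalue tests:

```python
import numpy as np

# A stable linear system xdot = A x (eigenvalues of A are -1 and -2).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]

# Solve A^T P + P A = -I by vectorization:
# vec(A^T P + P A) = (kron(I, A^T) + kron(A^T, I)) vec(P).
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
p = np.linalg.solve(M, -np.eye(n).reshape(-1))
P = p.reshape(n, n)
P = (P + P.T) / 2  # symmetrize against round-off

# Certify the two LMIs from Example 5: P > 0 and -(A^T P + P A) >= 0.
assert np.linalg.eigvalsh(P).min() > 0
assert np.linalg.eigvalsh(-(A.T @ P + P @ A)).min() >= -1e-9
print(P)  # P = [[1.25, 0.25], [0.25, 0.25]] for this A
```

Solving the Lyapunov equation picks one feasible P for a fixed right-hand side; the SDP formulation in Example 5 instead searches over all P (and ε) simultaneously, which is what generalizes to the polynomial systems treated in the body of the thesis.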
