
NONLINEAR CONTROLLER COMPARISON ON A BENCHMARK SYSTEM

by

J. Willard Curtis III

A thesis submitted to the faculty of

Brigham Young University

in partial fulfillment of the requirements for the degree of

Master of Science

Department of Electrical and Computer Engineering

Brigham Young University

April 2000


Copyright © 2000 J. Willard Curtis III

All Rights Reserved


BRIGHAM YOUNG UNIVERSITY

GRADUATE COMMITTEE APPROVAL

of a thesis submitted by

J. Willard Curtis III

This thesis has been read by each member of the following graduate committee and by majority vote has been found to be satisfactory.

Date    Randy W. Beard, Chair

Date    Wynn Stirling

Date    Timothy McLain


BRIGHAM YOUNG UNIVERSITY

As chair of the candidate's graduate committee, I have read the thesis of J. Willard Curtis III in its final form and have found that (1) its format, citations, and bibliographical style are consistent and acceptable and fulfill university and department style requirements; (2) its illustrative materials, including figures, tables, and charts, are in place; and (3) the final manuscript is satisfactory to the graduate committee and is ready for submission to the university library.

Date    Randy W. Beard
        Chair, Graduate Committee

Accepted for the Department
        A. Lee Swindlehurst
        Graduate Coordinator

Accepted for the College
        Douglas M. Chabries
        Dean, College of Engineering and Technology


ABSTRACT

NONLINEAR CONTROLLER COMPARISON ON A BENCHMARK SYSTEM

J. Willard Curtis III

Department of Electrical and Computer Engineering

Master of Science

The quest for practical, robust, and effective nonlinear feedback controllers has been an area of active research in recent years. One of the challenges in this research is how to evaluate the performance of new nonlinear control strategies, given that the set of nonlinear systems is so varied. A benchmark problem for nonlinear systems has been proposed in order to provide a standard testbed for newly developed nonlinear control algorithms. This benchmark problem is an ideal system on which to apply the Successive Galerkin Approximation to the optimal nonlinear full-state feedback problem. The Successive Galerkin Approximation (SGA) technique provides an approximation to the solution of the Hamilton-Jacobi equations associated with optimal nonlinear control theory. The main contribution of this thesis is the comparison of the SGA algorithm to four other control methodologies, each of which is implemented on a hardware system that can be modeled as the nonlinear benchmark problem. The results show that the SGA algorithms provide excellent performance and good robustness properties when applied to this benchmark system, outperforming a simple passivity-based control, two standard linearized controls, and a simplified backstepping control.


ACKNOWLEDGMENTS

I would like to acknowledge my advisor Dr. Beard for all of his assistance and guidance in the direction of the research that culminated in this thesis. He has been a wonderful mentor, always willing to help me understand the intricacies of the mathematics and the subtleties of the field of robust control. He has always been supportive of my ideas and a useful source of knowledge and experience.

I would also like to acknowledge the encouragement I received from my other committee members, Dr. McLain and Dr. Stirling, and their support of my endeavors.

I thank my family and friends wholeheartedly: I'm lucky to have a kind and patient family, and I'm indebted to them for their love and support, and for their quiet encouragement of my studies. Thanks to my friends also, for your fellowship. Especially, I'd like to thank Miguel Apeztegia for his friendship and support, and his aid in preparing this thesis.


Contents

Acknowledgments vi

List of Tables ix

List of Figures xii

1 Introduction 1
1.1 Motivation and Problem Description . . . . . . . . . . . . . . . . . . 1
1.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Plant Specifications and Model 7
2.1 Hardware Set-up and Specifications . . . . . . . . . . . . . . . . . . . 7
2.2 Derivation of the Mathematical Model . . . . . . . . . . . . . . . . . 8
2.3 Software Set-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

3 Overview of Control Strategies Implemented on the Flexible Beam System 15
3.1 Linear Quadratic Regulation . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Linear H∞ Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Passivity-Based Control . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4 Backstepping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.5 Successive Galerkin Approximations . . . . . . . . . . . . . . . . . . . 24
3.5.1 The H2 Problem . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.5.2 The H∞ Problem . . . . . . . . . . . . . . . . . . . . . . . . . 27

4 Simulation Results 29
4.1 The Simulated Testbed . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 Evaluation of the Plots in Simulation . . . . . . . . . . . . . . . . . . 31
4.2.1 Linearized Optimal H2 . . . . . . . . . . . . . . . . . . . . . . 31
4.2.2 Linearized H∞ . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2.3 Passivity Based Control . . . . . . . . . . . . . . . . . . . . . 33
4.2.4 Backstepping Algorithm . . . . . . . . . . . . . . . . . . . . . 34
4.2.5 SGA: Nonlinear H2 Optimal Control . . . . . . . . . . . . . . 36
4.2.6 SGA: Nonlinear H∞ Control . . . . . . . . . . . . . . . . . . . 39
4.3 Tabulated Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.4 Tuning and Ease of Implementation . . . . . . . . . . . . . . . . . . . 45
4.5 Robustness Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

5 Experimental Results 49
5.1 Testbed and Open Loop Response . . . . . . . . . . . . . . . . . . . . 49
5.2 Linearized Optimal and Robust Controls . . . . . . . . . . . . . . . . 49
5.3 Passivity Based Control . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.4 Successive Galerkin Approximations . . . . . . . . . . . . . . . . . . . 52
5.5 Tabulated Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

6 Conclusion and Future Work 61
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2 Extensions to this Research . . . . . . . . . . . . . . . . . . . . . . . 62

Bibliography 66


List of Tables

4.1 Tabular Comparison of Simulated Results . . . . . . . . . . . . . . . 45
4.2 Tabular Comparison of Simulated Robustness . . . . . . . . . . . . . 47
5.1 Tabular Comparison of Experimental Results . . . . . . . . . . . . . 59


List of Figures

1.1 TORA System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2.1 Photograph of Flexible Beam System . . . . . . . . . . . . . . . . . . 8
2.2 Mechanical Model of Flexible Beam System . . . . . . . . . . . . . . 9
2.3 Torsional Spring Model of FBS . . . . . . . . . . . . . . . . . . . . . 10
2.4 Translational Oscillation Model for FBS . . . . . . . . . . . . . . . . 11
2.5 Simulink Diagram of FBS . . . . . . . . . . . . . . . . . . . . . . . . 14

4.1 Initial Open Loop Disturbance . . . . . . . . . . . . . . . . . . . . . . 30
4.2 Initial Open Loop Response . . . . . . . . . . . . . . . . . . . . . . . 30
4.3 Linearized Optimal vs. Open Loop Response . . . . . . . . . . . . . . 32
4.4 Linear H∞ vs. Open Loop Response . . . . . . . . . . . . . . . . . . 33
4.5 Linear H∞ vs. Linearized Optimal . . . . . . . . . . . . . . . . . . . 34
4.6 Passivity Based Control vs. Open Loop Response . . . . . . . . . . . 35
4.7 Passivity Based Control vs. Linear Optimal Control . . . . . . . . . . 35
4.8 Passivity Based Control vs. Linear Robust Control . . . . . . . . . . 36
4.9 Backstepping vs. Open Loop Response . . . . . . . . . . . . . . . . . 37
4.10 Backstepping vs. Linear Optimal Control . . . . . . . . . . . . . . . . 37
4.11 Backstepping vs. Linear Robust Control . . . . . . . . . . . . . . . . 38
4.12 Backstepping vs. Passivity Based Control . . . . . . . . . . . . . . . 38
4.13 SGA: Nonlinear H2 vs. Open Loop Response . . . . . . . . . . . . . . 39
4.14 SGA: Nonlinear H2 vs. Linear Optimal Control . . . . . . . . . . . . 40
4.15 SGA: Nonlinear H2 vs. Linear Robust Control . . . . . . . . . . . . . 40
4.16 SGA: Nonlinear H2 vs. Passivity Based Control . . . . . . . . . . . . 41
4.17 SGA: Nonlinear H2 vs. Backstepping Control . . . . . . . . . . . . . 41
4.18 SGA: Nonlinear H∞ vs. Open Loop Response . . . . . . . . . . . . . 42
4.19 SGA: Nonlinear H∞ vs. Linear Optimal Control . . . . . . . . . . . . 42
4.20 SGA: Nonlinear H∞ vs. Linear Robust Control . . . . . . . . . . . . 43
4.21 SGA: Nonlinear H∞ vs. Passivity Based Control . . . . . . . . . . . . 43
4.22 SGA: Nonlinear H∞ vs. Backstepping Control . . . . . . . . . . . . . 44
4.23 SGA: Nonlinear H∞ vs. SGA: Nonlinear H2 . . . . . . . . . . . . . . 44

5.1 Open Loop Response of the FBS . . . . . . . . . . . . . . . . . . . . 50
5.2 Linear Optimal vs. Open Loop . . . . . . . . . . . . . . . . . . . . . 51
5.3 Linear Robust vs. Open Loop . . . . . . . . . . . . . . . . . . . . . . 51
5.4 Linear Robust vs. Linear Optimal . . . . . . . . . . . . . . . . . . . . 52
5.5 Passivity Based vs. Open Loop . . . . . . . . . . . . . . . . . . . . . 53
5.6 Passivity Based vs. Linear Optimal . . . . . . . . . . . . . . . . . . . 53
5.7 Passivity Based vs. Linear Robust . . . . . . . . . . . . . . . . . . . . 54
5.8 SGA: Nonlinear H2 vs. Open Loop . . . . . . . . . . . . . . . . . . . 54
5.9 SGA: Nonlinear H2 vs. Linear Optimal . . . . . . . . . . . . . . . . . 55
5.10 SGA: Nonlinear H2 vs. Linear Robust . . . . . . . . . . . . . . . . . 55
5.11 SGA: Nonlinear H2 vs. Passivity Based . . . . . . . . . . . . . . . . . 56
5.12 SGA: Nonlinear H∞ vs. Open Loop . . . . . . . . . . . . . . . . . . . 56
5.13 SGA: Nonlinear H∞ vs. Linear Optimal . . . . . . . . . . . . . . . . 57
5.14 SGA: Nonlinear H∞ vs. Linear Robust . . . . . . . . . . . . . . . . . 57
5.15 SGA: Nonlinear H∞ vs. Passivity Based . . . . . . . . . . . . . . . . 58
5.16 SGA: Nonlinear H∞ vs. SGA: Nonlinear H2 . . . . . . . . . . . . . . 58


Chapter 1

Introduction

1.1 Motivation and Problem Description

The reliability and effectiveness of feedback control systems has been recognized for centuries: they provide a simple and robust way of regulating a large class of physical systems, and there exists a rich body of mathematical results that enable the systematic design of feedback control laws. A significant limitation of most of the classic results is that they apply to a rather narrow class of systems, namely those which can be modeled by systems of linear differential equations. Despite the fact that many physical plants can be linearized, and thus an adequate small-signal control can be realized for the linearized system, the fact remains that almost all physical systems involve dynamics that cannot be modeled completely by a linear system. Such nonlinear systems have been the subject of much research in the past two decades; researchers have sought to extend the results obtained for linear plants to non-linear plants.

A good example of this is the search for the nonlinear optimal feedback controller. It is well known that the solution of the optimal linear control problem depends upon the solution of an algebraic Riccati equation. For the non-linear case, this result generalizes nicely: the optimal non-linear controller requires the solution of the Hamilton-Jacobi-Bellman (HJB) equation (which was actually discovered before the Riccati equations). This partial differential equation is very difficult to solve, however, and the quest for a reliable and accurate approximation of its solution is an open problem. The Successive Galerkin Approximation (SGA) is one such method of approximation, and the details of its implementation will be described in Chapter 3.
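To make the Riccati connection concrete, consider the scalar plant ẋ = ax + bu with cost ∫(qx² + ru²)dt. Its algebraic Riccati equation, 2aP − (b²/r)P² + q = 0, has a closed-form positive root, and the optimal feedback is u = −Kx with K = bP/r. The sketch below is a generic illustration, not taken from the thesis; the numerical values are arbitrary:

```python
import math

def scalar_lqr(a, b, q, r):
    """Positive root P of the scalar algebraic Riccati equation
    2*a*P - (b**2/r)*P**2 + q = 0, and the optimal gain K = b*P/r."""
    P = r * (a + math.sqrt(a**2 + b**2 * q / r)) / b**2
    K = b * P / r
    return P, K

# Unstable plant xdot = x + u with unit weights q = r = 1.
P, K = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
print(P, K, 1.0 - K)  # closed-loop pole a - b*K comes out negative
```

The closed-loop pole a − bK = −√(a² + b²q/r) is always stable, mirroring the guaranteed stability of LQR in the matrix case; the nonlinear analogue of this construction is exactly what the HJB equation, and hence the SGA, addresses.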



Figure 1.1: TORA System

One of the challenges of evaluating the non-linear control design techniques that are being developed is to find a reasonable method of gauging their performance. There exists a large variation in the complexity, behavior, and dynamics within the set of non-linear systems, and a given control design might work very well on one specific type of system while providing poor performance on another. Thus the need for a benchmark non-linear problem: a mathematical system that contains significant non-linearities, has a tractable mathematical model, and is fairly easy to implement in hardware.

In recent years, the translational oscillator rotational actuator (TORA) [1] system (also referred to as the Non-linear Benchmark Problem or NLBP) has been proposed as just such a benchmark system. In this system a mass is restricted to a linear oscillatory motion by ideal springs, and the system is actuated by a rotational proof mass. The coupling between the rotational and linear motion provides the non-linearity. Many different researchers have used this system as a testbed to evaluate various non-linear control strategies, so this system was selected as a way to evaluate and compare the performance of the successive Galerkin technique with other robust, non-linear control laws as well as with standard optimal linearized methods. Specifically, six unique control laws are derived and implemented on a hardware implementation of the TORA system. A non-linear Galerkin approximation to the H2



problem and a Galerkin approximation to the non-linear H∞ problem are studied and compared with the results of a linearized optimal control, a linearized H∞ control, a passivity-based control, and a control law based on integrator backstepping. These control strategies are chosen because they are current topics of research, and most are proposed as control laws that will provide robust performance.

The specific hardware system, upon which the tests were performed, is described in Chapter 2 along with a derivation of the equations of motion. Chapter 3 presents an overview of the control strategies tested, and it explains the details of the implementations of each algorithm. Chapter 3 also explains the successive Galerkin approximation technique and shows how it is implemented on the hardware system. Chapter 4 explains the mechanics of the testbed and the methods of comparing performance; a comparison of the six controls in simulation is presented, along with an analysis of the robustness of the control laws. The actual results of the tests as performed in hardware are compared and discussed in Chapter 5. Finally, Chapter 6 sums up the results of the experiment: the conclusions of the research are presented along with recommendations for future research.

1.2 Contributions

There is often a wide gap between mathematical systems and the physical plants these systems attempt to model. The TORA system is proposed in [2] as a benchmark problem so that nonlinear control algorithms can be compared on a common system, one that can be easily implemented in hardware. While many control designs perform well in the idealized environment of mathematics, the true test of a nonlinear algorithm must lie in its regulation of a physical plant. The SGA technique is applied to the nonlinear benchmark problem, and it performs better than other well known nonlinear techniques when applied to a physical implementation of this TORA system. This result is the main contribution of this thesis: it offers experimental evidence that the SGA method produces excellent results when applied to real-life problems and systems. This thesis also justifies further research into understanding, clarifying, and improving the SGA technique.



1.3 Literature Review

In surveying the technical literature relevant to this thesis, there are two primary topics: research on the topic of the successive Galerkin approximation method, and research devoted to the topic of the NLBP system. Additionally, a short review of the literature pertaining to the various control strategies used to highlight the SGA method will be presented.

The SGA method was first published in [3] as a method for iteratively improving a nonlinear feedback control. This publication included a proof of the algorithm's convergence and the stability of the resulting control law at each iteration. Additionally, each iteration brings the control design closer to solving the HJB equation, and thus the method of successive approximation eventually yields an approximation of the nonlinear optimal solution with any desired accuracy, as the order of the Galerkin approximation increases and as the number of iterations goes to infinity. In [4] the algorithm was successfully applied to the inverted pendulum problem as well as several one-dimensional examples. In [5] the SGA technique was extended to the optimal robust nonlinear control problem. In the robust problem, a solution of the Hamilton-Jacobi-Isaacs equation is required with the minimum possible L2 gain from disturbance to output. In [5] a proof is presented of the convergence and stability of the algorithm at each successive iteration, and sufficient conditions for convergence and stability are presented for both the nonlinear optimal and robust algorithms. Additionally, [5] successfully applies the algorithm to a hydraulic actuation system, a missile autopilot design, and the control of an underwater vehicle system.

The control of the TORA system was originally proposed as a model of a dual-spin spacecraft. The goal was to study how to circumvent the resonance capture effect in [6], but in [7], [8] the system was studied to analyze the usefulness of a rotational actuator in damping out linear oscillations and vibrations. It was first proposed as a benchmark problem for nonlinear control systems in [2]. A physical testbed is described consisting of a motor driven proof mass that was mounted on a linear air track. In [2], the three-fold objective of the Nonlinear Benchmark Problem is set forth: to stabilize the closed loop system, to exhibit good disturbance rejection


compared to the uncontrolled oscillator, and to require limited control effort. A passivity-based control design is also described that uses the control to simulate a damped pendulum absorber. The International Journal of Robust and Nonlinear Control published a special issue devoted exclusively to the NLBP (volume 8, 1998), and the 1995 American Control Conference contained an invited session in which six papers were submitted on the topic of the Nonlinear Benchmark Problem.

The literature devoted to the TORA system was used as the basis for the designs presented in this thesis. In particular, the passivity-based control strategy comes mainly from the implementation described in [9], while a more complete explanation of passivity-based control of Euler-Lagrange systems can be found in [10] or in [11]. The integrator backstepping controller was mainly derived from [1], where they supply the necessary variable substitutions and transformations to derive a cascade controller. It should be noted that the cascade controller presented in [1] is almost identical to the one derived in this thesis, and this controller is a special case of the full controller one obtains using integrator backstepping as described in [12]. The linearized optimal H2 controller was derived by the well known solution to the Riccati equation. A concise overview of the theory and application of optimal linear control can be found in [13]. The linearized H∞ controller was also designed by the well known robust control technique as described in Chapter 6 of [14].



Chapter 2

Plant Specifications and Model

2.1 Hardware Set-up and Specifications

The various control strategies studied in this thesis are implemented on the same experimental testbed, which consists of a Flexible Beam System (FBS) that was purchased from Quanser Consulting Inc. A picture of the FBS upon which the control experiments were conducted is shown in Figure 2.1.

Figure 2.1: Photograph of Flexible Beam System

It consists of a thin metal beam that is clamped at one end while free at the other. The free end is equipped with a voltage controlled DC motor that rotates a rigid beam structure; this structure acts as a proof mass, and its rotation is the only actuation mechanism in the system. The object of the control is to actuate this proof mass in such a way that its motion will damp an initial vibration in the flexible beam. The whole system consists of four parts:

1) A flexible beam,

2) A proof mass structure consisting of two rigid beams and a cross beam,

3) A DC motor and an encoder,

4) A base plate instrumented with a strain gauge.

The flexible beam is 44 cm in length, while the rigid beams that form the proof mass are 28.5 cm in length. The first mode stiffness of the system is experimentally derived by measuring the natural frequency, and its value is approximately 30 N/m. The mass of the cross bar that acts as the proof mass is 0.05 kg, while the inertia of the rigid beams connecting the cross bar to the motor is found to be 0.0039 kg·m². The DC motor has an external gear ratio of 70 to 1, an electrical resistance of 2.6 ohms, and a torque constant of 0.001 N·m/A. The motor serves as the only input to the system; however, a strain gauge placed near the fixed end of the flexible beam senses the beam's deflection, and an encoder on the motor senses the angular position of the proof mass. The strain gauge is calibrated to give 1 Volt per inch, and the encoder uses a 1024 count disc which in quadrature results in a resolution of 4096 counts/revolution. The FBS can be mounted in two configurations, with the flexible beam vertically or horizontally mounted. We choose to mount the beam horizontally so that gravity will be acting perpendicular to the motion of the system, thus making stabilization more difficult.

2.2 Derivation of the Mathematical Model

Obtaining an adequate mathematical model for the FBS, one that is neither too complex to be useful nor too simple to be accurate, is a challenge. The first step is representing the physical system in a simpler way. Figure 2.2 shows a simple mechanical model of the system. Here the rigid beam structure is modeled as a single rigid beam with a point mass (m) located at one end and a motor at the other.


Figure 2.2: Mechanical Model of Flexible Beam System

The flexible beam is treated as capable only of motion in the plane of the diagram. If perturbed, the free end of the beam will oscillate according to the classical beam equation:

\[
\rho A(x)\,\frac{\partial^2 w(x,t)}{\partial t^2} + \frac{\partial^2}{\partial x^2}\left[EI(x)\,\frac{\partial^2 w(x,t)}{\partial x^2}\right] = f(x,t) \quad (2.1)
\]

where f(x,t) is the applied force on the beam in the y direction, and w(x,t) is the deflection of a point a distance x from the fixed end at time t. Unfortunately, the partial derivatives in this equation are not suitable for the state-space model that we seek, something of the form:

\[
\dot{x} = f(x) + g(x)u(x) \quad (2.2)
\]

where x ∈ ℝⁿ.

A simpler idea would be to model the flexible beam as a rigid beam with a single torsional spring at the fixed end (see Figure 2.3). A state-space equation can be obtained for this model using the Euler-Lagrange (EL) equations:

\[
\frac{d}{dt}\left(\frac{\partial L_p(q_p, \dot{q}_p)}{\partial \dot{q}_p}\right) - \frac{\partial L_p(q_p, \dot{q}_p)}{\partial q_p} = Mu_p \quad (2.3)
\]


Figure 2.3: Torsional Spring Model of FBS

Here, Lp is the Lagrangian (the kinetic energy minus the potential energy) and Mup is a vector representing the external forces applied to the system.

Using these equations, we obtain a model that is unwieldy and difficult to use in designing control laws; therefore an even simpler model is desired. If one assumes that the free end of the flexible beam undergoes only small deflections, then it can be assumed to follow a linear path, and its deflection can be measured as a length rather than as an angle. We can then model the bending dynamics of the beam as a simple linear spring with spring constant k. The system can then be thought of as a rotational actuator that either induces or attenuates translational oscillations. Figure 2.4 shows a simple diagram of this concept. While this model ignores all of the higher order modes and dynamics of the beam as well as the nonlinear motion of the free end, it does yield a sufficiently simple mathematical representation.

Figure 2.4: Translational Oscillation Model for FBS

The linear displacement of the motor fixture at the free end of the flexible beam is represented by y, θ is the angle made by the rigid proof mass, k is the equivalent linear stiffness of the flexible beam structure, m is the mass at the end of the proof mass, M is the total mass of the motor fixture, c is the length of the rigid proof mass, Rm is the motor's electrical resistance, and I is the inertia of the proof mass. Letting x = (y, ẏ, θ, θ̇)ᵀ, and using the voltage applied to the motor as the input to the system, the resulting state space equations are as follows:

\[
\dot{x} =
\begin{pmatrix}
\dot{y} \\[4pt]
\dfrac{\dot{\theta}^2 R_m \sin\theta\,(m^2c^3 + Imc) + mc\cos\theta\,\dot{\theta}\,K_m^2K_g^2 - y k R_m (mc^2 + I)}{R_m\!\left[(I + mc^2)(M + m) - m^2c^2\cos^2\theta\right]} \\[12pt]
\dot{\theta} \\[4pt]
\dfrac{-m^2c^2 R_m \cos\theta\sin\theta\,\dot{\theta}^2 + y R_m k\,mc\cos\theta - \dot{\theta}(m + M)K_m^2K_g^2}{R_m\!\left[(I + mc^2)(M + m) - m^2c^2\cos^2\theta\right]}
\end{pmatrix}
+
\begin{pmatrix}
0 \\[4pt]
\dfrac{-mcK_mK_g\cos\theta}{R_m\!\left[(I + mc^2)(M + m) - m^2c^2\cos^2\theta\right]} \\[12pt]
0 \\[4pt]
\dfrac{K_mK_g(m + M)}{R_m\!\left[(I + mc^2)(M + m) - m^2c^2\cos^2\theta\right]}
\end{pmatrix} u(x) \quad (2.4)
\]
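For simulation and sanity checks, the right-hand side of (2.4) can be coded directly. The sketch below is our own illustration (Python rather than the thesis's MATLAB/SIMULINK tooling, and the function name is ours), using the parameter values listed later in Section 2.3:

```python
import math

# Physical parameters as listed in Section 2.3 for the Quanser FBS.
k, m, M, c = 30.0, 0.05, 0.6, 0.285          # N/m, kg, kg, m
I, Km, Kg, Rm = 0.0030, 0.001, 70.0, 2.6     # kg*m^2, N*m/A, gear ratio, ohm

def fbs_dynamics(x, u):
    """Evaluate xdot = f(x) + g(x)*u for x = (y, ydot, theta, thetadot),
    following the voltage-input TORA model of Eq. (2.4)."""
    y, yd, th, thd = x
    delta = Rm * ((I + m * c**2) * (M + m) - (m * c * math.cos(th))**2)
    ydd = (thd**2 * Rm * math.sin(th) * (m**2 * c**3 + I * m * c)
           + m * c * math.cos(th) * thd * Km**2 * Kg**2
           - y * k * Rm * (m * c**2 + I)) / delta
    thdd = (-(m * c)**2 * Rm * math.cos(th) * math.sin(th) * thd**2
            + y * Rm * k * m * c * math.cos(th)
            - thd * (m + M) * Km**2 * Kg**2) / delta
    # The voltage input enters through g(x):
    ydd += -m * c * Km * Kg * math.cos(th) / delta * u
    thdd += Km * Kg * (m + M) / delta * u
    return [yd, ydd, thd, thdd]
```

At the origin with zero input the vector field vanishes, and a small positive beam deflection y produces a negative ÿ: the spring pulls the beam back, as the model requires.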

Again, this state-space model was derived using the EL equations. One might wonder at the validity of the simplifying assumptions necessary to derive this model. For the purpose of this thesis, modeling inaccuracy is highly desirable; the control strategies studied in this paper are purportedly "robust" in some sense. One would hope that



a robust design strategy would, in practice, stabilize a physical plant that differs substantially from its mathematical model. Thus the crudeness of our model only helps to demonstrate the robustness of the control laws that succeed in yielding adequate performance on the actual hardware system. Another advantage of this mechanical model is that it is identical, in form, to the system equations presented in [2] as a nonlinear benchmark problem. Since the object of this research is to compare nonlinear robust control strategies, this model is ideal in the sense that it is already widely used as a standard mathematical system for testing and comparing nonlinear control techniques. The mathematical model in (2.4) is slightly different from the standard model of the NLBP in that our input is voltage, not torque. This adds one extra term and a few constants in order to incorporate the voltage-to-torque transfer function into the model. Many of the papers involving the NLBP express the system equations in a non-dimensionalized form. Since some of the control strategies in this comparison make use of these dimensionless equations, and because they simplify the model by combining all of the physical parameters into a single value, we include them here:

\[
\frac{d}{d\tau}
\begin{pmatrix} z \\ \dot{z} \\ \theta \\ \dot{\theta} \end{pmatrix}
=
\begin{pmatrix}
\dot{z} \\[4pt]
\dfrac{-z + \varepsilon\dot{\theta}^2\sin\theta + \varepsilon\cos\theta\,\dot{\theta}\,\frac{K_m^2K_g^2}{R_m mc}}{1 - \varepsilon^2\cos^2\theta} \\[12pt]
\dot{\theta} \\[4pt]
\dfrac{\varepsilon\cos\theta\left(z - \varepsilon\dot{\theta}^2\sin\theta\right) - \frac{K_m^2K_g^2}{R_m mc}\,\dot{\theta}}{1 - \varepsilon^2\cos^2\theta}
\end{pmatrix}
+
\begin{pmatrix}
0 \\[4pt]
\dfrac{-\frac{K_mK_g}{R_m}\,\varepsilon\cos\theta}{1 - \varepsilon^2\cos^2\theta} \\[12pt]
0 \\[4pt]
\dfrac{\frac{K_mK_g}{R_m}}{1 - \varepsilon^2\cos^2\theta}
\end{pmatrix} v(x) \quad (2.5)
\]

where the dot now represents d/dτ, and the following transformations are used:

\[
z = y\sqrt{\frac{M + m}{I + mc^2}}, \qquad
\tau = t\sqrt{\frac{k}{M + m}}, \qquad
\varepsilon = \frac{mc}{\sqrt{(I + mc^2)(m + M)}}, \qquad
v(x) = u(x)\,\frac{m + M}{k(I + mc^2)}.
\]
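As a quick sanity check on these transformations, the coupling parameter ε and the time scale can be computed from the physical parameters given in Section 2.3 (an illustrative calculation of ours, not from the thesis):

```python
import math

# Physical parameters as listed in Section 2.3 for the Quanser FBS.
k, m, M = 30.0, 0.05, 0.6      # N/m, kg, kg
c, I = 0.285, 0.0030           # m, kg*m^2

# Dimensionless coupling of the non-dimensionalized TORA model:
# eps = m*c / sqrt((I + m*c^2) * (m + M))
eps = m * c / math.sqrt((I + m * c**2) * (m + M))

# One unit of dimensionless time tau corresponds to sqrt((M + m)/k) seconds.
t_scale = math.sqrt((M + m) / k)

print(eps, t_scale)
```

For this hardware ε comes out near 0.21, well inside (0, 1), so the rotational-translational coupling is relatively weak, and one unit of dimensionless time corresponds to roughly 0.15 s.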


2.3 Software Set-up<br />

All of the control strategies presented in this paper were simulated in MAT-<br />

LAB's SIMULINK environment. They were then implemented by way of Quanser<br />

Consulting's Multiq3 I/O board which interfaced with the SIMULINK Real Time<br />

Workshop. Figure 2.5 shows the SIMULINK model used in the simulations and in<br />

the actual exible beam system. Their are two main blocks, one is the control block<br />

that takes the four states as inputs, as well as an initial open loop disturbance. The<br />

control has only one output, a voltage signal that is fed into the system block. The<br />

output of the strain gauge and the motor encoder were run through the Multiq3 board into SIMULINK. These provided a direct state measurement of the angle, θ, and the strain was a good estimator for the linear displacement, y. The state velocities in the system equations were numerically computed using filters of the form s/(s + a). Although this theoretically makes for a relatively crude observer, in practice these pseudo-derivatives actually performed very well. Even though the assumption of full-state feedback is not strictly true, in practice a more complicated observer is unnecessary to achieve good performance by the control laws. The sample rate in the SIMULINK model, for both the simulation and the FBS, was 1 kHz. All of the physical parameters for the Quanser system are noted here: k = 30 N/m, m = .05 kg, M = .6 kg, c = .285 m, I = .0030 kg·m², Km = .001 Nm/A, Kg = 70, Rm = 2.6 Ohms.
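As an illustration of how such a pseudo-derivative can be realized digitally, the sketch below discretizes a filter of the form a·s/(s + a) (the form above, scaled by a so its output approximates the derivative directly) with a backward-Euler rule at the 1 kHz sample rate. The cutoff a = 100 rad/s is an illustrative assumption, not the value used on the hardware.

```python
# Backward-Euler discretization of the pseudo-derivative filter a*s/(s+a):
#   y[k] = (y[k-1] + a*(x[k] - x[k-1])) / (1 + a*T)
# Assumed values: a = 100 rad/s (illustrative), T = 1 ms (the 1 kHz sample rate).

def dirty_derivative(x, a=100.0, T=1e-3):
    """Approximate the time derivative of the sampled signal x."""
    y = [0.0]
    for k in range(1, len(x)):
        y.append((y[-1] + a * (x[k] - x[k - 1])) / (1.0 + a * T))
    return y

# A unit-slope ramp should produce an output that settles near 1.
ramp = [k * 1e-3 for k in range(2000)]
est = dirty_derivative(ramp)
print(est[-1])  # settles very close to 1.0
```

The filter trades a small phase lag for noise attenuation, which is why it behaves well on the strain-gauge signal despite being a "crude" observer.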

13


Figure 2.5: Simulink Diagram of FBS (the real-time model lqr2a.mdl: the LQR controller block, the plant block, the open-loop disturbance sequence, derivative filters, and data logging to lqr.mat)


Chapter 3<br />

Overview of Control Strategies Implemented on the Flexible<br />

Beam System<br />

This chapter will provide a review of all of the control strategies that were tested on the NLBP. The focus of this thesis is the comparison of the SGA method of control with a broad sample of current robust nonlinear control methodologies. Thus the following nonlinear design methodologies are included mainly to highlight the uniqueness of the SGA design method. This chapter will explain the mathematical derivation of two linearized controls, a passivity-based approach, a backstepping controller, and the SGA algorithm. Additionally, the details of the implementation of these controllers will be explained.

3.1 Linear Quadratic Regulation<br />

As a starting point of comparison, an H₂ optimal control was first designed for the linearized system. With optimal control, the objective is to minimize a cost functional of the following form:

\[
V(x_0) = \int_0^\infty \left( x^T(t)\,Q\,x(t) + u^T(t)\,R\,u(t) \right) dt \tag{3.1}
\]

where V(x₀) represents the cost associated with moving from a given state, x₀, to the origin under a control signal u. Q and R are matrices that weight the cost of the states and the control, respectively. The goal is to find the stabilizing control signal u*(x) that will minimize the cost V(x). Linear quadratic regulation has a well-known solution, in which the optimal stabilizing control u*(x) is a simple linear combination of the states: u*(x) = −Kx. K is a matrix that depends upon the input matrix B, the



weighting matrix R, and P: \(K = R^{-1}B^T P\), where P is the solution to the following algebraic Riccati equation:

\[
0 = A^T P + P A + Q - P B R^{-1} B^T P \tag{3.2}
\]

As the name implies, a linearization of the system equations is necessary to implement<br />

this control strategy. The linearized equations for system (2.4) were derived as follows:<br />

\[
\dot{\tilde x} = \nabla F(x)\big|_{x_0}\,\tilde x + \nabla G(x)\big|_{x_0}\,\tilde u = A\tilde x + B\tilde u \tag{3.3}
\]

where F(x) and G(x) are as defined in (2.4). A and B are calculated to be:

\[
A = \begin{pmatrix}
0 & 1 & 0 & 0 \\
-\frac{(mc^{2}+I)k}{Mmc^{2}+I(m+M)} & 0 & 0 & \frac{mc\,K_m^{2}K_g^{2}}{R_m(Mmc^{2}+I(m+M))} \\
0 & 0 & 0 & 1 \\
\frac{mck}{Mmc^{2}+I(m+M)} & 0 & 0 & -\frac{K_m^{2}K_g^{2}(m+M)}{R_m(Mmc^{2}+I(m+M))}
\end{pmatrix},
\qquad
B = \begin{pmatrix}
0 \\
-\frac{mc\,K_mK_g}{R_m(Mmc^{2}+I(m+M))} \\
0 \\
\frac{(m+M)K_mK_g}{R_m(Mmc^{2}+I(m+M))}
\end{pmatrix}.
\tag{3.4}
\]

After the constants are substituted, the linearized model becomes:<br />

\[
\dot x = \begin{pmatrix}
0 & 1 & 0 & 0 \\
-48 & 0 & 0 & .44 \\
0 & 0 & 0 & 1 \\
86 & 0 & 0 & -19.9
\end{pmatrix} x +
\begin{pmatrix} 0 \\ -.69 \\ 0 \\ 31.7 \end{pmatrix} u(x). \tag{3.5}
\]

Next, simulations were run to determine which matrices Q and R would result in a strongly damping control while not running the actuator past its physical limitations. Once suitable weighting matrices were obtained, the control was implemented on the hardware, and the weights were tuned again to give good performance on the physical plant. Tuning is always a subjective endeavor, and there is no guarantee that the values we found were, in fact, the absolute best choices; nevertheless, the parameters that we found to give the best performance were the following:

\[
Q = \begin{pmatrix}
2000 & 0 & 0 & 0 \\
0 & 100 & 0 & 0 \\
0 & 0 & 70 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}, \qquad R = .01.
\]

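The gain computation itself can be sketched numerically. The snippet below solves the Riccati equation (3.2) for the linearized model (3.5) with these weights, using SciPy; it is a reconstruction of the design procedure, not the thesis's original MATLAB code, and the resulting gains need not match the hardware-tuned values exactly.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized model (3.5) and weights as reported in the text.
A = np.array([[0., 1., 0., 0.],
              [-48., 0., 0., .44],
              [0., 0., 0., 1.],
              [86., 0., 0., -19.9]])
B = np.array([[0.], [-.69], [0.], [31.7]])
Q = np.diag([2000., 100., 70., 0.])
R = np.array([[.01]])

# Solve 0 = A'P + PA + Q - P B R^-1 B' P, then K = R^-1 B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The LQR control u = -Kx must stabilize the linearized plant.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K)
print(closed_loop_eigs.real.max())  # negative: the closed loop is stable
```

By LQR theory, any gain produced this way is stabilizing for the linearized model whenever (A, B) is stabilizable and the states penalized by Q make the system detectable, which is the case here.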


The optimal gain matrix was given by K = (−658, 69.8, 26, 3.7). Although the LQR

design is simple to derive and easy to implement, it is subject to several weaknesses. First and foremost, this control strategy is optimized for a mathematical system quite different from the actual hardware. Note that the original model itself is a large simplification of the physical plant, and now even this model has been further distorted through the linearization process. Hence, modeling errors are compounded, and all of the unmodelled and nonlinear dynamics are ignored by the LQR design. Another possible limitation of LQR control design is the simplicity of its mechanism: the control signal is limited to be just a linear combination of the states. Other controls studied in this paper will implement feedback controls generated by more sophisticated functions of the plant's outputs.

3.2 Linear H∞ Control

The objective of a linear H∞ controller is to find a feedback gain u = −Kx such that the L₂ gain from an exogenous disturbance signal, w(x), to an output signal, z(x), is minimized; i.e., we must find a stabilizing u(x) that achieves the smallest possible gain, γ:

\[
\int_0^T z^T(t)\,z(t)\,dt \le \gamma^{2} \int_0^T w^T(t)\,w(t)\,dt. \tag{3.6}
\]

This approach attempts to find a robust control that will yield adequate performance even with a worst-case disturbance signal, and it is hoped that such a control will then be robust with respect to the various modeling errors and unmodelled dynamics of the physical system studied. For our system, the disturbance signal w(x) is modeled as the apparent linear force acting on the free end of the flexible beam:

\[
\dot x = f(x) + g(x)u(x) +
\begin{pmatrix}
0 \\[4pt]
\frac{K_mK_g(M+m)}{R_m\left[(I+mc^{2})(M+m)-m^{2}c^{2}\cos^{2}\theta\right]} \\[4pt]
0 \\[4pt]
\frac{-mc\,K_mK_g\cos\theta}{R_m\left[(I+mc^{2})(M+m)-m^{2}c^{2}\cos^{2}\theta\right]}
\end{pmatrix} w(x) \tag{3.7}
\]

and the output signal z(x) is defined as a quadratic weighting of the states plus the control energy. These definitions are reasonable since the unmodelled and truncated higher-order dynamics of the beam should manifest themselves as forces acting on the beam, while the desired output is the regulation of the states subject to limited actuator power. Thus the linearized H∞ control should, even under worst-case effects from modeling and linearization errors, generate an adequate, stabilizing control. Like the LQR design strategy, linear H∞ is a solved problem, the solution of which can be obtained by solving the following modified algebraic Riccati equation:

\[
0 = PA + A^T P - P\left(B_2 B_2^T - \gamma^{-2} B_1 B_1^T\right)P + C^T C. \tag{3.8}
\]

The valid solutions are those in which P ≥ 0 and γ are such that \(A - (B_2B_2^T - \gamma^{-2}B_1B_1^T)P\) is asymptotically stable. A search is then performed to find the smallest possible γ that satisfies these conditions, and the resulting solution (a function of this optimal γ) is guaranteed to minimize the L₂ gain from disturbance to output while simultaneously stabilizing the system. The optimal feedback gain matrix K is next computed from this solution as follows: \(K = -B_2^T P\). In these equations, B₁ is the matrix resulting from a linearization of k(x), where

\[
k(x) = \begin{pmatrix}
0 \\[4pt]
\frac{K_mK_g(M+m)}{R_m\left[(I+mc^{2})(M+m)-m^{2}c^{2}\cos^{2}\theta\right]} \\[4pt]
0 \\[4pt]
\frac{-mc\,K_mK_g\cos\theta}{R_m\left[(I+mc^{2})(M+m)-m^{2}c^{2}\cos^{2}\theta\right]}
\end{pmatrix} \tag{3.9}
\]

and B₂ is the linearization of g(x) about the equilibrium point x = 0. For the flexible beam system, B₂ was found to be:

\[
B_2 = \begin{pmatrix} 0 \\ -.69 \\ 0 \\ 31.7 \end{pmatrix}. \tag{3.10}
\]

The parameters used to compute this control were Q = diag(2000, 100, 70, 0) and R = .01. The optimal gain matrix was computed to be K = (−62, 17, 1.5, .46), and the optimal gamma was found to be γ = 302.
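The γ-search can be sketched as a bisection over a standard feasibility test: for a given γ, the modified Riccati equation (3.8) can only admit a stabilizing solution if the associated Hamiltonian matrix has no eigenvalues on the imaginary axis. The B₁ below is an illustrative placeholder (the thesis's numerical B₁ is not reproduced here), so the returned γ will not match the reported value of 302.

```python
import numpy as np

# Linearized model (3.5); B1 is a HYPOTHETICAL stand-in for the
# linearized disturbance input (3.9), not the thesis's actual matrix.
A = np.array([[0., 1., 0., 0.],
              [-48., 0., 0., .44],
              [0., 0., 0., 1.],
              [86., 0., 0., -19.9]])
B2 = np.array([[0.], [-.69], [0.], [31.7]])
B1 = np.array([[0.], [1.0], [0.], [-0.3]])   # placeholder values
Q = np.diag([2000., 100., 70., 0.])           # plays the role of C'C

def feasible(gamma, tol=1e-6):
    """Necessary condition for (3.8): the H-infinity Hamiltonian has no
    eigenvalues on the imaginary axis at this gamma."""
    Rmat = B2 @ B2.T - B1 @ B1.T / gamma**2
    H = np.block([[A, -Rmat], [-Q, -A.T]])
    return np.min(np.abs(np.linalg.eigvals(H).real)) > tol

lo, hi = 1e-2, 1e4            # bracket; the upper end must be feasible
assert feasible(hi)
for _ in range(80):            # bisection on log(gamma)
    mid = (lo * hi) ** 0.5
    if feasible(mid):
        hi = mid
    else:
        lo = mid
gamma_min = hi
print(gamma_min)
```

A production design would additionally verify P ≥ 0 and the stability of the resulting closed loop at the returned γ; this sketch shows only the bisection skeleton.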



3.3 Passivity-Based Control<br />

Certain physical systems can be profitably studied by examining the properties of their potential energy functions and by noticing the natural damping and dissipation present. Passivity-Based Control (PBC) algorithms have yielded simple yet robust control laws for a variety of such systems by shaping their energy and dissipation functions. The flexible beam system is an example of an under-actuated Euler-Lagrange (EL) system, a system that is completely modeled by the EL equations. The stability of the equilibrium states of EL systems depends solely on their potential energy functions, and if enough damping is present these equilibria will be asymptotically stable. Passivity-based control designs seek to exploit these facts to robustly stabilize complicated nonlinear systems. In particular, the potential energy function of the closed-loop system is shaped by the controller, and suitable damping is injected into the system via the control law to achieve the desired stability and performance objectives. There are several fundamental properties of EL systems that make this energy shaping and damping injection practical. First, EL systems are completely described by their EL parameters {T_p, V_p, F_p, M}, where the EL equation is written as follows:

\[
\frac{d}{dt}\left(\frac{\partial L_p(q_p,\dot q_p)}{\partial \dot q_p}\right) - \frac{\partial L_p(q_p,\dot q_p)}{\partial q_p} = M u_p - \frac{\partial F_p(\dot q_p)}{\partial \dot q_p}. \tag{3.11}
\]

In this equation \(L_p(q_p,\dot q_p) = T_p(q_p,\dot q_p) - V_p(q_p)\), where \(T_p\) is the plant's kinetic energy, \(V_p\) is the plant's potential energy, \(F_p(\dot q_p)\) is the Rayleigh dissipation function for the plant, and M is a vector of ones and zeros that indicates which of the states are directly actuated by the plant inputs, \(u_p\). We can now construct a controller as another EL system as follows:

\[
\frac{d}{dt}\left(\frac{\partial T_c(q_c,\dot q_c)}{\partial \dot q_c}\right) - \frac{\partial T_c(q_c,\dot q_c)}{\partial q_c} + \frac{\partial V_c(q_c,q_p)}{\partial q_c} + \frac{\partial F_c(\dot q_c)}{\partial \dot q_c} = 0. \tag{3.12}
\]

Here, the potential energy function of the control block, \(V_c\), is a function of the plant states \(q_p\) and the control state \(q_c\), but it is not a function of the generalized control velocity \(\dot q_c\). This equation can be combined with the EL equation for the plant into



a total closed-loop expression, provided that the plant input, \(u_p\), is defined according to the following interconnection constraint:

\[
u_p = -\frac{\partial V_c(q_c,q_p)}{\partial q_p}. \tag{3.13}
\]

Thus, the second and most important fact about EL systems is that the closed-loop system is itself an EL system, and the closed-loop EL parameters {T_cl, V_cl, F_cl} are simply the sums of the parameters of the plant and the control. In other words, we can shape the energy and dissipation of the closed-loop system as desired by choosing the dynamics of the control in the correct manner: it must comply with the aforementioned interconnection constraint. In [10] this idea is further refined by showing that the injected dissipation function in the control need not necessarily be a function of the derivative of the plant variables. Rather, a dynamic system in the control is sufficient, under certain passivity conditions on the original plant, to guarantee closed-loop asymptotic stability. In addition, Ortega et al. [10] add the constraint that there is limited actuator power available, or in other words

\[
u_p \le u_{\max}. \tag{3.14}
\]

Following the design outlined in [15], a suitable Rayleigh dissipation function \(F_c\) was chosen:

\[
F_c = \frac{1}{2ab}\,\dot q_c^{\,2}, \tag{3.15}
\]

where a and b are parameters that scale the injected damping. Also, the potential energy function of the control was constructed so as to ensure the stability of the zero equilibrium point. A sufficient condition is that \(V_{cl} = V_p + V_c\) have a global minimum at zero. Since the potential energy of the plant is a quadratic function of the states, we chose \(V_c\) as

\[
V_c(q_3,q_c) = \frac{1}{b}\int_0^{q_c+bq_3} k_2\tanh(s)\,ds + \int_0^{q_3} k_1\tanh(s)\,ds, \tag{3.16}
\]

so that it is the sum of two strictly increasing integrals that are minimized at the zero state. Here tanh(s) was chosen as a saturation function to model the actuator



limitations, k₁ and k₂ are tuning parameters to properly shape the closed-loop energy function, and q₃ is the actuated plant variable, θ. The controller dynamics are computed from the interconnection constraint and the controller's EL equation:

\[
\begin{aligned}
u &= -k_2\tanh(q_c + b\theta) - k_1\tanh(\theta) \\
\dot q_c &= -a k_2\tanh(q_c + b\theta).
\end{aligned} \tag{3.17}
\]

Thus the control takes the measurable output, θ, as its only input, and the dynamics in the control produce a pseudo-derivative of θ to generate the necessary damping.
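A minimal discrete-time sketch of the controller (3.17): the scalar controller state \(q_c\) is integrated with forward Euler, and the control is computed from the measured angle θ alone. The gains are those reported below; the simulated θ trajectory is an illustrative stand-in for a real measurement.

```python
import math

# Controller (3.17): u = -k2*tanh(qc + b*theta) - k1*tanh(theta),
#                    qc_dot = -a*k2*tanh(qc + b*theta).
a, b, k1, k2 = 250.0, 2.7, 0.29, 2.5   # gains reported in the text
T = 1e-3                                # 1 kHz sample rate

def pbc_step(qc, theta):
    """One sample of the passivity-based controller."""
    u = -k2 * math.tanh(qc + b * theta) - k1 * math.tanh(theta)
    qc_next = qc + T * (-a * k2 * math.tanh(qc + b * theta))
    return u, qc_next

# Drive the controller with an illustrative decaying oscillation in theta.
qc, us = 0.0, []
for k in range(5000):
    theta = math.exp(-0.5 * k * T) * math.sin(10 * k * T)
    u, qc = pbc_step(qc, theta)
    us.append(u)

# tanh is bounded, so |u| can never exceed k1 + k2.
print(max(abs(u) for u in us))  # always below k1 + k2 = 2.79
```

Note how the saturation functions give the control law a built-in magnitude bound of k₁ + k₂, which is the property the actuator-limit constraint (3.14) exploits.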

The mathematical model for the flexible beam system was slightly different from the nonlinear benchmark problem due to the extra term that resulted from the fact that we used voltage as an input instead of torque. This made the implementation of the saturation constraint a little tricky. In fact, the particular software/hardware configuration of the flexible beam system required the removal of the saturation functions in the controller dynamics. (The actual control signal was \(u_p = -k_2(q_c + b\theta) - k_1\theta\).) Implementing the control was then simple: find appropriate values for a, b, k₁, and k₂. The constants a = 250, b = 2.7, k₁ = .29, and k₂ = 2.5 were chosen experimentally to give the best possible dynamic performance. This control was relatively easy to implement, and its great advantage is that it only uses measurable outputs of the system. This strength can also be seen as a weakness, though: the controller does not utilize all of the available information, since only one of the two truly measurable states is used to generate the control signal. From an informational standpoint, the ideal controller should use all of the available plant output signals. However, the passivity-based control can clearly do more than a feedback system consisting of a simple gain matrix: PBC systems implement a dynamic system in the feedback loop that, in our case, provides information about θ̇ as well as θ.

3.4 Backstepping<br />

Another recent approach to the problem of robustly stabilizing non-linear systems is that of integrator backstepping. This control strategy attempts to construct, for a given non-linear system, a control that renders the closed-loop system globally asymptotically stable, as certified by a control Lyapunov function. Backstepping essentially tries to reduce the system to a number of subsystems in series. The stabilizing inputs for the subsystem closest to the output are computed first; these inputs are then treated as the outputs of the previous subsystem, and a new set of inputs to that subsystem is computed to guarantee stability. This process is repeated until the original control input is computed. Control laws generated by this methodology, as shown in [1], are proven to stabilize the system globally and asymptotically. However, this design procedure was the most difficult to compute and implement, and the resulting control law is both complicated and cumbersome. The first step is to apply a variable transformation to the flexible beam's non-dimensionalized state equations (2.5) to obtain:
equations (2.5) to obtain:<br />

z 1 = x 1 + sin( )<br />

z 2 = x 2 + _ cos( )<br />

y 1 = x 3<br />

y 2 = x 4<br />

v =<br />

1<br />

1 ; 2 cos 2 ( )<br />

cos( )(z1 ; (1 + _2 ) sin( )) + k2 mk2 g<br />

Rmmc (z2 ; _ cos( )) + kmkgu<br />

R ; m<br />

(3.18)<br />

This variable substitution and the feedback transformation from u to v (which is a non-singular transformation because the parameter ε is always smaller than 1) simplified the system equations to the following form:

\[
\begin{aligned}
\dot z_1 &= z_2, & \dot z_2 &= -z_1 + \varepsilon\sin\theta, \\
\dot y_1 &= y_2, & \dot y_2 &= \ddot\theta = v.
\end{aligned} \tag{3.19}
\]


Now we regard θ as the control variable and construct a control law y₁ that will make the translational coordinates (z₁ and z₂) globally asymptotically stable. Jankovic et al. [1] choose \(y_1 = -\arctan(c_0z_2)\). However, y₁ is not the control variable, and it follows its own dynamic equations. Therefore we define new angular variables to implement the desired trajectories of θ and θ̇: \(\psi_1 = \theta + \arctan(c_0z_2)\) and \(\psi_2 = \dot\psi_1\). Substituting into the previous system equation yields the following modified system equation:

\[
\begin{aligned}
\dot z_1 &= z_2, \qquad
\dot z_2 = -z_1 + \varepsilon\sin\big(\psi_1 - \arctan(c_0z_2)\big), \qquad
\dot\psi_1 = \psi_2, \qquad
\dot\psi_2 = w, \\
w &= v - \frac{2c_0^{3}z_2}{\left(1+c_0^{2}z_2^{2}\right)^{2}}\big({-z_1}+\varepsilon\sin\theta\big)^{2} + \frac{c_0}{1+c_0^{2}z_2^{2}}\big({-z_2}+\varepsilon\dot\theta\cos\theta\big).
\end{aligned} \tag{3.20}
\]

Now all that is required is to stabilize the ψ-subsystem, which can be done with the simple feedback w = −Kψ. By following the approach outlined in [1], but with the system modified for voltage as the input instead of torque, the following control law is obtained:

\[
\begin{aligned}
u &= \frac{k(I+mc^{2})}{m+M}\Bigg[-k_1\big(y_1+\arctan(c_0z_2)\big) - k_2\dot\theta - \frac{k_2c_0\big({-z_1}+\varepsilon\sin\theta\big)}{1+c_0^{2}z_2^{2}} \\
&\qquad\qquad + \frac{2c_0^{3}z_2\big({-z_1}+\varepsilon\sin\theta\big)^{2}}{\left(1+c_0^{2}z_2^{2}\right)^{2}} - \frac{c_0\big({-z_2}+\varepsilon\dot\theta\cos\theta\big)}{1+c_0^{2}z_2^{2}}\Bigg],
\qquad
\varepsilon = \frac{mc}{\sqrt{(I+mc^{2})(m+M)}}. 
\end{aligned} \tag{3.21}
\]

The primary difficulty with implementing this control is understanding the purposes of the various transformations and substitutions. Additionally, this last expression for the control input is quite complicated, which makes it difficult to assign physical meaning to the control parameters c₀, k₁, and k₂. The parameters used in the commissioning of this design are as follows: c₀ = 1, k₁ = 10, and k₂ = 1.
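To see the cascade structure at work, the sketch below simulates the dimensionless closed loop in the transformed coordinates of (3.19)–(3.20): the angular subsystem is driven by w = −k₁ψ₁ − k₂ψ₂, so (ψ₁, ψ₂) obey a stable linear equation and decay, pulling θ toward −arctan(c₀z₂). The value ε = 0.2, the step size, and the initial condition are illustrative assumptions, not values from the thesis.

```python
import math

eps, c0, k1, k2 = 0.2, 1.0, 10.0, 1.0   # eps is an illustrative value
dt, steps = 2e-3, 15000                  # 30 dimensionless time units

def derivs(z1, z2, th, thd):
    # Transformed states and the outer-loop feedback w = -k1*psi1 - k2*psi2.
    psi1 = th + math.atan(c0 * z2)
    psi2 = thd + c0 * (-z1 + eps * math.sin(th)) / (1 + c0**2 * z2**2)
    w = -k1 * psi1 - k2 * psi2
    # Invert (3.20) to recover v = theta_ddot from w.
    v = (w + 2 * c0**3 * z2 * (-z1 + eps * math.sin(th))**2 / (1 + c0**2 * z2**2)**2
           - c0 * (-z2 + eps * thd * math.cos(th)) / (1 + c0**2 * z2**2))
    return z2, -z1 + eps * math.sin(th), thd, v

z1, z2, th, thd = 1.0, 0.0, 0.0, 0.0
peak = 0.0
for _ in range(steps):
    d1, d2, d3, d4 = derivs(z1, z2, th, thd)
    z1, z2, th, thd = z1 + dt*d1, z2 + dt*d2, th + dt*d3, thd + dt*d4
    peak = max(peak, abs(z1), abs(z2), abs(th), abs(thd))

psi1 = th + math.atan(c0 * z2)
print(peak, psi1)   # trajectory stays bounded; psi1 decays toward 0
```

Because w makes the ψ-subsystem exactly linear and exponentially stable, ψ₁ collapses quickly regardless of the slower, weakly damped translational oscillation in (z₁, z₂).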



3.5 Successive Galerkin Approximations<br />

The primary purpose for implementing the four previous designs was to provide a reasonably complete set of control laws with which to compare the successive Galerkin approximation algorithm as developed in [5]. This control strategy seeks to solve the non-linear H₂ and H∞ problems by approximating the solutions to their associated Hamilton-Jacobi equations. These equations are non-linear partial differential equations that are impossible to solve analytically in the general case. To accomplish the approximation, the Hamilton-Jacobi equations are first reduced to an infinite sequence of linear partial differential equations, named generalized Hamilton-Jacobi equations. Second, Galerkin's method is used to approximate the solutions of these linear equations, and the combination of these two steps yields a control algorithm that converges to the optimal solution as the order of the approximation and the number of iterations go to infinity.

3.5.1 The H₂ Problem

The SGA technique was first applied to the non-linear optimal H₂ problem, where the goal is to minimize a cost functional V with respect to some u*(x):

\[
\dot x = f(x) + g(x)u, \qquad
V(x) = \min_{u} \int_0^\infty \Big( l(x) + \|u\|_R^{2} \Big)\,dt,
\]

where l(x) is some cost function that depends on the state x, and R is the matrix that weights the cost of the control. Throughout this development we shall assume that f(0) = 0, that l(x) is a positive definite function, and that f(x) is observable through l(x). The solution to this minimization problem is given by the full-state feedback control law

\[
u^{*}(x) = -\frac{1}{2}\,R^{-1} g^T \frac{\partial V}{\partial x} \tag{3.22}
\]


where V satisfies the well-known Hamilton-Jacobi-Bellman (HJB) equation, written as follows:

\[
\frac{\partial V}{\partial x}^{T} f + l - \frac{1}{4}\,\frac{\partial V}{\partial x}^{T} g R^{-1} g^T \frac{\partial V}{\partial x} = 0. \tag{3.23}
\]

To implement the first step of the SGA algorithm we write the HJB equation as

\[
\frac{\partial V}{\partial x}^{T}(f + gu) + l + \|u\|_R^{2} = 0 \tag{3.24}
\]
\[
u(x) = -\frac{1}{2}\,R^{-1} g^T \frac{\partial V}{\partial x}. \tag{3.25}
\]

Equation (3.24), a linear partial differential equation, is the Generalized Hamilton-Jacobi-Bellman (GHJB) equation. The usefulness of writing the HJB equation in this form is that now V and u are decoupled. Assuming that we start with some stabilizing control, u⁽⁰⁾(x), we can perform an infinite sequence of iterations to find the optimal control, u*(x):

\[
\frac{\partial V^{(i)}}{\partial x}^{T}\big(f + gu^{(i)}\big) + l(x) + \big\|u^{(i)}\big\|_R^{2} = 0 \tag{3.26}
\]
\[
u^{(i+1)}(x) = -\frac{1}{2}\,R^{-1} g^T(x)\,\frac{\partial V^{(i)}}{\partial x}(x) \tag{3.27}
\]

where i ranges from 0 to ∞. If u⁽⁰⁾(x) asymptotically stabilizes the system on \(\mathbb{R}^n\), then Equations (3.26) and (3.27) describe a sequence of iterations which was shown in [16] to converge to the solution of the HJB equation pointwise on Ω. Thus, instead of computing V and u simultaneously as in the HJB equation, we compute them iteratively. The only problem left is to solve the GHJB equation at each step of the iteration.
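For a linear plant with quadratic cost, V is quadratic, the GHJB equation (3.26) collapses to a Lyapunov equation, and the iteration (3.26)–(3.27) becomes the classical Kleinman policy iteration, which converges to the Riccati solution. The double-integrator example below illustrates this special case only; it is not the flexible-beam computation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Double integrator with quadratic cost; K0 = [1, 1] is stabilizing.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
Q, R = np.eye(2), np.array([[1.]])
K = np.array([[1., 1.]])

for _ in range(15):
    Acl = A - B @ K
    # GHJB step (3.26): (A-BK)'P + P(A-BK) + Q + K'RK = 0, with V = x'Px.
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Improvement step (3.27): u = -Kx with K = R^-1 B' P.
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the algebraic Riccati solution.
P_are = solve_continuous_are(A, B, Q, R)
print(np.max(np.abs(P - P_are)))
```

Each pass evaluates the cost of the current policy (the GHJB/Lyapunov solve) and then improves the policy, exactly the structure of (3.26)–(3.27) specialized to the linear case.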

The solution V⁽ⁱ⁾ of Equation (3.26) on Ω can be approximated via a global Galerkin approximation scheme as follows. Let \(V_N^{(i)}(x) = \sum_{j=1}^{N} c_j^{(i)}\,\phi_j(x)\), where the set \(\{\phi_j(x)\}_{j=1}^{\infty}\) is a complete basis for \(L_2(\Omega)\) and \(\phi_j(0) = 0\). The coefficients \(c_j^{(i)}\) are

found by solving the algebraic Galerkin equation<br />

\[
\int_\Omega \left[\frac{\partial V^{(i)}}{\partial x}^{T}\big(f + gu^{(i)}\big) + l + \big\|u^{(i)}\big\|_R^{2}\right]\phi_k\,dx = 0,
\qquad k = 1,\dots,N.
\]
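In the simplest scalar case the Galerkin step can be carried out by hand. For the toy plant ẋ = u with l(x) = x², R = 1, the stabilizing start u⁽⁰⁾(x) = −x, and the single basis element φ₁(x) = x², the Galerkin equation gives c₁ = 1 (so V⁽⁰⁾ = x²), and the improvement step returns u⁽¹⁾(x) = −x again — which is in fact the optimal control for this plant, so the iteration is at a fixed point. The sketch below reproduces that computation numerically; it is a toy illustration, not the hjb.m implementation.

```python
import numpy as np

# Toy problem: x_dot = u (f = 0, g = 1), l(x) = x^2, R = 1, Omega = [-1, 1],
# initial stabilizing control u0(x) = -x, one basis element phi1(x) = x^2.
x = np.linspace(-1.0, 1.0, 2001)
phi, dphi = x**2, 2.0 * x
u0 = -x

# Galerkin equation: int_Omega [ c*dphi*(f + g*u0) + l + R*u0^2 ] * phi dx = 0.
# It is linear in c, so c = -int[(l + u0^2)*phi] / int[dphi*u0*phi].
# (Unnormalized Riemann sums suffice: only the ratio matters.)
lhs = np.sum(dphi * u0 * phi)
rhs = -np.sum((x**2 + u0**2) * phi)
c = rhs / lhs

# Improvement step (3.27): u1(x) = -(1/2) R^-1 g dV/dx = -c*x.
u1 = -c * x
print(c)  # c = 1.0, so V = x^2 and u1 = -x: the optimal control here
```

With more basis elements and a genuinely nonlinear f, the same recipe produces a linear system in the coefficients \(c_j\), which is what hjb.m assembles by numerical quadrature.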


k dx =0


To apply this algorithm to the flexible beam system, we made use of a Matlab implementation of this algorithm contained in the file hjb.m (see Appendix B). It is a straightforward numerical implementation of the preceding algorithm, and it only requires three items: the set Ω, the basis elements \(\{\phi_j\}_{j=1}^{N}\), and the initial stabilizing control u⁽⁰⁾. In the hardware implementation we used the following initializing parameters:

\[
\begin{aligned}
\Omega &= [-.2,.2]\times[-2.5,2.5]\times[-\tfrac{\pi}{2},\tfrac{\pi}{2}]\times[-20,20] \\
\{\phi_j\} &= \{x_1^2,\,x_2^2,\,x_3^2,\,x_4^2,\,x_1x_2,\,x_1x_3,\,x_1x_4,\,x_2x_3,\,x_2x_4,\,x_3x_4\} \\
u^{(0)}(x) &= 41x_1 - 1.5x_2 - 2.6x_3 - .19x_4.
\end{aligned}
\]

We added higher-order terms in simulation, though they only slightly improved the performance:

\[
\begin{aligned}
\Omega &= [-.05,.05]\times[-5,5]\times[-\tfrac{\pi}{2},\tfrac{\pi}{2}]\times[-10,10] \\
\{\phi_j\} &= \{x_1^2,\,x_2^2,\,x_3^2,\,x_4^2,\,x_1x_2,\,x_1x_3,\,x_1x_4,\,x_2x_3,\,x_2x_4,\,x_3x_4, \\
&\qquad x_1^3x_2,\,x_1x_2^3,\,x_1^3x_3,\,x_2^3x_3,\,x_1x_3^3,\,x_2x_3^3,\,x_1^3x_4,\,x_2^3x_4,\,x_3^3x_4,\,x_1x_4^3,\,x_2x_4^3,\,x_3x_4^3\} \\
u^{(0)}(x) &= 120x_1 - 25x_2 - 4.5x_3 - .6x_4.
\end{aligned}
\]

In other words, we made the initial stabilizing control the linearized LQR control developed previously. Ω was constructed so that the control would be stabilizing for displacements of up to 20 cm in either direction and for rotations of up to 90 degrees by the proof mass in the hardware. The basis functions were simply a set of second-order polynomials and their corresponding cross terms, and in the simulation fourth-order terms were added as basis functions. This control has several beneficial qualities. First, it uses all of the available outputs to generate a truly non-linear control signal: u(x) can depend on the square of the states, and the gᵀ(x) term renders even a second-order SGA control signal nonlinear. Second, the SGA algorithm approximates an optimal solution: the designer can feel assured that the control approaches optimality with respect to the desired cost functional. Third, the SGA algorithm is easy to tune



because one can quickly adjust the Q and R matrices to change the state penalty weightings. Finally, this design technique can be interpreted as improving, at each iteration of the algorithm, upon the initial stabilizing control (developed through some sub-optimal approach, such as a backstepping or PBC strategy).

3.5.2 The H∞ Problem

The successive Galerkin approximation technique for the Hamilton-Jacobi-Isaacs (HJI) equation is described in [5]. The basic idea is similar to the previous section except that the successive iteration step requires two nested loops. The non-linear H∞ problem data is given by

\[
\dot x = f(x) + g(x)u + k(x)w, \qquad
\int_0^T \Big(l(x) + \|u\|_R^{2}\Big)\,dt \le \gamma^{2}\int_0^T \|w\|_P^{2}\,dt,
\qquad x(0) = 0,\ \forall\,T \ge 0,
\]

where it is also desirable to compute the smallest possible γ > 0. In other words, the objective is to minimize the L₂ gain from an exogenous disturbance signal, w, to an output defined by \(\int \big(l(x)+\|u\|_R^{2}\big)\,dt\). The solution to this minimization problem is the HJI equation, given by

\[
\frac{\partial V}{\partial x}^{T} f + h^T h + \frac{1}{4}\,\frac{\partial V}{\partial x}^{T}\left(\frac{1}{\gamma^{2}}\,kP^{-1}k^T - gR^{-1}g^T\right)\frac{\partial V}{\partial x} = 0. \tag{3.28}
\]

In a manner directly analogous to the previous section, we can write the HJI equation in a way that decouples u, w, and V, and then we can solve for the optimal u iteratively: we start, as before, with an initial stabilizing control u⁽⁰⁾, and then iterate between the disturbance and V until it is the worst disturbance for the given control. Then we update the control to u⁽¹⁾ and iteratively compute the worst disturbance for
27


the new control, then update the control again, and so on. Thus we actually perform two simultaneous iterations, one for w*(x) and one for u*(x):

\[
\frac{\partial V^{(i,j)}}{\partial x}^{T}\big(f + gu^{(i)} + kw^{(i,j)}\big) + l + \big\|u^{(i)}\big\|_R^{2} - \gamma^{2}\big\|w^{(i,j)}\big\|_P^{2} = 0 \tag{3.29}
\]
\[
w^{(i,j+1)}(x) = \frac{1}{2\gamma^{2}}\,P^{-1}k^T(x)\,\frac{\partial V^{(i,j)}}{\partial x}(x) \tag{3.30}
\]
\[
u^{(i+1)}(x) = -\frac{1}{2}\,R^{-1}g^T(x)\,\frac{\partial V^{(i,\infty)}}{\partial x}(x) \tag{3.31}
\]

where i and j range from zero to ∞. If u⁽⁰⁾(x) asymptotically stabilizes the system \(\dot x = f + gu^{(0)}\) on \(\mathbb{R}^n\), then Equations (3.29), (3.30), and (3.31) converge to the solution of the HJI equation pointwise on Ω, as shown in [5]. V⁽ⁱ'ʲ⁾(x) is again approximated via a global Galerkin approximation scheme where the coefficients are found by solving the algebraic Galerkin equation:

\[
\int_\Omega \left[\frac{\partial V^{(i,j)}}{\partial x}^{T}\big(f + gu^{(i)} + kw^{(i,j)}\big) + l + \big\|u^{(i)}\big\|_R^{2} - \gamma^{2}\big\|w^{(i,j)}\big\|_P^{2}\right]\phi_k\,dx = 0,
\]

for k = 1, …, N, and γ is found via a bisection search algorithm. The file hji.m is a straightforward implementation of this algorithm in Matlab code, where the integrals are computed numerically using Matlab's quad function. This software package again only requires three things: the set Ω, the basis elements \(\{\phi_j\}_{j=1}^{N}\), and an initial stabilizing control u⁽⁰⁾. To implement this control in hardware we used the following:

\[
\begin{aligned}
\Omega &= [-.2,.2]\times[-2.5,2.5]\times[-\tfrac{\pi}{2},\tfrac{\pi}{2}]\times[-20,20] \\
\{\phi_j\} &= \{x_1^2,\,x_2^2,\,x_3^2,\,x_4^2,\,x_1x_2,\,x_1x_3,\,x_1x_4,\,x_2x_3,\,x_2x_4,\,x_3x_4\} \\
u^{(0)}(x) &= 41y - 1.5\dot y - 2.6\theta - .19\dot\theta.
\end{aligned}
\]

We used the same basis functions, Ω, and initial control as in the HJB case. This control strategy has all of the strengths of the HJB law, and it additionally improves the robustness of the control.



Chapter 4<br />

Simulation Results<br />

4.1 The Simulated Testbed<br />

It is difficult to find a completely objective way to compare the performance of different control algorithms, and in this comparison we have tried to take the simplest possible approach in evaluating the six regulation strategies. In order to ensure that the same initial disturbance was present in each trial, the system was first excited in an open-loop fashion by applying a voltage pulse to the flexible beam system's motor. This initial disturbance causes the flexible beam to begin oscillating; the objective of the feedback controls is now to damp the oscillation as quickly as possible. Figures 4.1 and 4.2 show the open-loop control effort and the open-loop response, i.e., the case where the loop is never closed. Notice that the disturbance signal in Figure 4.1 dies out before t = 2.5 seconds: it doesn't interfere with the control algorithms. Figure 4.2 shows that the NLBP mathematical model does not include any damping terms, whereas the actual FBS exhibits significant natural damping of the beam's oscillations.

At t = 2.5 seconds the loop is closed and the feedback control law begins operating. In these simulated results, a saturation block has been added so that the control signal will not exceed plus or minus 7 Volts, exactly as in the real flexible beam system. In fact, the SIMULINK models in simulation are exactly like the actual real-time SIMULINK models except that the mathematical model for the plant is used instead of the actual plant. The expectation is that the simulated results will provide a more theoretical comparison: a test without modeling errors or exogenous



Figure 4.1: Initial Open Loop Disturbance (control signal in Volts vs. time in seconds)

Figure 4.2: Initial Open Loop Response (response in cm vs. time in seconds)



disturbances, whereas the hardware results will indicate the controls' performance in<br />

the presence of modeling errors, sensor noise, and other unmodelled disturbances.<br />

Plots showing how each of the six control laws compares to each of the others are included and discussed, but it is vital to have a more quantitative measure of performance. To this end, two simple methods were chosen to quantify the performance of the algorithms. First, a simple integrator was added that returns the total energy of the first state (the beam's deflection) from t = 2.5 to t = 10 seconds. This quantifies how quickly the oscillations died out, which is the primary objective of the control laws. This sum also includes a measure of the steady-state error of the controls, because most of the control laws achieved their steady states before t = 8 seconds. Second, an integrator returned the sum of the squared input signal from t = 2.5 to t = 10 seconds. This number represents the amount of energy used to implement the control: it provides an estimate of the control effort, and it may thus be used to evaluate the efficiency of the control laws (i.e., if two control designs had comparable damping, which used less control effort to achieve it?). These results are compared at the end of this section in tabular form.
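The two scores just described amount to numerically integrating a squared signal over the window from t = 2.5 to t = 10 seconds. In the thesis this was done with integrator blocks in SIMULINK; a minimal Python sketch over a sampled signal (the trapezoidal rule is an assumption about the discretization) is:

```python
import numpy as np

def windowed_energy(t, signal, t_start=2.5, t_end=10.0):
    """Trapezoidal-rule integral of signal(t)**2 over [t_start, t_end]."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(signal, dtype=float)
    mask = (t >= t_start) & (t <= t_end)
    tm, sq = t[mask], s[mask] ** 2
    return float(np.sum(0.5 * (sq[1:] + sq[:-1]) * np.diff(tm)))

# For one simulation run with deflection x1 (cm) and input u (V):
#   state_energy   = windowed_energy(t, x1)  # how quickly oscillations die out
#   control_energy = windowed_energy(t, u)   # control effort expended
```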

All of the simulations were conducted on the same SIMULINK model of the plant. Each of the control techniques was tuned extensively, using the information from the simulations, to yield the best possible performance. It should be noted that this process of tuning, of choosing the parameters in order to implement the controls at an optimal level, is a highly subjective activity. In fact, there is no way to be sure that the parameters finally used yielded the best possible performance for the given implementation of the control strategy, though the utmost care was taken to achieve the best possible performance with each of the design strategies.

4.2 Evaluation of the Plots in Simulation

4.2.1 Linearized Optimal H2

The standard linear quadratic regulation method successfully regulated the system, a fact which perhaps argues against this physical system as a benchmark for nonlinear systems: the whole purpose of nonlinear control theory is to provide solutions for plants that are difficult to regulate using the standard linear approaches. Figure 4.3 shows the LQR design's performance plotted against the open loop response.

[Figure 4.3: Linearized Optimal vs. Open Loop Response (response in cm vs. time in seconds)]

This linearized H2 approach gave nearly the best performance in simulation. This is reasonable, since the initial disturbance was small and the system stayed within a region where linearization yields a good estimate of the true system dynamics. It should be said, though, that this control relied on relatively high gains (Kc = [−658 69.8 26 3.7]), and it was later discovered that such gains resulted in actuator dysfunction and instability on the hardware system. This highlights the goal of nonlinear control design research: to find control laws that will effectively regulate systems when they operate in the presence of real-world nonlinearities or when they operate well beyond their linear region.
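The gains above came from MATLAB's `lqr` command applied to the linearized model. The same computation can be sketched in plain Python through the Hamiltonian-matrix solution of the continuous-time Riccati equation; the A, B, Q, R below are an illustrative double integrator, not the thesis's linearized NLBP matrices.

```python
import numpy as np

def lqr_gain(A, B, Q, R):
    """LQR gain via the Hamiltonian-matrix solution of the Riccati equation.

    Stacks H = [[A, -B R^-1 B^T], [-Q, -A^T]], takes the eigenvectors
    belonging to the stable eigenvalues, recovers P = Y X^-1, and returns
    K = R^-1 B^T P so that u = -K x.
    """
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]           # stable invariant subspace
    X, Y = stable[:n, :], stable[n:, :]
    P = np.real(Y @ np.linalg.inv(X))   # Riccati solution
    return Rinv @ B.T @ P

# Illustrative double integrator (NOT the thesis's linearized beam model):
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 1.0])   # state weighting, chosen by the designer
R = np.array([[1.0]])     # control-effort weighting
K = lqr_gain(A, B, Q, R)  # state-feedback gain vector
```

Tuning in this framework means adjusting Q and R and recomputing K, which is exactly the workflow described in Section 4.4.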



[Figure 4.4: Linear H∞ vs. Open Loop Response (response in cm vs. time in seconds)]

4.2.2 Linearized H∞

The linearized H∞ control also gave good performance, though it could not match the performance of the linearized H2 controller. In fact, this linearized H∞ control could match neither the H2 SGA nor the backstepping control's performance. Figure 4.5 shows that the price of more robustness is less performance.

The gains produced by the linearized H∞ design (Kc = [−62 16.7 1.5 .46]) were much lower than those of the LQR design. These gains were, in fact, more robust, at least in the sense that they produced a control that succeeded in stabilizing the actual flexible beam system, unlike the gains produced by the linearized H2 approach.

[Figure 4.5: Linear H∞ vs. Linearized Optimal (response in cm vs. time in seconds)]

4.2.3 Passivity Based Control

As seen in Figure 4.6, the passivity based control exhibited some strange behavior: after sharply attenuating the vibration until about t = 6, the oscillations then increase a little before dying out. Though this control is still a dramatic improvement over the open loop response, both of the linearized controllers provided better damping than this approach in the simulations. Figures 4.7 and 4.8 show that the linearized designs attenuated the vibrations faster.

A possible explanation for the strange envelope of the first state's oscillations is that the dynamics in the feedback loop act as an energy-shaping block. It is possible that, similar to the way the energy in two coupled pendulums is transferred back and forth before damping out, energy is being coupled back and forth between the control and the system dynamics while being steadily absorbed by the controller's virtual damping effect.

[Figure 4.6: Passivity Based Control vs. Open Loop Response (response in cm vs. time in seconds)]

[Figure 4.7: Passivity Based Control vs. Linear Optimal Control (response in cm vs. time in seconds)]

[Figure 4.8: Passivity Based Control vs. Linear Robust Control (response in cm vs. time in seconds)]

4.2.4 Backstepping Algorithm

The backstepping control design performed very well: it seemed to damp the oscillations more quickly than any of the other designs in simulation. Figures 4.10 through 4.12 show that it attenuated the vibrations faster than the other control designs. There were, however, two anomalies associated with this controller: it expended a great deal more control effort than the previous designs, and it began by increasing the amplitude of the first oscillation after it was turned on (notice in Figure 4.9 how the first oscillation after the loop was closed is greater than the open loop response).

The unusually high first-state energy (∫x1²dt) of the backstepping control, as shown in Table 4.1, is actually the direct result of this initial perturbation in the beam's motion, and it is therefore misleading. Its strong attenuation of the beam's oscillations, clearly seen in Figures 4.10 through 4.12, results directly from the fact that this control design used more control energy than the other designs. This became a weakness on the experimental testbed, however, because its high input-signal requirements were too demanding for the electric motor, and it was unable to effectively regulate the FBS.

[Figure 4.9: Backstepping vs. Open Loop Response (response in cm vs. time in seconds)]

[Figure 4.10: Backstepping vs. Linear Optimal Control (response in cm vs. time in seconds)]

[Figure 4.11: Backstepping vs. Linear Robust Control (response in cm vs. time in seconds)]

[Figure 4.12: Backstepping vs. Passivity Based Control (response in cm vs. time in seconds)]

4.2.5 SGA: Nonlinear H2 Optimal Control

The SGA technique performed reasonably well in simulation. It regulated the NLBP faster than the linearized H∞ and the PBC controllers, as seen in Figures 4.15 and 4.16; however, it was not as effective as the backstepping (Figure 4.17) or the linearized H2 (Figure 4.14) controllers in damping the vibrating beam. It did not use as much control effort as either of these controllers, however, so it used control effort very efficiently.

[Figure 4.13: SGA: Nonlinear H2 vs. Open Loop Response (response in cm vs. time in seconds)]

Since this control was fully nonlinear, in the sense that it was a nonlinear function of the state of the system, it was hoped that it could outperform the standard linearized approach. The fact that the LQR control provided better performance could be attributed to too few basis functions being used, or perhaps to the wrong choice of basis functions. Only polynomials were used as basis functions, but logarithms or exponential functions might have performed better.
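A Galerkin value-function approximation of the kind used here has the form V(x) ≈ Σᵢ cᵢ φᵢ(x), with the φᵢ drawn from the chosen basis and the control recovered from the gradient of V. The sketch below shows only the mechanics of evaluating such an expansion; the particular basis terms and coefficients are illustrative, not those computed by the SGA for the NLBP.

```python
import numpy as np

# Illustrative even-polynomial basis in two states (not the basis actually
# computed for the NLBP):
basis = [
    lambda x: x[0] ** 2,
    lambda x: x[0] * x[1],
    lambda x: x[1] ** 2,
    lambda x: x[0] ** 4,
]

def value(x, coeffs):
    """Evaluate V(x) = sum_i c_i * phi_i(x) for the polynomial basis above."""
    return sum(c * phi(x) for c, phi in zip(coeffs, basis))

def grad_value(x, coeffs, h=1e-6):
    """Central-difference gradient of V; the SGA control is formed from dV/dx."""
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x))
        e[i] = h
        g[i] = (value(x + e, coeffs) - value(x - e, coeffs)) / (2.0 * h)
    return g
```

Swapping polynomial entries of `basis` for logarithmic or exponential terms, as suggested above, changes nothing else in this evaluation machinery.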

4.2.6 SGA: Nonlinear H∞ Control

The performance of the SGA solution to the HJI equation is comparable to that of the SGA solution to the HJB equation (see Figure 4.23). It seems that in both the H2 and the H∞ cases, the linear LQR design yields better overall performance, as shown in Figure 4.19. It was hoped, though, that the nonlinear solutions would perform better in hardware, where the added uncertainties should demand more selective controllers.

[Figure 4.14: SGA: Nonlinear H2 vs. Linear Optimal Control (response in cm vs. time in seconds)]

[Figure 4.15: SGA: Nonlinear H2 vs. Linear Robust Control (response in cm vs. time in seconds)]

[Figure 4.16: SGA: Nonlinear H2 vs. Passivity Based Control (response in cm vs. time in seconds)]

[Figure 4.17: SGA: Nonlinear H2 vs. Backstepping Control (response in cm vs. time in seconds)]

[Figure 4.18: SGA: Nonlinear H∞ vs. Open Loop Response (response in cm vs. time in seconds)]

[Figure 4.19: SGA: Nonlinear H∞ vs. Linear Optimal Control (response in cm vs. time in seconds)]

[Figure 4.20: SGA: Nonlinear H∞ vs. Linear Robust Control (response in cm vs. time in seconds)]

[Figure 4.21: SGA: Nonlinear H∞ vs. Passivity Based Control (response in cm vs. time in seconds)]

[Figure 4.22: SGA: Nonlinear H∞ vs. Backstepping Control (response in cm vs. time in seconds)]

[Figure 4.23: SGA: Nonlinear H∞ vs. SGA: Nonlinear H2 (response in cm vs. time in seconds)]


Figure 4.23 shows that the nonlinear H∞ solution and the nonlinear H2 solution provide similar performance, with the robust design being slightly inferior, as in the linearized case. The question then becomes: what exactly is gained by using such 'robust' design methodologies? As shown in Chapter 5, the H∞ designs do provide better performance on the hardware, a vindication of the robust design methodology.

4.3 Tabulated Results

Table 4.1 shows how the numbers compare for the various control strategies. The linearized H2 was easily the best performer, while the passivity-based control yielded the worst results. The backstepping algorithm was the best oscillation damper, despite its high first-state energy, but it also used far more control effort than the other designs. Disappointingly, the nonlinear control laws do not stand out as notably superior to the linearized ones: perhaps the TORA system is too easily stabilized and not appropriate as a nonlinear benchmark problem.

Table 4.1: Tabular Comparison of Simulated Results

                 LQR    Lin. H∞   PBC    Backstep   HJB    HJI
∫y(t)²dt (cm²)   3.06   5.72      11.2   6.03       4.12   5.28
∫u(t)²dt (V²)    4.05   4.39      .945   21.12      4.5    1.366

4.4 Tuning and Ease of Implementation

The easiest controls to tune are the linearized optimal control laws. Both LQR and the optimal H∞ require only the adjustment of the weighting matrix Q, and rapidly testing new values is as simple as loading the new feedback gain vector, Kc, into the software workspace. These controls are also the easiest to implement, as they require only the linearization of the state-space model and the solution of a Riccati equation. The Matlab commands lqr and hinf automate the entire procedure. The Galerkin approximations are also fairly easy to implement, requiring only the effort to master the software package; all of the work is then done by the algorithm. Choosing an appropriate region of stability, Ω, required several attempts: a balance must be struck between having the largest possible region of stability and having improved performance on the region of operation.

The backstepping control is by far the most difficult to implement, as it requires a good deal of effort to perform all of the variable transformations and to construct the stable control Lyapunov functions. Tuning this algorithm is also difficult due to the complexity of the feedback law: it is not clear how changing the weighting coefficients will affect the control signal (i.e., in the LQR and SGA designs Q and R weight the cost of the states; there is no such physical intuition here).

The passivity based control is also relatively difficult to implement, and its results are the poorest of the test group. It is also difficult to tune this control: its performance is very sensitive to changes in the feedback loop parameters a, b, k1, and k2, and it took the longest time to adjust this control in order to provide an acceptable response. Perhaps its poor performance is due to an inappropriate choice of these parameters, but tuning this control is unclear at best: raising the gains does not translate into better performance. In fact, the final design is the result of lowering the values of k1 and k2.

4.5 Robustness Analysis

The true test of any given algorithm's robustness lies in how it performs in hardware: imperfect sensors, imperfect actuators, and unmodelled disturbances and dynamics will always degrade control efforts. So in many ways, the best measure of robustness for the six control designs studied is to examine how their simulated results compare with their results as tested in hardware. Another way to gauge robustness, especially with respect to modeling errors, is to design for a specific set of system parameters, run the control on a system with different parameter values, and measure the degradation in performance. This was easily accomplished with the TORA system, because all of the physical parameters except for the motor constants can be combined into the term ε described in Chapter 1:

ε = mc / √((I + mc²)(m + M)).

All of the controllers were designed for the actual value of ε = .2. The simulations were then run with varying values of ε until the controller no longer regulated the system; thus, the more robust controllers could tolerate greater changes in ε than the less robust designs. Table 4.2 below compares the maximum and minimum values of ε that each control design could regulate, given that the design assumed a plant value of ε = .2:

Table 4.2: Tabular Comparison of Simulated Robustness

        LQR    Lin. H∞   PBC    Backstep   HJB    HJI
ε max   .277   .275      .245   .251       .262   .275
ε min   .016   .015      .014   .015       .102   .093
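The sweep behind these bounds (fix the design at ε = .2, then perturb the plant's ε in each direction until regulation fails) can be sketched as a simple search. The `regulates` predicate is a placeholder for running the closed-loop simulation at a given ε and checking whether the response settles.

```python
def robustness_bounds(regulates, eps_design=0.2, step=0.001, eps_cap=1.0):
    """Sweep the plant's epsilon away from the design value in both directions.

    regulates(eps) -> True if the fixed controller (designed for eps_design)
    still damps the response when the plant's actual parameter is eps; it is
    a placeholder for running the closed-loop simulation and checking the
    final response.  Returns (eps_min, eps_max), the extremes still regulated.
    """
    hi = eps_design
    while hi + step < eps_cap and regulates(hi + step):
        hi += step
    lo = eps_design
    while lo - step > 0.0 and regulates(lo - step):
        lo -= step
    return lo, hi
```

This assumes the set of regulated ε values is an interval around the design point, which matches how the table above reports a single maximum and minimum per controller.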

Interestingly, the linearized controls outperformed the nonlinear designs in this simulation as well: the linearized LQR control had the highest tolerance for raising the value of ε in the system, with the linearized H∞ and the SGA solution to the nonlinear H∞ problem yielding only slightly less robustness with respect to this upper bound. On the lower bound, the system value of ε could be lowered to ε = .014 before the passivity based control failed to regulate the system, over a 1000 percent change in the value of ε! In fact, all of the controls gave similarly robust results for the lower bound of ε, except the Galerkin approximations, whose lower bounds were notably inferior.

Robustness is a property that is very difficult to measure precisely, and the above simulated results give only a rough idea of how the controllers behave when significant modeling errors are present. It should be noted that they all exhibit a wide range of values of ε over which the controls continue to regulate the system, and they are all, therefore, meaningfully robust.



Chapter 5

Experimental Results

5.1 Testbed and Open Loop Response

The experimental results obtained from the actual flexible beam system are similar to those obtained in simulation, a verification of the functionality of the mathematical model used. In the actual hardware experiments, however, the nonlinear control strategies clearly outperform the linearized controls: a validation of the true robustness of these nonlinear algorithms and their design methodologies.

The experiments conducted on the FBS use the same testing method that was introduced in the simulations: an initial voltage pulse begins an oscillation, and at time t = 2 seconds the control is switched on to damp the disturbance. As shown in the plot of the open loop response, there is substantial unmodelled damping present in the system; the plots therefore only show the response through t = 6 seconds. Due to some software incompatibilities and some difficulty in tuning (the backstepping designs either went unstable or made the system's motor grind), the backstepping control was not implemented and tested on the hardware system, but the five remaining control laws were each tuned to provide the best possible performance in hardware. As before, the results of each control strategy will be compared to each of the others as well as to the open-loop response of the plant.

5.2 Linearized Optimal and Robust Controls

The linearized controls, which appeared to be among the best feedback choices in simulation, show lackluster results when implemented in hardware. The linearized H2 control did not even stabilize the system until its gains were recomputed, and its final gains were substantially lower (Kc = [−45 11.3 1.8 .32]) than those used in the simulations.

[Figure 5.1: Open Loop Response of the FBS (response in m vs. time in seconds)]

Figure 5.4 shows that the linearized H∞ controller outperformed the H2 design by a small margin. Clearly the unmodelled dynamics and disturbances in the system are enough to give the robust design an advantage when applied to a real, physical plant.

5.3 Passivity Based Control

The passivity-based algorithm's results in hardware are very similar to its simulated response. It displays the same unusual rebound behavior: it damps the vibration firmly in the first seconds, then the amplitude of the vibrations increases before finally dying out (see Figure 5.5). This phenomenon, though not a serious effect in this experiment, could be a serious liability in situations where a monotonically decreasing response envelope is required. The passivity-based design yields better performance than the linearized controls as measured by the energy in the first state, though Figures 5.6 and 5.7 show that the linearized controllers' attenuation was more uniform.

Since its performance relative to the linearized designs improves in going from simulation to the hardware testbed, the passivity based control design is justified. Its robustness is also apparent from its portability between simulation and hardware.

[Figure 5.2: Linear Optimal vs. Open Loop (response in m vs. time in seconds)]

[Figure 5.3: Linear Robust vs. Open Loop (response in m vs. time in seconds)]

[Figure 5.4: Linear Robust vs. Linear Optimal (response in m vs. time in seconds)]

5.4 Successive Galerkin Approximations

The successive Galerkin approximations yield the best results in hardware. Both the HJB solution and the HJI solution outperform the linearized controls as well as the passivity based control, as shown in Figures 5.9, 5.10, and 5.11. Figure 5.16 shows that the nonlinear robust approximation slightly outperforms the HJB solution, perhaps because its design emphasizes robustness with respect to the unmodelled effects of the flexible beam.

The SGA method succeeds in outperforming the standard linear approaches as well as a passivity based design. Its implementation is straightforward, and it is easily tuned to optimize its performance. Its excellent performance in hardware speaks for its true robustness, and its only drawback is the care with which one must choose appropriate basis functions for the approximation and an appropriate region of stability, Ω.

[Figure 5.5: Passivity Based vs. Open Loop (response in m vs. time in seconds)]

[Figure 5.6: Passivity Based vs. Linear Optimal (response in m vs. time in seconds)]

[Figure 5.7: Passivity Based vs. Linear Robust (response in m vs. time in seconds)]

[Figure 5.8: SGA: Nonlinear H2 vs. Open Loop (response in m vs. time in seconds)]

[Figure 5.9: SGA: Nonlinear H2 vs. Linear Optimal (response in m vs. time in seconds)]

[Figure 5.10: SGA: Nonlinear H2 vs. Linear Robust (response in m vs. time in seconds)]

[Figure 5.11: SGA: Nonlinear H2 vs. Passivity Based (response in m vs. time in seconds)]

[Figure 5.12: SGA: Nonlinear H∞ vs. Open Loop (response in m vs. time in seconds)]

[Figure 5.13: SGA: Nonlinear H∞ vs. Linear Optimal (response in m vs. time in seconds)]

[Figure 5.14: SGA: Nonlinear H∞ vs. Linear Robust (response in m vs. time in seconds)]

[Figure 5.15: SGA: Nonlinear H∞ vs. Passivity Based (response in m vs. time in seconds)]

[Figure 5.16: SGA: Nonlinear H∞ vs. SGA: Nonlinear H2 (response in m vs. time in seconds)]

5.5 Tabulated Results

Table 5.1 summarizes the performance of the control strategies implemented on the FBS. The first row compares the effective exponential decay rates; these values were computed by a least-squares fit of the first three local maxima of the response curves. The PBC control shows an unusual perturbation in its response envelope, and its decay number is therefore not meaningful. The second row shows the integral of the total energy in the linear position state, y, from the time the control is turned on until steady state is reached at about 6 seconds. The third row compares the control effort by giving the integral of the squared control signal from 2 to 6 seconds. (In the hardware tests, steady-state error is non-existent due to the natural damping of the system, so the integrator is only run to t = 6 seconds.)
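The decay-rate fit just described reduces to a log-linear least-squares problem: if the peak amplitudes follow A·e^(−λt), then the logarithms of the peaks lie on a line of slope −λ. A sketch over a sampled response (the simple peak-detection rule is an assumption about the implementation) is:

```python
import numpy as np

def decay_rate(t, y, n_peaks=3):
    """Least-squares fit of the first n_peaks local maxima to A * exp(-lam * t).

    Fitting log(peak amplitude) against peak time gives a line whose slope
    is -lam; lam is returned as the effective exponential decay rate.
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    # indices of interior local maxima of the sampled response
    peaks = [i for i in range(1, len(y) - 1)
             if y[i] > y[i - 1] and y[i] >= y[i + 1]]
    peaks = peaks[:n_peaks]
    slope, _ = np.polyfit(t[peaks], np.log(y[peaks]), 1)
    return -slope
```

For a damped sinusoid the successive maxima are exactly proportional to e^(−λt), so three peaks are enough to recover λ reliably when the response is clean.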

Table 5.1: Tabular Comparison of Experimental Results

                      Linear Optimal   Linear H∞   Passivity   HJB    HJI
decay rate λ (e^−λt)  1.88             1.88        1.89        1.90   1.91
∫y(t)²dt              1.71             1.71        1.63        1.41   1.1
∫u(t)²dt              .23              .33         1.35        .3     .44

Clearly the best control is the nonlinear H∞ (SGA/HJI) solution. It outperforms the other control laws without using as much control effort as the PBC control. The next best control is the nonlinear H2 (SGA/HJB) solution. It does not attenuate the disturbance as fast as the HJI solution, but it also uses less control effort. These two SGA-based controllers produced nonlinear feedback control laws that were noticeably superior to the linearized controllers. PBC performs better than both of the linearized controls, but it uses an unusual amount of control effort. An advantage of the PBC control is that it requires only the angle as an input (no state estimation), though perhaps this explains why it is not very efficient in its use of control effort.



Chapter 6

Conclusion and Future Work

6.1 Overview

The flexible beam system is a nonlinear system subject to the higher-order dynamics of the beam's motion, as well as to the nonlinearity caused by the coupling between the rotational actuator and the quasi-linear motion of the mass at the end of the beam. Despite all of these nonlinearities, and despite all of the unmodelled dynamics, the linearized approaches work adequately and damp the beam's vibration. It was gratifying to note, however, that the fully nonlinear approaches did prove to be better controllers when implemented in hardware.

The successive Galerkin approximations to the HJB and HJI equations produce control algorithms that efficiently regulate the nonlinear benchmark problem. The performance of these algorithms as implemented on the FBS is superior to that of standard linearized controllers, as well as to a passivity-based design and a backstepping design. All of the control strategies studied produce robust, stabilizing designs, though the passivity based approach is very sensitive to its feedback parameters.

In implementing this broad sample of nonlinear control algorithms, the similarities<br />

and differences of the studied approaches become more apparent. The standard<br />

linearized optimal and robust approaches are simply ways of computing the appropriate<br />

state-feedback gains so that the system is optimally regulated and robust with<br />

respect to a given cost function. This approach is thus dependent on an expert familiar<br />

with the system to choose an appropriate cost function. This is very similar<br />



to the Galerkin approximation technique, where again an optimal set of coefficients<br />

is sought that will regulate the system optimally in some region of the state space.<br />

The difference is that the linearized controls are only guaranteed to locally stabilize<br />

the system, whereas the Galerkin approximations take into account more global<br />

information. Also, the SGA technique creates a truly nonlinear control signal, and<br />

this translates into better and more robust performance when implemented on the<br />

flexible beam set-up.<br />
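To make the structure of such a controller concrete, the following is a minimal sketch of evaluating an SGA-style feedback; the basis, coefficients, and input map here are illustrative stand-ins, not the values actually used on the FBS.<br />

```python
# Sketch of an SGA-style nonlinear feedback: V(x) is approximated by a small
# polynomial basis whose coefficients c_j would be found offline by Galerkin
# projection (the basis and numbers below are illustrative only).

def grad_V(x, coeffs):
    """Gradient of V(x) = c1*x1^2 + c2*x2^2 + c3*x1^2*x2^2."""
    c1, c2, c3 = coeffs
    x1, x2 = x
    return (2 * c1 * x1 + 2 * c3 * x1 * x2**2,
            2 * c2 * x2 + 2 * c3 * x1**2 * x2)

def sga_feedback(x, coeffs, g, R_inv):
    """Pointwise optimal-control form u = -(1/2) R^-1 g(x)^T grad V(x)."""
    return -0.5 * R_inv * sum(gi * vi for gi, vi in zip(g(x), grad_V(x, coeffs)))

# Example: a single input entering the second state equation.
g = lambda x: (0.0, 1.0)
u = sga_feedback((0.5, -0.2), (1.0, 1.0, 0.3), g, 1.0)
```

The cross term c3*x1^2*x2^2 is what makes the feedback genuinely nonlinear in the state; dropping it recovers an LQR-like linear law.<br />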

On the other hand, the backstepping approach is Lyapunov-based:<br />

it builds a control Lyapunov function step by step, then implements a very complex<br />

and nonlinear control that is a composite of the control signals from the various steps.<br />
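As an illustration of that step-by-step construction (on a toy two-state chain, not the FBS model), the composite law and its Lyapunov bookkeeping look like this:<br />

```python
# Backstepping on the toy chain x1' = x2, x2' = u (illustrative only).
# Step 1: treat x2 as a virtual control; alpha(x1) = -k1*x1 stabilizes x1.
# Step 2: drive z = x2 - alpha(x1) to zero with the augmented CLF
#   V = x1**2/2 + z**2/2, whose derivative becomes -k1*x1**2 - k2*z**2.

def backstep_u(x1, x2, k1=1.0, k2=1.0):
    z = x2 + k1 * x1                  # error from the virtual control
    return -k1 * x2 - x1 - k2 * z     # composite of the per-step signals

# Forward-Euler check that the origin is attractive:
x1, x2, dt = 1.0, 0.0, 0.01
for _ in range(2000):
    u = backstep_u(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
```

Even on this two-state example the final law combines terms from both steps; on the fourth-order FBS dynamics the corresponding expression is far more involved, which is the complexity referred to above.<br />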

The passivity-based strategy seeks to exploit the structure and flow of the<br />

energy in the system by adding a dynamic in the feedback loop. This control dynamic<br />

is chosen to add damping artificially to the closed-loop system and to shape the<br />

potential energy function for closed-loop stability and performance.<br />
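A minimal sketch of that idea, assuming a unit-inertia plant and illustrative gains (not the thesis's tuned PBC design for the FBS): a first-order controller state filters the measured angle so that damping can be injected without any velocity measurement.<br />

```python
# Damping injection through a controller dynamic, using only the measured
# angle q: the filter state w makes v = w + b*q a surrogate for velocity.
#   w' = -a*(w + b*q),   u = -kp*q - kd*(w + b*q)
# All gains are illustrative.

def pbc_step(q, w, dt, a=10.0, b=10.0, kp=4.0, kd=2.0):
    v = w + b * q                 # filtered, velocity-like signal
    u = -kp * q - kd * v          # potential shaping + injected damping
    w = w + dt * (-a * v)         # controller (filter) dynamic
    return u, w

# Euler simulation against a unit-inertia plant q'' = u:
q, dq, w, dt = 1.0, 0.0, -10.0, 0.001    # w chosen so the filter starts at 0
for _ in range(10000):
    u, w = pbc_step(q, w, dt)
    q, dq = q + dt * dq, dq + dt * u
```

The added filter state is the "dynamic in the feedback loop": the control depends only on q, yet the closed loop is damped, at the cost of the sensitivity to the filter gains noted above.<br />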

6.2 Extensions to this Research<br />

Some interesting things came to light during the testing of these<br />

controls: there does not seem to be any advantage in adding higher-order terms to<br />

the Galerkin approximations. As higher-power terms were added to the basis functions,<br />

there was a slight degradation in performance. This would be an excellent extension<br />

to this research: to discover why adding higher-power terms to the approximation<br />

does not improve the control. Also, it would be valuable to experiment with non-polynomial<br />

basis functions, to see whether other functions might provide better control<br />

signals; perhaps exponential, logarithmic, Bessel, or other functions would generate<br />

useful nonlinearities in the feedback signal.<br />

The selection of an appropriate region of stability and of appropriate basis functions is also an<br />

area where more research should be conducted. It was observed that changing the<br />

region of stability could significantly improve or degrade the control. Future<br />

research might explore ways of mathematically determining the optimal size of this region for<br />

a given demand for robustness and performance within a given region of the state<br />



space. As the number of states in the system increases, the number of basis functions<br />

required to implement a given order of approximation grows exponentially. This<br />

means that for systems with a large number of states, a technique must be found for<br />

selecting only the higher order basis functions that provide useful information and<br />

that will translate into effective elements of the control signal. Future research could<br />

be done to determine how to automate such a procedure, and this might also lead to<br />

an understanding of why some higher order basis functions degrade performance.<br />
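The growth described above can be made concrete with one standard count: a complete polynomial basis of total degree at most d in n state variables contains C(n + d, d) monomials, so the basis size explodes as states are added.<br />

```python
from math import comb

def num_basis_functions(n_states, max_degree):
    """Monomials of total degree <= max_degree in n_states variables."""
    return comb(n_states + max_degree, max_degree)

# Size of a degree-6 polynomial basis as the state dimension grows:
sizes = [num_basis_functions(n, 6) for n in (2, 4, 8, 16)]
# 28 functions suffice for 2 states, but tens of thousands are needed for
# 16 states, which is why pruning uninformative higher-order terms matters.
```
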



Bibliography<br />

[1] M. Jankovic, D. Fontaine, and P. Kokotovic, "TORA example: cascade and passivity<br />

control designs", in Proceedings of the American Control Conference, Seattle,<br />

WA, June 1995, pp. 4363–4367.<br />

[2] Robert T. Bupp, Dennis S. Bernstein, and Vincent T. Coppola, \A benchmark<br />

problem for nonlinear control design: Problem statement, experimental testbed,<br />

and passive nonlinear compensation", in Proceedings of the American Control<br />

Conference, Seattle, WA, June 1995, pp. 4363–4367.<br />

[3] Randal Beard, Improving the Closed-Loop Performance of Nonlinear Systems,<br />

PhD thesis, Rensselaer Polytechnic Institute, Troy, New York, 1995.<br />

[4] Randal Beard, George Saridis, and John Wen, \Improving the performance of<br />

stabilizing control for nonlinear systems", Control Systems Magazine, vol. 16,<br />

no. 5, pp. 27–35, October 1996.<br />

[5] Randal W. Beard and Timothy W. McLain, \Successive Galerkin approximation<br />

algorithms for nonlinear optimal and robust control", International Journal of<br />

Control: Special Issue on Breakthroughs in the Control of Nonlinear Systems,<br />

vol. 71, no. 5, pp. 717–743, 1998.<br />

[6] R.J. Kinsey, R.H. Rand, and D.L. Mingori, "Dynamics of spinup through resonance",<br />

International Journal of Non-Linear Mechanics, vol. 27, no. 3, pp.<br />

489–502, 1992.<br />

[7] C.J. Wan, D.S. Bernstein, and V.T. Coppola, "Global stabilization of the oscillating<br />

eccentric rotor", in Proceedings of the IEEE Conference on Decision and Control,<br />

Orlando, FL, 1994, pp. 4024–4029.<br />



[8] R.T. Bupp, V.T. Coppola, and D.S. Bernstein, "Vibration suppression of multi-modal<br />

translational motion using a rotational actuator", in Proceedings of the<br />

American Control Conference, Seattle, WA, June 1995, pp. 4363–4367.<br />

[9] G. Escobar, R. Ortega, and H. Sira-Ramirez, "Output-feedback global stabilization<br />

of a nonlinear benchmark system using a saturated passivity-based controller",<br />

IEEE Transactions on Control Systems Technology, vol. 7, no. 2, pp. 289–293,<br />

March 1999.<br />

[10] R. Ortega, A. Loria, R. Kelly, and L. Praly, "On passivity-based output feedback<br />

global stabilization of Euler-Lagrange systems", International Journal of Robust and<br />

Nonlinear Control, vol. 5, pp. 313–323, 1995.<br />

[11] A. Loria, R. Kelly, R. Ortega, and V. Santibanez, "On global output feedback regulation<br />

of Euler-Lagrange systems with bounded inputs", IEEE Transactions on<br />

Automatic Control, vol. 42, pp. 1137–1143, 1997.<br />

[12] A. Saberi, P. Kokotovic, and H. Sussmann, "Global stabilization of partially linear<br />

composite systems", SIAM Journal on Control and Optimization, vol. 28, pp.<br />

1491–1503, 1990.<br />

[13] P. Dorato, C. Abdallah, and V. Cerone, Linear-Quadratic Control: An Introduction,<br />

Prentice Hall, Englewood Cliffs, New Jersey, 1995.<br />

[14] Michael Green and David J. N. Limebeer, Linear Robust Control, Prentice Hall,<br />

1995.<br />

[15] Gerardo Escobar, Romeo Ortega, and Hebertt Sira-Ramirez, "Output-feedback<br />

global stabilization of a nonlinear benchmark system using a saturated passivity-based<br />

controller", IEEE Transactions on Control Systems Technology, vol. 7, no. 2, pp.<br />

289–293, March 1999.<br />

[16] George N. Saridis and Chun-Sing G. Lee, \An approximation theory of optimal<br />

control for trainable manipulators", IEEE Transactions on Systems, Man, and<br />

Cybernetics, vol. 9, no. 3, pp. 152–159, March 1979.<br />

