The Polynomial Toolbox for MATLAB - DCE FEL ČVUT v Praze
PolyX, Ltd
E-mail: info@polyx.com
Support: support@polyx.com
Sales: sales@polyx.com
Web: www.polyx.com
Tel.: +420-603-844-561, +420-233-323-801
Fax: +420-233-323-802
Jarni 4
Prague 6, 16000
Czech Republic

Polynomial Toolbox 3.0 Manual

© COPYRIGHT 2009 by PolyX, Ltd.
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from PolyX, Ltd.
Printing history: September 2009. First printing.
Contents

1 Quick Start
   Initialization
   Help
   How to define a polynomial matrix
   Simple operations with polynomial matrices
      Addition, subtraction and multiplication
      Entrywise and matrix division
      Operations with fractions
      Concatenation and working with submatrices
      Coefficients and coefficient matrices
      Conjugation and transposition
   Advanced operations and functions
2 Polynomial matrices
   Introduction
   Polynomials and polynomial matrices
   Entering polynomial matrices
      The pol command
      The Polynomial Matrix Editor
      The default indeterminate variable
      Changing the default indeterminate variable
   Basic manipulations with polynomial matrices
      Concatenation and working with submatrices
      Coefficients
      Degrees and leading coefficients
      Constant, zero and empty matrices
      Values
      Derivative and integral
   Arithmetic operations on polynomial matrices
      Addition, subtraction and multiplication
      Determinants, unimodularity and adjoints
      Rank
      Bases and null spaces
      Roots and stability
      Special constant matrices related to polynomials
   Divisors and multiples
      Scalar divisor and multiple
      Division
      Division with remainder
      Greatest common divisor
      Coprimeness
      Least common multiple
      Matrix divisors and multiples
      Matrix division
      Matrix division with remainder
      Greatest common left divisor
      Least common right multiple
      Dual concepts
   Transposition and conjugation
      Transposition
      Complex coefficients
      Conjugated transposition
   Reduced and canonical forms
      Row degrees
      Row and column reduced matrices
      Row reduced form
      Triangular and staircase form
      Another triangular form
      Hermite form
      Echelon form
      Smith form
      Invariant polynomials
   Polynomial matrix equations
      Diophantine equations
      Bézout equations
      Matrix polynomial equations
      One-sided equations
      Two-sided equations
   Factorizations
      Symmetric polynomial matrices
      Zeros of symmetric matrices
      Spectral factorization
      Non-symmetric factorization
   Matrix pencil routines
      Transformation to Kronecker canonical form
      Transformation to Clements form
      Pencil Lyapunov equations
3 Discrete-time and two-sided polynomial matrices
   Introduction
   Basic operations
      Two-sided polynomials
      Leading and trailing degrees and coefficients
      Derivatives and integrals
      Roots and stability
      Norms
   Conjugations
      Conjugate transpose
   Symmetric two-sided polynomial matrices
      Zeros of symmetric two-sided polynomial matrices
   Matrix equations with discrete-time and two-sided polynomials
      Symmetric bilateral matrix equations
      Non-symmetric equation
      Discrete-time spectral factorization
   Resampling
      Sampling period
      Resampling of polynomials in z^-1
      Resampling and phase
      Resampling of two-sided polynomials
      Resampling of polynomials in z
      Dilating
4 The Polynomial Matrix Editor
   Introduction
   Quick start
   Main window
      Main window buttons
      Main window menus
   Matrix Pad window
      Matrix Pad buttons
5 Polynomial matrix fractions
   Introduction
   Scalar-denominator-fractions
      The sdf command
      Comparison of fractions
      Coprime
      Reduce
      General properties cop, red
      Reverse
   Matrix-denominator-fractions
      The mdf command
   Left and right polynomial matrix fractions
      Right-denominator-fraction
      The rdf command
      Left-denominator-fraction
      The ldf command
   Mutual conversions of polynomial matrix fraction objects
   Computing with polynomial matrix fractions
      Addition, subtraction and multiplication
      Entrywise and matrix division
      Concatenation and working with submatrices
      Coefficients and coefficient matrices
      Conjugation and transposition
   Polynomial matrix fractions, transfer functions and transfer function matrices
      Transfer functions
      Transfer function matrices
      Transfer functions from state space model and vice versa
   Conversions of polynomial matrix fraction objects to and from other MATLAB objects
      Conversion to the Control System Toolbox Objects
      Conversion from the Control System Toolbox Objects
      Conversion to the Symbolic Math Toolbox
      Conversion from the Symbolic Math Toolbox
6 Conclusions
1 Quick Start

Initialization

Every Polynomial Toolbox session starts with the initialization command pinit:
pinit
Polynomial Toolbox 3.0 initialized. To get started, type one of
these: helpwin or poldesk. For product information, visit
www.polyx.com or www.polyx.cz.
This function creates the global polynomial properties and assigns them their default values. If you place this command in your startup.m file, the Polynomial Toolbox is initialized automatically. If you include lines such as
path(path,'c:\Matlab\toolbox\polynomial')
pinit
in the startup.m file in the folder in which you start MATLAB, the toolbox is automatically included in the search path and initialized each time you start MATLAB.

Help

To see a list of all the commands and functions that are available, type
help polynomial
To get help on any of the toolbox commands, such as axxab, type
help axxab
Some commands are overloaded; that is, several commands of the same name exist for different object classes. To get help on an overloaded command, you must also specify the object class of interest. Thus
help pol/rank
returns help on rank for pol objects (polynomial matrices), while
help frac/rank
returns help on rank for frac objects (polynomial fractions), and so on.
How to define a polynomial matrix

The basic objects of the Polynomial Toolbox are polynomial matrices. They may be looked at in two different ways.
You may enter a polynomial matrix directly by typing its entries. Upon initialization the Polynomial Toolbox defines several indeterminate variables for polynomials and polynomial matrices. One of them is s. Thus, you can simply type
P = [ 1+s s^2
2*s^3 1+2*s+s^2 ]
to define the polynomial matrix

P(s) = [ 1+s    s^2
         2s^3   1+2s+s^2 ]

MATLAB returns
P =
1 + s     s^2
2s^3      1 + 2s + s^2
The Polynomial Toolbox displays polynomial matrices in this style by default.
We may also render the polynomial matrix P as

P(s) = P0 + P1*s + P2*s^2 + P3*s^3

with the coefficient matrices

P0 = [ 1 0    P1 = [ 1 0    P2 = [ 0 1    P3 = [ 0 0
       0 1 ]         0 2 ]         0 1 ]         2 0 ]
Polynomial matrices may be defined this way by typing
P0 = [ 1 0; 0 1 ];
P1 = [ 1 0; 0 2 ];
P2 = [ 0 1; 0 1 ];
P3 = [ 0 0; 2 0 ];
P = pol([P0 P1 P2 P3],3)
MATLAB again returns
P =
1 + s     s^2
2s^3      1 + 2s + s^2
The display format may be changed to "coefficient matrix style" by the command
pformat coef
Typing the name of the matrix
P
now results in
Polynomial matrix in s: 2-by-2, degree: 3
P =
Matrix coefficient at s^0 :
1 0
0 1
Matrix coefficient at s^1 :
1 0
0 2
Matrix coefficient at s^2 :
0 1
0 1
Matrix coefficient at s^3 :
0 0
2 0
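The equivalence of the two representations is easy to check numerically. The following Python/NumPy sketch (an illustration of the underlying mathematics, not of the toolbox API; the helper name is hypothetical) evaluates P(s) from its coefficient matrices:

```python
import numpy as np

# Coefficient matrices of P(s) = P0 + P1*s + P2*s^2 + P3*s^3,
# matching the pol([P0 P1 P2 P3],3) example above.
P0 = np.array([[1, 0], [0, 1]])
P1 = np.array([[1, 0], [0, 2]])
P2 = np.array([[0, 1], [0, 1]])
P3 = np.array([[0, 0], [2, 0]])
coeffs = [P0, P1, P2, P3]

def poly_matrix_value(coeffs, s):
    """Evaluate a polynomial matrix given its list of coefficient matrices."""
    return sum(C * s**k for k, C in enumerate(coeffs))

# At s = 2 the entries of [1+s, s^2; 2s^3, 1+2s+s^2] are [3 4; 16 9]
print(poly_matrix_value(coeffs, 2))
```
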
Simple operations with polynomial matrices

Addition, subtraction and multiplication

In the Polynomial Toolbox, polynomial matrices are objects for which all standard operations are defined.
Define the polynomial matrices
P = [ 1+s 2; 3 4], Q = [ s^2 1-s; s^3 1]
P =
1 + s   2
3       4
Q =
s^2     1 - s
s^3     1
The sum and product of P and Q follow easily:
S = P+Q
S =
1 + s + s^2   3 - s
3 + s^3       5
R = P*Q
R =
s^2 + 3s^3    3 - s^2
3s^2 + 4s^3   7 - 3s
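At the coefficient level, the product of two polynomial matrices is a convolution of their coefficient matrices. A short Python/NumPy sketch of this fact (an illustration only, not the toolbox's implementation):

```python
import numpy as np

def polymat_mul(A, B):
    """Multiply polynomial matrices given as lists of coefficient matrices.

    A[k] is the matrix coefficient of s^k; the product's coefficients are
    the convolution  R[k] = sum over i+j=k of A[i] @ B[j].
    """
    degA, degB = len(A) - 1, len(B) - 1
    m, n = A[0].shape[0], B[0].shape[1]
    R = [np.zeros((m, n)) for _ in range(degA + degB + 1)]
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            R[i + j] += Ai @ Bj
    return R

# P = [1+s 2; 3 4] and Q = [s^2 1-s; s^3 1] from the example above
P = [np.array([[1., 2.], [3., 4.]]), np.array([[1., 0.], [0., 0.]])]
Q = [np.array([[0., 1.], [0., 1.]]),
     np.array([[0., -1.], [0., 0.]]),
     np.array([[1., 0.], [0., 0.]]),
     np.array([[0., 0.], [1., 0.]])]
R = polymat_mul(P, Q)  # coefficients of [s^2+3s^3, 3-s^2; 3s^2+4s^3, 7-3s]
```

For instance, R[3] is the coefficient matrix of s^3, namely [3 0; 4 0], matching the R = P*Q output above.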
Entrywise and matrix division

Entrywise division of the polynomial matrices P and Q yields a fraction:
T = P./Q
T =
1 + s     2
-----   -----
 s^2    1 - s
  3       4
 ---      -
 s^3      1
Right matrix division yields the fraction U = P*Q^-1:
U = P/Q
U =
1 + s 2 / s^2 1 - s
3 4 / s^3 1
Left matrix division yields the fraction V = Q^-1*P:
V = Q\P
V =
s^2 1 - s \ 1 + s 2
s^3 1 \ 3 4
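Numerically, the right fraction U = P/Q represents P(s)Q(s)^-1, and the left fraction V = Q\P represents Q(s)^-1 P(s); at any point s0 where Q(s0) is invertible their values follow from plain linear algebra. A Python/NumPy sketch (illustration only, not the toolbox):

```python
import numpy as np

def pm_value(coeffs, s0):
    """Value of a polynomial matrix given its coefficient matrices."""
    return sum(C * s0**k for k, C in enumerate(coeffs))

# P = [1+s 2; 3 4] and Q = [s^2 1-s; s^3 1] from the text
P = [np.array([[1., 2.], [3., 4.]]), np.array([[1., 0.], [0., 0.]])]
Q = [np.array([[0., 1.], [0., 1.]]),
     np.array([[0., -1.], [0., 0.]]),
     np.array([[1., 0.], [0., 0.]]),
     np.array([[0., 0.], [1., 0.]])]

s0 = 2.0  # Q(2) = [4 -1; 8 1] is invertible (det = 12)
U0 = pm_value(P, s0) @ np.linalg.inv(pm_value(Q, s0))   # right fraction P*Q^-1
V0 = np.linalg.solve(pm_value(Q, s0), pm_value(P, s0))  # left fraction Q^-1*P
```

Note that U0 and V0 differ in general, just as P/Q and Q\P are different fractions.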
Operations with fractions

Polynomial fractions may also enter arithmetic operations, possibly mixed with numbers or polynomials, such as
2*U
ans =
2 + 2s 4 / s^2 1 - s
6 8 / s^3 1
or
P+T
ans =
1 + s + s^2 + s^3   -4 + 2s
-----------------   -------
       s^2          -1 + s
     3 + 3s^3          8
     --------          -
       s^3             1
In the realm of fractions, inverses are also possible:
T.^-1
ans =
  s^2     0.5 - 0.5s
-----     ----------
1 + s          1
0.33s^3      0.25
-------      ----
   1           1
U^-1
ans =
s^2 1 - s / 1 + s 2
s^3 1 / 3 4
All standard MATLAB operations for concatenating matrices and selecting submatrices also apply to polynomial matrices. Typing
W = [P Q]
results in
W =
1 + s 2 s^2 1 - s
3 4 s^3 1
The last row of W may be selected by typing
w = W(2,:)
w =
3 4 s^3 1
Submatrices may be assigned values by commands such as
W(:,1:2) = eye(2)
W =
1 0 s^2 1 - s
0 1 s^3 1
For fractions, similarly,
T(1,:)
ans =
1 + s     -2
-----   ------
 s^2    -1 + s

Coefficients and coefficient matrices
It is easy to extract the coefficient matrices of a polynomial matrix. Given W, the coefficient matrix of s^2 may be retrieved as
W{2}
ans =
0 0 1 0
0 0 0 0
The coefficients of the (1,3) entry of W follow as
W{:}(1,3)
ans =
0 0 1 0

Conjugation and transposition

Given
W
W =
1 0 s^2 1 - s
0 1 s^3 1
the standard MATLAB conjugation operation results in
W'
ans =
1 0
0 1
s^2 -s^3
1 + s 1
Transposition follows by typing
W.'
ans =
1 0
0 1
s^2 s^3
1 - s 1
The command transpose(W) is synonymous with W.'.
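For real continuous-time polynomial matrices in s, the conjugation shown above substitutes s with -s, as the -s^3 entry in W' illustrates. At the coefficient level this flips the sign of the odd coefficient matrices and conjugate-transposes each one. A Python sketch under that assumed convention (not the toolbox's code):

```python
import numpy as np

def conj_transpose(coeffs):
    """Conjugate transpose of a polynomial matrix in s.

    Substitutes s -> -s and transposes: the coefficient C_k of s^k
    becomes (-1)^k * C_k conjugate-transposed (matching W' in the text).
    """
    return [((-1) ** k) * C.conj().T for k, C in enumerate(coeffs)]

# W = [1 0 s^2 1-s; 0 1 s^3 1] as 2x4 coefficient matrices of s^0..s^3
W = [np.array([[1., 0., 0., 1.], [0., 1., 0., 1.]]),   # s^0
     np.array([[0., 0., 0., -1.], [0., 0., 0., 0.]]),  # s^1
     np.array([[0., 0., 1., 0.], [0., 0., 0., 0.]]),   # s^2
     np.array([[0., 0., 0., 0.], [0., 0., 1., 0.]])]   # s^3
Wct = conj_transpose(W)  # 4x2; the (3,2) entry is -s^3, the (4,1) entry 1+s
```
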
Advanced operations and functions

The Polynomial Toolbox provides many advanced operations and functions. After defining
P = [ 1+s 2; 3 4+s^2 ]
try a few by typing, for instance, det(P), roots(P), or smith(P).
The commands may be grouped in the categories listed in Table 1. A full list of all commands is available in the companion volume Commands. The same list appears after typing
help polynomial
in MATLAB.
Table 1. Command categories

Global structure                          Equation solvers
Polynomial, two-sided polynomial
  and fraction objects                    Linear matrix inequality functions
Special matrices                          Matrix pencil routines
Convertors                                Numerical routines
Overloaded operations                     Control routines
Overloaded functions                      Robustness functions
Basic functions (other than overloaded)   2-D functions
Random functions                          Simulink
Advanced functions                        Visualization
Canonical and reduced forms               Graphic user interface
Sampling period functions                 Demonstrations and help
2 Polynomial matrices

Introduction

In this chapter we review, in a tutorial style, many of the functions and operations defined for polynomials and polynomial matrices. The exposition continues in Chapter 3, Discrete-time and two-sided polynomial matrices. Chapter 5 is devoted to the next class of objects, polynomial matrix fractions. Functions and operations for linear time-invariant systems defined by polynomial matrix fractions are discussed in Chapter 6, LTI systems. Chapter 6, Control system design, covers the applications of polynomial matrices in control system design.
More detailed information is available in Chapter 12, Objects, properties, formats, and in the manual pages included in the companion volume Commands.
Polynomials and polynomial matrices

Polynomials are mathematical expressions of the form

P(s) = P0 + P1*s + P2*s^2 + ... + Pn*s^n

where P0, P1, P2, ..., Pn are real or complex numbers (the coefficients) and s is the indeterminate variable.
Matrix polynomials have the same form, but with P0, P1, P2, ..., Pn numerical matrices (the coefficient matrices). Alternatively, and more usually, these objects can be viewed as polynomial matrices, that is, matrices whose individual entries are polynomials. The two concepts will be used interchangeably.
For systems and control engineers, s may be treated as the derivative operator d/dt acting on continuous-time signals x(t). It can also be considered the complex variable of the Laplace transform.
Entering polynomial matrices

Polynomials and polynomial matrices are most easily entered using one of the indeterminate variables s, p that are recognized by the Polynomial Toolbox, combined with the usual MATLAB conventions for entering matrices. Thus, typing
P = [ 1+s 2*s^2
2+s^3 4 ]
defines the matrix

P(s) = [ 1+s     2s^2
         2+s^3   4    ]

and returns
P =
1 + s     2s^2
2 + s^3   4
For other available indeterminate variables such as z, q, z^-1 or d see Chapter 3, Discrete-time and two-sided polynomial matrices.

The pol command
Polynomials and polynomial matrices may also be entered in terms of their coefficients or coefficient matrices. For this purpose the pol command is available. Typing
P0 = [1 2;3 4];
P1 = [3 4;5 1];
P2 = [1 0;0 1];
P = pol([P0 P1 P2],2,'s')
for instance, defines the polynomial matrix
P =
1 + 3s + s^2   2 + 4s
3 + 5s         4 + s + s^2
according to

P(s) = P0 + P1*s + P2*s^2
The Polynomial Matrix Editor

More complicated polynomial matrices may be entered and edited with the help of the Polynomial Matrix Editor (see Chapter 4).

The default indeterminate variable

Note that if any of the default indeterminates is redefined as a variable, then it is no longer available as an indeterminate. Typing
s = 1;
P = 1+s^2
results in
P =
2
To free the variable, simply type
clear s;
P = 1+s^2
P =
1 + s^2
After the Polynomial Toolbox has been started up, the default indeterminate variable is s. This implies, among other things, that the command
P = pol([P0 P1 P2],2)
returns a polynomial matrix
P =
1 + 3s + s^2   2 + 4s
3 + 5s         4 + s + s^2
in the indeterminate s.

Changing the default indeterminate variable

The indeterminate variable may be changed with the help of the gprops command: typing
gprops p; pol([P0 P1 P2],2)
for instance, results in
ans =
1 + 3p + p^2   2 + 4p
3 + 5p         4 + p + p^2
To enter data independently of the current default indeterminate variable, one can use the shortcut v: the indeterminate variable v is automatically replaced by the current default indeterminate variable. Thus, after the default indeterminate variable has been set to p by the command
gprops p
the assignment
V = 1+v^2+3*v^3
returns
V =
1 + p^2 + 3p^3
Basic manipulations with polynomial matrices

Concatenation and working with submatrices

Standard MATLAB conventions may be used to concatenate polynomial and standard matrices:
P = 1+s^2;
Q = [P 1+s 3]
results in
Q =
1 + s^2 1 + s 3
Submatrices may be selected, as in
Q(1,2:3)
ans =
1 + s 3
Submatrices may also be changed; the command
Q(1,2:3) = [1-s 4]
yields
Q =
1 + s^2 1 - s 4
All the usual MATLAB subscripting and colon notations are available. Commands like sum, prod, repmat, reshape, tril, triu, diag and trace also work as expected.

Coefficients
It is easy to extract the coefficients of a polynomial. Given
R=2+3*s+4*s^2+5*s^3;
the coefficient of s^2 may be retrieved as
R{2}
ans =
4
A coefficient can also be changed:
R{2}=6
yields
R =
2 + 3s + 6s^2 + 5s^3
In this way, new coefficients may be inserted. Thus
R{5}=7
yields
R =
2 + 3s + 6s^2 + 5s^3 + 7s^5
For polynomial matrices, it works similarly. Given<br />
T = [ 1+s 2 s^2 s<br />
3 4 s^3 0 ];<br />
the coefficient matrix of s 2 may be retrieved as<br />
T{2}<br />
ans =<br />
0 0 1 0<br />
0 0 0 0<br />
Taking coefficients and taking submatrices may be combined. So the coefficients of<br />
the (1,3) entry of T follow as<br />
T{:}(1,3)<br />
ans =<br />
0 0 1 0<br />
Degrees and leading coefficients<br />
<strong>The</strong> degree of a polynomial matrix is available by<br />
deg(T)<br />
ans =<br />
3<br />
<strong>The</strong> leading coefficient matrix, in our example that of s^3, may be obtained as<br />
lcoef(T)<br />
ans =<br />
0 0 0 0<br />
0 0 1 0<br />
Constant, zero and empty matrices<br />
A special case of polynomial matrix is that of degree 0; in effect, it is just a standard<br />
<strong>MATLAB</strong> matrix. <strong>The</strong> <strong>Polynomial</strong> <strong>Toolbox</strong> treats both these <strong>for</strong>ms of data<br />
interchangeably: when a polynomial is expected, the standard matrix is also<br />
accepted. For explicit conversions, pol and double commands are available:<br />
P = pol([1 2;3 4]);<br />
D = double(P);<br />
Another special case is the zero polynomial; its degree is –Inf. Still another special<br />
case is the empty polynomial matrix, similar to the empty standard matrix; its degree is empty.<br />
Values<br />
A polynomial may also be considered as a function of real or complex variable. For<br />
evaluating the polynomial function, the value command is available:<br />
F=2+3*s+4*s^2; X=2;<br />
Y=value(F,X)<br />
Y =<br />
24<br />
When F or X is a matrix then the values are evaluated entrywise:<br />
X=[2 3];<br />
Y=value(F,X)<br />
Y =<br />
24 47<br />
A slightly different command is mvalue. For a scalar polynomial F and a square matrix X, it<br />
computes the matrix value Y according to the matrix algebra rules:<br />
X=[1 2;0 1];<br />
Y=mvalue(F,X)<br />
Y =<br />
9 22<br />
0 9<br />
Derivative and integral<br />
For a polynomial matrix function F(s), the derivative and the integral are computed by<br />
deriv(F)<br />
ans =
3 + 8s<br />
integral(F)<br />
ans =<br />
2s + 1.5s^2 + 1.3s^3<br />
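The value, deriv and integral commands act coefficient-wise; as a cross-check, here is a minimal pure-Python sketch (not <strong>Toolbox</strong> code) of the same three operations on F = 2 + 3s + 4s^2, with coefficients stored in ascending powers.<br />

```python
# Pure-Python cross-check of value, deriv and integral for F = 2 + 3s + 4s^2.
F = [2, 3, 4]

def value(p, x):
    return sum(c * x**k for k, c in enumerate(p))

def deriv(p):
    return [k * c for k, c in enumerate(p)][1:]

def integral(p):
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

print(value(F, 2))   # 24
print(deriv(F))      # [3, 8]  i.e. 3 + 8s
print(integral(F))   # [0.0, 2.0, 1.5, 1.333...]  i.e. 2s + 1.5s^2 + (4/3)s^3
```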
Arithmetic operations on polynomial matrices<br />
Addition, subtraction and multiplication<br />
In operations with two or more polynomial matrices, the indeterminates should be<br />
the same. If they are not (usually by mistake), a warning is issued and the<br />
indeterminate is changed to the default.<br />
Define the polynomial matrices<br />
P = [1+s 2; 3 4], Q = [s^2 s; s^3 0]<br />
P =<br />
1 + s 2<br />
3 4<br />
Q =<br />
s^2 s<br />
s^3 0<br />
<strong>The</strong> sum and product of P and Q follow easily:<br />
S = P+Q<br />
S =<br />
1 + s + s^2 2 + s<br />
3 + s^3 4<br />
R = P*Q<br />
R =<br />
s^2 + 3s^3 s + s^2<br />
3s^2 + 4s^3 3s<br />
<strong>The</strong> command<br />
R+3<br />
ans =<br />
3 + s^2 + 3s^3 3 + s + s^2<br />
3 + 3s^2 + 4s^3 3 + 3s<br />
obviously is interpreted as the instruction to add the scalar 3 to each entry of R.<br />
<strong>The</strong> command<br />
3*R<br />
ans =<br />
3s^2 + 9s^3 3s + 3s^2<br />
9s^2 + 12s^3 9s<br />
yields the expected result. <strong>The</strong> results of entrywise multiplication R.*S and of<br />
powers R^N or R.^N with a nonnegative integer N are also as expected. For negative N, see<br />
Chapter 4, <strong>Polynomial</strong> matrix fractions.<br />
Determinants, unimodularity and adjoints<br />
<strong>The</strong> determinant of a square polynomial matrix is defined exactly as its constant<br />
matrix counterpart. In fact, its computation is not much more difficult:<br />
P = [1 s s^2; 1+s s 1-s; 0 -1 -s]<br />
P =<br />
1 s s^2<br />
1 + s s 1 - s<br />
0 -1 -s<br />
det(P)<br />
ans =<br />
1 - s - s^2<br />
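The cofactor expansion behind such a determinant is easy to reproduce. The pure-Python sketch below (not <strong>Toolbox</strong> code; entries are ascending coefficient lists, products are convolutions) recovers det P = 1 - s - s^2.<br />

```python
# Cofactor expansion of the 3x3 polynomial determinant above.
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def pscale(a, c):
    return [c * x for x in a]

def det2(M):
    return padd(pmul(M[0][0], M[1][1]), pscale(pmul(M[0][1], M[1][0]), -1))

def det3(M):  # cofactor expansion along the first row
    d = [0]
    for j in range(3):
        minor = [[M[r][c] for c in range(3) if c != j] for r in (1, 2)]
        d = padd(d, pscale(pmul(M[0][j], det2(minor)), (-1) ** j))
    return d

P = [[[1],    [0, 1], [0, 0, 1]],   # [1,    s,  s^2]
     [[1, 1], [0, 1], [1, -1]],     # [1+s,  s,  1-s]
     [[0],    [-1],   [0, -1]]]     # [0,   -1,  -s ]

d = det3(P)
print(d)  # [1, -1, -1, 0]  i.e. 1 - s - s^2
```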
If its determinant happens to be constant then the polynomial matrix is called<br />
unimodular:<br />
U=[2-s-2*s^2, 2-2*s^2, 1+s; 1-s-s^2, 1-s^2, s;-1-s, -s, 1]<br />
U =<br />
2 - s - 2s^2 2 - 2s^2 1 + s<br />
1 - s - s^2 1 - s^2 s<br />
-1 - s -s 1<br />
det(U)<br />
Constant polynomial matrix: 1-by-1<br />
ans =<br />
1<br />
If a matrix is suspected to be unimodular then this can be verified by a special<br />
tester<br />
isunimod(U)<br />
ans =<br />
1<br />
Also the adjoint matrix is defined as <strong>for</strong> constant matrices. <strong>The</strong> adjoint is a<br />
polynomial matrix and may be computed by typing<br />
adj(P)<br />
ans =<br />
1 - s - s^2 0 s - s^2 - s^3<br />
s + s^2 -s -1 + s + s^2 + s^3<br />
-1 - s 1 -s^2<br />
Quite to the contrary, the inverse of a square polynomial matrix is usually not a<br />
polynomial but a polynomial fraction<br />
inv(P)<br />
ans =<br />
-1 + s + s^2 0 -s + s^2 + s^3<br />
-s - s^2 s 1 - s - s^2 - s^3<br />
1 + s -1 s^2<br />
----------------------------------------<br />
-1 + s + s^2<br />
For more details, see Chapter 4, <strong>Polynomial</strong> matrix fractions.<br />
<strong>The</strong> only polynomial matrices that do have a polynomial inverse are the unimodular matrices. In our example<br />
inv(U)<br />
ans =<br />
1 -2 - s + s^2 -1 + s + s^2 - s^3<br />
-1 3 + s - s^2 1 - 2s - s^2 + s^3<br />
1 -2 + s^2 s - s^3<br />
Indeed,<br />
U*inv(U)<br />
ans =<br />
1.0000 0 0<br />
0 1.0000 0<br />
0 0 1.0000<br />
If the matrix is nonsquare but has full rank then a usual partial replacement <strong>for</strong> the<br />
inverse is provided by the generalized Moore-Penrose pseudoinverse, which is<br />
computed by the pinv function.<br />
Q = P(:,1:2)<br />
Q =<br />
1 s<br />
1 + s s<br />
0 -1<br />
Qpinv = pinv(Q)<br />
Qpinv =<br />
1 - s^3 1 + s + s^3 2s + s^2<br />
s^2 + s^3 -s^2 -2 - 2s - s^2<br />
------------------------------------------<br />
2 + 2s + s^2 + s^4<br />
In general, the generalized Moore-Penrose pseudoinverse is a polynomial matrix<br />
fraction. Once again,<br />
Qpinv*Q<br />
ans =<br />
1.0000 0<br />
0 1.0000<br />
Rank<br />
A polynomial matrix P(s) has full column rank (or full normal column rank) if it has<br />
full column rank everywhere in the complex plane except at a finite number of points.<br />
Similar definitions hold <strong>for</strong> full row rank and full rank.<br />
Recall that<br />
P = [1 s s^2; 1+s s 1-s; 0 -1 -s]<br />
P =<br />
1 s s^2<br />
1 + s s 1 - s<br />
0 -1 -s<br />
<strong>The</strong> rank test<br />
isfullrank(P)<br />
ans =<br />
1<br />
confirms that P has full rank.<br />
<strong>The</strong> normal rank of a polynomial matrix P(s) equals<br />
max over s in C of rank P(s).<br />
Similar definitions apply to the notions of normal column rank and normal row rank.<br />
<strong>The</strong> rank is calculated by<br />
rank(P)<br />
ans =<br />
3<br />
As <strong>for</strong> constant matrices, rank evaluation may be quite sensitive and an ad hoc<br />
change of tolerance (which may be included as an optional input parameter) may be<br />
helpful <strong>for</strong> difficult examples.<br />
A polynomial matrix is nonsingular if it has full normal rank.<br />
issingular(P)<br />
ans =<br />
0<br />
Bases and null spaces<br />
<strong>The</strong>re are two important subspaces (more precisely, submodules) associated with a<br />
polynomial matrix A(s): its null space and its range (or span). <strong>The</strong> (right) null space<br />
is defined as the set of all polynomial vectors x(s) such that A(s)x(s) = 0. For the matrix<br />
A = P(1:2,:)<br />
A =<br />
1 s s^2<br />
1 + s s 1 - s<br />
the (right) nullspace is computed by<br />
N = null(A)<br />
N =<br />
0.35s - 0.35s^2 - 0.35s^3<br />
-0.35 + 0.35s + 0.35s^2 + 0.35s^3<br />
-0.35s^2<br />
Here the null space dimension is 1 and its basis has degree 3.
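That A(s)x(s) = 0 indeed holds for the basis above can be checked in a few lines. The pure-Python sketch below (not <strong>Toolbox</strong> code) uses the integer-scaled basis vector [s - s^2 - s^3, -1 + s + s^2 + s^3, -s^2], since the displayed 0.35 is just a rounded scalar normalization.<br />

```python
# Verify that A(s) times the (scaled) null-space basis vector is zero.
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

A = [[[1],    [0, 1], [0, 0, 1]],   # [1,   s, s^2]
     [[1, 1], [0, 1], [1, -1]]]     # [1+s, s, 1-s]
x = [[0, 1, -1, -1], [-1, 1, 1, 1], [0, 0, -1]]

Ax = [[0], [0]]
for r in range(2):
    for c in range(3):
        Ax[r] = padd(Ax[r], pmul(A[r][c], x[c]))

print(Ax)  # two all-zero coefficient lists
```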
<strong>The</strong> range of A(s) is the set of all polynomial vectors y(s) such that y(s) = A(s)x(s) <strong>for</strong><br />
some polynomial vector x(s). In the <strong>Polynomial</strong> <strong>Toolbox</strong>, the minimal basis of the<br />
range is returned by the command<br />
minbasis(A)<br />
ans =<br />
0 1.0000<br />
1.0000 0<br />
Roots and stability<br />
<strong>The</strong> roots or zeros of a polynomial matrix P(s) are those points s_i in the complex<br />
plane where P(s) loses rank.<br />
roots(P)<br />
ans =<br />
-1.6180<br />
0.6180<br />
<strong>The</strong> roots can be both finite and infinite. <strong>The</strong> infinite roots are normally suppressed.<br />
To reveal them, type<br />
roots(P,'all')<br />
ans =<br />
-1.6180<br />
0.6180<br />
Inf<br />
Unimodular matrices have no finite roots:<br />
roots(U)<br />
ans =<br />
Empty matrix: 0-by-1<br />
but typically have infinite roots:<br />
roots(U,'all')<br />
ans =<br />
Inf<br />
Inf<br />
Inf
If P(s) is square then its roots are the roots of its determinant det P(s), including<br />
multiplicity:<br />
roots(det(P))<br />
ans =<br />
-1.6180<br />
0.6180<br />
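The two finite roots above are just the roots of det P(s) = 1 - s - s^2, which can be cross-checked with the stdlib quadratic formula (no Toolbox needed).<br />

```python
# 1 - s - s^2 = 0  <=>  s^2 + s - 1 = 0, solved by the quadratic formula.
import math

r1 = (-1 + math.sqrt(5)) / 2   # the golden-ratio conjugate
r2 = (-1 - math.sqrt(5)) / 2
print(round(r1, 4), round(r2, 4))  # 0.618 -1.618
```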
<strong>The</strong> finite roots may be visualized as in Fig. 1 by typing<br />
zpplot(P)<br />
Fig. 1. Locations of the roots of a polynomial matrix<br />
A polynomial matrix is stable if all its roots fall within a relevant stability region. For<br />
polynomial matrices in s and p the stability region is the open left half-plane,<br />
which means Hurwitz stability as used <strong>for</strong> continuous-time systems.<br />
<strong>The</strong> macro isstable checks stability:<br />
isstable(s-2)<br />
ans =<br />
0<br />
isstable(s+2)
ans =<br />
1<br />
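For degrees one and two the root condition has a well-known coefficient equivalent: a real polynomial of degree at most 2 is Hurwitz stable exactly when all its coefficients are nonzero and share the same sign. A small stdlib sketch (not the <strong>Toolbox</strong> isstable macro):<br />

```python
# Low-degree Hurwitz test: all coefficients nonzero with the same sign.
def is_hurwitz_low_degree(p):   # p = [p0, p1] or [p0, p1, p2], ascending
    return all(c > 0 for c in p) or all(c < 0 for c in p)

print(is_hurwitz_low_degree([2, 1]))     # s + 2       -> True
print(is_hurwitz_low_degree([-2, 1]))    # s - 2       -> False
print(is_hurwitz_low_degree([1, 1, 1]))  # 1 + s + s^2 -> True
```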
For other stability regions considered in the <strong>Polynomial</strong> <strong>Toolbox</strong>, see Chapter 3,<br />
Discrete-time and two-sided polynomial matrices.<br />
Special constant matrices related to polynomials<br />
<strong>The</strong>re are several interesting constant matrices that are composed from the<br />
coefficients of polynomials (or the matrix coefficients of polynomial matrices) and are<br />
frequently encountered in mathematical and engineering textbooks. Given a<br />
polynomial<br />
p(v) = p_0 + p_1 v + p_2 v^2 + ... + p_n v^n<br />
of degree n we may <strong>for</strong> instance define the corresponding n x n Hurwitz matrix<br />
H_p =<br />
[ p_{n-1} p_{n-3} p_{n-5} ... 0 ]<br />
[ p_n p_{n-2} p_{n-4} ... 0 ]<br />
[ 0 p_{n-1} p_{n-3} ... 0 ]<br />
[ 0 p_n p_{n-2} ... 0 ]<br />
[ ... ]<br />
a k x (n+k) Sylvester matrix (<strong>for</strong> some k >= 1)<br />
S_p =<br />
[ p_0 p_1 ... p_n 0 ... 0 ]<br />
[ 0 p_0 p_1 ... p_n ... 0 ]<br />
[ ... ]<br />
[ 0 ... 0 p_0 p_1 ... p_n ]<br />
or an n x n companion matrix<br />
C(p) =<br />
[ 0 1 0 ... 0 ]<br />
[ 0 0 1 ... 0 ]<br />
[ ... ]<br />
[ 0 0 ... 0 1 ]<br />
[ -p_0/p_n -p_1/p_n ... -p_{n-1}/p_n ]<br />
Using the <strong>Polynomial</strong> <strong>Toolbox</strong>, we take<br />
p = pol([-1 1 2 3 4 5],5)<br />
p =<br />
-1 + s + 2s^2 + 3s^3 + 4s^4 + 5s^5<br />
and simply type<br />
hurwitz(p)<br />
ans =<br />
4 2 -1 0 0<br />
5 3 1 0 0<br />
0 4 2 -1 0<br />
0 5 3 1 0<br />
0 0 4 2 -1<br />
or<br />
sylv(p,3)<br />
ans =<br />
-1 1 2 3 4 5 0 0 0<br />
0 -1 1 2 3 4 5 0 0<br />
0 0 -1 1 2 3 4 5 0<br />
0 0 0 -1 1 2 3 4 5<br />
or<br />
compan(p)<br />
ans =<br />
0 1.0000 0 0 0<br />
0 0 1.0000 0 0<br />
0 0 0 1.0000 0<br />
0 0 0 0 1.0000<br />
0.2000 -0.2000 -0.4000 -0.6000 -0.8000<br />
For polynomial matrices, the block matrix versions of these special constant matrices<br />
are defined and computed in a fairly obvious manner.<br />
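The three constructions can be sketched in a few lines of plain Python (not <strong>Toolbox</strong> code); the functions below rebuild the hurwitz, sylv and compan outputs shown above for this p.<br />

```python
# Build the Hurwitz, Sylvester and companion matrices of
# p(s) = -1 + s + 2s^2 + 3s^3 + 4s^4 + 5s^5 (coefficients ascending).
p = [-1, 1, 2, 3, 4, 5]

def hurwitz(p):
    n = len(p) - 1
    H = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            k = n - 1 + i - 2 * j   # entry (i, j) holds p_{n-1+i-2j}
            if 0 <= k <= n:
                H[i][j] = p[k]
    return H

def sylvester(p, k):
    n = len(p) - 1
    S = [[0] * (n + k + 1) for _ in range(k + 1)]
    for i in range(k + 1):           # row i holds p shifted by i positions
        for j, c in enumerate(p):
            S[i][i + j] = c
    return S

def companion(p):
    n = len(p) - 1
    C = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n - 1)]
    C.append([-c / p[n] for c in p[:n]])   # last row: -p_i / p_n
    return C

print(hurwitz(p)[0])       # [4, 2, -1, 0, 0]
print(sylvester(p, 3)[0])  # [-1, 1, 2, 3, 4, 5, 0, 0, 0]
print(companion(p)[-1])    # [0.2, -0.2, -0.4, -0.6, -0.8]
```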
Divisors and multiples<br />
Scalar divisor and multiple<br />
To understand when division of polynomial matrices is possible, begin with scalar<br />
polynomials. Consider three polynomials a(s), b(s) and c(s) such that a(s) = b(s)c(s).<br />
Division<br />
We say that b(s) is a divisor (or factor) of a(s), or that a(s) is a multiple of b(s), and write<br />
b(s) | a(s).<br />
This is sometimes also stated as: b(s) divides a(s).<br />
For example, take<br />
b = 1-s; c = 1+s; a = b*c<br />
a =<br />
1 - s^2<br />
As b(s) is a divisor of a(s), the division<br />
a/b<br />
can be done and results in<br />
ans =<br />
1 + s<br />
If b(s) is not a divisor of a(s) then the result of division is not a polynomial but a<br />
polynomial fraction1, see Chapter 4, <strong>Polynomial</strong> matrix fractions.<br />
Division with remainder<br />
On the other hand, any n(s) can be divided by any nonzero d(s) using another<br />
operation called division with remainder. This division results in a quotient q(s) and<br />
a remainder r(s):<br />
n(s) = q(s)d(s) + r(s)<br />
Typically, it is required that deg r(s) &lt; deg d(s), which makes the result unique. Thus,<br />
dividing<br />
n = (1-s)^2;<br />
by<br />
d = 1+s;<br />
the command<br />
[q,r] = ldiv(n,d)<br />
yields<br />
q =<br />
-3 + s<br />
Constant polynomial matrix: 1-by-1<br />
r =<br />
4<br />
Division with remainder is sometimes called Euclidean division.<br />
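Euclidean division is straightforward to replicate. The pure-Python sketch below (not the <strong>Toolbox</strong> ldiv macro) computes n = q*d + r with deg r &lt; deg d for the example above.<br />

```python
# Scalar polynomial division with remainder; coefficients ascending.
def pdivmod(n, d):
    n = list(n)
    q = [0.0] * max(len(n) - len(d) + 1, 1)
    while len(n) >= len(d):
        shift = len(n) - len(d)
        c = n[-1] / d[-1]
        q[shift] = c
        for i, dc in enumerate(d):
            n[shift + i] -= c * dc
        n.pop()   # the leading coefficient is now zero
    return q, n

q, r = pdivmod([1, -2, 1], [1, 1])   # (1-s)^2 divided by 1+s
print(q, r)  # [-3.0, 1.0] [4.0]  i.e. q = -3 + s, r = 4
```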
Greatest common divisor<br />
If a polynomial g(s) divides both a(s) and b(s) then g(s) is called a common divisor of<br />
a(s) and b(s). If, furthermore, g(s) is a multiple of every common divisor of a(s) and<br />
b(s) then g(s) is a greatest common divisor of a(s) and b(s).<br />
1 Users familiar with older versions of the <strong>Polynomial</strong> <strong>Toolbox</strong> (2.5 and below) should<br />
notice that the behavior of the slash and backslash operators has changed:<br />
the operators are now primarily used to create fractions and not only to extract<br />
factors as be<strong>for</strong>e. For a deeper understanding, experiment or see Chapter 4, <strong>Polynomial</strong><br />
matrix fractions, and Chapter 13, Compatibility with Version 2. To extract a factor<br />
directly, one should rather use equation solvers such as axb and the like.<br />
Coprimeness<br />
If the only common divisors of a(s) and b(s) are constants then the polynomials a(s)<br />
and b(s) are coprime (or relatively prime). To compute a greatest common divisor of a(s)<br />
and b(s), type<br />
a = s+s^2;<br />
b = s-s^2;<br />
gld(a,b)<br />
ans =<br />
s<br />
Similarly, the polynomials<br />
c = 1+s; d = 1-s;<br />
are coprime as<br />
gld(c,d)<br />
ans =<br />
1<br />
is a constant.<br />
Coprimeness may also be tested directly<br />
isprime([c,d])<br />
ans =<br />
1<br />
As the two polynomials c and d are coprime, there exist two other polynomials e and f<br />
that satisfy the linear polynomial equation<br />
ce + df = 1<br />
<strong>The</strong> polynomials e and f may be computed according to<br />
[e,f] = axbyc(c,d,1)<br />
e =<br />
0.5000<br />
f =<br />
0.5000<br />
We check the result by typing<br />
c*e+d*f<br />
ans =<br />
1<br />
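The same check can be done by hand with convolutions; a quick pure-Python verification (not <strong>Toolbox</strong> code) of the Bézout identity (1+s)*0.5 + (1-s)*0.5 = 1:<br />

```python
# Verify c*e + d*f = 1 for c = 1+s, d = 1-s, e = f = 0.5.
def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            out[i + j] += u * v
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

c, d = [1, 1], [1, -1]
e, f = [0.5], [0.5]
lhs = padd(pmul(c, e), pmul(d, f))
print(lhs)  # [1.0, 0.0]
```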
Least common multiple<br />
If a polynomial m(s) is a multiple of both a(s) and b(s) then m(s) is called a common<br />
multiple of a(s) and b(s). If, furthermore, m(s) is a divisor of every common multiple<br />
of a(s) and b(s) then it is a least common multiple of a(s) and b(s):<br />
m = llm(a,b)<br />
m =<br />
s - s^3<br />
<strong>The</strong> concepts just mentioned are combined in the well-known fact that the product of<br />
a greatest common divisor and a least common multiple equals the product of the two<br />
original polynomials:<br />
isequal(a*b,gld(a,b)*llm(a,b))<br />
ans =<br />
1<br />
Matrix divisors and multiples<br />
Next consider polynomial matrices A(s), B(s) and C(s) of compatible sizes such that<br />
A(s) = B(s)C(s). We say that B(s) is a left divisor of A(s), or A(s) is a right multiple of<br />
B(s). Take <strong>for</strong> instance<br />
B = [1 s; 1+s 1-s], C = [2*s 1; 0 1], A = B*C<br />
B =<br />
1 s<br />
1 + s 1 - s<br />
C =<br />
2s 1<br />
0 1<br />
A =<br />
2s 1 + s<br />
2s + 2s^2 2<br />
Matrix division<br />
As B(s) is a left divisor of A(s), the matrix left division<br />
B\A<br />
can be done and results in a polynomial matrix<br />
ans =<br />
2s 1<br />
0 1<br />
If B(s) is not a left divisor then the result of left division is not a polynomial matrix<br />
but a polynomial matrix fraction, see Chapter 4, <strong>Polynomial</strong> matrix fractions. This is<br />
also the case when B(s) is a divisor but from the other ("wrong") side:<br />
A/B<br />
ans =<br />
1 + 3s 2s / 1 + s 1<br />
2 + 2s + 2s^2 2s + 2s^2 / 2 1 + s<br />
Matrix division with remainder<br />
On the other hand, a polynomial matrix N(s) can be divided by any compatible<br />
nonsingular square matrix D(s) with a remainder, resulting in a matrix quotient<br />
Q(s) and a matrix remainder R(s) such that<br />
N(s) = D(s)Q(s) + R(s)<br />
If it is required that the rational matrix D^-1(s)R(s) is strictly proper then the<br />
division is unique. Thus, dividing a random matrix<br />
N = prand(4,2,3,'ent','int')<br />
N =<br />
0 -3 + 11s -1 + s<br />
5 - 4s^3 1 - 7s + 4s^2 + 8s^3 - 3s^4 4 + 6s<br />
from the left by a random square matrix<br />
D = prand(3,2,'ent','int')<br />
D =<br />
-7 3 - 2s + 3s^2<br />
4 + 4s + 6s^2 0<br />
results in<br />
[Q,R] = ldiv(N,D)<br />
Q =<br />
0.44 - 0.67s -0.11 + 1.7s - 0.5s^2 0<br />
0 -1.2 0<br />
R =<br />
3.1 - 4.7s -0.28 + 20s -1 + s<br />
3.2 + 0.89s 1.4 - 13s 4 + 6s<br />
Indeed,<br />
deg(det(D))<br />
ans =<br />
4<br />
while<br />
deg(adj(D)*R,'ent')<br />
ans =<br />
3 3 3<br />
3 3 3<br />
so that each entry of the rational matrix D^-1(s)R(s) = (adj D)R / det D is strictly<br />
proper.<br />
Greatest common left divisor<br />
If a polynomial matrix G(s) is a left divisor of both A(s) and B(s) then G(s) is called<br />
a common left divisor of A(s) and B(s). If, furthermore, G(s) is a right multiple of<br />
every common left divisor of A(s) and B(s) then it is a greatest common left divisor of<br />
A(s) and B(s). If the only common left divisors of A(s) and B(s) are unimodular<br />
matrices then the polynomial matrices A(s) and B(s) are left coprime. To compute a<br />
greatest common left divisor of<br />
A = [1+s-s^2, 2*s + s^2; 1-s^2, 1+2*s+s^2]<br />
A =<br />
1 + s - s^2 2s + s^2<br />
1 - s^2 1 + 2s + s^2<br />
and<br />
B = [2*s, 1+s^2; 1+s, s+s^2]<br />
B =<br />
2s 1 + s^2<br />
1 + s s + s^2<br />
type<br />
gld(A,B)<br />
ans =<br />
1 + s s + s^2<br />
0 1<br />
1 + s 0<br />
Similarly, the polynomial matrices<br />
C = [1+s, 1; 1-s 0], D = [1-s, 1; 1 1]<br />
C =<br />
1 + s 1<br />
1 - s 0<br />
D =<br />
1 - s 1<br />
1 1<br />
are left coprime as<br />
gld(D,C)<br />
ans =<br />
1 0<br />
0 1<br />
is obviously unimodular. You may also directly check<br />
isprime([C,D])<br />
ans =<br />
1<br />
As the two polynomial matrices C and D are left coprime, there exist two other<br />
polynomial matrices E and F that satisfy<br />
CE + DF = I<br />
<strong>The</strong>y may be computed according to<br />
[E,F] = axbyc(C,D,eye(2))<br />
E =<br />
0 0<br />
1.0000 -1.0000<br />
F =<br />
0 0<br />
0 1.0000<br />
Least common right multiple<br />
If a polynomial matrix M(s) is a right multiple of both A(s) and B(s) then M(s) is<br />
called a common right multiple of A(s) and B(s). If, furthermore, M(s) is a left<br />
divisor of every common right multiple of A(s) and B(s) then M(s) is a least common<br />
right multiple of A(s) and B(s).<br />
M = lrm(A,B)<br />
M =<br />
0.32 + 1.3s + 0.95s^2 -0.3 - 1.4s - 0.3s^2 + 0.15s^3<br />
0.63 + 1.3s + 0.63s^2 -0.75 - 1.1s - 0.15s^2 + 0.15s^3<br />
which is verified by<br />
A\M, B\M<br />
ans =<br />
0.32 + 0.32s -0.3 - 0.15s<br />
0.32 + 0.32s -0.45<br />
ans =<br />
0.63 + 0.32s -0.75<br />
0.32 -0.3 + 0.15s-3<br />
Dual concepts<br />
<strong>The</strong> dual concepts of right divisors, left multiples, common right divisors, greatest<br />
common right divisors, common left multiples, and least common left multiples are<br />
similarly defined and computed by dual functions grd and llm.<br />
Transposition and conjugation<br />
Transposition<br />
Given<br />
T = [1 0 s^2 s; 0 1 s^3 0]<br />
T =<br />
1 0 s^2 s<br />
0 1 s^3 0<br />
the transposition follows by typing<br />
T.'<br />
ans =<br />
1 0<br />
0 1<br />
s^2 s^3<br />
s 0<br />
<strong>The</strong> command transpose(T) is synonymous with T.'<br />
Complex coefficients<br />
<strong>The</strong> <strong>Polynomial</strong> <strong>Toolbox</strong> supports polynomial matrices with complex coefficients such<br />
as<br />
C = [1+s 1; 2 s]+i*[s 1; 1 s]<br />
C =<br />
1+0i + (1+1i)s 1+1i<br />
2+1i (1+1i)s<br />
Many <strong>Toolbox</strong> operations and functions handle them accordingly. For example, the<br />
transposition<br />
C.'<br />
ans =<br />
1+0i + (1+1i)s 2+1i<br />
1+1i (1+1i)s<br />
In addition, there are several special functions <strong>for</strong> complex coefficient polynomial<br />
matrices such as<br />
real(C)<br />
ans =<br />
1 + s 1<br />
2 s<br />
imag(C)<br />
ans =<br />
s 1<br />
1 s<br />
conj(C)<br />
ans =<br />
1+0i + (1-1i)s 1-1i<br />
2-1i (1-1i)s<br />
Conjugated transposition<br />
<strong>The</strong> operation of conjugated transposition is a bit more complicated. It is connected<br />
with the scalar products of signals x(t) and y(t) in continuous-time systems. Here the theory<br />
requires that s be replaced by -s. For matrix polynomials, the operation consists of the conjugation,<br />
the transposition and the change of the argument s to -s. So,<br />
C'<br />
ans =<br />
1+0i + (-1+1i)s 2-1i<br />
1-1i (-1+1i)s<br />
<strong>The</strong> command ctransp(C) is synonymous with C' .<br />
For a real polynomial matrix T(s), this operation amounts to transposition combined with a<br />
change of the sign of s, hence T' equals T.' with s replaced by -s:<br />
T = [1 0 s^2 s; 0 1 s^3 0]<br />
T =<br />
1 0 s^2 s<br />
0 1 s^3 0<br />
T'<br />
ans =<br />
1 0<br />
0 1<br />
s^2 -s^3<br />
-s 0<br />
Reduced and canonical <strong>for</strong>ms<br />
Row degrees<br />
Suppose that we have a polynomial matrix<br />
P = [1+s^2, -2; s-1 1]<br />
P =<br />
1 + s^2 -2<br />
-1 + s 1<br />
of degree<br />
deg(P)<br />
ans =<br />
2<br />
with leading coefficient matrix<br />
Pl = P{2}<br />
Pl =<br />
1 0<br />
0 0<br />
Besides the (overall) degree of P we may also consider its row degrees<br />
deg(P,'row')<br />
ans =<br />
2<br />
1<br />
and column degrees<br />
deg(P,'col')<br />
ans =<br />
2 0<br />
Associated with the row degrees is the leading row coefficient matrix<br />
lcoef(P,'row')<br />
ans =<br />
1 0<br />
1 0<br />
and associated with the column degrees is the leading column coefficient matrix<br />
lcoef(P,'col')<br />
ans =<br />
1 -2<br />
0 1<br />
Row and column reduced matrices<br />
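The degree and leading-coefficient bookkeeping above can be mirrored in a few lines. A plain-Python sketch (not <strong>Toolbox</strong> code) for P = [1+s^2, -2; -1+s, 1], with ascending coefficient lists:<br />

```python
# Row/column degrees and leading coefficient matrices of P.
P = [[[1, 0, 1], [-2]],
     [[-1, 1],   [1]]]

def deg(e):
    nz = [i for i, c in enumerate(e) if c != 0]
    return max(nz) if nz else -1   # -1 stands in for the -Inf of a zero entry

def coef(e, k):
    return e[k] if 0 <= k < len(e) else 0

row_deg = [max(deg(e) for e in row) for row in P]
col_deg = [max(deg(P[i][j]) for i in range(2)) for j in range(2)]
lrow = [[coef(P[i][j], row_deg[i]) for j in range(2)] for i in range(2)]
lcol = [[coef(P[i][j], col_deg[j]) for j in range(2)] for i in range(2)]
print(row_deg, col_deg)  # [2, 1] [2, 0]
print(lrow)              # [[1, 0], [1, 0]]
print(lcol)              # [[1, -2], [0, 1]]
```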
A polynomial matrix is row reduced if its leading row coefficient matrix has full row<br />
rank. Similarly, it is column reduced if its leading column coefficient matrix has full<br />
column rank. <strong>The</strong> matrix P is definitely not row reduced<br />
isfullrank(lcoef(P,'row'))<br />
ans =<br />
0<br />
but it is column reduced<br />
isfullrank(lcoef(P,'col'))<br />
ans =<br />
1<br />
Row reduced form<br />
Any polynomial matrix with full row rank may be trans<strong>for</strong>med into row reduced <strong>for</strong>m<br />
by pre-multiplying it by a suitable unimodular matrix. To compute a row reduced<br />
<strong>for</strong>m of P, call<br />
P_row_reduced = rowred(P)<br />
P_row_reduced =<br />
1 + s -2 - s<br />
-1 + s 1<br />
Indeed, the row rank of<br />
lcoef(P_row_reduced,'row')<br />
ans =<br />
1 -1<br />
1 0<br />
is full.<br />
Triangular and staircase form<br />
<strong>The</strong>re are several special <strong>for</strong>ms of a polynomial matrix that can be achieved by pre-<br />
and/or post-multiplying it by a suitable unimodular matrix. <strong>The</strong>se operations preserve<br />
many important properties and indeed serve to make them visible.<br />
Thus, a lower-left triangular <strong>for</strong>m T(s) of A(s), resulting from the column operations<br />
T(s) = A(s)U(s), can be computed by the macro tri:<br />
A = [s^2 0 1; 0 s^2 1+s]<br />
A =<br />
s^2 0 1<br />
0 s^2 1 + s<br />
T = tri(A)<br />
T =<br />
-1 0 0<br />
-1 - s -1.2s^2 0<br />
<strong>The</strong> corresponding unimodular matrix is returned by<br />
[T,U] = tri(A); U<br />
U =<br />
0 0.29 0.5<br />
0 -0.87 + 0.29s 0.5 + 0.5s<br />
-1 -0.29s^2 -0.5s^2<br />
If A(s) does not have full row rank then T(s) is in staircase <strong>for</strong>m. Similarly, an upper-right<br />
triangular (row staircase) <strong>for</strong>m is achieved by row (unimodular) operations. It results<br />
from the call tri(A,'row').<br />
Another triangular form<br />
If B(s) is a square polynomial matrix with a nonsingular constant term then another<br />
upper-triangular <strong>for</strong>m may be obtained by the overloaded macro lu:<br />
B = [ 1 1 s; s+1 0 s; 1-s s 2+s]<br />
B =<br />
1 1 s<br />
1 + s 0 s<br />
1 - s s 2 + s<br />
[V,T] = lu(B)<br />
V =<br />
1 0.33s 0.11s<br />
1 + s 1 + 1.3s + 0.33s^2 0.44s + 0.11s^2<br />
1 - s 1 - 1.7s - 0.33s^2 1 - 0.56s - 0.11s^2<br />
T =<br />
1 1 + 0.33s 0.78s + 0.11s^3<br />
0 -1 -0.67s - s^2 + 0.33s^3<br />
0 0 2 + 2s + 2s^2 - s^3<br />
Hermite form<br />
<strong>The</strong> triangular <strong>for</strong>ms described above are by no means unique. A canonical triangular<br />
<strong>for</strong>m is called the Hermite <strong>for</strong>m. An n x m polynomial matrix A(s) of rank r is in<br />
column Hermite <strong>for</strong>m if it has the following properties:<br />
it is lower triangular<br />
the diagonal entries are all monic<br />
each diagonal entry has higher degree than any entry on its left<br />
in particular, if the diagonal element is constant then all off-diagonal elements in<br />
the same row are zero<br />
if m &gt; r then the last m - r columns are zero<br />
<strong>The</strong> nomenclature in the literature is not consistent. Some authors (in particular<br />
Kailath, 1980) refer to this as the row Hermite <strong>for</strong>m. <strong>The</strong> polynomial matrix A is in<br />
row Hermite <strong>for</strong>m if it is the transpose of a matrix in column Hermite <strong>for</strong>m. <strong>The</strong><br />
command<br />
H = hermite(A)<br />
returns the column Hermite <strong>for</strong>m<br />
H =<br />
1 0 0<br />
1 + s s^2 0<br />
while the call<br />
[H,U] = hermite(A); U<br />
U =<br />
0 -0.25 0.5<br />
0 0.75 - 0.25s 0.5 + 0.5s<br />
1 0.25s^2 -0.5s^2<br />
provides a unimodular reduction matrix U such that H(s) = A(s)U(s).<br />
Echelon form<br />
Yet another canonical <strong>for</strong>m is called the (Popov) echelon <strong>for</strong>m. A polynomial matrix E<br />
is in column echelon <strong>for</strong>m (or Popov <strong>for</strong>m) if it has the following properties:<br />
it is column reduced with its column degrees arranged in ascending order<br />
<strong>for</strong> each column there is a so-called pivot index i such that the degree of the i -th<br />
entry in this column equals the column degree, and the i-th entry is the lowest<br />
entry in this column with this degree<br />
the pivot indexes are arranged to be increasing<br />
each pivot entry is monic and has the highest degree in its row<br />
A square matrix in column echelon <strong>for</strong>m is both column and row reduced<br />
Given a square and column-reduced polynomial matrix D(s), the command [E,U] =<br />
echelon(D) computes the column echelon <strong>for</strong>m E(s) of D(s). <strong>The</strong> unimodular<br />
matrix U(s) satisfies E(s) = D(s)U(s).<br />
By way of example, consider the polynomial matrix<br />
D = [ -3*s s+2; 1-s 1];<br />
To find its column echelon <strong>for</strong>m and the associated unimodular matrix, type<br />
[E,U] = echelon(D)<br />
<strong>MATLAB</strong> returns<br />
E =<br />
2 + s -6<br />
1 -4 + s<br />
U =<br />
0 -1.0000<br />
1.0000 -3.0000<br />
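The relation E(s) = D(s)U(s) can be verified directly. The pure-Python sketch below (not <strong>Toolbox</strong> code; convolution-based polynomial matrix multiplication) confirms it for the example above.<br />

```python
# Check E = D*U for the echelon example; coefficients ascending.
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def trim(e):  # drop zero leading coefficients
    while len(e) > 1 and e[-1] == 0:
        e = e[:-1]
    return e

D = [[[0, -3], [2, 1]], [[1, -1], [1]]]   # [-3s, s+2; 1-s, 1]
U = [[[0], [-1]], [[1], [-3]]]
E = [[[0], [0]], [[0], [0]]]
for i in range(2):
    for j in range(2):
        for k in range(2):
            E[i][j] = padd(E[i][j], pmul(D[i][k], U[k][j]))
E = [[trim(e) for e in row] for row in E]
print(E)  # [[[2, 1], [-6]], [[1], [-4, 1]]]  i.e. [2+s, -6; 1, -4+s]
```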
Smith form<br />
<strong>The</strong> ultimate, most structured canonical <strong>for</strong>m <strong>for</strong> a polynomial matrix is its Smith<br />
<strong>for</strong>m. A polynomial matrix A(s) of rank r may be reduced to its Smith <strong>for</strong>m<br />
S(s) = U(s) A(s) V(s)<br />
by pre- and post-multiplication by unimodular matrices U(s) and V(s), respectively.<br />
<strong>The</strong> Smith <strong>for</strong>m looks like this:<br />
S(s) = [ S_r(s) 0 ]<br />
[ 0 0 ]<br />
with the diagonal submatrix<br />
S_r(s) = diag( a_1(s), a_2(s), ..., a_r(s) )<br />
<strong>The</strong> entries a_1(s), a_2(s), ..., a_r(s) are monic polynomials such that a_i(s) divides<br />
a_{i+1}(s) <strong>for</strong> i = 1, 2, ..., r-1. <strong>The</strong> Smith <strong>for</strong>m is particularly useful <strong>for</strong> theoretical<br />
considerations as it reveals many important properties of the matrix. Its practical<br />
use, however, is limited because it is quite sensitive to small parameter perturbations.<br />
<strong>The</strong> computation of the Smith <strong>for</strong>m becomes numerically troublesome as soon as the<br />
matrix size and degree become larger. <strong>The</strong> <strong>Polynomial</strong> <strong>Toolbox</strong> offers a choice of three<br />
different algorithms to achieve the Smith <strong>for</strong>m, all programmed in the macro smith.<br />
For larger examples, a manual change of tolerance may be necessary. To compute the<br />
Smith <strong>for</strong>m of a simple matrix<br />
A=[1+s, 0, s+s^2; 0, s+2, 2*s+s^2]<br />
A =<br />
1 + s 0 s + s^2<br />
0 2 + s 2s + s^2<br />
simply call<br />
smith(A)<br />
ans =<br />
1 0 0<br />
0 2 + 3s + s^2 0<br />
Invariant polynomials<br />
<strong>The</strong> polynomials a_1(s), a_2(s), ..., a_r(s) that appear in the Smith <strong>for</strong>m are uniquely<br />
determined and are called the invariant polynomials of A(s). <strong>The</strong>y may be retrieved<br />
by typing<br />
diag(smith(A))<br />
ans =<br />
1<br />
2 + 3s + s^2<br />
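The invariant polynomials can be cross-checked from minors: a_1 is the gcd of all entries (here a constant) and a_1*a_2 is the gcd of all 2x2 minors. The pure-Python sketch below (not <strong>Toolbox</strong> code) confirms that a_2 = 2 + 3s + s^2 divides every 2x2 minor of A.<br />

```python
# Check that a2 = 2 + 3s + s^2 divides all 2x2 minors of A.
def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def psub(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)
            for i in range(n)]

def prem(n, d):  # remainder of polynomial division, ascending coefficients
    n = list(n)
    while len(n) >= len(d):
        c = n[-1] / d[-1]
        for i in range(len(d)):
            n[len(n) - len(d) + i] -= c * d[i]
        n.pop()
    return n

A = [[[1, 1], [0], [0, 1, 1]],      # [1+s, 0,   s+s^2 ]
     [[0], [2, 1], [0, 2, 1]]]      # [0,   2+s, 2s+s^2]

a2 = [2, 3, 1]
minors = []
for (c1, c2) in [(0, 1), (0, 2), (1, 2)]:
    minors.append(psub(pmul(A[0][c1], A[1][c2]), pmul(A[0][c2], A[1][c1])))

rems = [prem(m, a2) for m in minors]
print(all(abs(c) < 1e-9 for r in rems for c in r))  # True
```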
<strong>Polynomial</strong> matrix equations<br />
Diophantine<br />
equations<br />
<strong>The</strong> simplest type of linear scalar polynomial equation — called the Diophantine<br />
equation after the Alexandrian mathematician Diophantos (A.D. 275) — is<br />
a(s)x(s) + b(s)y(s) = c(s)<br />
<strong>The</strong> polynomials a(s), b(s) and c(s) are given while the polynomials x(s) and y(s) are<br />
unknown. <strong>The</strong> equation is solvable if and only if the greatest common divisor of a(s)<br />
and b(s) divides c(s). This implies that with a(s) and b(s) coprime the equation is<br />
solvable <strong>for</strong> any right hand side polynomial, including c(s) = 1.<br />
<strong>The</strong> Diophantine equation possesses infinitely many solutions whenever it is<br />
solvable. If x0(s), y0(s) is any (particular) solution then the general solution of the<br />
Diophantine equation is<br />
x(s) = x0(s) + b'(s)t(s)<br />
y(s) = y0(s) - a'(s)t(s)<br />
Here t(s) is an arbitrary polynomial (the parameter) and a'(s), b'(s) are coprime<br />
polynomials such that<br />
b'(s)/a'(s) = b(s)/a(s)<br />
If the polynomials a(s) and b(s) themselves are coprime then one can naturally take<br />
a'(s) = a(s) and b'(s) = b(s).<br />
Among all the solutions of Diophantine equation there exists a unique solution pair<br />
x( s)<br />
, y( s)<br />
characterized by<br />
deg x( s) deg b ( s)<br />
.<br />
<strong>The</strong>re is another (generally different) solution pair characterized by<br />
deg y( s) deg a( s)<br />
.<br />
<strong>The</strong> two special solution pairs coincide only if<br />
deg a( s) deg b( s) deg c( s)<br />
.<br />
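The solvability condition and the existence of a solution can also be seen at work outside the Toolbox. The following plain-Python sketch (not Toolbox code; the polynomial helpers and the function name diophantine are ours) solves the scalar equation a(s)x(s) + b(s)y(s) = c(s) with the extended Euclidean algorithm, for the same data used in the example below. The particular solution it finds need not be the minimal-degree one that axbyc returns.

```python
# Plain-Python sketch: scalar Diophantine equation a*x + b*y = c via the
# extended Euclidean algorithm.  Polynomials are coefficient lists in
# ascending powers of s, e.g. 1 + s + s^2  ->  [1, 1, 1].

def trim(p):
    while len(p) > 1 and abs(p[-1]) < 1e-9:
        p = p[:-1]
    return p

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return trim([pi + qi for pi, qi in zip(p, q)])

def pmul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return trim(r)

def pdivmod(p, q):
    r, q = list(p), trim(q)
    quo = [0.0] * max(1, len(r) - len(q) + 1)
    for k in range(len(r) - len(q), -1, -1):  # synthetic long division
        f = r[k + len(q) - 1] / q[-1]
        quo[k] = f
        for j in range(len(q)):
            r[k + j] -= f * q[j]
    return trim(quo), trim(r)

def ext_gcd(a, b):
    """Return g, u, v with a*u + b*v = g = gcd(a, b)."""
    r0, r1, u0, u1, v0, v1 = trim(a), trim(b), [1.0], [0.0], [0.0], [1.0]
    while any(abs(c) > 1e-9 for c in r1):
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, padd(u0, pmul([-1.0], pmul(q, u1)))
        v0, v1 = v1, padd(v0, pmul([-1.0], pmul(q, v1)))
    return r0, u0, v0

def diophantine(a, b, c):
    g, u, v = ext_gcd(a, b)
    q, rem = pdivmod(c, g)
    # solvable if and only if gcd(a, b) divides c
    assert all(abs(x) < 1e-9 for x in rem), "unsolvable: gcd(a,b) must divide c"
    return pmul(u, q), pmul(v, q)

a, b, c = [1.0, 1, 1], [1.0, -1], [3.0, 3]   # 1+s+s^2,  1-s,  3+3s
x, y = diophantine(a, b, c)
lhs = padd(pmul(a, x), pmul(b, y))           # a*x + b*y, must equal c
```

Here the sketch returns x(s) = 1 + s, y(s) = 2 + 3s + s^2, a valid (though not minimal-degree) solution, since a and b are coprime and the gcd reduces to a constant.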
<strong>The</strong> <strong>Polynomial</strong> <strong>Toolbox</strong> offers two basic solvers that may be used <strong>for</strong> scalar and<br />
matrix Diophantine equations. <strong>The</strong>y are suitably named axbyc and xaybc. For<br />
example, consider the simple polynomials<br />
a = 1+s+s^2; b = 1-s; c = 3+3*s;<br />
When typing<br />
[x,y] = axbyc(a,b,c)<br />
x =
2
y =
1 + 2s
MATLAB returns the solution pair x(s) = 2, y(s) = 1 + 2s with minimal overall degree.
The alternative call
[x,y,f,g] = axbyc(a,b,c)<br />
x =
2
y =
1 + 2s
f =
-0.45 + 0.45s
g =
0.45 + 0.45s + 0.45s^2
retrieves the complete general solution in the form x(s) + f(s)t(s), y(s) + g(s)t(s) with
an arbitrary polynomial parameter t(s).
To investigate the case of different minimal degree solutions, consider a right-hand
side of higher degree
c = 15+15*s^4;
As before, the call
[x1,y1] = axbyc(a,b,c)
x1 =
8 - 13s + 15s^2
y1 =
7 + 12s + 2s^2
results in the solution of minimal overall degree (in this case deg x1 = deg y1 = 2).
A slightly different command
[x2,y2] = axbyc(a,b,c,'minx')
x2 =
10.0000
y2 =
5 - 5s - 15s^2 - 15s^3
returns another solution with the minimal degree of the first unknown. Finally,
typing
[x2,y2] = axbyc(a,b,c,'miny')
x2 =
10 - 15s + 15s^2
y2 =
5 + 10s
produces the solution of minimal degree in the second unknown.
Should the equation be unsolvable, the function returns NaNs:
[x,y] = axbyc(s,s,1)
x =
NaN
y =
NaN
Bézout equations
A Diophantine equation with 1 on its right-hand side is called a Bézout equation. It
may look like
a(s)x(s) + b(s)y(s) = 1
with a(s), b(s) given and x(s), y(s) unknown.
Matrix polynomial equations
In the matrix case, the polynomial equation becomes a polynomial matrix equation.
The basic matrix polynomial (or polynomial matrix) equations are
A(s)X(s) = B(s)
and
X(s)A(s) = B(s)
or even
A(s)X(s)B(s) = C(s)
A(s), B(s) and, if applicable, C(s) are given while X(s) is unknown. The Polynomial
Toolbox functions to solve these equations are conveniently named axb, xab, and
axbc. Hence, given the polynomial matrices
A= [1 s 1+s; s-1 1 0]; B = [s 0; 0 1];
the call
X0 = axb(A,B)
X0 =
1         -1
1 - s      s
-1 + s     1 - s
solves the first equation and returns its solution of minimal overall degree. Typing
[X0,K] = axb(A,B); K
K =
1.7 + 1.7s
1.7 - 1.7s^2
-1.7 - 1.7s + 1.7s^2
also computes the right null-space of A so that all the solutions to A(s)X(s) = B(s)
may easily be parameterized as
X(s) = X0(s) + K(s)T(s)
Here T(s) is an arbitrary polynomial matrix of compatible size. The other equations are
handled similarly.
One-sided equations
In systems and control several special forms of polynomial matrix equations are
frequently encountered, in particular the one-sided equations
A(s)X(s) + B(s)Y(s) = C(s)
and
X(s)A(s) + Y(s)B(s) = C(s)
Two-sided equations
Also the two-sided equations
A(s)X(s) + Y(s)B(s) = C(s)
and
X(s)A(s) + B(s)Y(s) = C(s)
are common. A(s), B(s) and C(s) are always given while X(s) and Y(s) are to be
computed. A(s) is typically square invertible.
<strong>The</strong> solutions of the one- and two-sided equations may be found with the help of the<br />
<strong>Polynomial</strong> <strong>Toolbox</strong> macros axbyc, xaybc, and axybc . Thus, <strong>for</strong> the matrices<br />
A= [1 s; 1+s 0]; B = [s 1; 1 s]; C = [1 0; 0 1];<br />
the call<br />
[X,Y] = axbyc(A,B,C)<br />
returns
X =
0.25 - 0.5s     0.5
0.25 + 0.5s    -0.5
Y =
-0.25 - 0.5s    0.5
0.75 + 0.5s    -0.5
Various other scalar and matrix polynomial equations may be solved by directly
applying appropriate solvers programmed in the Polynomial Toolbox, such as the
equation
A'(s)X(s) + X'(s)A(s) = B(s)
Table 2 lists all available polynomial matrix equation solvers.
Table 2. Equation solvers
Equation              Name of the routine
AX = B                axb
AXB = C               axbc
AX + BY = C           axbyc
A'X + X'A = B         axxab
A'X + Y'A = B         axyab
AX + YB = C           axybc
XA' + AX' = B         xaaxb
XA = B                xab
XA + YB = C           xaybc
To indicate that the polynomial matrix equation is unsolvable, the solver returns
matrices of correct sizes but full of NaNs.
X0 = axb(A,B)
X0 =
NaN NaN
NaN NaN
Factorizations
Symmetric polynomial matrices
Besides linear equations, special quadratic equations in scalar and matrix
polynomials are encountered in various engineering fields. The equations typically
contain a symmetric polynomial matrix on the right-hand side.
A square polynomial matrix M which is equal to its conjugate transpose M' is called
(para-Hermitian) symmetric. In the scalar case with real coefficients, as in
M(s) = 2 + 3s^2 + 4s^4 = M'(s)
only the even coefficients are nonzero. In the scalar case with complex coefficients, as in
M(s) = 2 + 3is - 3is^3 + 4s^4 = M'(s)
the even coefficients are real while the odd ones are imaginary. In the matrix case the
pattern holds for the matrix coefficients: the even coefficient matrices are (Hermitian)
symmetric while the odd coefficient matrices are (Hermitian) antisymmetric. For
example, the real polynomial matrix
M(s) = [1 2; 2 4] + [0 3; -3 0]s
satisfies M(s) = M'(s), and with complex coefficients the even coefficient matrices are
Hermitian while the odd ones are anti-Hermitian.
Treating M(s) as a function of the complex variable s, we can evaluate it at various
points s_i of the complex plane. On the imaginary axis the resulting numerical
matrices M(s_i) are (Hermitian) symmetric and hence have only real eigenvalues. If all
eigenvalues of M(s_i) are positive for all s_i on the imaginary axis then M(s) is called
positive definite. Such a polynomial matrix, of course, has no zeros on the imaginary
axis. If all the eigenvalues are nonnegative then M(s) is called nonnegative definite.
Yet another important class consists of matrices with a constant number of positive
and negative eigenvalues of M(s_i) over the whole imaginary axis. Such an M(s) is
called indefinite (in the matrix sense). All the above cases can be factorized. Should
the number of positive and negative eigenvalues change as s_i ranges over the
imaginary axis, however, the matrix M(s) is indefinite in the scalar sense and cannot
be factorized in any symmetric sense.
Zeros of symmetric matrices
A nonnegative definite symmetric polynomial matrix M(s) with real coefficients has
its zeros distributed symmetrically with respect to the imaginary axis. Thus if it has
a zero s_i, it generally has a quadruple of zeros consisting of s_i, -s_i and their
complex conjugates, all of the same multiplicity. If s_i is real, then it is only a couple
s_i, -s_i. If s_i is imaginary, then it is also a couple s_i, -s_i but it must be of even
multiplicity. Finally, if s_i = 0 then it is a singlet of even multiplicity.
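The quadruple pattern is easy to verify numerically. A plain-Python check (not Toolbox code; the helper member is ours) for the polynomial 4 + s^4 used in the first plot below, whose zeros are the fourth roots of -4, i.e. ±1 ± 1i:

```python
# The zeros of 4 + s^4 form a quadruple closed under s -> -s
# (para-symmetry) and s -> conj(s) (real coefficients).
import cmath

roots = [2 ** 0.5 * cmath.exp(1j * (cmath.pi / 4 + k * cmath.pi / 2))
         for k in range(4)]

def member(z, zs, tol=1e-9):
    return any(abs(z - w) < tol for w in zs)

for r in roots:
    assert abs(4 + r ** 4) < 1e-9           # each is a zero of 4 + s^4
    assert member(-r, roots)                # mirrored through the origin
    assert member(r.conjugate(), roots)     # mirrored in the real axis
```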
zpplot(4+s^4), grid<br />
zpplot(1-s^2), grid<br />
zpplot(1+2*s^2+s^4), grid<br />
zpplot(-s^2), grid<br />
For polynomial matrices with complex coefficients, the picture is simpler: generally
a couple consisting of s_i and its mirror image -conj(s_i) with respect to the
imaginary axis; if s_i is imaginary (including the case s_i = 0), it is a singlet of
even multiplicity.
zpplot(2+2*i*s-s^2),grid<br />
zpplot(1+2*i*s-s^2),grid
Spectral factorization
One of the quadratic equations in scalar and matrix polynomials frequently encountered
is the polynomial spectral factorization
A(s) = X'(s)JX(s)
and the spectral co-factorization
A(s) = X(s)JX'(s).
In either case, the given polynomial matrix A(s) satisfies A(s) = A'(s) (we say it is
para-Hermitian symmetric) and the unknown X(s) is to be stable. The case of A(s)
positive definite on the stability boundary results in J = I.
Spectral factorization with J = I is the main tool to design LQ and LQG controllers as
well as Kalman filters. On the other hand, if A(s) is indefinite in the matrix sense then
J = diag(+1, +1, ..., +1, -1, -1, ..., -1).
This is the famous J-spectral factorization
problem, which is an important tool for robust control and filter design based on H∞
norms. <strong>The</strong> <strong>Polynomial</strong> <strong>Toolbox</strong> provides two macros called spf and spcof to<br />
handle spectral factorization and co-factorization, respectively.<br />
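A minimal scalar illustration of what a spectral factor is, in plain Python (not Toolbox code; the helpers pmul and conj_ct are ours): for A(s) = 1 - s^2, which is para-Hermitian and positive on the imaginary axis, the stable spectral factor is X(s) = 1 + s, since X'(s)X(s) = (1 - s)(1 + s) = A(s) and the single zero of X is at -1.

```python
# Scalar continuous-time spectral factorization check:
# A(s) = 1 - s^2 = X'(s) X(s) with the stable factor X(s) = 1 + s.
# Coefficients are stored in ascending powers of s.

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def conj_ct(p):
    # continuous-time conjugate X'(s) = X(-s) for real coefficients
    return [c * (-1) ** k for k, c in enumerate(p)]

X = [1, 1]                      # X(s) = 1 + s, zero at s = -1 (stable)
A = pmul(conj_ct(X), X)         # X'(s) * X(s)
assert A == [1, 0, -1]          # equals 1 - s^2
```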
By way of illustration consider the para-Hermitian matrix<br />
A =<br />
34 - 56s^2 -13 - 22s + 60s^2 36 + 67s<br />
-13 + 22s + 60s^2 46 - 1e+002s^2 -42 - 26s + 38s^2<br />
36 - 67s -42 + 26s + 38s^2 59 - 42s^2<br />
Its spectral factorization follows by typing<br />
[X,J] = spf(A)<br />
X =
2.1 + 0.42s     5.2 + 0.39s     -2 + 0.35s
-5.5 + 4s       4.3 + 0.64s     -7.4 - 5.5s
0.16 + 6.3s     -0.31 - 10s     -0.86 + 3.5s
J =
1 0 0
0 1 0
0 0 1
while the spectral co-factorization of A is computed via<br />
[Xcof,J] = spcof(A)<br />
Xcof =
2.7 + 0.42s     4.8 + 4s        2 + 6.3s
-1.6 + 0.39s    0.93 + 0.64s    -6.5 - 10s
4.3 + 0.35s     2.7 - 5.5s      5.8 + 3.5s
J =
1 0 0
0 1 0
0 0 1
<strong>The</strong> resulting J reveals that the given matrix A is positive-definite on the imaginary<br />
axis. On the other hand, the following matrix is indefinite<br />
B =<br />
5 -6 - 18s -8<br />
-6 + 18s -41 + 81s^2 -22 - 18s<br />
-8 -22 + 18s -13<br />
Its spectral factorization follows as<br />
[Xf,J] = spf(B)<br />
Xf =
3.3     2.9 - 0.92s     -1.6
1.8     6.6 + 0.44s     3.4
1.6     2.5 + 9s        -2
J =
1 0 0
0 -1 0
0 0 -1
or
[Xcof,J] = spcof(B)
Xcof =
3.3             -1.8            1.6
-1.9 + 0.92s    -4.4 + 0.44s    -5 - 9s
-1.6            -3.4            -2
J =
1 0 0
0 -1 0
0 0 -1
Non-symmetric factorization
General non-symmetric factorization can be computed by the function fact, which
returns a factor with zeros arbitrarily selected from the zeros of the original matrix.
For example, it is sometimes useful to express a given non-symmetric polynomial
matrix A(s) as the product of its anti-stable and stable factors, A = A_antistab A_stab.
To factorize the polynomial matrix
A=prand(3,3,'int')<br />
A =<br />
-1 + s + 2s^2 + 7s^3 -2 + 3s + 4s^2 + 5s^3 -5 + s + s^2 - 5s^3<br />
-4 + 5s - s^2 + 2s^3 -3s - 3s^2 + 2s^3 -5 + 4s + 3s^2 - 4s^3<br />
-1 - 6s - 11s^2 + 5s^3 -3 + 2s + s^2 -5 - 5s - 2s^2 - 6s^3<br />
having stable as well as unstable zeros<br />
r=roots(A)<br />
r =<br />
4.2755<br />
-1.6848<br />
-0.8988 + 0.5838i<br />
-0.8988 - 0.5838i<br />
0.5927 + 0.7446i<br />
0.5927 - 0.7446i<br />
0.3336<br />
-0.1425 + 0.2465i<br />
-0.1425 - 0.2465i<br />
the stable factor should have all the stable zeros<br />
r_stab=r(find(real(r)<0))
r_stab =
-1.6848
-0.8988 + 0.5838i
-0.8988 - 0.5838i
-0.1425 + 0.2465i
-0.1425 - 0.2465i
while the anti-stable factor should have all the remaining (unstable) ones. The (right)
stable factor is directly returned by typing
A_stab=fact(A,r_stab)<br />
A_stab =<br />
-5.9 - 4.1s + s^2 4.3 + 3.3s -2<br />
-7.2 - 6.5s 5.6 + 5.4s + s^2 -2<br />
5.2 + 4.2s -3.3 - 2.8s 2.5 + s<br />
The anti-stable factor is then computed indirectly, as A_antistab = A A_stab^-1, by typing
A_antistab=xab(A_stab,A)
A_antistab =
26 + 28s     -18 - 9.2s    4.3 + 14s - 5s^2
-19 + 19s    8.5 - 9.3s    -10 + 13s - 4s^2
-22 + 30s    9.3 - 17s     -12 + 13s - 6s^2
The result is correct as
isequal(A_antistab*A_stab,A)
ans =
1
and
roots(A_stab),roots(A_antistab)
ans =
-1.6848
-0.8988 + 0.5838i
-0.8988 - 0.5838i
-0.1425 + 0.2465i
-0.1425 - 0.2465i
ans =
4.2755
0.5927 + 0.7446i
0.5927 - 0.7446i
0.3336
Matrix pencil routines
Matrix pencils are polynomial matrices of degree 1. <strong>The</strong>y arise in the study of<br />
continuous- and discrete-time linear time-invariant state space systems given by
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
and
x(t+1) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
and continuous- and discrete-time descriptor systems given by
Eẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
and
Ex(t+1) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
The transfer matrix of the descriptor system is
H(s) = C(sE - A)^-1 B + D
in the continuous-time case, and
H(z) = C(zE - A)^-1 B + D
in the discrete-time case. The polynomial matrices
sE - A,  zE - A
that occur in these expressions are matrix pencils. In the state-space case they reduce
to the simpler forms sI - A and zI - A.
Transformation to Kronecker canonical form
A nonsingular square real matrix pencil P(s) may be transformed to its Kronecker
canonical form
C(s) = QP(s)Z = [ a + sI    0
                  0         I + se ]
Q and Z are constant orthogonal matrices, a is a constant matrix whose eigenvalues
are the negatives of the roots of the pencil, and e is a nilpotent constant matrix. (That
is, there exists a nonnegative integer k such that e^i = 0 for i >= k. The integer k is
called the nilpotency of e.)
This trans<strong>for</strong>mation is very useful <strong>for</strong> the analysis of descriptor systems because it<br />
separates the finite and infinite roots of the pencil and, hence, the corresponding<br />
modes of the system.<br />
By way of example we consider the descriptor system
Eẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t) + Du(t)
<strong>The</strong> system is defined by the matrices<br />
A = [ -1 0 0; 0 1 0; 0 0 1 ];<br />
B = [ 1; 0; 1 ];<br />
C = [ 1 -1 0 ];<br />
D = 2;<br />
E = [ 1 0 0; 0 0 1; 0 0 0 ];<br />
We compute the canonical <strong>for</strong>m of the pencil sE A by typing<br />
c = pencan(s*E-A)<br />
c =
1 + s    0    0
0        1    -s
0        0    1
Inspection shows that a = 1 has dimensions 1-by-1 and that e is the 2-by-2 nilpotent
matrix
e = [ 0 1
      0 0 ]
The descriptor system has a finite pole at -1 and two infinite poles.
Transformation to Clements form
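Nilpotency is easy to check by hand or in any language; a plain-Python sketch (not Toolbox code; the helper matmul2 is ours):

```python
# e = [0 1; 0 0] squares to the zero matrix, so its nilpotency k is 2.
def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

e = [[0, 1], [0, 0]]
assert matmul2(e, e) == [[0, 0], [0, 0]]   # e^2 = 0
assert e != [[0, 0], [0, 0]]               # e^1 != 0, hence k = 2
```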
A para-Hermitian real pencil P(s) = sE + A, with E skew-symmetric and A
symmetric, may — under the assumption that it has no roots on the imaginary axis
— be transformed into its "Clements" form (Clements, 1993) according to
C(s) = UP(s)U^T =
[ 0                0                sE1 + A1 ]
[ 0                A2               sE3 + A3 ]
[ -sE1^T + A1^T    -sE3^T + A3^T    sE4 + A4 ]
The constant matrix U is orthogonal and the finite roots of the pencil sE1 + A1 all
have negative real parts. This transformation is needed for the solution of various
spectral factorization problems that arise in the solution of H2 and H∞ optimization
problems for descriptor systems.
Let the para-Hermitian pencil P be defined as
P(s) =
[ 100     -0.01    s     0 ]
[ -0.01   -0.01    0     1 ]
[ -s       0      -1     0 ]
[  0       1       0     0 ]
Its Clements form follows by typing
P = [100 -0.01 s 0; -0.01 -0.01 0 1; -s 0 -1 0; 0 1 0 0];<br />
C = pzer(clements(P))<br />
C =
0         0                      0                      10 - s
0         -1                     0                      -3.5e-005 - 0.0007s
0         0                      1                      -0.014 + 0.00071s
10 + s    -3.5e-005 + 0.0007s    -0.014 - 0.00071s      99
<strong>The</strong> pzer function is included in the command to clear small coefficients in the<br />
result.<br />
Pencil Lyapunov equations
When working with matrix pencils the two-sided matrix pencil equation
A(s)X + YB(s) = C(s)
is sometimes encountered. A and B are square matrix pencils and C is a rectangular
pencil with as many rows as A and as many columns as B. If A and B have no
common roots (including roots at infinity) then the equation has a unique solution
pair X, Y with both X and Y a constant matrix.
By way of example, let
A = s+1;
B = [ s 0
1 2];
C = [ 3 4 ];
Then
[X,Y] = plyap(A,B,C)
gives the solution
X =
1 0
Y =
-1 2
3 Discrete-time and two-sided<br />
polynomial matrices<br />
Introduction
Polynomials in s and p, introduced in Chapter 2, Polynomial matrices, are usually
associated with continuous-time differential operators acting on signals x(t). Here
we introduce polynomials in z and z^-1, typically associated with discrete-time shift
operators acting on signals x_t with integer t. The advance shift z acts as
z x_t = x_{t+1}, the delay shift z^-1 acts as z^-1 x_t = x_{t-1}. They are inverses of
each other: z z^-1 = 1.
Basic operations
Many operations and functions work for discrete-time polynomials exactly as
described before in Chapter 2, Polynomial matrices. Here we present what is
different.
Both z and z^-1 can be used as an indeterminate variable for polynomials. Here z^-1
can be shortened to zi. In arithmetic formulae when only one of them is used, the
usual polynomial algebra is followed. However, when both of them are used, the
relation z z^-1 = 1 is respected.
(3*z^3+4*z^4+5*z^5)*zi^2
ans =
3z + 4z^2 + 5z^3
It may happen that both powers of z and those of z^-1 remain in the result
(1+z)*(1+zi)
ans =
z^-1 + 2 + z
Two-sided polynomials
In such a way new objects, two-sided polynomials (tsp), are created. Their algebra is
naturally similar to that of polynomials.
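The rule z z^-1 = 1 is what distinguishes tsp arithmetic from ordinary polynomial arithmetic. A minimal plain-Python sketch (not Toolbox code; the helper tsp_mul is ours) that stores a two-sided polynomial as a dict keyed by the power of z makes the rule automatic, since multiplying terms just adds exponents:

```python
# Two-sided polynomial product: coefficients keyed by the power of z,
# so z^i * z^j = z^(i+j) covers z * z^-1 = z^0 = 1 for free.
def tsp_mul(p, q):
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {k: c for k, c in r.items() if c != 0}

one_plus_z = {0: 1, 1: 1}      # 1 + z
one_plus_zi = {0: 1, -1: 1}    # 1 + z^-1
# (1 + z)(1 + z^-1) = z^-1 + 2 + z, matching the Toolbox result above
assert tsp_mul(one_plus_z, one_plus_zi) == {-1: 1, 0: 2, 1: 1}
```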
Leading and trailing degrees and coefficients
Besides degree deg and leading coefficient lcoef, two-sided polynomials have
trailing degree tdeg and trailing coefficient tcoef:
T=[2*zi+1+3*z^3, 2*zi; zi^2, 1+z*2]<br />
T =<br />
2z^-1 + 1 + 3z^3 2z^-1<br />
z^-2 1 + 2z<br />
deg(T),lcoef(T)
ans =
3
ans =
3 0
0 0
tdeg(T),tcoef(T)
ans =
-2
ans =
0 0
1 0
Derivatives and integrals
In the deriv command, the derivative is taken with respect to the indeterminate
variable:
F=1+z+z^2;<br />
deriv(F)<br />
ans =<br />
1 + 2z<br />
G=1+z^-1+z^-2;<br />
deriv(G)<br />
ans =<br />
1 + 2z^-1<br />
For two-sided polynomials, unless stated otherwise, the derivative is taken with<br />
respect to z:<br />
T=z^-2+z^-1+1+z+z^2;<br />
deriv(T)<br />
ans =<br />
-2z^-3 - z^-2 + 1 + 2z<br />
We can also explicitly state the independent variable <strong>for</strong> the derivative<br />
deriv(T,z)<br />
ans =<br />
-2z^-3 - z^-2 + 1 + 2z<br />
deriv(T,zi)<br />
ans =<br />
2z^-1 + 1 - z^2 - 2z^3<br />
The same holds for integrals. As the integral of z^-1 is log z, we obtain the logarithmic
term separately:
[intT,logterm]=integral(T)
intT =
-z^-1 + z + 0.5z^2 + 0.33z^3
logterm =
z
Roots and stability
The roots of polynomials in z and z^-1 are, unless stated otherwise, understood in the
complex variable equal to the indeterminate:
roots(z-0.5)<br />
ans =<br />
0.5000<br />
roots(zi-0.5)<br />
ans =<br />
0.5000<br />
However, we can state the variable for roots explicitly:
roots(zi-0.5,z)
ans =<br />
2.0000<br />
The stability region for z is the interior of the unit disc
isstable(z-0.5)
ans =
1
while for z^-1 it is the exterior of the unit disc
isstable(zi-0.5)
ans =
0
isstable(zi+2)
ans =
1
Norms
Polynomials in z and z^-1 may be considered, by means of the z-transform, as
discrete-time signals of finite duration. The H2-norm of such a signal may be found
h2norm(1+z^-1+z^-2)<br />
ans =<br />
1.7321<br />
while its H∞-norm is returned by typing<br />
hinfnorm(1+z^-1+z^-2)<br />
ans =<br />
3<br />
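Both numbers are easy to reproduce outside the Toolbox. In a plain-Python sketch (not Toolbox code; the grid density is our choice), the H2-norm of the coefficient sequence {1, 1, 1} is the root sum of squares, and the H∞-norm is the peak of |f(e^{jw})| over the unit circle, attained here at w = 0:

```python
import cmath
import math

coeffs = [1, 1, 1]                        # f = 1 + z^-1 + z^-2
h2 = math.sqrt(sum(c * c for c in coeffs))

# evaluate f on a grid of the unit circle and take the peak modulus
grid = [2 * math.pi * t / 4096 for t in range(4096)]
hinf = max(abs(sum(c * cmath.exp(-1j * k * w) for k, c in enumerate(coeffs)))
           for w in grid)

assert abs(h2 - math.sqrt(3)) < 1e-12     # 1.7321...
assert abs(hinf - 3) < 1e-9               # peak at w = 0
```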
Conjugations
Conjugate transpose
The operation of conjugate transposition is connected with the scalar product of
discrete-time signals. Here the theory requires z' = z^-1 and (z^-1)' = z and, indeed,
the Polynomial Toolbox returns
z'
ans =
z^-1
or
zi'
ans =
z
For more complex polynomial matrices, the operation consists of complex
conjugation, transposition and the change of argument. So
C=[1+z 1;2 z]+i*[z 1;1 z]
C =
1+0i + (1+1i)z    1+1i
2+1i              (1+1i)z
D=C'
D =
1+0i + (1-1i)z^-1    2-1i
1-1i                 (1-1i)z^-1
Hence, the conjugate of a polynomial in z is a polynomial in z^-1 and vice versa:
D'
ans =
1+0i + (1+1i)z    1+1i
2+1i              (1+1i)z
For two-sided polynomials, the operation works alike
T=[z 2-z; 1+zi 1+z-zi]
T =
z             2 - z
z^-1 + 1      -z^-1 + 1 + z
T'
ans =
z^-1          1 + z
-z^-1 + 2     z^-1 + 1 - z
Note that in Version 3 of the Polynomial Toolbox this operation with discrete-time
polynomials is implemented more naturally than before; users familiar with older
versions should beware of this incompatibility.
Symmetric two-sided polynomial matrices
In particular, a two-sided polynomial matrix that equals its conjugate is called
(para-Hermitian) symmetric. So is, for example,
M=[zi+2+z 3*zi+4+5*z;3*z+4+5*zi zi+3+z]<br />
M =<br />
M'<br />
ans =<br />
z^-1 + 2 + z 3z^-1 + 4 + 5z<br />
5z^-1 + 4 + 3z z^-1 + 3 + z<br />
z^-1 + 2 + z 3z^-1 + 4 + 5z<br />
5z^-1 + 4 + 3z z^-1 + 3 + z<br />
Expressed in terms of matrix coefficients as
M(z) = M_{-1} z^-1 + M_0 + M_1 z,
the matrix M_0 must be symmetric and the matrices M_{-1}, M_1 must be transposes
of each other. Indeed,
M{0}
ans =
2 4
4 3
M{-1}
ans =
1 3
5 1
M{1}
ans =
1 5
3 1
or, together,
p<strong>for</strong>mat coef<br />
M<br />
Two-sided polynomial matrix in z: 2-by-2,degree: 1, tdegree: -1<br />
M =<br />
Matrix coefficient at z^-1 :<br />
1 3<br />
5 1<br />
Matrix coefficient at z^0 :<br />
2 4<br />
4 3<br />
Matrix coefficient at z^1 :<br />
1 5
3 1
In the case of complex coefficients, complex conjugacy also comes into play: in the
above, "symmetric" is replaced by "conjugate symmetric" (Hermitian symmetric) and
"transposed" by "conjugate transposed" (Hermitian transpose).
p<strong>for</strong>mat<br />
M=[(1-i)*zi+2+(1+i)*z (4+5*i)+(6+7*i)*z;(4-5*i)+(6-7*i)*zi ...<br />
(1-i)*zi+2+(1+i)*z]<br />
M =<br />
(1-1i)z^-1 + 2+0i + (1+1i)z 4+5i + (6+7i)z<br />
(6-7i)z^-1 + 4-5i (1-1i)z^-1 + 2+0i + (1+1i)z<br />
p<strong>for</strong>mat coef<br />
M<br />
Two-sided polynomial matrix in z: 2-by-2,degree: 1, tdegree: -1<br />
M =<br />
Matrix coefficient at z^-1 :<br />
1.0000 - 1.0000i 0<br />
6.0000 - 7.0000i 1.0000 - 1.0000i<br />
Matrix coefficient at z^0 :<br />
2.0000 4.0000 + 5.0000i<br />
4.0000 - 5.0000i 2.0000<br />
Matrix coefficient at z^1 :<br />
1.0000 + 1.0000i 6.0000 + 7.0000i<br />
0 1.0000 + 1.0000i<br />
Zeros of symmetric two-sided polynomial matrices
A nonnegative definite symmetric two-sided polynomial matrix M(z) with real
coefficients has its zeros distributed symmetrically with respect to the unit circle.
Thus if it has a zero z_i, it generally has a quadruple of zeros consisting of z_i, 1/z_i
and their complex conjugates, all of the same multiplicity. If z_i is real, then it is only
a couple z_i, 1/z_i. If z_i is on the unit circle, then it is also a couple z_i, 1/z_i but it
must be of even multiplicity. Finally, if z_i = ±1 then it is a singlet of even
multiplicity.
zpplot(sdf(2*z^-2+6*z^-1+9+6*z+2*z^2)), grid<br />
zpplot(sdf(z^-2+2*sqrt(2)*z^-1+4+2*sqrt(2)*z+z^2)), grid
zpplot(sdf(z^-2+4*z^-1+6+4*z+z^2)), grid<br />
For polynomial matrices with complex coefficients, the picture is simpler: generally a
couple consisting of z_i and 1/conj(z_i),
zpplot(sdf((1-i)*z^-1+3+(1+i)*z)), grid
and, in the unit circle case (where z_i = 1/conj(z_i)), a singlet with even multiplicity
a=sqrt(2)/2; zpplot(sdf(a*(1-i)*z^-1+2+a*(1+i)*z)), grid<br />
Matrix equations with discrete-time and two-sided polynomials<br />
All the various matrix equations introduced in Chapter 2, Polynomial matrices, are of
the same use for discrete-time and two-sided polynomials as well. Moreover, most of
them look identical for standard and discrete-time polynomials. Only equations
that include conjugation differ. Due to the different meaning of discrete-time
conjugation, discrete-time polynomials in different operators as well as two-sided
polynomials may meet in such an equation.
Symmetric bilateral matrix equations
A frequently encountered polynomial matrix equation with conjugation is
A'X + X'A = B
where A, X are polynomial matrices in z while A', X' are polynomial matrices in z^-1,
and B is a symmetric two-sided polynomial matrix in z and z^-1. As usual, A (and
hence A') is given, B is also given, while X is to be computed along with its conjugate
X'.
<strong>The</strong> <strong>Polynomial</strong> <strong>Toolbox</strong> function to solve this equation is conveniently named axxab.<br />
Hence, given the polynomial matrix<br />
A=[1+5*z z;0 1-2*z]<br />
A =<br />
1 + 5z z<br />
0 1 - 2z<br />
and the symmetric two-sided polynomial matrix<br />
B=[7*z^-1+24+7*z,0;0 4*z^-1-10+4*z]
B =
7z^-1 + 24 + 7z    0
0                  4z^-1 - 10 + 4z
the call
X=axxab(A,B)<br />
X =
0.96 + 2.2z    -0.49z
0.23           -1.2 + 1.7z
solves the equation A'X + X'A = B for the data above.
It is easy to check that<br />
A'*X+X'*A-B<br />
ans =<br />
0 0<br />
0 0<br />
Another symmetric equation with conjugation is
XA' + AX' = B
where again A, X are polynomial matrices in z, A', X' are polynomial matrices in
z^-1, and B is a symmetric two-sided polynomial matrix. As before, A (and hence A')
is given and B is also given. The unknown X along with its conjugate X' can be
computed using the Polynomial Toolbox function xaaxb
X=xaaxb(A,B)<br />
X =<br />
1 + 2.2z -0.2<br />
0.2 - 0.4z -1 + 2z<br />
Non-symmetric equation
Equations with conjugations may also be non-symmetric, such as
a'x + y'a = b
where a is a given polynomial in z, a' is a polynomial in z^-1 (also given, as it is the
conjugate of a) and b is a given two-sided polynomial, not necessarily symmetric.
The equation has two unknowns x, y, both polynomials in z. Notice that the second
unknown appears in the equation through its conjugate y', which is actually a
polynomial in z^-1. This equation is of special use only and hence only a scalar
version of its solver is programmed in the Polynomial Toolbox, the function axyab.
Given the polynomial
a=(1+2*z)*(1-3*z)<br />
a =<br />
1 - z - 6z^2<br />
and a non-symmetric two-sided polynomial<br />
b=z^-1 +2*z<br />
b =<br />
z^-1 + 2z<br />
the solution is computed by calling<br />
[x,y] = axyab(a,b)<br />
x =
0.006 - 0.24z + 0.071z^2
y =
0.012 - 0.39z + 0.036z^2
Discrete-time spectral factorization
Special quadratic equations called spectral factorizations also contain conjugations, as
explained in Chapter 2, Polynomial matrices. In discrete-time spectral factorizations,
therefore, polynomials in z meet polynomials in z^-1 as well as two-sided
polynomials.
This happens in the spectral factorization
A(z^-1, z) = X'(z)X(z)
as well as in the spectral co-factorization
A(z^-1, z) = X(z)X'(z).
By way of illustration consider the symmetric two-sided polynomial
b =
-6z^-2 + 5z^-1 + 38 + 5z - 6z^2
with the roots
roots(b)
ans =
3.0000
-2.0000
-0.5000
0.3333
Its spectral factorization follows by typing
x=spf(b)
x =
-1 + z + 6z^2
Indeed,
roots(x)
ans =
-0.5000
0.3333
and
x*x'
ans =
-6z^-2 + 5z^-1 + 38 + 5z - 6z^2
When working with polynomials in<br />
typing<br />
xi=pol(x*zi^deg(x))<br />
xi =<br />
<strong>The</strong>n again<br />
xi*xi'<br />
ans =<br />
but, naturally,<br />
6 + z^-1 - z^-2<br />
-6z^-2 + 5z^-1 + 38 + 5z - 6z^2<br />
roots(xi)<br />
ans =<br />
3.0000<br />
-2.0000<br />
1<br />
z , you may convert the result in those simply by<br />
67
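The defining property of the spectral factor can also be checked outside the toolbox: for a scalar discrete-time factorization b = x'x, the two-sided coefficients of b are just the autocorrelation of the coefficient vector of x. A plain-Python sketch of that check (coefficient lists ordered by ascending power of z; not toolbox code):

```python
# Sketch: coefficients of x(z) * x(z^-1), i.e. the autocorrelation of the
# coefficient vector of x, listed for lags -n .. n.

def two_sided_product(x):
    n = len(x) - 1
    return [sum(x[i] * x[i - k] for i in range(len(x)) if 0 <= i - k < len(x))
            for k in range(-n, n + 1)]

x = [-1.0, 1.0, 6.0]          # x = -1 + z + 6z^2, as computed by spf above
b = two_sided_product(x)      # coefficients of z^-2 .. z^2
print(b)                      # [-6.0, 5.0, 38.0, 5.0, -6.0]
```

The printed list matches the two-sided polynomial b = -6z^-2 + 5z^-1 + 38 + 5z - 6z^2 of the example.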
Resampling<br />
Sampling<br />
period<br />
Discrete-time and two-sided polynomials often describe discrete-time signals of finite<br />
duration. In this way, a discrete-time signal (sequence) f_t , t = -m, ..., n can be expressed as<br />
f = f_-m z^m + f_-m+1 z^(m-1) + ... + f_-1 z + f_0 + f_1 z^-1 + f_2 z^-2 + ... + f_n z^-n<br />
where the powers of z and z^-1 actually serve as position markers. In particular, polynomials<br />
in z^-1 describe causal signals that are zero for t &lt; 0 while polynomials in z<br />
represent anti-causal signals that are zero for t &gt; 0.<br />
Every polynomial f in z or z^-1 bears a sampling period h. Unless specified<br />
otherwise, the default sampling period is used, which equals 1. The sampling period can<br />
be accessed and changed by the function props<br />
f=1+0.5*zi<br />
f =<br />
1 + 0.5z^-1<br />
props(f)<br />
POLYNOMIAL MATRIX<br />
size 1-by-1<br />
degree 1<br />
PROPERTY NAME: CURRENT VALUE: AVAILABLE VALUES:<br />
variable symbol z^-1 's','p','z^-1','d','z','q'<br />
sampling period 1 0 <strong>for</strong> c-time, nonneg or [] <strong>for</strong> d-time<br />
user data [] arbitrary<br />
props(f,2)<br />
POLYNOMIAL MATRIX<br />
size 1-by-1<br />
degree 1<br />
PROPERTY NAME: CURRENT VALUE: AVAILABLE VALUES:<br />
variable symbol z^-1 's','p','z^-1','d','z','q'<br />
sampling period 2 0 <strong>for</strong> c-time, nonneg or [] <strong>for</strong> d-time<br />
user data [] arbitrary<br />
or, better, by a structure-like notation, simply using f.h<br />
f=1+0.5*zi<br />
f =<br />
1 + 0.5z^-1<br />
f.h<br />
ans =<br />
1<br />
f.h=2<br />
f =<br />
1 + 0.5z^-1<br />
f.h<br />
ans =<br />
2<br />
Resampling of<br />
polynomials in<br />
z^-1<br />
Resampling<br />
and phase<br />
In arithmetic operations and polynomial equation solvers, all input polynomials must<br />
have the same sampling period; otherwise a warning message is issued.<br />
Polynomials in z^-1 represent, by means of the z-transform, causal discrete-time signals<br />
of finite duration. They may, or may not, be considered as a result of sampling<br />
continuous-time signals. Despite that, they can be "resampled". For example,<br />
resampling by 3 means taking only every third sample:<br />
f=1+0.9*zi+0.8*zi^2+0.7*zi^3+0.6*zi^4+0.5*zi^5+0.4*zi^6+0.3*<br />
zi^7+0.2*zi^8<br />
f =<br />
1 + 0.9z^-1 + 0.8z^-2 + 0.7z^-3 + 0.6z^-4 + 0.5z^-5 +<br />
0.4z^-6 + 0.3z^-7 + 0.2z^-8<br />
g=resamp(f,3)<br />
g =<br />
1 + 0.7z^-1 + 0.4z^-2<br />
g.h<br />
ans =<br />
3<br />
In the resulting g, the indeterminate z^-1 still means the delay operator, but the<br />
delay, when measured in continuous-time units, is three times longer than that for f.<br />
This is captured by the fact that g.h is three times f.h .<br />
Resampling with ratio 3 can be per<strong>for</strong>med with "phase" 0,1 or 2, the default being 0:<br />
g0=resamp(f,3,0)<br />
g0 =<br />
1 + 0.7z^-1 + 0.4z^-2<br />
g1=resamp(f,3,1)<br />
g1 =<br />
0.9 + 0.6z^-1 + 0.3z^-2<br />
g2=resamp(f,3,2)<br />
g2 =<br />
0.8 + 0.5z^-1 + 0.2z^-2<br />
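In terms of coefficient lists (ascending powers of z^-1), resampling with ratio k and phase p simply keeps every k-th coefficient starting from position p. A plain-Python sketch of this bookkeeping (an illustration, not the toolbox implementation of resamp):

```python
# Sketch: resampling a polynomial in z^-1, given as a coefficient list
# [c0, c1, c2, ...] for c0 + c1*z^-1 + c2*z^-2 + ..., with ratio k, phase p.

def resamp_coeffs(f, k, p=0):
    return f[p::k]   # keep coefficients at positions p, p+k, p+2k, ...

f = [1, .9, .8, .7, .6, .5, .4, .3, .2]   # f = 1 + 0.9z^-1 + ... + 0.2z^-8
print(resamp_coeffs(f, 3))      # [1, 0.7, 0.4]
print(resamp_coeffs(f, 3, 1))   # [0.9, 0.6, 0.3]
print(resamp_coeffs(f, 3, 2))   # [0.8, 0.5, 0.2]
```

The three outputs reproduce the coefficients of g0, g1 and g2 above.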
Resampling of<br />
two-sided<br />
polynomials<br />
Resampling of<br />
polynomials<br />
in z<br />
Dilating<br />
Two-sided polynomials are considered as signals in discrete time t which may be<br />
nonzero both for t &lt; 0 and t &gt; 0. Following the usual convention of the z-transform,<br />
the ordering is by powers of z^-1. This is followed in resampling of two-sided<br />
polynomials with a ratio and a phase:<br />
T=1.3*z^3+1.2*z^2+1.1*z+1+0.9*zi+0.8*zi^2+0.7*zi^3<br />
T =<br />
0.7z^-3 + 0.8z^-2 + 0.9z^-1 + 1 + 1.1z + 1.2z^2 +<br />
1.3z^3<br />
resamp(T,3)<br />
ans =<br />
0.7z^-1 + 1 + 1.3z<br />
resamp(T,3,1)<br />
ans =<br />
0.9 + 1.2z<br />
resamp(T,3,2)<br />
ans =<br />
0.8 + 1.1z<br />
Polynomials in z are considered as discrete-time signals for t &lt;= 0 (anticausal). To be<br />
compatible with the cases described above, their resampling works as follows:<br />
W=1.5*z^5+1.4*z^4+1.3*z^3+1.2*z^2+1.1*z+1<br />
W =<br />
1 + 1.1z + 1.2z^2 + 1.3z^3 + 1.4z^4 + 1.5z^5<br />
resamp(W,3)<br />
ans =<br />
1 + 1.3z<br />
resamp(W,3,1)<br />
ans =<br />
1.2z + 1.5z^2<br />
resamp(W,3,2)<br />
ans =<br />
1.1z + 1.4z^2<br />
Dilating is a process in some sense inverse to resampling. With<br />
g0,g1,g2<br />
g0 =<br />
1 + 0.7z^-1 + 0.4z^-2<br />
g1 =<br />
0.9 + 0.6z^-1 + 0.3z^-2<br />
g2 =<br />
0.8 + 0.5z^-1 + 0.2z^-2<br />
computed before, we have<br />
f0=dilate(g0,3,0)<br />
f0 =<br />
1 + 0.7z^-3 + 0.4z^-6<br />
f1=dilate(g1,3,1)<br />
f1 =<br />
0.9z^-1 + 0.6z^-4 + 0.3z^-7<br />
f2=dilate(g2,3,2)<br />
f2 =<br />
0.8z^-2 + 0.5z^-5 + 0.2z^-8<br />
and<br />
f=f0+f1+f2<br />
f =<br />
1 + 0.9z^-1 + 0.8z^-2 + 0.7z^-3 + 0.6z^-4 + 0.5z^-5<br />
+ 0.4z^-6 + 0.3z^-7 + 0.2z^-8<br />
restores the original f . Note that<br />
g0.h,f.h<br />
ans =<br />
3<br />
ans =<br />
1<br />
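In coefficient terms, dilating with ratio k and phase p spreads the coefficients of g onto positions p, p+k, p+2k, ... with zeros in between, and summing the dilations of g0, g1, g2 over phases 0, 1, 2 restores f. A plain-Python sketch of this arithmetic (an illustration, not the toolbox's dilate):

```python
# Sketch: dilation of a coefficient list g (ascending powers of z^-1)
# with ratio k and phase p, plus coefficient-wise addition.

def dilate_coeffs(g, k, p=0):
    out = [0.0] * ((len(g) - 1) * k + p + 1)
    for i, c in enumerate(g):
        out[i * k + p] = c       # place g's i-th coefficient at position i*k + p
    return out

def add_coeffs(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

g0, g1, g2 = [1, .7, .4], [.9, .6, .3], [.8, .5, .2]
f = add_coeffs(add_coeffs(dilate_coeffs(g0, 3, 0),
                          dilate_coeffs(g1, 3, 1)),
               dilate_coeffs(g2, 3, 2))
print(f)   # [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
```

The sum reproduces the coefficient list of the original f from the resampling example.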
4 <strong>The</strong> <strong>Polynomial</strong> Matrix Editor<br />
Introduction<br />
Quick start<br />
<strong>The</strong> <strong>Polynomial</strong> Matrix Editor (PME) is recommended <strong>for</strong> creating and editing<br />
polynomial and standard <strong>MATLAB</strong> matrices of medium to large size, say from about<br />
4-by-4 to 30-by-35.<br />
Matrices of smaller size can easily be handled in the <strong>MATLAB</strong><br />
command window with the help of monomial functions, overloaded concatenation,<br />
and various applications of subscripting and subassigning. On the other hand,<br />
opening a matrix larger than 30-by-35 in the PME results in a window that is difficult<br />
to read.<br />
Type pme to open the main window called <strong>Polynomial</strong> Matrix Editor. This window<br />
displays all polynomial matrices (POL objects) and all standard MATLAB matrices<br />
(2-dimensional DOUBLE arrays) that exist in the main MATLAB workspace. It also<br />
allows you to create a new polynomial or standard <strong>MATLAB</strong> matrix. In the <strong>Polynomial</strong><br />
Matrix Editor window you can<br />
create a new polynomial matrix, by typing its name and size in the first (editable)<br />
line and then clicking the Open button<br />
modify an existing polynomial matrix while retaining its size and other<br />
properties: To do this just find the matrix name in the list and then double click<br />
the particular row<br />
modify an existing polynomial matrix to a large extent (<strong>for</strong> instance by changing<br />
also its name, size, variable symbol, etc.): To do this, first find the matrix name in<br />
the list and then click on the corresponding row to move it up to the editable row.<br />
Next type in the new required properties and finally click Open<br />
Each of these actions opens another window called Matrix Pad that serves <strong>for</strong> editing<br />
the particular matrix.<br />
In the Matrix Pad window the matrix entries are displayed as boxes. If an entry is too<br />
long so that it cannot be completely displayed then the corresponding box takes a<br />
slightly different color (usually more pinkish).<br />
To edit an entry just click on its box. <strong>The</strong> box becomes editable and large enough to<br />
display its entire content. You can type into the box anything that complies with the<br />
<strong>MATLAB</strong> syntax and that results in a scalar polynomial or constant. Of course you can<br />
use existing <strong>MATLAB</strong> variables, functions, etc. <strong>The</strong> program is even more intelligent<br />
and handles some notation going beyond the <strong>MATLAB</strong> syntax. For example, you can<br />
drop the * (times) operator between a coefficient and the related polynomial symbol<br />
(provided that the coefficient comes first). Thus, you can type 2s as well as 2*s.<br />
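The effect of this shorthand can be mimicked by a simple substitution that inserts the implicit * between a numeric coefficient and the following symbol. This is only a hypothetical illustration of the idea, not the editor's actual parser:

```python
# Hypothetical sketch of the PME shorthand: "2s" is accepted like "2*s"
# because an implicit * is inserted between a digit and a following letter.
import re

def expand_shorthand(expr):
    return re.sub(r'(\d)\s*([a-zA-Z])', r'\1*\2', expr)

print(expand_shorthand('2s'))          # 2*s
print(expand_shorthand('1+3s+2s^2'))   # 1+3*s+2*s^2
```

Exponents such as s^2 are untouched, since the * is only inserted when a digit directly precedes a symbol.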
Main window<br />
Main window<br />
buttons<br />
Main window<br />
menus<br />
To complete editing the entry push the Enter key or close the box by using the mouse.<br />
If you have entered an expression that cannot be processed then an error message is<br />
reported and the original box content is recovered.<br />
To bring the newly created or modified matrix into the <strong>MATLAB</strong> workspace finally<br />
click Save or Save As.<br />
<strong>The</strong> main <strong>Polynomial</strong> Matrix Editor window is shown in Fig. 2.<br />
<strong>The</strong> main window contains the following buttons:<br />
Open Clicking the Open button opens a new Matrix Pad <strong>for</strong> the matrix specified<br />
by the first (editable) line. At most four matrix pads can be open at the same<br />
time.<br />
Refresh Clicking the Refresh button updates the list of matrices to reflect<br />
changes in the workspace since opening the PME or since it was last refreshed.<br />
Close Clicking the Close button terminates the current PME session.<br />
<strong>The</strong> main PME window offers the following menus:<br />
Menu List Using the menu List you can set the type of matrices listed in the main<br />
window. <strong>The</strong>y may be polynomial matrices (POL objects) – default – or standard<br />
<strong>MATLAB</strong> matrices or both at the same time.<br />
Menu Symbol Using the menu Symbol you can choose the symbol (such as s,z,...)<br />
that is by default offered <strong>for</strong> the 'New' matrix item of the list.<br />
List menu to<br />
control what<br />
matrices are<br />
displayed<br />
Editable row to<br />
create or modify<br />
properties<br />
List of existing<br />
matrices<br />
Button to open<br />
new matrix pad<br />
Symbol menu to<br />
change default<br />
symbol<br />
Fig. 2. Main window of the <strong>Polynomial</strong> Matrix Editor<br />
Properties of<br />
edited matrix<br />
Button to close<br />
PME
Matrix Pad window<br />
Matrix Pad<br />
buttons<br />
To edit a matrix use one of the ways described above to open a Matrix Pad window <strong>for</strong><br />
it. This window is show in Fig. 3.<br />
<strong>The</strong> Matrix Pad window consists of boxes <strong>for</strong> the entries of the polynomial matrix. If<br />
some entry is too long and cannot be completely displayed in the particular box then<br />
its box takes a slightly different color.<br />
To edit an entry you can click on its box. <strong>The</strong> box pops up, becomes editable and large<br />
enough to display its whole content. You can type into the box anything that complies<br />
with the <strong>MATLAB</strong> syntax and results in a scalar polynomial or constant. Of course,<br />
you can use all existing <strong>MATLAB</strong> variables, functions, etc. <strong>The</strong> editor is even a little<br />
more intelligent: it handles some notation going beyond the <strong>MATLAB</strong> syntax. For<br />
example, you can drop the * (times) operator between a coefficient and the related<br />
polynomial symbol (provided that the coefficient comes first). Thus, you can type 2s<br />
as well as 2*s. Experiment a little to learn more.<br />
When editing is completed, push the Enter key on your keyboard or close the box by<br />
mouse. If you have entered an expression that cannot be processed, then an error<br />
message appears and the original content of the box is restored.<br />
When the matrix is ready bring it into the <strong>MATLAB</strong> workspace by clicking either the<br />
Save or the Save As button.<br />
<strong>The</strong> following buttons control the Matrix Pad window:<br />
Save button: Clicking Save copies the Matrix Pad contents into the <strong>MATLAB</strong><br />
workspace.<br />
Save As button: Clicking Save As copies the matrix from Matrix Pad into the<br />
<strong>MATLAB</strong> workspace under another name.<br />
Browse button: Clicking Browse moves the cursor to the main window (the same<br />
as directly clicking there).<br />
Close Button: Clicking Close closes the Matrix Pad window.<br />
Box with<br />
completely<br />
displayed<br />
contents<br />
Box with<br />
incompletely<br />
displayed<br />
contents<br />
Buttons to save<br />
the matrix<br />
Name of edited<br />
matrix<br />
Button to<br />
activate the<br />
main window<br />
Fig. 3. Matrix Pad with an opened editable box<br />
Properties of<br />
edited matrix<br />
Opened editable<br />
box<br />
Button to close<br />
Matrix Pad
5 <strong>Polynomial</strong> matrix fractions<br />
Introduction<br />
From one point of view, polynomials behave like integer numbers: they can be added<br />
and multiplied but generally not divided. Instead, the concepts of divisibility,<br />
greatest common divisors and least common multiples are used.<br />
To be able to divide integer numbers, we create new objects: fractions<br />
like 1/2, 2/3, etc. Similarly, polynomial fractions and polynomial matrix fractions are<br />
created.<br />
In this chapter, polynomial matrix fractions are introduced and operations on them<br />
are described. To satisfy as many needs of control engineers and theorists as possible,<br />
four kinds of fractions are implemented in the Polynomial Toolbox: scalar-denominator<br />
fractions (sdf), matrix-denominator fractions (mdf), left-denominator fractions<br />
(ldf) and right-denominator fractions (rdf). All of them are nothing more<br />
than various forms of the same mathematical entities (fractions); each of them can be<br />
converted to any other.<br />
In systems, signals and control, polynomial matrix fractions are typically used to<br />
describe rational transfer matrices. For more details, see Chapter 5, LTI<br />
systems.<br />
<strong>The</strong> following general rule is adopted <strong>for</strong> all kinds of fractions. <strong>The</strong> standard <strong>MATLAB</strong><br />
matrices, polynomials and two-sided polynomials may contain Inf's or NaN's in their<br />
entries. <strong>The</strong> fractions, however, may not. Moreover, the denominators must not be<br />
zero or singular, as will be explained later <strong>for</strong> individual kinds of matrix polynomial<br />
fractions.<br />
Scalar-denominator-fractions<br />
<strong>The</strong>se objects have a matrix numerator and a scalar denominator. <strong>The</strong>y usually arise<br />
as inverses of (square nonsingular) polynomial matrices:<br />
P=[1 s s^2; 1+s s 1-s;0 -1 -s]<br />
P =<br />
1        s     s^2<br />
1 + s    s     1 - s<br />
0        -1    -s<br />
F=inv(P)<br />
F =<br />
-1 + s + s^2    0     -s + s^2 + s^3<br />
-s - s^2        s     1 - s - s^2 - s^3<br />
1 + s           -1    s^2<br />
----------------------------------------<br />
-1 + s + s^2<br />
<strong>The</strong> numerator and the denominator can be accessed or changed by F.num and F.den<br />
F.num<br />
ans =<br />
-1 + s + s^2    0     -s + s^2 + s^3<br />
-s - s^2        s     1 - s - s^2 - s^3<br />
1 + s           -1    s^2<br />
F.den<br />
ans =<br />
-1 + s + s^2<br />
<strong>The</strong>y are also returned by special functions NUM and DEN:<br />
num(F),den(F)<br />
ans =<br />
-1 + s + s^2    0     -s + s^2 + s^3<br />
-s - s^2        s     1 - s - s^2 - s^3<br />
1 + s           -1    s^2<br />
ans =<br />
-1 + s + s^2<br />
Recall that F.n and F.d are, up to a constant multiple, equal to adj(P) and det(P)<br />
adj(P)<br />
ans =<br />
1 - s - s^2    0     s - s^2 - s^3<br />
s + s^2        -s    -1 + s + s^2 + s^3<br />
-1 - s         1     -s^2<br />
det(P)<br />
ans =<br />
1 - s - s^2<br />
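The determinant above is easy to spot-check numerically outside MATLAB by evaluating det(P(s)) at a few sample points; a plain-Python sketch (not toolbox code):

```python
# Numerical spot-check that det(P(s)) = 1 - s - s^2 for the matrix P above.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

for s in [0.0, 1.0, -2.0, 0.5]:
    P = [[1, s, s**2], [1 + s, s, 1 - s], [0, -1, -s]]
    assert abs(det3(P) - (1 - s - s**2)) < 1e-12
print('det(P) matches 1 - s - s^2 at all sample points')
```

Note that F.den above is -1 + s + s^2, i.e. the determinant up to the constant factor -1, consistent with "up to a constant multiple".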
Sdf objects also arise as negative powers of polynomial matrices<br />
Q=[1 1;1-s s]<br />
Q =<br />
1        1<br />
1 - s    s<br />
Q^-1<br />
ans =<br />
0.5s           -0.5<br />
-0.5 + 0.5s    0.5<br />
-------------------<br />
-0.5 + s<br />
Q^-2<br />
<strong>The</strong> sdf<br />
command<br />
Comparison of<br />
fractions<br />
ans =<br />
0.25 - 0.25s + 0.25s^2    -0.25 - 0.25s<br />
-0.25 + 0.25s^2           0.5 - 0.25s<br />
----------------------------------------<br />
0.25 - s + s^2<br />
and alike. It can be verified that<br />
Q*Q^-1<br />
ans =<br />
1    0<br />
0    1<br />
Q^2*Q^-2<br />
ans =<br />
1    0<br />
0    1<br />
and so on.<br />
Scalar-denominator-fractions may also be entered in terms of their numerator<br />
matrices and denominator polynomials. For this purpose the sdf command is<br />
available. Typing<br />
N = [-1+s+s^2 0 -s+s^2+s^3;-s-s^2 s 1-s-s^2-s^3;1+s -1 s^2];<br />
d = -1+s+s^2;<br />
sdf(N,d)<br />
<strong>for</strong> instance, defines the scalar-denominator-fraction<br />
ans =<br />
-1 + s + s^2 0 -s + s^2 + s^3<br />
-s - s^2 s 1 - s - s^2 - s^3<br />
1 + s -1 s^2<br />
----------------------------------------<br />
-1 + s + s^2<br />
Note that two fractions need not have the same numerator and the same denominator<br />
to be equal; common polynomial factors may be present:<br />
G=(s-1)*inv(s^2-1)<br />
G =<br />
-1 + s<br />
--------<br />
-1 + s^2<br />
H=inv(1+s)<br />
H =<br />
1<br />
-----<br />
1 + s<br />
Coprime<br />
Reduce<br />
General<br />
properties<br />
cop, red<br />
G==H<br />
ans =<br />
1<br />
Every fraction can be converted to the coprime case<br />
K=coprime(G)<br />
K =<br />
1<br />
-----<br />
1 + s<br />
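That G and H are the same fraction despite different numerators and denominators can be spot-checked numerically; a plain-Python sketch (not toolbox code):

```python
# Numerical spot-check that (s - 1)/(s^2 - 1) and 1/(1 + s) define the same
# fraction wherever both are defined (the common factor s - 1 cancels).
for s in [0.5, 2.0, -3.0, 10.0]:
    assert abs((s - 1) / (s**2 - 1) - 1 / (1 + s)) < 1e-12
print('G and H agree at all sample points')
```

The coprime conversion performed by the toolbox removes such common factors symbolically rather than numerically.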
If one does not care about common factors and only needs to make the leading (or<br />
trailing) denominator coefficient unity, the fraction may be reduced:<br />
Nct=sdf(1,2+3*s)<br />
Nct =<br />
1<br />
------<br />
2 + 3s<br />
reduce(Nct)<br />
ans =<br />
0.33<br />
--------<br />
0.67 + s<br />
Ndt=sdf(1,2+3*zi)<br />
Ndt =<br />
1<br />
---------<br />
2 + 3z^-1<br />
reduce(Ndt)<br />
ans =<br />
0.5<br />
-----------<br />
1 + 1.5z^-1<br />
Of course, an equal fraction must result, so only one of the denominator<br />
coefficients can be made unity, depending on the indeterminate.<br />
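The normalization itself is just a division of both coefficient lists by the chosen denominator coefficient; a plain-Python sketch (an illustration, not the toolbox's reduce):

```python
# Sketch of the reduction: divide numerator and denominator coefficients by a
# chosen denominator coefficient - the leading one (index -1) for polynomials
# in s, the constant one (index 0) for polynomials in z^-1.

def reduce_fraction(num, den, index):
    c = den[index]
    return [x / c for x in num], [x / c for x in den]

# continuous time: 1/(2 + 3s), normalize the leading coefficient of s
print(reduce_fraction([1], [2, 3], index=-1))   # num ~ [0.333], den ~ [0.667, 1.0]
# discrete time: 1/(2 + 3z^-1), normalize the constant coefficient
print(reduce_fraction([1], [2, 3], index=0))    # ([0.5], [1.0, 1.5])
```

The two outputs reproduce the reduced forms 0.33/(0.67 + s) and 0.5/(1 + 1.5z^-1) shown above.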
Results of operations with fractions are generally not guaranteed to be coprime and<br />
reduced, even if the operands are; it depends on the algorithm used. Sometimes it is<br />
convenient to require such cancelling and reducing after every operation; it can be<br />
Reverse<br />
managed by switching the <strong>Polynomial</strong> <strong>Toolbox</strong> into a special mode of operation that is<br />
achieved by setting the general properties<br />
gprops('cop','red')<br />
<strong>The</strong> default state is restored by<br />
gprops('ncop','nred')<br />
To display the status (together with all other general properties), type<br />
gprops<br />
Global polynomial properties:<br />
PROPERTY NAME: CURRENT VALUE: AVAILABLE VALUES:<br />
variable symbol s 's','p','z^-1','d','z','q'<br />
zeroing tolerance 1e-008 any real number from 0 to 1<br />
verbose level no 'no', 'yes'<br />
display format nice 'symb', 'symbs', 'nice', 'symbr',<br />
'block', 'coef', 'rcoef', 'rootr', 'rootc'<br />
display order normal 'normal', 'reverse'<br />
coprime flag ncop 'cop', 'ncop'<br />
reduce flag nred 'red', 'nred'<br />
defract flag ndefr 'defr', 'ndefr'<br />
discrete variable disz 'disz', 'disz^-1'<br />
continuous variable conts 'conts', 'contp'<br />
This option should be used carefully as it increases the computation time. The test for<br />
coprimeness is influenced by the zeroing tolerance, either globally<br />
tol=1.e-5;<br />
gprops(tol)<br />
or locally<br />
coprime(G,tol);<br />
The same fraction may be expressed both in z and in z^-1. To convert each form to the<br />
other, the reverse command is used:<br />
Q=sdf(2-zi,1-.9*zi)<br />
Q =<br />
2 - z^-1<br />
-----------<br />
1 - 0.9z^-1<br />
R=reverse(Q)<br />
R =<br />
-1 + 2z<br />
--------<br />
-0.9 + z<br />
Q==R<br />
ans =<br />
1<br />
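For a scalar fraction, substituting z for z^-1 amounts to reversing the numerator and denominator coefficient lists, padded to a common length; a plain-Python sketch (an illustration, not the toolbox's reverse):

```python
# Sketch: reversing a fraction in z^-1. num and den are coefficient lists in
# ascending powers of z^-1; the result lists coefficients in ascending powers of z.

def reverse_fraction(num, den):
    n = max(len(num), len(den))
    num = num + [0.0] * (n - len(num))   # pad to a common length
    den = den + [0.0] * (n - len(den))
    return num[::-1], den[::-1]

# Q = (2 - z^-1)/(1 - 0.9z^-1)  ->  R = (-1 + 2z)/(-0.9 + z)
print(reverse_fraction([2, -1], [1, -0.9]))   # ([-1, 2], [-0.9, 1])
```

The output matches the numerator and denominator of R displayed above.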
Be careful when using reverse with fractions in operators other than z or z^-1. For<br />
a fraction in s, as no polynomials in s^-1 exist, the result is again in s, and Q==R no<br />
longer holds:<br />
Q=(2+s)*inv(3+s)<br />
Q =<br />
2 + s<br />
-----<br />
3 + s<br />
R=reverse(Q)<br />
R =<br />
1 + 2s<br />
------<br />
1 + 3s<br />
Q==R<br />
ans =<br />
0<br />
Matrix-denominator-fractions<br />
These objects have both a matrix numerator and a matrix denominator. They are<br />
especially convenient for rational matrices, as one can see the numerators of<br />
all rational entries separately in the numerator matrix and, similarly, all the individual<br />
denominators in the denominator matrix.<br />
<strong>The</strong>y usually arise from array (entry-by-entry) division of polynomial matrices:<br />
N=[1,1;s,s-1],D=[s s-1; 2 s+1]<br />
N =<br />
1    1<br />
s    -1 + s<br />
D =<br />
s    -1 + s<br />
2    1 + s<br />
F=N./D<br />
F =<br />
1       1<br />
-       ------<br />
s       -1 + s<br />
        -1 + s<br />
0.5s    ------<br />
        1 + s<br />
The mdf<br />
command<br />
<strong>The</strong> numerator and the denominator can be accessed or changed by F.num and F.den<br />
F.num,F.den<br />
ans =<br />
1       1<br />
0.5s    -1 + s<br />
ans =<br />
s    -1 + s<br />
1    1 + s<br />
Matrix-denominator fractions may also be entered in terms of their numerator<br />
and denominator matrices:<br />
F = mdf([s+2 s+3], [s+3,s+1])<br />
F =<br />
2 + s 3 + s<br />
----- -----<br />
3 + s 1 + s<br />
Needless to say, all entries of the denominator matrix must be nonzero. All<br />
functions described above for sdf work for mdf as well.<br />
Left and right polynomial matrix fractions<br />
Mathematicians sometimes like to describe matrices of fractions (rational matrices)<br />
as fractions of two polynomial matrices. So the 2-by-3 rational matrix<br />
           s^2     -s       1<br />
          -----   -----   -----<br />
          1 + s   1 + s   1 + s<br />
R(s) =<br />
            1       1       1<br />
          -----   -----   -----<br />
          1 + s   1 + s   1 + s<br />
can either be described by the polynomial matrices<br />
          1 + s   0       0                s^2   -s   1<br />
Ar(s) =   0       1 + s   0      Br(s) =<br />
          0       0       1 + s            1     1    1<br />
as the so called right matrix fraction<br />
R(s) = Br(s) Ar(s)^-1<br />
or by two other polynomial matrices<br />
          1   s                  s   0   1<br />
Al(s) =              Bl(s) =<br />
          0   1 + s              1   1   1<br />
as the so called left matrix fraction<br />
R(s) = Al(s)^-1 Bl(s).<br />
Right-<br />
denominator-<br />
fraction<br />
The rdf<br />
command<br />
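That the left and right fractions describe the same rational matrix can be spot-checked numerically by evaluating Al(s)^-1 Bl(s) and Br(s) Ar(s)^-1 at a few sample points; a plain-Python sketch (not toolbox code):

```python
# Numerical spot-check that Al(s)^-1 Bl(s) = Br(s) Ar(s)^-1 for the matrices above.

def left(s):
    # Al = [[1, s], [0, 1+s]], Bl = [[s, 0, 1], [1, 1, 1]]
    det = 1.0 * (1 + s)                       # det(Al)
    inv = [[(1 + s) / det, -s / det],         # explicit 2x2 inverse of Al
           [0.0, 1 / det]]
    Bl = [[s, 0, 1], [1, 1, 1]]
    return [[sum(inv[i][k] * Bl[k][j] for k in range(2)) for j in range(3)]
            for i in range(2)]

def right(s):
    # Br = [[s^2, -s, 1], [1, 1, 1]], Ar = (1+s) * eye(3), so Br/Ar = Br/(1+s)
    Br = [[s**2, -s, 1], [1, 1, 1]]
    return [[Br[i][j] / (1 + s) for j in range(3)] for i in range(2)]

for s in [0.5, 2.0, -3.0]:
    L, R = left(s), right(s)
    assert all(abs(L[i][j] - R[i][j]) < 1e-12 for i in range(2) for j in range(3))
print('left and right fractions agree at all sample points')
```

Symbolically, Al^-1 Bl = [[s^2, -s, 1], [1, 1, 1]]/(1+s), which is exactly Br Ar^-1.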
Such polynomial matrix fractions are now often used in automatic control<br />
and other engineering fields. To handle the right and the left polynomial matrix<br />
fractions, the Polynomial Toolbox is equipped with two objects, rdf and ldf. These<br />
objects have both a matrix numerator and a matrix denominator (the right one or the<br />
left one).<br />
<strong>The</strong>se objects are usually entered using standard Matlab division operator (slash)<br />
Ar=[1+s,0,0;0,1+s,0;0,0,1+s],Br=[s^2,-s,1;1,1,1]<br />
Ar =<br />
1 + s    0        0<br />
0        1 + s    0<br />
0        0        1 + s<br />
Br =<br />
s^2    -s    1<br />
1      1     1<br />
R=Br/Ar<br />
R =<br />
s^2    -s    1  /  1 + s    0        0<br />
1      1     1  /  0        1 + s    0<br />
                /  0        0        1 + s<br />
Right-denominator-fractions may also be entered in terms of their numerator and<br />
right denominator matrices<br />
F = rdf(Br,Ar)<br />
F =<br />
s^2 -s 1 / 1 + s 0 0<br />
1 1 1 / 0 1 + s 0<br />
/ 0 0 1 + s<br />
Note the order of the input arguments, which come as they appear in the right fraction.<br />
Needless to say, the denominator matrix must be nonsingular but may<br />
possess zero entries.<br />
Leftdenominatorfraction<br />
<strong>The</strong> ldf<br />
command<br />
<strong>The</strong>se objects are usually entered using standard Matlab left division operator<br />
(backslash)<br />
Al=[1 s;0 s+1],Bl=[s 0 1; 1 1 1],G=Al\Bl<br />
Al =<br />
1    s<br />
0    1 + s<br />
Bl =<br />
s    0    1<br />
1    1    1<br />
G =<br />
1    s      \  s    0    1<br />
0    1 + s  \  1    1    1<br />
Left-denominator-fractions may also be entered in terms of their numerator and left<br />
denominator matrices<br />
G = ldf(Al,Bl)<br />
G =<br />
1 s \ s 0 1<br />
0 1 + s \ 1 1 1<br />
Note the order of the input arguments, which come as they appear in the left fraction.<br />
Needless to say, the denominator matrix must be nonsingular but may<br />
possess zero entries.<br />
Mutual conversions of polynomial matrix fraction objects<br />
<strong>Polynomial</strong> matrix fraction objects can be mutually converted by means of their basic<br />
constructors sdf, mdf, rdf, and ldf. For example,<br />
N = [-1+s 0 -s+s^3;-s-s^2 s 1-s]; d = -1+s+s^2;<br />
Gsdf=sdf(N,d)<br />
Gsdf =<br />
-1 + s      0    -s + s^3<br />
-s - s^2    s    1 - s<br />
--------------------------<br />
-1 + s + s^2<br />
class(Gsdf)<br />
ans =<br />
sdf<br />
can be converted as follows<br />
Gmdf=mdf(Gsdf)<br />
Gmdf =<br />
-1 + s 0 -s + s^3<br />
------------ ------------ ------------<br />
-1 + s + s^2 -1 + s + s^2 -1 + s + s^2<br />
-s - s^2 s 1 - s<br />
------------ ------------ ------------<br />
-1 + s + s^2 -1 + s + s^2 -1 + s + s^2<br />
class(Gmdf)<br />
ans =<br />
mdf<br />
Grdf=rdf(Gsdf)<br />
Grdf.numerator =<br />
-1 + s 0 -s + s^3<br />
-s - s^2 s 1 - s<br />
Grdf.denominator =<br />
-1 + s + s^2 0 0<br />
0 -1 + s + s^2 0<br />
0 0 -1 + s + s^2<br />
class(Grdf)<br />
ans =<br />
rdf<br />
Gldf=ldf(Gsdf)<br />
Gldf =<br />
-1 + s + s^2 0 \ -1 + s 0 -s + s^3<br />
0 -1 + s + s^2 \ -s - s^2 s 1 - s<br />
class(Gldf)<br />
ans =<br />
ldf<br />
Of course, the real entity behind remains the same so that the "contents" of all the<br />
various expressions equal<br />
Gsdf==Gmdf<br />
ans =<br />
1 1 1<br />
1 1 1<br />
Gmdf==Grdf<br />
ans =<br />
1    1    1<br />
1    1    1<br />
Grdf==Gldf<br />
ans =<br />
1    1    1<br />
1    1    1<br />
The user should keep in mind, however, that the conversions above may sometimes<br />
require heavy computations and can therefore be both time-consuming and subject<br />
to numerical inaccuracies. They should definitely be avoided if possible.<br />
Operations with polynomial matrix fractions<br />
Addition,<br />
subtraction and<br />
multiplication<br />
In the <strong>Polynomial</strong> <strong>Toolbox</strong> polynomial matrix fractions are objects <strong>for</strong> which all<br />
standard operations are defined. Objects of different kinds may usually enter the same<br />
operation.<br />
Define the polynomial fractions<br />
F=s^-1,G=s/(1+s)<br />
F =<br />
1<br />
-<br />
s<br />
G =<br />
s / 1 + s<br />
<strong>The</strong> sum and product follow easily:<br />
F+G<br />
ans =<br />
1 + s + s^2<br />
-----------<br />
s + s^2<br />
G+F<br />
ans =<br />
1 + s + s^2 / s + s^2<br />
F*G<br />
ans =<br />
s<br />
-------<br />
s + s^2<br />
G*F<br />
ans =<br />
s / s + s^2<br />
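The displayed sum and product can be spot-checked numerically; a plain-Python sketch (not toolbox code):

```python
# Numerical spot-check of the displayed results:
# 1/s + s/(1+s) = (1 + s + s^2)/(s + s^2) and (1/s)*(s/(1+s)) = s/(s + s^2).
for s in [0.5, 2.0, -3.0, 10.0]:
    F, G = 1 / s, s / (1 + s)
    assert abs(F + G - (1 + s + s**2) / (s + s**2)) < 1e-12
    assert abs(F * G - s / (s + s**2)) < 1e-12
print('sum and product formulas verified at all sample points')
```

Note that the product s/(s + s^2) is not coprime; cancelling the common factor s would give 1/(1 + s), in line with the remarks on coprimeness above.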
For polynomial matrix fractions<br />
a=1./s;b=1./(s+1);F=[1 a; a+b b],G=[b a;a b]<br />
F =<br />
1          1<br />
           -<br />
           s<br />
1 + 2s     1<br />
-------    -----<br />
s + s^2    1 + s<br />
G =<br />
1        1<br />
-----    -<br />
1 + s    s<br />
1        1<br />
-        -----<br />
s        1 + s<br />
the operations work as expected<br />
F+G<br />
ans =<br />
2 + s      2<br />
-----      -<br />
1 + s      s<br />
2 + 3s     2<br />
-------    -----<br />
s + s^2    1 + s<br />
F*G<br />
Entrywise and<br />
matrix division<br />
ans =<br />
1 + s + s^2 2 + s<br />
----------- -------<br />
s^2 + s^3 s + s^2<br />
2 + 3s 1 + 3s + 3s^2<br />
-------------- ----------------<br />
s + 2s^2 + s^3 s^2 + 2s^3 + s^4<br />
Entrywise division of polynomial matrix fractions<br />
T = F./G<br />
results in another fraction<br />
T =<br />
1 + s       s<br />
            -<br />
            s<br />
s + 2s^2    1 + s<br />
--------    -----<br />
s + s^2     1 + s<br />
while matrix division yields<br />
F/G<br />
ans =<br />
0.5s + s^2 - 0.5s^4    0.5s^3 + 0.5s^4<br />
-------------------    ---------------<br />
0.5s + s^2             0.5s + s^2<br />
-0.5s^3 - 0.5s^4       0.5s + 2s^2 + 2s^3 + 0.5s^4<br />
-------------------    ---------------------------<br />
0.5s + 1.5s^2 + s^3    0.5s + 1.5s^2 + s^3<br />
F\G<br />
ans =<br />
-s - 2s^2                0<br />
---------------------    ---------------------<br />
-s - 3s^2 - s^3 + s^4    -s - 3s^2 - s^3 + s^4<br />
s^4                      -s - 3s^2 - s^3 + s^4<br />
---------------------    ---------------------<br />
-s - 3s^2 - s^3 + s^4    -s - 3s^2 - s^3 + s^4<br />
Concatenation<br />
and working<br />
with<br />
submatrices<br />
All standard <strong>MATLAB</strong> operations to concatenate matrices or selecting submatrices<br />
also apply to polynomial matrix fractions. Typing<br />
W = [F G]<br />
results in<br />
W =<br />
1 1 1<br />
1 - ----- -<br />
s 1 + s s<br />
1 + 2s 1 1 1<br />
------- ----- - -----<br />
s + s^2 1 + s s 1 + s<br />
The last row of W may be selected by typing<br />
w = W(2,:)<br />
w =<br />
1 + 2s     1        1    1<br />
-------    -----    -    -----<br />
s + s^2    1 + s    s    1 + s<br />
Submatrices may be assigned values by commands such as<br />
W(2,:) = [3 4 s^3 1]<br />
The user should be aware that, depending on the fraction type used, even concatenation<br />
and submatrix selection may require extensive computation when working with<br />
fractions:<br />
L=[s 1; 1 s+1]\[1;1]<br />
L =<br />
s    1      \  1<br />
1    1 + s  \  1<br />
L(1)<br />
ans =<br />
-1 + s + s^2 \ s<br />
Coefficients and<br />
coefficient<br />
matrices<br />
Conjugation<br />
and<br />
transposition<br />
A coefficient of a fraction is defined as the corresponding coefficient in its Laurent<br />
series:<br />
l=(z-1)^-1<br />
l =<br />
1<br />
------<br />
-1 + z<br />
l{-1}<br />
ans =<br />
1<br />
l{-2}<br />
ans =<br />
1<br />
l{-2:0}<br />
ans =<br />
1 1 0<br />
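The expansion behind these coefficients is the geometric series 1/(z - 1) = z^-1 + z^-2 + ..., which can be reproduced by long division in powers of z^-1. A plain-Python sketch (an illustration, not the toolbox's coefficient extraction):

```python
# Sketch: power-series coefficients (in z^-1) of num(z^-1)/den(z^-1) by the
# long-division recurrence; requires den[0] != 0.

def series_coeffs(num, den, kmax):
    c = []
    for k in range(kmax + 1):
        acc = num[k] if k < len(num) else 0
        for j in range(1, min(k, len(den) - 1) + 1):
            acc -= den[j] * c[k - j]      # subtract already-determined terms
        c.append(acc / den[0])
    return c

# 1/(z - 1) = z^-1/(1 - z^-1): num = [0, 1], den = [1, -1] in powers of z^-1
print(series_coeffs([0, 1], [1, -1], 2))   # [0.0, 1.0, 1.0]
```

The output lists l{0}, l{-1}, l{-2} = 0, 1, 1, matching l{-2:0} above (displayed there from l{-2} upward).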
Conjugation and transposition work as expected for fractions<br />
T=[a b;a 0]<br />
T =<br />
1        1<br />
-        -----<br />
s        1 + s<br />
1<br />
-        0<br />
s<br />
T'<br />
ans =<br />
1        1<br />
--       --<br />
-s       -s<br />
1<br />
-----    0<br />
1 - s<br />
T.'<br />
ans =<br />
1        1<br />
-        -<br />
s        s<br />
1<br />
-----    0<br />
1 + s<br />
Signals and transforms<br />
Sampling, unsampling and resampling<br />
<strong>Polynomial</strong> matrix fractions, transfer functions and transfer function matrices<br />
Transfer<br />
functions<br />
To describe the transfer function of a single-input single-output (SISO) linear<br />
time-invariant system, or the transfer function matrix of a multi-input and/or multi-output<br />
linear time-invariant system, any of the four polynomial matrix fraction objects, sdf,<br />
mdf, ldf or rdf, can be used. The particular choice of object may depend on the form of<br />
the input data, on the task to be solved or the operations to be performed, or just on<br />
personal preference in displaying the polynomial fractions. As there are numerous<br />
possibilities to create a proper object for a given transfer function, we will just show<br />
several examples.<br />
A SISO system transfer function is usually given in the form of two polynomials, one for<br />
the numerator and one for the denominator. Typically, these polynomials are entered<br />
first and then used to create a fraction. So all one must do to form the transfer<br />
function<br />
f(s) = (s - s^2) / (1 + s + 2s^2)<br />
is to type<br />
a=1+s+2*s^2,b=s-s^2,f1=sdf(b,a)<br />
a =<br />
1 + s + 2s^2<br />
b =<br />
s - s^2<br />
f1 =<br />
s - s^2<br />
------------<br />
1 + s + 2s^2<br />
or<br />
f2=b/a<br />
f2 =<br />
0.5s - 0.5s^2 / 0.5 + 0.5s + s^2<br />
or<br />
f3=a\b<br />
f3 =<br />
0.5 + 0.5s + s^2 \ 0.5s - 0.5s^2<br />
etc. For simple transfer functions, one can directly type<br />
g=1/s,h=(1-s)\s<br />
g =<br />
1 / s<br />
h =<br />
-1 + s \ -s<br />
Transfer functions of discrete-time systems are created similarly using<br />
polynomials in "discrete-time" indeterminates:<br />
f=sdf(z,1+z)<br />
f =<br />
z<br />
-----<br />
1 + z<br />
g=1/(2-z)<br />
g =<br />
-1 / -2 + z<br />
and the like.<br />
Transfer function matrices<br />
Transfer function matrices of MIMO systems can either be created from individual<br />
entries or via two polynomial matrices. So to create an object for the transfer<br />
function matrix<br />
F(s) = [ 1/s s/(s-1)<br />
          1  s/(s+1) ]<br />
one types, entry by entry (note how the final matrix object type is defined by the<br />
particular choice of the scalar object types),<br />
F=[1./s s./(s-1);1 s./(s+1) ]<br />
F =<br />
1 s<br />
- ------<br />
s -1 + s<br />
s<br />
1 -----<br />
1 + s<br />
or, equivalently<br />
F=[1 s; 1 1]./[s s-1;1 s+1]<br />
F =<br />
1 s<br />
- ------<br />
s -1 + s<br />
1<br />
1 -----<br />
1 + s<br />
Transfer functions from a state-space model and vice versa<br />
Transfer functions of systems given in state-space form can be created<br />
directly by any of the polynomial matrix fraction constructors. So for<br />
x' = [ -1 0 ] x + [ 1 ] u<br />
     [  1 1 ]     [ 2 ]<br />
y  = [ 1 0 ] x + [ 0 ] u<br />
     [ 0 1 ]     [ 0 ]<br />
one first creates the state-space model matrices<br />
A =[-1 0;1 1], B = [1 ;2], C = [1 0;0 1], D =[0; 0]<br />
A =<br />
-1 0<br />
1 1<br />
B =<br />
1<br />
2<br />
C =<br />
1 0<br />
0 1<br />
D =<br />
0<br />
0<br />
and then just types<br />
F=mdf(A,B,C,D)<br />
F =<br />
-1 + s<br />
--------<br />
-1 + s^2<br />
3 + 2s<br />
--------<br />
-1 + s^2<br />
When the D matrix is zero, it need not be created and typed in:<br />
F=mdf(A,B,C)<br />
F =<br />
-1 + s<br />
--------<br />
-1 + s^2<br />
3 + 2s<br />
--------<br />
-1 + s^2<br />
The reverse process of computing state-space matrices from a given transfer<br />
function matrix is accomplished by the function abcd:<br />
[a,b,c,d]=abcd(F)<br />
a =<br />
0 1.0000<br />
1.0000 0<br />
b =<br />
1<br />
0<br />
c =<br />
1.0000 -1.0000<br />
2.0000 3.0000<br />
d =<br />
0<br />
0<br />
Note that the resulting matrices differ from those above. This process of so-called<br />
"state-space realization" is not unique at all. The function actually returns one<br />
particular realization, in so-called "observer form."<br />
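Using only the constructors shown above, the non-uniqueness can be sketched as a round-trip check; this is a hedged sketch, not from the original manual:

```matlab
% Hedged sketch: rebuild the transfer function matrix from the realization
% returned by abcd and compare with the original F.
[a,b,c,d] = abcd(F);   % observer-form realization of F
F2 = mdf(a,b,c,d);     % transfer function matrix of that realization
% F and F2 should represent the same rational matrix even though the
% state-space matrices differ from those entered originally.
```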
Another indeterminate, different from s, can be specified when creating the transfer<br />
function from the state-space matrices. So for the discrete-time system<br />
x(k+1) = [ -1 0 ] x(k) + [ 1 ] u(k)<br />
         [  1 1 ]        [ 2 ]<br />
  y(k) = [ 1 0 ] x(k)<br />
         [ 0 1 ]<br />
one creates the transfer function matrix in z by<br />
G=mdf(A,B,C,'z')<br />
G =<br />
-1 + z<br />
--------<br />
-1 + z^2<br />
3 + 2z<br />
--------<br />
-1 + z^2<br />
or the transfer function matrix in z^-1 by<br />
H=mdf(A,B,C,'zi')<br />
H =<br />
z^-1 - z^-2<br />
-----------<br />
1 - z^-2<br />
2z^-1 + 3z^-2<br />
-------------<br />
1 - z^-2<br />
Conversions of polynomial matrix fraction objects to and from other Matlab objects<br />
The Polynomial Toolbox can effectively cooperate with other MATLAB toolboxes. In<br />
particular, its objects can be converted directly to the objects of the Control System<br />
Toolbox and the Symbolic Math Toolbox (of course, this functionality only works if the<br />
other toolbox is installed).<br />
Conversion to the Control System Toolbox objects<br />
Any polynomial matrix fraction object is directly converted into any Control System<br />
Toolbox LTI object via the overloaded constructors of the Control System Toolbox. (To<br />
learn more about LTI objects, consult the Control System Toolbox manual or use the<br />
MATLAB help window.) This functionality will become clear from the following examples.<br />
So the polynomial fraction<br />
F=s/(s+1)<br />
F =<br />
s / 1 + s<br />
is converted into the state-space object of the Control System <strong>Toolbox</strong><br />
Fss=ss(F)<br />
a =<br />
x1<br />
x1 -1<br />
b =<br />
u1<br />
x1 1<br />
c =<br />
x1<br />
y1 -1<br />
d =<br />
u1<br />
y1 1<br />
Continuous-time model.<br />
or into the transfer function object of the Control System <strong>Toolbox</strong><br />
Ftf=tf(F)<br />
Transfer function:<br />
s<br />
-----<br />
s + 1<br />
or even into the zero-pole-gain object of the Control System <strong>Toolbox</strong><br />
Fzpk=zpk(F)<br />
Zero/pole/gain:<br />
s<br />
-----<br />
(s+1)<br />
It works for MIMO systems as well:<br />
G=mdf(A,B,C)<br />
G =<br />
-1 + s<br />
--------<br />
-1 + s^2<br />
3 + 2s<br />
--------<br />
-1 + s^2<br />
Gss=ss(G)<br />
a =<br />
x1 x2<br />
x1 0 1<br />
x2 1 0<br />
b =<br />
u1<br />
x1 1<br />
x2 0<br />
c =<br />
x1 x2<br />
y1 1 -1<br />
y2 2 3<br />
d =<br />
u1<br />
y1 0<br />
y2 0<br />
Continuous-time model.<br />
Gtf=tf(G)<br />
Transfer function from input to output...<br />
s - 1<br />
#1: -------<br />
s^2 - 1<br />
2 s + 3<br />
#2: -------<br />
s^2 - 1<br />
Gzpk=zpk(G)<br />
Zero/pole/gain from input to output...<br />
(s-1)<br />
#1: -----------<br />
(s-1) (s+1)<br />
2 (s+1.5)<br />
#2: -----------<br />
(s-1) (s+1)<br />
During the conversion, one need not care about the discrete- or continuous-time nature<br />
of the system; this, as well as other data, is handled accordingly:<br />
Hss=ss(H)<br />
a =<br />
x1<br />
x1 1<br />
b =<br />
u1<br />
x1 1<br />
c =<br />
x1<br />
y1 1<br />
d =<br />
u1<br />
y1 1<br />
Sampling time: 1<br />
Discrete-time model.<br />
Htf=tf(H)<br />
Transfer function:<br />
z<br />
-----<br />
z - 1<br />
Sampling time: 1<br />
Hzpk=zpk(H)<br />
Zero/pole/gain:<br />
z<br />
-----<br />
(z-1)<br />
Sampling time: 1<br />
This unlimited possibility to convert objects allows Polynomial Toolbox users<br />
to employ almost all the numerous functions of the Control System Toolbox directly.<br />
Sometimes one need not even type the convertor at all, as the object is converted<br />
automatically. So for<br />
F=1/s<br />
F =<br />
1 / s<br />
H=s./(s-1)<br />
H =<br />
s<br />
------<br />
-1 + s<br />
typing<br />
bode(F),nyquist(H),step(F),impulse(H)<br />
returns expected results but<br />
bode(F,H)<br />
??? Error using ==> lti.bode at 95<br />
In frequency response commands, the frequency grid must be a<br />
vector of<br />
positive real numbers.<br />
Error in ==> frac.bode at 16<br />
bode(ss(F),varargin{:});<br />
fails. To be on the safe side, one should always type the convertor name, as in<br />
bode(ss(F),tf(H))<br />
which works well.<br />
Conversion from the Control System Toolbox objects<br />
Quite naturally, any LTI object of the Control System Toolbox can be converted<br />
into a matrix fraction object whenever a convenient object is available in the<br />
Polynomial Toolbox (for example, the conversion of a continuous-time system having<br />
a nonzero time delay is not possible for obvious reasons). The conversion is<br />
accomplished by typing any of the polynomial matrix fraction constructors' names.<br />
For example,<br />
F=1./(s-1);Fss=ss(F)<br />
a =<br />
x1<br />
x1 1<br />
b =<br />
u1<br />
x1 1<br />
c =<br />
x1<br />
y1 1<br />
d =<br />
u1<br />
y1 0<br />
Continuous-time model.<br />
sdf(Fss)<br />
ans =<br />
1<br />
------<br />
-1 + s<br />
ldf(Fss)<br />
ans =<br />
-1 + s \ 1<br />
or<br />
H = zpk( {[];[2 3]} , {1;[0 -1]} , [-5;1] )<br />
Zero/pole/gain from input to output...<br />
-5<br />
#1: -----<br />
(s-1)<br />
(s-2) (s-3)<br />
#2: -----------<br />
s (s+1)<br />
Hmdf=mdf(H)<br />
Hmdf =<br />
-5<br />
------<br />
-1 + s<br />
6 - 5s + s^2<br />
------------<br />
s + s^2<br />
Hrdf=rdf(H)<br />
Hrdf =<br />
-5s - 5s^2 / -1s + s^3<br />
-6 + 11s - 6s^2 + s^3 /<br />
or, similarly, <strong>for</strong> discrete-time systems<br />
G=z/(z+1);Gtf=tf(G)<br />
Transfer function:<br />
z<br />
-----<br />
z + 1<br />
Sampling time: 1<br />
sdf(Gtf)<br />
ans =<br />
z<br />
-----<br />
1 + z<br />
ldf(Gtf)<br />
ans =<br />
1 + z \ z<br />
Conversion to the Symbolic Math Toolbox<br />
Polynomials and polynomial fractions can also be converted into the symbolic objects<br />
of the Symbolic Math Toolbox. A polynomial such as<br />
p=1+s<br />
p =<br />
1 + s<br />
can be converted by<br />
psym=sym(p)<br />
psym =<br />
s + 1<br />
class(psym)<br />
ans =<br />
sym<br />
To avoid possible confusion when working with 'polynomial' and 'symbolic'<br />
indeterminates during one MATLAB session, one may wish to produce a symbolic<br />
polynomial having a pre-specified indeterminate symbol, usually different from<br />
that used in polynomials. For example, if you want to produce a symbolic<br />
polynomial in capital S, just type<br />
pSym=sym(p,'S')<br />
pSym =<br />
S + 1<br />
<strong>Polynomial</strong> fractions can be converted similarly<br />
f=1./p<br />
f =<br />
1<br />
-----<br />
1 + s<br />
fsym=sym(f)<br />
fsym =<br />
1/(s + 1)<br />
class(fsym)<br />
ans =<br />
sym<br />
fSym=sym(f,'S')<br />
fSym =<br />
1/(S + 1)<br />
Everything works as expected even <strong>for</strong> matrices of polynomials<br />
P=[s 1; s+1 0]<br />
P =<br />
s 1<br />
1 + s 0<br />
Psym=sym(P)<br />
Psym =<br />
[ s, 1]<br />
[ s + 1, 0]<br />
Psym=sym(P,'S')<br />
Psym =<br />
[ S, 1]<br />
[ S + 1, 0]<br />
and fractions<br />
F=mdf(1/P)<br />
F =<br />
0 1<br />
----- -----<br />
1 + s 1 + s<br />
1 + s -1s<br />
----- -----<br />
1 + s 1 + s<br />
Fsym=sym(F)<br />
Fsym =<br />
[ 0, 1/(s + 1)]<br />
[ 1, -s/(s + 1)]<br />
>> FSym=sym(F,'S')<br />
FSym =<br />
[ 0, 1/(S + 1)]<br />
[ 1, -S/(S + 1)]<br />
Conversion from the Symbolic Math Toolbox<br />
Backward conversion of symbolic polynomials, polynomial matrices,<br />
polynomial fractions and polynomial matrix fractions into the corresponding<br />
Polynomial Toolbox objects is also possible. However, the source symbolic object<br />
must have as its indeterminate one of the indeterminates supported by the<br />
Polynomial Toolbox, that is, 's','p','z','q','d','zi','z^-1'. So for<br />
symbolic objects defined as<br />
syms s z<br />
psym=1+s,qsym=1-z,fsym=1/psym,gsym=1/qsym<br />
psym =<br />
s + 1<br />
qsym =<br />
1 - z<br />
fsym =<br />
1/(s + 1)<br />
gsym =<br />
-1/(z - 1)<br />
their Polynomial Toolbox counterparts are<br />
pol(psym)<br />
ans =<br />
1 + s<br />
pol(qsym)<br />
ans =<br />
1 - z<br />
sdf(fsym)<br />
ans =<br />
1<br />
-----<br />
1 + s<br />
ldf(gsym)<br />
ans =<br />
-1 + z \ -1<br />
Symbolic matrix objects are converted accordingly<br />
class(s)<br />
ans =<br />
sym<br />
pol([s 1])<br />
ans =<br />
s 1<br />
class(ans)<br />
ans =<br />
pol<br />
sdf([1/s s/(1+s); 1 s])<br />
ans =<br />
1 + s s^2<br />
s + s^2 s^2 + s^3<br />
---------------------<br />
s + s^2<br />
mdf([1/s s/(1+s); 1 s])<br />
ans =<br />
1 s<br />
- -----<br />
s 1 + s<br />
1 s<br />
class(ans)<br />
ans =<br />
mdf<br />
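Combining the two conversion directions gives a quick consistency check; a hedged sketch using only the constructors shown above, not an example from the original manual:

```matlab
% Hedged sketch: converting a fraction to a symbolic object and back
% should reproduce an equivalent fraction.
f  = sdf(1,1+s);   % Polynomial Toolbox fraction 1/(1+s)
fs = sym(f);       % Symbolic Math Toolbox object 1/(s + 1)
g  = sdf(fs);      % back to a scalar-denominator fraction
```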
6 Conclusions<br />
This is just a first version of the Manual. Future corrections, modifications and<br />
additions are expected.<br />
A<br />
abcd · 65<br />
addition · 10, 22, 57<br />
adj · 24<br />
adjoint · 24<br />
advanced operations and functions · 14<br />
axb · 16<br />
axbyc · 3, 6, 14, 16<br />
B<br />
backslash · 55<br />
basic operations on polynomial matrices · 22<br />
Bézout equations · 15<br />
C<br />
canonical <strong>for</strong>ms · 9<br />
clements · 26<br />
Clements <strong>for</strong>m · 26<br />
coefficient · 61<br />
coefficient matrices · 13, 19, 20<br />
coefficients · 13, 19, 20<br />
column degrees · 9<br />
column reduced · 10<br />
common<br />
divisor · 2<br />
common left multiple · 7<br />
common right divisor · 7<br />
common right multiple · 6<br />
compan · 30<br />
companion matrix · 29<br />
concatenation · 12, 19, 60<br />
Conjugate transpose · 31<br />
conjugation · 13, 62<br />
Conjugation · 61<br />
constant<br />
matrices · 29<br />
Control System <strong>Toolbox</strong> · 66, 71<br />
conversion<br />
polynomial matrix fraction objects · 55<br />
Conversions · 66<br />
cop · 50<br />
coprime · 3<br />
left · 5<br />
Coprime · 50<br />
D<br />
default indeterminate variable · 18<br />
deg · 9<br />
degree · 9<br />
deriv · 29<br />
descriptor system · 25<br />
det · 23<br />
determinant · 23<br />
Diophantine equation · 13<br />
Diophantine equations · 13<br />
Discrete-time spectral factorization · 37<br />
display <strong>for</strong>mat · 9<br />
division · 2<br />
entrywise · 59<br />
matrix · 59<br />
with remainder · 2<br />
divisor · 30<br />
right · 7<br />
E<br />
echelon · 12<br />
echelon <strong>for</strong>m · 12<br />
entering polynomial matrices · 16<br />
equation solvers · 17<br />
Euclidean division · 2<br />
F<br />
factor · 2<br />
freeing variables · 18<br />
G<br />
General properties · 50<br />
gensym · 18<br />
gld · 3, 6<br />
greatest common<br />
divisor · 2<br />
left divisor · 5<br />
right divisor · 7<br />
H<br />
help · 8
hermite · 11<br />
Hermite <strong>for</strong>m · 11<br />
how to define a polynomial matrix · 9<br />
hurwitz · 30<br />
Hurwitz matrix · 29<br />
Hurwitz stability · 28<br />
I<br />
indeterminate variables · 16<br />
infinite roots · 27<br />
initialization · 8<br />
integral · 29<br />
invariant polynomials · 13<br />
inverse · 23<br />
isequal · 4<br />
isfullrank · 25, 10<br />
isprime · 3, 6<br />
issingular · 26<br />
isstable · 28<br />
K<br />
Kronecker canonical <strong>for</strong>m · 25<br />
L<br />
Laurent series · 61<br />
lcoef · 9<br />
ldf · 55<br />
ldiv · 5<br />
leading column coefficient matrix · 9<br />
leading row coefficient matrix · 9<br />
least common multiple · 3<br />
least common right multiple · 6<br />
Left-denominator-fraction · 55<br />
llm · 3<br />
lrm · 7<br />
LTI · 66, 71<br />
lu · 11<br />
M<br />
matrix denominator · 52<br />
matrix division · 4, 59<br />
matrix division with remainder · 4<br />
matrix divisors · 4<br />
matrix multiples · 4<br />
matrix numerator · 52<br />
matrix pencil · 25<br />
matrix pencil Lyapunov equations · 26<br />
matrix pencil routines · 24<br />
matrix polynomial equations · 15<br />
mdf · 53<br />
minbasis · 27<br />
minimal basis · 27<br />
Moore-Penrose pseudoinverse · 25<br />
multiple · 30<br />
left · 7<br />
multiplication · 10, 22, 57<br />
N<br />
null · 26<br />
null space · 25, 26<br />
O<br />
object<br />
LTI · 66<br />
one-sided equations · 16<br />
P<br />
para-Hermitian · 21<br />
pencan · 25<br />
pinit · 8<br />
pinv · 25<br />
plyap · 27<br />
pol · 17, 49<br />
polynomial<br />
basis · 27<br />
polynomial matrix editor · 43<br />
main window · 43, 44<br />
matrix pad · 43<br />
polynomial matrix Editor<br />
matrix pad · 45<br />
polynomial matrix equations · 13, 36<br />
polynomial matrix fraction<br />
computing with · 57<br />
<strong>Polynomial</strong> matrix fractions · 47<br />
Popov <strong>for</strong>m · 12<br />
prand · 4<br />
prime<br />
relatively · 3<br />
product · 22<br />
pseudoinverse · 25
Q<br />
quick start · 8<br />
quotient · 2<br />
R<br />
range · 26<br />
rank · 25, 26<br />
normal · 26<br />
rdf · 54<br />
red · 50<br />
Reduce · 50<br />
reduced <strong>for</strong>ms · 9<br />
Resampling · 39<br />
reverse · 51<br />
right matrix fraction · 53<br />
Right-denominator-fraction · 54<br />
roots · 27<br />
roots · 27<br />
row degrees · 9<br />
row reduced<br />
<strong>for</strong>m · 10<br />
rowred · 10<br />
S<br />
Sampling period · 39<br />
scalar denominator · 47<br />
Scalar-denominator-fraction · 47<br />
sdf · 49<br />
simple operations with polynomial matrices · 10,<br />
57<br />
slash · 54<br />
smith · 13<br />
Smith <strong>for</strong>m · 12<br />
span · 26<br />
spcof · 22<br />
spectral co-factorization · 21, 38<br />
spectral factorization · 21, 37<br />
spf · 22<br />
stability · 27<br />
staircase <strong>for</strong>m · 10<br />
startup · 8<br />
state space model · 64<br />
state space system · 24<br />
submatrices · 12, 19, 60<br />
subtraction · 10, 22, 57<br />
sum · 22<br />
sylv · 30<br />
Sylvester matrix · 29<br />
symbolic<br />
polynomial · 75<br />
Symbolic Math <strong>Toolbox</strong> · 73, 75<br />
T<br />
time delay · 71<br />
Transfer function · 62<br />
trans<strong>for</strong>mation<br />
to Clements <strong>for</strong>m · 26<br />
to Kronecker canonical <strong>for</strong>m · 25<br />
transpose · 14<br />
transposition · 13, 61, 62<br />
tri · 10<br />
triangular <strong>for</strong>m · 10, 11<br />
tutorial · 16, 28, 47<br />
two-sided equations · 16<br />
U<br />
unimodular · 23<br />
Z<br />
zeros · 27, 33<br />
zpplot · 28