ELE7029: Optimal Digital Control System
Lecture III
Linear Matrix Inequalities
Goals of this lecture
· LMIs
· LMI Problems
· Solving LMIPs
1. Linear Matrix Inequalities
2. Solving LMIPs
Linear Matrix Inequalities
An inequality of the form
F(x) = F0 + x1 F1 + · · · + xn Fn = F0 + ∑_{i=1}^{n} xi Fi ≥ 0,   (3.1)
is called a linear matrix inequality (LMI).
Here
x = (x1, · · · , xn) ∈ R^n is a vector of unknown scalars (the decision or optimization variables),
Fi = Fi^T ∈ R^{m×m}, i = 0, · · · , n, are given symmetric matrices, and
F : R^n → R^{m×m} is an affine fcn. of the variable x ∈ R^n.
The canonical form (3.1) is very inefficient from a storage point of view.
In most applications, LMIs arise rather in the form
L(X1, . . . , Xm) ≤ R(X1, . . . , Xm),
where L(·) and R(·) are affine functions of some structured matrix variables X1, . . . , Xm.
e.g. An affine function of x ∈ R^3:
F(x) = [ x1 + x2   x2 + 1 ]
       [ x2 + 1    x3     ]  = F0 + x1 F1 + x2 F2 + x3 F3 ≥ 0
where
F0 = [ 0  1 ]    F1 = [ 1  0 ]    F2 = [ 1  1 ]    F3 = [ 0  0 ]
     [ 1  0 ]         [ 0  0 ]         [ 1  0 ]         [ 0  1 ]
The LMI F(x) ≥ 0 is equivalent to a set of nonlinear inequalities:
x1 + x2 ≥ 0,
x3 ≥ 0,
(x1 + x2)x3 − (x2 + 1)^2 ≥ 0
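As a quick numerical illustration, the following sketch poses the feasibility problem F(x) ≥ 0 from this example to an SDP solver using NumPy and CVXPY (assumed to be installed); the variable names are chosen for illustration only.

import numpy as np
import cvxpy as cp

# Data of the example: F(x) = F0 + x1*F1 + x2*F2 + x3*F3
F0 = np.array([[0., 1.], [1., 0.]])
F1 = np.array([[1., 0.], [0., 0.]])
F2 = np.array([[1., 1.], [1., 0.]])
F3 = np.array([[0., 0.], [0., 1.]])

x = cp.Variable(3)                             # decision variables x1, x2, x3
F = F0 + x[0]*F1 + x[1]*F2 + x[2]*F3           # affine matrix-valued function F(x)

prob = cp.Problem(cp.Minimize(0), [F >> 0])    # ">> 0" imposes F(x) >= 0 (PSD)
prob.solve()
print(prob.status, x.value)                    # e.g. "optimal" and one feasible x

Any x returned this way also satisfies the three scalar inequalities above.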
e.g. Lyapunov inequality
Consider a system ẋ = Ax with A ∈ R^{n×n}.
The Lyapunov inequality A^T Y + YA + Q ≤ 0 is an affine inequality in the variable Y.
Y satisfies the Lyapunov inequality iff the quadratic form V(x) = x^T Y x satisfies
V̇(x) ≤ −x^T Q x along trajectories of ẋ = Ax.
Hence the inequality F(Y) = A^T Y + YA + Q ≤ 0 is an LMI in Y.
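As a minimal numerical sketch (A and Q below are assumed example data, not taken from the lecture), such a Lyapunov LMI can be solved directly with CVXPY:

import numpy as np
import cvxpy as cp

A = np.array([[0., 1.], [-2., -3.]])     # an assumed stable example matrix
Q = np.eye(2)

Y = cp.Variable((2, 2), symmetric=True)
constraints = [Y >> 1e-6 * np.eye(2),             # Y > 0 (small margin for strictness)
               A.T @ Y + Y @ A + Q << 0]          # Lyapunov inequality, affine in Y
cp.Problem(cp.Minimize(0), constraints).solve()
print(Y.value)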
e.g. Bounded Real Lemma
Consider a system
ẋ = Ax + Bw
z = Cx
with A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, and γ > 0.
The BR lemma LMIs are
[ A^T Y + YA + C^T C    YB     ]
[ B^T Y                 −γ^2 I ]  ≤ 0,    Y ≥ 0
with the variable Y.
By this we mean that if Y satisfies the LMIs, then the quadratic Lyapunov fcn.
V(x) = x^T Y x proves that the RMS gain of the system is no more than γ.
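A sketch of how the bounded-real LMI can be used computationally: since the LMI is linear in Y and in γ^2, minimizing γ^2 subject to it yields the smallest RMS-gain bound certified by a quadratic Lyapunov function. The system data below are assumptions made for illustration.

import numpy as np
import cvxpy as cp

A = np.array([[0., 1.], [-2., -1.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
n, m = A.shape[0], B.shape[1]

Y  = cp.Variable((n, n), symmetric=True)
g2 = cp.Variable(nonneg=True)                  # stands for gamma^2

M = cp.bmat([[A.T @ Y + Y @ A + C.T @ C, Y @ B],
             [B.T @ Y,                   -g2 * np.eye(m)]])
prob = cp.Problem(cp.Minimize(g2), [M << 0, Y >> 0])
prob.solve()
print("certified RMS-gain bound:", np.sqrt(g2.value))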
e.g. A time-varying linear dynamic system ẋ(t) = A(t)x(t) with
A(t) ∈ {A1, . . . , Aq}.
We want to find a quadratic Lyapunov fcn. V(x) = x^T Y x with Y > 0 and V̇(x) ≤ 0
for all x and all possible values of A(t):
Y > 0,   Ai^T Y + Y Ai ≤ 0,   i = 1, . . . , q
We can write these as LMIs (by homogeneity, Y > 0 can be replaced by Y ≥ I):
Y ≥ I,   Ai^T Y + Y Ai ≤ 0,   i = 1, . . . , q
If such a Y exists, it proves boundedness of the trajectories of the system
ẋ = A(t)x with
A(t) = θ1(t)A1 + · · · + θq(t)Aq
where θi(t) ≥ 0
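A sketch of the corresponding computation, with two assumed vertex matrices A1 and A2: we simply stack one LMI per vertex and ask a solver for a common Y.

import numpy as np
import cvxpy as cp

A_list = [np.array([[0., 1.], [-2., -1.]]),
          np.array([[0., 1.], [-3., -2.]])]    # assumed vertices A1, A2
n = 2

Y = cp.Variable((n, n), symmetric=True)
constraints = [Y >> np.eye(n)]                                # Y >= I
constraints += [Ai.T @ Y + Y @ Ai << 0 for Ai in A_list]      # one LMI per vertex
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, Y.value)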
■ S-procedure:
Let F0 = F0^T, F1 = F1^T ∈ R^{n×n}.
Assume x^T F1 x > 0 for some x.
When is it true that x^T F1 x ≥ 0 implies x^T F0 x ≥ 0 for all x?
The implication
x^T F1 x ≥ 0   =⇒   x^T F0 x ≥ 0
holds iff there exists τ ≥ 0 s.t. F0 ≥ τF1.
Note: For a given F1 ≥ 0, find F0 ≥ 0 and τ ≥ 0 satisfying F0 ≥ τF1.
Note: The S-procedure provides conditions under which a particular
quadratic inequality is a consequence of another quadratic inequality.
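A small numerical sketch of the homogeneous S-procedure, with assumed data F0 and F1: searching for a multiplier τ ≥ 0 with F0 ≥ τF1 is itself an LMI feasibility problem in τ.

import numpy as np
import cvxpy as cp

F0 = np.array([[2., 0.], [0., 1.]])
F1 = np.array([[1., 0.], [0., -1.]])    # indefinite, so x^T F1 x > 0 for some x

tau = cp.Variable(nonneg=True)
prob = cp.Problem(cp.Minimize(0), [F0 - tau * F1 >> 0])
prob.solve()
print(prob.status, tau.value)           # a feasible tau certifies the implication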
The S-procedure also applies to non-homogeneous quadratic functions.
Let F1 = F1^T, F0 = F0^T ∈ R^{n×n}, v1, v0 ∈ R^n, and α1, α0 ∈ R.
Assume that there is some x s.t.
x^T F1 x + 2 v1^T x + α1 > 0
holds.
Then the implication
x^T F1 x + 2 v1^T x + α1 ≥ 0   =⇒   x^T F0 x + 2 v0^T x + α0 ≥ 0
holds iff there exists τ ≥ 0 s.t.
[ F0     v0 ]       [ F1     v1 ]
[ v0^T   α0 ]  ≥  τ [ v1^T   α1 ]
e.g.
Let Li(·) be quadratic fcns.:
Li(ζ) = ζ^T Ti ζ,   Ti = Ti^T ∈ R^{n×n},   i = 0, . . . , p
When is it true that, for all ζ ≠ 0,
ζ^T Ti ζ ≥ 0,  i = 1, . . . , p   =⇒   ζ^T T0 ζ > 0 ?
(hard to check directly)
It holds if there exist τi ≥ 0, i = 1, . . . , p, s.t.
T0 > ∑_{i=1}^{p} τi Ti
(LMIs in T0 and the τi's)
e.g. Stability of a system ẋ = Ax + d(x), ∥d(x)∥ ≤ γ∥x∥
Let us use a quadratic Lyapunov fcn. V(x) = x^T P x to prove stability of the
system.
We need P > 0 and V̇(x) ≤ −αV(x) for all x (α > 0 given):
V̇(x) + αV(x) = 2x^T P(Ax + d(x)) + αx^T P x
             = x^T(A^T P + PA)x + x^T P d(x) + d(x)^T P x + αx^T P x
             = x^T(A^T P + PA + αP)x + d(x)^T P x + x^T P d(x)
             = [ x    ]^T [ A^T P + PA + αP   P ] [ x    ]
               [ d(x) ]   [ P                 0 ] [ d(x) ]
and this must be ≤ 0 whenever
d(x)^T d(x) ≤ γ^2 x^T x
So we need P > 0 and
− [ x    ]^T [ A^T P + PA + αP   P ] [ x    ]
  [ d(x) ]   [ P                 0 ] [ d(x) ]  ≥ 0
whenever
[ x    ]^T [ γ^2 I   0  ] [ x    ]
[ d(x) ]   [ 0       −I ] [ d(x) ]  ≥ 0
(hard to check directly)
By the S-procedure, this happens iff
− [ A^T P + PA + αP   P ]       [ γ^2 I   0  ]
  [ P                 0 ]  ≥  τ [ 0       −I ]
for some τ ≥ 0
Thus, necessary and sufficient conditions for the existence of a quadratic
Lyapunov fcn. can be expressed as the LMI
P > 0,   [ A^T P + PA + αP + τγ^2 I   P   ]
         [ P                          −τI ]  ≤ 0
in the variables P and τ.
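A sketch of this LMI in CVXPY; the matrix A and the scalars α and γ below are assumed example values.

import numpy as np
import cvxpy as cp

A     = np.array([[0., 1.], [-2., -1.]])
alpha = 0.1                     # required decay rate (assumed)
gamma = 0.2                     # bound on the perturbation d(x) (assumed)
n = 2

P   = cp.Variable((n, n), symmetric=True)
tau = cp.Variable(nonneg=True)

M = cp.bmat([[A.T @ P + P @ A + alpha * P + tau * gamma**2 * np.eye(n), P],
             [P,                                       -tau * np.eye(n)]])
# P > 0 is normalized to P >= I, which is possible because M is linear in (P, tau)
prob = cp.Problem(cp.Minimize(0), [P >> np.eye(n), M << 0])
prob.solve()
print(prob.status)              # "optimal" means a quadratic certificate exists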
e.g. Find P > 0 such that, for all ξ ≠ 0 and all η,
[ ξ ]^T [ A^T P + PA   PB ] [ ξ ]
[ η ]   [ B^T P        0  ] [ η ]  < 0
whenever
[ ξ ]^T [ −C^T C   0 ] [ ξ ]
[ η ]   [ 0        I ] [ η ]  ≤ 0.
By the S-procedure, this holds iff
[ A^T P + PA   PB ]       [ −C^T C   0 ]
[ B^T P        0  ]  <  τ [ 0        I ]
for some τ ≥ 0, i.e.,
[ A^T P + PA + τC^T C   PB  ]
[ B^T P                 −τI ]  < 0.
What if we have a matrix inequality such as
X22 > 0 and X11 − X12 X22^{-1} X12^T > 0
(nonlinear)?
Find its linear equivalent form.
Good to Note: Matrix partition
Let A = [ A11   A12 ]
        [ A21   A22 ].  Then
A = [ A11   A12 ]   [ I              0 ] [ A11   0 ] [ I   A11^{-1} A12 ]
    [ A21   A22 ] = [ A21 A11^{-1}   I ] [ 0     ∆ ] [ 0   I            ]     (3.3a)
where ∆ = A22 − A21 A11^{-1} A12, A11 nonsingular (∆ is the Schur complement of A11 in A).
Or
A = [ A11   A12 ]   [ I   A12 A22^{-1} ] [ ∆̂   0   ] [ I              0 ]
    [ A21   A22 ] = [ 0   I            ] [ 0    A22 ] [ A22^{-1} A21   I ]     (3.3b)
where ∆̂ = A11 − A12 A22^{-1} A21, A22 nonsingular (∆̂ is the Schur complement of A22 in A).
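A quick numerical sanity check of the factorization (3.3a), using NumPy on an arbitrary random symmetric matrix whose A11 block is nonsingular.

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
M = M @ M.T + np.eye(4)                     # random symmetric positive definite matrix
A11, A12, A21, A22 = M[:2, :2], M[:2, 2:], M[2:, :2], M[2:, 2:]

A11_inv = np.linalg.inv(A11)
Delta = A22 - A21 @ A11_inv @ A12           # Schur complement of A11 in M

L = np.block([[np.eye(2), np.zeros((2, 2))], [A21 @ A11_inv, np.eye(2)]])
D = np.block([[A11, np.zeros((2, 2))], [np.zeros((2, 2)), Delta]])
U = np.block([[np.eye(2), A11_inv @ A12], [np.zeros((2, 2)), np.eye(2)]])

print(np.allclose(L @ D @ U, M))            # True: (3.3a) reproduces M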
Let X = X^* ≥ 0 be partitioned as
X = [ X11     X12 ]
    [ X12^*   X22 ].
If X22^+ is the pseudo-inverse of X22, then
[ X11     X12 ]   [ I   X12 X22^+ ] [ X11 − X12 X22^+ X12^*   0   ] [ I              0 ]
[ X12^*   X22 ] = [ 0   I         ] [ 0                       X22 ] [ X22^+ X12^*    I ]
Hence
[ X11     X12 ]            [ X11 − X12 X22^+ X12^*   0   ]
[ X12^*   X22 ]  ≥ 0  ⇐⇒  [ 0                        X22 ]  ≥ 0
⇐⇒  X11 − X12 X22^+ X12^* ≥ 0  and  X22 ≥ 0
■ Schur complement:
For matrices Q(x) = Q(x)^T, R(x) = R(x)^T, and S(x) that depend affinely on x:
R(x) > 0,  Q(x) − S(x) R(x)^{-1} S(x)^T > 0   ⇐⇒
[ Q(x)     S(x) ]
[ S(x)^T   R(x) ]  > 0
Note:
▶ [ Q     S ]   [ I   S R^{-1} ] [ Q − S R^{-1} S^T   0 ] [ I            0 ]
  [ S^T   R ] = [ 0   I        ] [ 0                  R ] [ R^{-1} S^T   I ]
▶ If [ Q  S ; S^T  R ] > 0, then so is the Schur complement of R in it, i.e. Q − S R^{-1} S^T > 0.
▶ Nonlinear (convex) inequalities can be converted to LMI form using the Schur complement.
Schur complement conditions:
Let A > 0. Then
[ A     B ]
[ B^T   C ]  > 0   ⇐⇒   A > 0,  C − B^T A^{-1} B > 0.
Let C > 0. Then
[ A     B ]
[ B^T   C ]  > 0   ⇐⇒   C > 0,  A − B C^{-1} B^T > 0.
Proof: Consider the quadratic program
inf_{(u,v)}  [ u ]^T [ A     B ] [ u ]
             [ v ]   [ B^T   C ] [ v ]  =  u^T A u + 2 u^T B v + v^T C v.
The infimum is 0 and is attained uniquely at (u, v) = 0   ⇐⇒
[ A     B ]
[ B^T   C ]  > 0.
First, solve
min_u  [ u ]^T [ A     B ] [ u ]
       [ v ]   [ B^T   C ] [ v ]  =  min_u ( u^T A u + 2 u^T B v ) + v^T C v.
The optimal u follows from setting the derivative w.r.t. u to zero:
d/du ( u^T A u + 2 u^T B v ) = 2Au + 2Bv = 0   ⇐⇒   u^* = −A^{-1} B v
Observe that
u^T A u = u^T A^T u = (−A^{-1} B v)^T A^T (−A^{-1} B v) = v^T B^T A^{-1} B v
and
u^T B v = v^T B^T u = −v^T B^T A^{-1} B v,
which leads to
u^T A u + 2 u^T B v + v^T C v = v^T ( C − B^T A^{-1} B ) v.
Now, minimizing over v, the minimum is 0 and is attained uniquely at v = 0 iff
C − B^T A^{-1} B > 0. □
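As a quick numerical check of the Schur complement conditions (with arbitrary assumed data), positive definiteness of the block matrix agrees with the test A > 0 together with C − B^T A^{-1} B > 0:

import numpy as np

A = np.array([[2., 0.], [0., 1.]])
B = np.array([[1.], [0.]])
C = np.array([[1.]])

schur = C - B.T @ np.linalg.inv(A) @ B      # Schur complement of A
M = np.block([[A, B], [B.T, C]])

via_schur = np.all(np.linalg.eigvalsh(A) > 0) and np.all(np.linalg.eigvalsh(schur) > 0)
direct    = np.all(np.linalg.eigvalsh(M) > 0)
print(via_schur, direct)                    # the two tests agree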