Midterm Solution
Problem 1
1. True. Let $Ax = \lambda x$ with $\lambda \in \mathbb{C}$ and non-zero $x \in \mathbb{C}^n$. Because $A^2 = A$, we have $\lambda x = Ax = A^2x = A(\lambda x) = \lambda Ax = \lambda^2 x$, which implies that $(\lambda^2 - \lambda)x = 0$. Since $x$ has at least one non-zero entry, $\lambda^2 - \lambda = 0$ must hold. It follows that $\lambda = 0$ or $\lambda = 1$. (Numerical checks of all three parts are sketched after this problem.)
2. True. Note that the rank of the zero matrix is 0, so the dimension of its kernel (null space) is $n$ by the rank-nullity theorem. Moreover, $\mathrm{Ker}(A) = \mathrm{Ker}(A^TA)$: if $Ax = 0$ then $A^TAx = 0$; conversely, if $A^TAx = 0$ then $x^TA^TAx = \|Ax\|^2 = 0$, so $Ax = 0$. Therefore, by the rank-nullity theorem, $\mathrm{rank}\,A = n - \dim\mathrm{Ker}(A) = n - \dim\mathrm{Ker}(A^TA) = 0$, which implies that $A$ is the zero matrix. (An alternative proof: $\mathrm{tr}(A^TA) = 0$ implies $\sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}^2 = 0$, hence $a_{ij} = 0$ for all $i, j$, where $a_{ij}$ denotes the entry in the $i$th row and $j$th column of $A$.)
3. True. Since $A - B$ is real symmetric, there exist an orthogonal matrix $O$ and a real diagonal matrix $D$ with diagonal elements $d_1, d_2, \ldots, d_n$ such that $A - B = O^TDO$. From $x^T(A-B)x = 0$ for all $x$, we get $x^TO^TDOx = 0$ for all $x$. Choose the vectors $x_i = O^Te_i$, so that $Ox_i = e_i$, where $e_i$ is the $i$th standard unit vector (its $i$th entry is 1 and all other entries are 0). Then $d_i = e_i^TDe_i = x_i^TO^TDOx_i = x_i^T(A-B)x_i = 0$ for $i = 1, 2, \ldots, n$, so $D = 0$; it follows that $A - B = 0$ and $A = B$.
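As a quick numerical illustration of part 1 (a sketch only, assuming NumPy; the projector below is a hypothetical example, not a matrix from the exam), one can build an idempotent matrix and confirm that its eigenvalues are all 0 or 1:

    import numpy as np

    # Hypothetical example: the orthogonal projector onto the column space of a
    # random tall matrix M satisfies P @ P = P, i.e. it is idempotent.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 2))
    P = M @ np.linalg.inv(M.T @ M) @ M.T

    assert np.allclose(P @ P, P)                     # idempotent
    print(np.round(np.linalg.eigvals(P).real, 8))    # each eigenvalue is (numerically) 0 or 1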
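For part 2, a small check under the same assumptions that the ranks of $A$ and $A^TA$ agree (the rank-nullity step used above) and that $\mathrm{tr}(A^TA)$ equals the sum of squared entries:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))   # a generic rank-2, 5x3 matrix

    # rank(A) = rank(A^T A), equivalently Ker(A) = Ker(A^T A) by rank-nullity
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T @ A))

    # tr(A^T A) is the sum of squared entries, so tr(A^T A) = 0 forces A = 0
    print(np.isclose(np.trace(A.T @ A), np.sum(A**2)))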
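For part 3, a sketch of the mechanism in the proof: diagonalize a symmetric matrix $S$ (playing the role of $A - B$) as $S = O^TDO$ and check that the quadratic form at $x_i = O^Te_i$ recovers the diagonal entry $d_i$, so that if the form vanishes for every $x$, all $d_i$ must be 0:

    import numpy as np

    rng = np.random.default_rng(2)
    T = rng.standard_normal((4, 4))
    S = T + T.T                               # a generic real symmetric matrix

    d, Q = np.linalg.eigh(S)                  # S = Q @ diag(d) @ Q.T with Q orthogonal
    for i in range(4):
        x_i = Q[:, i]                         # the vector x_i with O x_i = e_i (here O = Q.T)
        print(np.isclose(x_i @ S @ x_i, d[i]))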
Problem 2
1. Let $A$ have distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ with corresponding eigenvectors $x_1, \ldots, x_n$. It follows that $Ax_i = \lambda_i x_i$ for $i = 1, \ldots, n$. Since $A$ commutes with $B$, $A(Bx_i) = BAx_i = B\lambda_i x_i = \lambda_i Bx_i$, which implies that $Bx_i$ is also an eigenvector of $A$ with eigenvalue $\lambda_i$ (when $Bx_i = 0$, the following analysis still holds). Because the eigenvalues are distinct, the eigenspace for $\lambda_i$ is the one-dimensional space spanned by $x_i$, so there exists $\mu_i$ such that $Bx_i = \mu_i x_i$. Hence $(\mu_i, x_i)$ is an eigenvalue-eigenvector pair of $B$ for each $i = 1, \ldots, n$. Let $O = [x_1, x_2, \ldots, x_n]$ and let $D$ be the diagonal matrix with diagonal elements $\mu_1, \mu_2, \ldots, \mu_n$. Since eigenvectors corresponding to distinct eigenvalues are linearly independent, $O$ is invertible; then $BO = OD$ and $B = ODO^{-1}$, so $B$ is diagonalizable. (A numerical check of both parts is sketched after this problem.)
2. Let $\Lambda$ be the diagonal matrix with diagonal elements $\lambda_1, \ldots, \lambda_n$, so that $A = O\Lambda O^{-1}$. Since $B = ODO^{-1}$, it is sufficient to prove that there exist coefficients $a_0, a_1, \ldots, a_{n-1}$ such that $D = a_{n-1}\Lambda^{n-1} + a_{n-2}\Lambda^{n-2} + \cdots + a_1\Lambda + a_0 I$, for then $B = ODO^{-1} = O\,p(\Lambda)\,O^{-1} = p(A)$ with $p(t) = a_{n-1}t^{n-1} + \cdots + a_1 t + a_0$. Since $D$ and $\Lambda$ are diagonal matrices, this holds exactly when $\mu_i = a_{n-1}\lambda_i^{n-1} + a_{n-2}\lambda_i^{n-2} + \cdots + a_1\lambda_i + a_0$ for $i = 1, \ldots, n$. Equivalently, there should exist a solution of the linear system
\[
\begin{pmatrix}
1 & \lambda_1 & \cdots & \lambda_1^{n-1} \\
1 & \lambda_2 & \cdots & \lambda_2^{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
1 & \lambda_n & \cdots & \lambda_n^{n-1}
\end{pmatrix}
\begin{pmatrix}
a_0 \\ a_1 \\ \vdots \\ a_{n-1}
\end{pmatrix}
=
\begin{pmatrix}
\mu_1 \\ \mu_2 \\ \vdots \\ \mu_n
\end{pmatrix},
\]
where the leftmost square matrix, denoted $V$, is a Vandermonde matrix with
\[
\det(V) = \prod_{1 \le i < j \le n} (\lambda_j - \lambda_i) \neq 0
\]
because $A$ has distinct eigenvalues. Thus $V$ is non-singular, the linear system has a unique solution, and the proof is complete.
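The first part can be illustrated numerically; the sketch below (assuming NumPy) uses a hypothetical commuting pair in which $B$ is a polynomial in $A$, so $AB = BA$ holds by construction, and checks that the eigenvector matrix of $A$ also diagonalizes $B$:

    import numpy as np

    rng = np.random.default_rng(3)
    S = rng.standard_normal((4, 4))
    A = S @ np.diag([1.0, 2.0, 3.0, 4.0]) @ np.linalg.inv(S)   # distinct eigenvalues
    B = 2 * A @ A - 3 * A + np.eye(4)                          # a polynomial in A, so AB = BA

    lam, O = np.linalg.eig(A)                 # columns of O are eigenvectors of A
    D = np.linalg.inv(O) @ B @ O              # should be (numerically) diagonal
    print(np.allclose(D, np.diag(np.diag(D)), atol=1e-8))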
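For the second part, a similar hypothetical example (a 3x3 pair sharing eigenvectors, again assuming NumPy) lets one solve the Vandermonde system explicitly and confirm that $B = a_0I + a_1A + a_2A^2$:

    import numpy as np

    rng = np.random.default_rng(4)
    S = rng.standard_normal((3, 3))
    lam = np.array([1.0, 2.0, 5.0])                       # distinct eigenvalues of A
    mu = np.array([7.0, -1.0, 4.0])                       # eigenvalues of B on the same eigenvectors
    A = S @ np.diag(lam) @ np.linalg.inv(S)
    B = S @ np.diag(mu) @ np.linalg.inv(S)                # shares eigenvectors with A, so AB = BA

    V = np.vander(lam, increasing=True)                   # rows (1, lam_i, lam_i^2)
    a = np.linalg.solve(V, mu)                            # coefficients a_0, a_1, a_2

    pA = a[0] * np.eye(3) + a[1] * A + a[2] * (A @ A)     # p(A)
    print(np.allclose(pA, B))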
Problem 3
1. Characteristic polynomial of $A$: $\det(A - \lambda I) = -\lambda^3 + 2\lambda^2 - \lambda = -\lambda(\lambda - 1)^2$. Therefore, the eigenvalues of $A$ are 0 and 1.
2. The algebraic multiplicity for $\lambda = 0$ is 1, and for $\lambda = 1$ it is 2.
3. The geometric multiplicity for $\lambda = 0$ is 1, and for $\lambda = 1$ it is 1.
4. A matrix is diagonalizable if and only if the algebraic multiplicity equals the geometric multiplicity for every eigenvalue. Since these multiplicities differ for $\lambda = 1$, the matrix $A$ is not diagonalizable.
5. For $Av_1 = 0$, $v_1 = (0, -1, 2)^T$; for $(A - I)v_2 = 0$, $v_2 = (1, -1, 5)^T$; for $(A - I)v_3 = v_2$, $v_3 = (0, 3, -5)^T$.
6. $P = \begin{pmatrix} 0 & 1 & 0 \\ -1 & -1 & 3 \\ 2 & 5 & -5 \end{pmatrix}$ and the rank of $P$ is 3. $J = P^{-1}AP = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$. (A numerical consistency check is sketched below.)
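As a consistency check of the quantities above (the matrix $A$ itself is not reproduced in this extract, so it is recovered here from the stated relation $J = P^{-1}AP$; a sketch assuming NumPy):

    import numpy as np

    P = np.array([[ 0.,  1.,  0.],
                  [-1., -1.,  3.],
                  [ 2.,  5., -5.]])
    J = np.array([[0., 0., 0.],
                  [0., 1., 1.],
                  [0., 0., 1.]])
    A = P @ J @ np.linalg.inv(P)              # recover A from J = P^{-1} A P

    I = np.eye(3)
    v1, v2, v3 = P[:, 0], P[:, 1], P[:, 2]
    print(np.round(np.linalg.eigvals(A), 6))                      # eigenvalues 0, 1, 1
    print(np.allclose(A @ (A - I) @ (A - I), np.zeros((3, 3))))   # -lambda(lambda-1)^2 annihilates A
    print(np.allclose(A @ v1, 0))                                 # A v1 = 0
    print(np.allclose((A - I) @ v2, 0))                           # (A - I) v2 = 0
    print(np.allclose((A - I) @ v3, v2))                          # (A - I) v3 = v2
    print(int(np.linalg.matrix_rank(P)))                          # rank(P) = 3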