Final Assessment - MATH0058
All questions must be attempted.
You must submit your solution as a single pdf file. The following options are allowed:
• Write all formulas and code together into one Jupyter Notebook and export to pdf. This may
not work on every platform where Jupyter is installed, so you should test it.
• Separately write the theory part into a Word or similar document, and add the code and its
output as screenshots. Then convert to pdf. This is slightly more complex but should work
on most setups.
Question 1
In this question we want to explore the effects of floating point accuracy and the sensitivity of
problems with respect to small perturbations.
Q1a State the definitions of backward error, forward error, and condition number. How are these
quantities connected? Which of these are due to the sensitivity of a problem and which are due to
a specific algorithm?
Q1b We consider the root finding problem $P(a, z) = 0$ with $P(a, z) = a_2 z^2 + a_1 z + a_0$.
Here, $a = [a_0 \; a_1 \; a_2]^T$ is the vector of coefficients of the polynomial. Let $z$ be a
solution of $P(a, z) = 0$ for given coefficients $a$. Show that if we perturb a single coefficient
$a_j$ by an amount $\Delta a_j$, then to first order the zero $z$ is perturbed by

$$\Delta z = -\frac{z^j \Delta a_j}{P'(z)}.$$

As a slightly more advanced question, try to derive this perturbation result for polynomials of
arbitrary order, not just degree 2. Based on the perturbation result, what is the resulting
condition number of a zero of $P$ with respect to a perturbation in the coefficient $a_j$? (Hint:
You need not give a formal proof. Consider that the condition number is the ratio of relative
forward and relative backward error.) What happens to the condition number if the polynomial has
two zeros that are close together? Comment on the ability to accurately compute the roots of
polynomials with nearby roots.
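As a sketch of the first-order expansion behind this result (an outline only, not the formal
argument expected in your answer): since $\partial P / \partial a_j = z^j$, perturbing both the
coefficient and the root and expanding gives

$$0 = P(a + \Delta a_j e_j,\, z + \Delta z) = \underbrace{P(a, z)}_{=0} + z^j \Delta a_j + P'(z)\, \Delta z + O(\Delta^2),$$

and solving for $\Delta z$ while dropping the higher-order terms yields the stated formula.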
Q1c We now want to visualize the effect of small perturbations in the coefficients. Implement the
following function.
[1]: def perturbation_roots(coeffs, eps=1E-10, nrepeats=10):
         """
         This function takes a polynomial
         z**n * coeffs[0] + ... + z**1 * coeffs[n-1] + coeffs[n]
         and computes roots of random perturbations of this polynomial using
         the np.roots function.

         The perturbation is a vector of normally distributed random numbers
         with 2-norm eps. The function does nrepeats repetitions and at the
         end returns a single one-dimensional Numpy array of type complex128
         that contains all nrepeats * n roots of all perturbed polynomials.
         """
         # Delete the following line to implement your own code.
         pass
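For illustration, here is a minimal sketch of one possible implementation (an assumption about
the intended approach: draw one Gaussian vector per repetition and rescale it to 2-norm eps
before adding it to the coefficients; this is not necessarily the model solution):

[ ]: import numpy as np

     def perturbation_roots(coeffs, eps=1E-10, nrepeats=10):
         coeffs = np.asarray(coeffs, dtype=np.float64)
         all_roots = []
         for _ in range(nrepeats):
             noise = np.random.randn(coeffs.size)
             noise *= eps / np.linalg.norm(noise)  # rescale to 2-norm eps
             all_roots.append(np.roots(coeffs + noise))
         # One flat complex128 array holding the nrepeats * n roots.
         return np.concatenate(all_roots).astype(np.complex128)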
Once you have implemented your function you should test it with the famous Wilkinson
polynomial. It is defined as $P(z) = \prod_{j=1}^{20} (z - j)$. Hence, its zeros are the integers
from 1 to 20. Use the perturbation_roots function to generate a single plot that shows
all the roots of this polynomial and of its perturbations for the value eps=1E-10. For this
you will need the coefficients of the Wilkinson polynomial. You can find them, for example, at
https://blogs.mathworks.com/cleve/2013/03/04/wilkinsons-polynomials/, a blog post on the
Wilkinson polynomial by Cleve Moler (the inventor of Matlab). We also give them below. The
perturbed roots should be plotted as blue crosses; also plot the exact zeros of the Wilkinson
polynomial as red circles.
Numerically evaluate the condition number of the root $z = 15$ with respect to perturbations in
each coefficient $a_j$ separately, and print the results. Interpret what you see in the graph
with respect to the computed condition numbers. To evaluate the derivative you can use the Numpy
function numpy.polyder. (A usage sketch follows the coefficient list below.)
[150]: # The following code gives the coefficients of the leading order monomial
       # first, i.e. P(z) = z^{20}...
wilkinson_coeffs = [
1,
-210,
20615,
-1256850,
53327946,
-1672280820,
40171771630,
-756111184500,
11310276995381,
-135585182899530,
1307535010540395,
-10142299865511450,
63030812099294896,
-311333643161390640,
1206647803780373360,
-3599979517947607200,
8037811822645051776,
-12870931245150988800,
13803759753640704000,
-8752948036761600000,
2432902008176640000,
]
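For illustration, a sketch of how the plot and the condition numbers could be produced
(assuming the perturbation_roots sketch above; the formula
$\kappa_j = |a_j z^{j-1} / P'(z)|$ is one natural reading of the perturbation result from Q1b):

[ ]: import numpy as np
     import matplotlib.pyplot as plt

     coeffs = np.array(wilkinson_coeffs, dtype=np.float64)

     # Perturbed roots as blue crosses, exact zeros 1..20 as red circles.
     perturbed = perturbation_roots(coeffs, eps=1E-10, nrepeats=10)
     plt.plot(perturbed.real, perturbed.imag, "bx", label="perturbed roots")
     plt.plot(np.arange(1, 21), np.zeros(20), "ro", fillstyle="none",
              label="exact zeros")
     plt.xlabel("Re(z)")
     plt.ylabel("Im(z)")
     plt.legend()
     plt.show()

     # Condition number of the root z = 15 w.r.t. each coefficient a_j.
     # coeffs[k] multiplies z**(20 - k), so a_j = coeffs[20 - j].
     z = 15.0
     dP = np.polyval(np.polyder(coeffs), z)
     for j in range(21):
         kappa = abs(coeffs[20 - j] * z**(j - 1) / dP)
         print(f"condition number w.r.t. a_{j}: {kappa:.3e}")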
Question 2
Let $A \in \mathbb{R}^{n \times n}$ be symmetric and positive definite, that is $A = A^T$ and all
eigenvalues of $A$ are positive. In this question we want to investigate a special variant of the
LU Decomposition, called the Cholesky Decomposition, which exists for symmetric positive definite
matrices.
The Cholesky Decomposition is defined as

$$A = C^T C$$

for $C$ a nonsingular upper triangular matrix with positive entries on its diagonal.
Q2a Show that the Cholesky Decomposition exists if and only if $A$ is positive definite. In order
to show that the Cholesky Decomposition exists if $A$ is positive definite, proceed by induction.
Assume that the Cholesky Decomposition exists for matrices of dimension $n - 1$ and show that
you can construct a Cholesky Decomposition for a matrix of dimension $n$ of the form

$$\begin{bmatrix} \hat{A} & b \\ b^T & \alpha \end{bmatrix}
= \begin{bmatrix} \hat{C}^T & 0 \\ c^T & \delta \end{bmatrix}
  \begin{bmatrix} \hat{C} & c \\ 0 & \delta \end{bmatrix}$$

with $\delta > 0$, where $\hat{A} = \hat{C}^T \hat{C}$ is a Cholesky Decomposition of dimension
$n - 1$. Be careful to also mention the induction start. In order to show that $\delta > 0$ you
may find the following formula for the determinant of a block matrix helpful:

$$\det \begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A) \det(D - C A^{-1} B).$$
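For orientation, multiplying out the block factorization (a sketch of the algebra only, assuming
$\hat{C}$ is nonsingular) gives the conditions

$$\hat{C}^T \hat{C} = \hat{A}, \qquad \hat{C}^T c = b, \qquad c^T c + \delta^2 = \alpha,$$

so $c$ follows from a triangular solve and $\delta = \sqrt{\alpha - c^T c}$; the point of the
question is to show that $\alpha - c^T c > 0$.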
Q2b Use the Singular Value Decomposition to show that $\|A\|_2 = \|C\|_2^2$.
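As a possible starting point (a sketch, writing the SVD as $C = U \Sigma V^T$):

$$A = C^T C = V \Sigma^2 V^T, \qquad \text{hence } \|A\|_2 = \sigma_{\max}(C)^2 = \|C\|_2^2.$$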
Q2c From the lectures you know that the LU Decomposition of a matrix can have a large backward
error if the growth factor $\rho$ is large. Based on this, give a heuristic argument why you
expect the Cholesky Decomposition to be backward stable.
Q2d The decomposition in Q2a suggests an algorithm for computing the Cholesky Decomposition:
start at the upper left element $a_{11}$ of $A$ and successively compute the Cholesky
Decomposition of larger and larger leading submatrices until you have computed the Cholesky
Decomposition of the whole matrix.
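For illustration, a minimal sketch of this growing-submatrix scheme (an assumption about the
intended algorithm, not a model solution; at step k it solves $\hat{C}^T c = b$ for the new
column and sets $\delta = \sqrt{\alpha - c^T c}$, as in Q2a):

[ ]: import numpy as np

     def cholesky(A):
         A = np.asarray(A, dtype=np.float64)
         n = A.shape[0]
         C = np.zeros((n, n))
         C[0, 0] = np.sqrt(A[0, 0])  # induction start: the 1x1 case
         for k in range(1, n):
             # New column c from the triangular solve C_hat^T c = b.
             c = np.linalg.solve(C[:k, :k].T, A[:k, k])
             C[:k, k] = c
             # New diagonal entry delta = sqrt(alpha - c^T c).
             C[k, k] = np.sqrt(A[k, k] - c @ c)
         return C

     # Quick check on a random symmetric positive definite matrix.
     B = np.random.randn(5, 5)
     A = B.T @ B + 5 * np.eye(5)
     C = cholesky(A)
     print(np.allclose(C.T @ C, A))  # expect True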