Description
In this computing assignment, you will study the behaviour of floating-point errors when the Gaussian elimination (GE) algorithm with partial pivoting (as implemented in Matlab’s “backslash” command) is applied to random matrices of widely differing sizes.
You have already seen in class that when solving systems with small matrices, the errors remain small. However, even for state-of-the-art implementations of GE, these errors can become significant when the matrix is very large. The purpose of CA3 is to answer the question: How large is “large”?
To construct a random matrix system for which the exact solution (or at least a pretty good approximation of it) is known, we’ll make use of a simple trick:
• Construct the N × N matrix A with random entries chosen from the interval [−1, 1].
• Let x = [1, 1, . . . , 1]^T be an N-vector of ones – this is our exact solution.
• Compute the right-hand side corresponding to any particular choice of A to be b = Ax.
Then, we compute the approximate solution y = A \ b using Matlab’s backslash operator. To measure
the error between x and y, let
δ = max_{i=1,…,N} |x_i − y_i|,
which is the maximum componentwise difference between the two N-vectors†.
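In NumPy terms, one trial of this construction can be sketched as follows (this is an illustrative analogue, not the posted Matlab code; `single_trial` is a made-up name, and `np.linalg.solve`, which calls LAPACK’s GE with partial pivoting, stands in for the backslash operator):

```python
import numpy as np

def single_trial(N, rng=None):
    """One trial: build a random system whose exact solution is all ones,
    solve it, and return the maximum componentwise error delta."""
    rng = np.random.default_rng() if rng is None else rng
    A = rng.uniform(-1.0, 1.0, size=(N, N))  # random entries in [-1, 1]
    x = np.ones(N)                           # exact solution x = [1, ..., 1]^T
    b = A @ x                                # right-hand side b = A x
    y = np.linalg.solve(A, b)                # approximate solution (GE w/ pivoting)
    return np.max(np.abs(x - y))             # delta = max_i |x_i - y_i|

delta = single_trial(100)
```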
Because A is a random matrix, no single A gives a good measure of the actual error in GE; instead,
we must run this calculation for a number of trials with different realizations of A in order to get a
sense for the representative error.
For this purpose, let M be the number of trials and suppose that the solution error for the k-th trial is δ^(k). Then we can define the mean or average error as
EN = (1/M) (δ^(1) + δ^(2) + · · · + δ^(M)),
which should be a better measure of error for an “average” random matrix A.
To help you get started, I have posted the simple code GERandom.m on Canvas, which performs
the computations just described for given values of M and N. The code determines the mean error
EN over M random trials and then generates a scatter plot showing the error in each trial.
Note that
this plot uses a log scale for error (using Matlab’s semilogy command) so that large changes in the
error magnitude are easier to see. Experiment with this code and observe how increasing N leads to
an increase in the mean error EN .
†You’ll see later in section 3b that this is just the vector max-norm or inf-norm, denoted δ = ‖x − y‖∞.
This leads us to the goal of CA3, which is . . .
GOAL: To determine the size of the matrix N∗ for which the mean error in Gaussian elimination reaches EN∗ ≈ 1. In other words, how large does a random matrix A have to be before the round-off errors in GE are the same order of magnitude as the solution x? Or, when does the relative error exceed 100%?
In practice, your computer will not have the processing power to find N∗ directly because these
matrices are simply too huge. Instead, you should compute the mean error EN for a sequence of more
“moderate” sized N values and plot the data. You will then observe that your data points lie along
a predictable curve, which you may then extrapolate to even larger values of N to estimate roughly
where the curve hits the value EN ≈ 1.
Here is a detailed list of what is required in your 1-page report:
(a) Modify the code GERandom.m to turn it into a function with this calling sequence
function EN = GERandom(N, M)
which should make your later calculations much easier. You can also remove any unneeded
output or plotting statements.
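As a rough guide, a NumPy analogue of the requested function might look like the sketch below (illustrative only — not the posted GERandom.m, and `ge_random` is a made-up name):

```python
import numpy as np

def ge_random(N, M, seed=None):
    """Mean max-norm solution error EN over M random N-by-N trials.
    NumPy analogue of the Matlab calling sequence EN = GERandom(N, M)."""
    rng = np.random.default_rng(seed)
    x = np.ones(N)                     # known exact solution of all ones
    deltas = np.empty(M)
    for k in range(M):
        A = rng.uniform(-1.0, 1.0, size=(N, N))
        y = np.linalg.solve(A, A @ x)  # solve A y = b with b = A x
        deltas[k] = np.max(np.abs(x - y))
    return deltas.mean()               # EN = (delta^(1) + ... + delta^(M)) / M

EN = ge_random(50, 20, seed=0)
```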
(b) Use your modified code to experiment with different N and M, choosing values that are as large
as possible without forcing your code to take too long to execute (limit yourself to 5–10 minutes
of clock time at most). You need to strike a balance: you want your number of trials M to be
large enough to get an accurate estimate of the average error, but you also want N as large as
possible to obtain an accurate extrapolation. Carefully justify your final choice of N and M.
(c) Plot your EN versus N for different values of N ranging between 5 and the maximum value you chose in (b). Use a log-log scale, which in Matlab can be done in two ways:
loglog(N, EN) or plot(log10(N), log10(EN))
where N and EN are vectors containing your computed data points.
(d) Observe that your data lies roughly along a straight line! This suggests seeking a linear function that fits your log-log data, or in other words finding constants p1 and p2 so that log10(EN) ≈ p1 log10(N) + p2. You can do this manually or else run the following Matlab command
p = polyfit(log10(N), log10(EN), 1)
which fits a polynomial of degree 1 to your log-log data, returning a 2-vector p containing the coefficients of the linear fit‡. Add this fitted line to your data plot and comment on how accurately it approximates the data.
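For comparison, the same degree-1 fit in NumPy, mirroring the Matlab polyfit call (the data here is synthetic, standing in for your measured (N, EN) pairs — the power law and constants are made up purely for illustration):

```python
import numpy as np

# Hypothetical stand-in data; your real (N, EN) pairs come from part (b).
N_vals = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
EN_vals = 1e-16 * N_vals**1.5           # assumed power law, illustration only

# Fit a straight line in log-log coordinates: log10(EN) ~ p1*log10(N) + p2.
p1, p2 = np.polyfit(np.log10(N_vals), np.log10(EN_vals), 1)
```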
(e) Estimate N∗ by extrapolating your linear function to the point where it hits the value EN = 1 (or
log10 EN = 0).
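Setting p1 log10(N∗) + p2 = 0 gives log10(N∗) = −p2/p1, i.e. N∗ = 10^(−p2/p1). In code (with hypothetical fit coefficients purely for illustration — substitute the values from your own fit):

```python
# Made-up fit coefficients; replace with the output of your own polyfit.
p1, p2 = 1.5, -16.0
N_star = 10.0 ** (-p2 / p1)  # matrix size where the fitted line hits EN = 1
```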
‡You’ll learn more in section 4c about the algorithm implemented in polyfit to perform this curve fitting.