CA2 – Explorations in Root-Finding
In this assignment, you will apply a combination of analytical techniques and root-finding algorithms to
study the roots of the nonlinear equation
f(x) = cos(x) + 1/(1 + e^{-2x}).
(a) Start by plotting f(x) on the interval x ∈ [−6π, 6π] and describe the overall behaviour of the function,
as well as the number and rough location of its roots. Use the “zoom” feature of Matlab’s plotting
window (or change the axis limits with set or axis commands) to make sure that you identify all
roots – you may have to increase your plotting point density in order to view f in sufficient detail!
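For reference, here is one minimal Matlab sketch of such a plot; the grid spacing of 0.01 and the axis limits are my own suggestions rather than requirements, so adjust them until you can see all of the roots clearly.

    % Plot f(x) = cos(x) + 1/(1 + e^{-2x}) on [-6*pi, 6*pi] with a fine grid
    % so that no sign changes are missed between plotting points.
    f = @(x) cos(x) + 1 ./ (1 + exp(-2*x));   % vectorised function handle
    x = -6*pi : 0.01 : 6*pi;                  % fine plotting grid
    plot(x, f(x), 'b-', x, 0*x, 'k--');       % f(x) together with the line y = 0
    xlabel('x'); ylabel('f(x)');
    axis([-6*pi, 6*pi, -2.5, 2.5]);           % change limits (or zoom) to inspect the roots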
(b) Investigate analytically what happens to the function for large |x| by computing the following two
limits of the exponential term in f(x):
lim_{x→−∞} 1/(1 + e^{-2x})    and    lim_{x→+∞} 1/(1 + e^{-2x}).
Use these results to determine two simpler “limit functions” that approximate the negative and positive “halves” of f(x):
• f−(x) for x < 0, which approximates f for large negative values of x,
• f+(x) for x > 0, which approximates f for large positive values of x.
Plot f±(x) on their respective half-intervals along with the original function f(x). Then derive analytically the exact values for all roots of f−(x) (for x < 0) and f+(x) (for x > 0). Add these roots to
your plot and comment on how well they seem to approximate the actual zeros of f(x).
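If it helps, the sketch below shows only the overlay mechanics; the handles fminus and fplus are placeholders that you must replace with the limit functions you derive, and the commented-out line indicates where the analytical roots would be marked.

    % Overlay the limit functions on the plot of f(x).  The handles fminus and
    % fplus are placeholders only -- replace them with your results from part (b).
    f      = @(x) cos(x) + 1 ./ (1 + exp(-2*x));
    fminus = @(x) f(x);                 % placeholder for the x -> -infinity limit function
    fplus  = @(x) f(x);                 % placeholder for the x -> +infinity limit function
    x  = -6*pi : 0.01 : 6*pi;
    xn = x(x < 0);  xp = x(x > 0);      % negative and positive half-intervals
    plot(x, f(x), 'b-'); hold on
    plot(xn, fminus(xn), 'r--', xp, fplus(xp), 'g--');
    % plot(rneg, zeros(size(rneg)), 'ko');   % mark the analytical roots here
    legend('f(x)', 'f_-(x)', 'f_+(x)'); hold off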
(c) Next, apply the bisection method to compute, to within an absolute error tolerance of 10^{-6}, the smallest† negative root of f(x) (call it x∗). Use the bisect2.m code from lectures and choose an initial bracket that is motivated by your plot from part (b); a generic bisection sketch is given after the footnotes below. Using an appropriate error measure, compare your computed root x∗ to the smallest root of f−(x) (call it x−) that you determined analytically in part (b). How well does x− approximate your bisection result x∗?‡
Use bisection to compute the next two negative roots of f(x) and compare them with the corresponding roots of f−(x). What do you notice about the accuracy of your approximations?
†By “smallest” I mean the smallest in magnitude or closest to zero.
‡Here I want you to think of the bisection result as your “exact” root, and the analytical result as the “approximate” root.
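For orientation only, here is a self-contained sketch of the bisection idea with the required tolerance; for the assignment itself you should call bisect2.m. The bracket [−2, −1] is just one choice that a plot like the one in part (b) suggests contains the smallest negative root; confirm the sign change on your own plot before using it.

    % Generic bisection sketch (the assignment should use bisect2.m from lectures).
    f   = @(x) cos(x) + 1 ./ (1 + exp(-2*x));
    a = -2;  b = -1;                 % assumed bracket for the smallest negative root
    tol = 1e-6;                      % absolute error tolerance on x
    if sign(f(a)) == sign(f(b))
        error('f must change sign on the initial bracket [a, b]');
    end
    while (b - a)/2 > tol
        c = (a + b)/2;               % midpoint of the current bracket
        if sign(f(c)) == sign(f(a))
            a = c;                   % the root lies in [c, b]
        else
            b = c;                   % the root lies in [a, c]
        end
    end
    xstar = (a + b)/2;               % approximate root, accurate to within tol
    fprintf('x* = %.8f, f(x*) = %.2e\n', xstar, f(xstar));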
(d) Repeat part (c) to compute the four smallest positive roots of f(x). Explain how you choose an appropriate initial bracket for each root, and compare your computed roots with the corresponding estimates x+ obtained from the roots of f+(x).
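One possible way to choose brackets more systematically than by eye, sketched below, is to scan a fine grid for sign changes of f and hand each resulting interval to bisection. The grid spacing is my own assumption; you may need to refine it locally if some sign changes lie very close together.

    % Scan a fine grid on (0, 4*pi] for sign changes of f(x); each row of
    % 'brackets' is a candidate interval [a, b] to pass to bisection.
    % The spacing 0.01 is only a suggestion -- a coarse grid can miss pairs
    % of roots that lie very close together.
    f  = @(x) cos(x) + 1 ./ (1 + exp(-2*x));
    x  = 0.01 : 0.01 : 4*pi;
    fx = f(x);
    k  = find(sign(fx(1:end-1)) ~= sign(fx(2:end)));   % indices where f changes sign
    brackets = [x(k); x(k+1)]';      % one bracket per detected sign change
    disp(brackets)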
(e) Consider the following fixed point iteration§:

x_{k+1} = g(x_k) = arccos(−1/(1 + e^{-2x_k})).    (∗)
Show that finding a fixed point of g(x) is equivalent to solving f(x) = 0. Use the code
fixedpt.m from lectures to approximate two roots:
• the smallest negative root from part (c) using an initial guess of x0 = −1.5, and
• the smallest positive root from part (d) using an initial guess of x0 = 3.0.
Compare with the results you obtained using the bisection method, making sure that you apply the
same stopping criterion for x – note that this may require modifying the fixedpt code!
Describe any unexpected behaviour you observe. Can you explain the convergence of your fixed
point iterations (perhaps with the help of a sketch)? What is the real problem with using the fixed
point function in (∗) to try to compute all roots of f(x)?
§The Matlab built-in function for the “arccos” or inverse cosine function is called acos.
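As with bisection, fixedpt.m from lectures is the code you should use and modify; the self-contained loop below is only a generic sketch showing where a stopping criterion on |x_{k+1} − x_k|, matching the bisection tolerance from part (c), would go.

    % Generic fixed point iteration sketch (use/modify fixedpt.m from lectures).
    g    = @(x) acos(-1 ./ (1 + exp(-2*x)));   % iteration function from (*)
    x    = -1.5;                     % initial guess x0
    tol  = 1e-6;                     % absolute tolerance on x, as in part (c)
    kmax = 100;                      % safeguard against non-convergence
    for k = 1:kmax
        xnew = g(x);
        if abs(xnew - x) <= tol      % stop when successive iterates agree to tol
            x = xnew;
            break
        end
        x = xnew;
    end
    fprintf('iterations = %d, x = %.8f\n', k, x);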