## Description

1 Introduction

We saw in the last lab that the interpolating polynomial could get worse (in the sense that values at

intermediate points are far from the function) as its degree increased. This means that our strategy of using

equally spaced data for high degree polynomial interpolation is a bad idea. In this lab, we will test other

possible strategies for spacing the data and then we will look at an alternative way to do interpolation with

a guarantee that the error will get smaller when we take more points.

2 Exercise 1

This lab will generate polynomial interpolants for a few different functions on different sets of points. We

will be comparing the accuracy of the interpolating polynomials, just as we did last lab. Instead of repeating

basically the same code all the time, it is more convenient to automate the process in a special code which

can test interpolation for different functions on different sets of points.

In this exercise, you will construct a utility function that takes as input func, the name of a function to be interpolated, and a set of points xdata on which to perform the interpolation. In the previous lab, the points xdata were uniformly distributed, but in this lab we will investigate non-uniform distributions. The interpolation error will be computed as the maximum error over a much larger number of test points. The function uses eval_lag.py and lagrangep.py from last lab.

The function looks like this:

```python
def test_poly_interpolate(func, xdata):
    """err = test_poly_interpolate(func, xdata)

    Utility function used for testing polynomial interpolation.

    Input:
      func is the function to be interpolated.
      xdata are the arguments where the function is to be evaluated.

    Output:
      err is the maximum difference between the function and its interpolant.
    """
    import matplotlib.pyplot as plt
    import numpy as np

    # Evaluate function at data locations.
    ydata = func(xdata)

    # Construct nval evenly spaced points xval, between xdata[0] and xdata[-1].
    # Evaluate yval at these points, using Lagrange interpolation.
    nval = 4001
    xval = ???
    yval = eval_lag(???)

    # Compute error, comparing interpolated to exact values.
    yexact = ???
    err = max(abs(yexact - yval)) / max(abs(yexact))

    # Optionally, plot data.
    plt.plot(xval, yval)
    plt.plot(xval, yexact)
    plt.show()

    return err
```

1. Starting from the above text, create a file test_poly_interpolate.py and complete the statements that have question marks in them.

2. Test test_poly_interpolate() for the Runge function f(x) = 1/(1 + x^2), defined by runge.py, on the interval [−π, π] with uniformly spaced points. Your results should match the following table:

Runge function, evenly spaced points [-pi,+pi]

ndata      err
    5      0.31327
   11      0.58457
   21      3.8607

3. Create a file exercise1.py to fill in the following table. Construct xdata containing ndata=5 evenly spaced points between -5 and +5 (this is not the same interval as above), and use test_poly_interpolate() to plot the interpolant and evaluate the error between the interpolating polynomial and the exact values of the Runge function.

Runge function, evenly spaced points [-5,+5]

ndata      err
    5      __________
   11      __________
   21      __________

After looking at the plotted comparisons between the Runge example function and its polynomial fit, you might guess that the poor fit is because the points are evenly spaced. Maybe they should be concentrated near the endpoints, so that the interpolant doesn't oscillate wildly there, or near the center, if the oscillations are caused by too many points near the endpoints. Let's examine these two hypotheses.

The strategy used in the next exercise is to use a function to change the distribution of points. That is, pick a nonlinear function f that maps the interval [−1, 1] into itself (we will use x^2 and x^(1/2)) and also pick an affine function g that maps the given interval, [−5, 5], into [−1, 1]. Then use the points x_k = g^(-1)(f(g(x̃_k))) = (g^(-1) ∘ f ∘ g)(x̃_k), where the x̃_k are uniformly distributed.
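As a concrete sketch of this recipe (assuming, for illustration, the affine map g(x) = x/5 with inverse g^(-1)(u) = 5u, and the sign-preserving square f(u) = sign(u)·|u|^2, which maps [−1, 1] into itself):

```python
import numpy as np

def g(x):
    # Affine map from [-5, 5] onto [-1, 1].
    return x / 5.0

def g_inv(u):
    # Inverse map from [-1, 1] back onto [-5, 5].
    return 5.0 * u

def f(u):
    # Sign-preserving square: maps [-1, 1] into itself.
    return np.sign(u) * np.abs(u) ** 2

xtilde = np.linspace(-5.0, 5.0, 11)   # uniformly distributed points
xk = g_inv(f(g(xtilde)))              # x_k = g^(-1)(f(g(xtilde_k)))

print(xk)   # endpoints stay at +-5; interior points are pulled toward 0
```

With this choice of f the points are concentrated near 0; choosing f(u) = sign(u)·sqrt(|u|) instead pushes them toward the endpoints.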

3 Exercise 2

We have been looking at evenly-spaced points between -5 and 5. One way to concentrate these points near zero is to recall that, for numbers of magnitude less than one, |x|^2 < |x|. Hence, if xdata represents a sequence of numbers uniformly distributed between -5 and 5, then the Python expression 5 * ( np.sign(xdata) * np.abs(xdata/5)**2 ) yields a similar sequence of numbers concentrated near 0. (The sign() and abs() functions are used to get the signs correct for negative values.)

Create a script exercise2.py that carries out the following tasks:

1. Construct 11 points uniformly distributed over [−5, 5] in a vector called xdata1. Now shift this data closer to 0 using the following transformation:

```python
xdata2 = 5 * ( np.sign(xdata1) * np.abs(xdata1 / 5)**2 )
```

Look at how the points have been transformed:

```python
plt.plot(xdata1, np.zeros(xdata1.shape), 'bo')
plt.plot(xdata2, np.ones(xdata2.shape), 'ro')
```

You do not need to send me this plot.

2. Use this transformation and the test_poly_interpolate() utility to fill in the following table for the Runge function with ndata points concentrated near 0.

Runge function, points concentrated near 0, [-5,+5]

ndata      err
    5      __________
   11      __________
   21      __________

Did this transformation improve the behavior of the error?

4 Exercise 3

Using a different transformation, we can concentrate the data points more towards the endpoints of the interval. We can do this using the function sign(x)·sqrt(|x|).

1. Create a file exercise3.py which carries out this experiment.

2. Start with ndata=11 to define equally spaced data points xdata1 over the interval [-5,+5]. Then use the transformation

```python
xdata2 = np.sign(xdata1) * 5 * np.sqrt(np.abs(xdata1 / 5))
```

3. Plot your points using the following command.

```python
plt.plot(xdata1, np.zeros(xdata1.shape), 'bo')
plt.plot(xdata2, np.ones(xdata2.shape), 'ro')
```

Please send me this plot.

4. Use this transformation and test_poly_interpolate() to fill in the following table for the Runge function with ndata points concentrated near the endpoints.

Runge function, points concentrated near endpoints, [-5,+5]

ndata      err
    5      __________
   11      __________
   21      __________

Has this transformation improved the error behavior?

You could ask why I chose x^2 and x^(1/2) out of the many possibilities. Basically, it was an educated guess. It turns out, though, that there is a systematic way to pick optimally-distributed (Chebyshev) points. It is also true that the Chebyshev points are closely related to trigonometric interpolation, discussed last time.

5 Chebyshev Points

We were able to improve the error behavior of our interpolation scheme using a transformation of our equally spaced data points. Our choice of the transformation x^(1/2) was just a lucky guess. However, it can be shown that there is, in general, a best choice of points, known as the Chebyshev points.


It can be shown that the interpolation error between a function f and its polynomial interpolant p at any point x is given by an expression of the form:

    f(x) − p(x) = [(x − x_1)(x − x_2)···(x − x_n) / n!] · f^(n)(ξ)    (1)

where ξ is an unknown nearby point. This is something like the error term for a Taylor series.

We can't do a thing about the factor f^(n)(ξ), because f is an arbitrary (smooth enough) function, but we can affect the magnitude of the product term. For instance, if we are only using a single data point (n = 1) in the interval [10, 20], then the very best choice for the data point would be x_1 = 15, because then the maximum absolute value of the expression

    (x − x_1) / 1!

would be 5. The worst choice would be to put x_1 at one of the endpoints, in which case we'd double the maximum value. This doesn't guarantee better results, but it improves the chance.
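A quick numerical check of that claim (a sketch; the grid size is arbitrary):

```python
import numpy as np

x = np.linspace(10.0, 20.0, 100001)   # fine grid covering [10, 20]

best = np.max(np.abs(x - 15.0))       # data point at the midpoint
worst = np.max(np.abs(x - 10.0))      # data point at the left endpoint

print(best, worst)   # -> 5.0 10.0
```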

For −1 ≤ x ≤ +1, the Chebyshev polynomial of degree n is given by

    T_n(x) = cos(n arccos x)    (2)

This formula doesn't look like it generates a polynomial, but the trigonometric formulas for sums of angles and the relation sin²θ + cos²θ = 1 can be used to show that it does.

These polynomials satisfy an orthogonality relationship and a three-term recurrence relationship:

    T_0(x) = 1
    T_1(x) = x                                  (3)
    T_n(x) = 2x T_{n−1}(x) − T_{n−2}(x)
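A quick numerical check (not part of the lab's required codes) that the recurrence (3) really reproduces the trigonometric formula (2):

```python
import numpy as np

def cheby_recurrence(n, x):
    # Evaluate T_n(x) using the three-term recurrence.
    t_prev = np.ones_like(x)          # T_0
    t_curr = np.array(x, copy=True)   # T_1
    if n == 0:
        return t_prev
    for _ in range(2, n + 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

x = np.linspace(-1.0, 1.0, 201)
for n in range(6):
    # Compare against T_n(x) = cos(n arccos x).
    assert np.allclose(cheby_recurrence(n, x), np.cos(n * np.arccos(x)))
```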

The facts that make Chebyshev polynomials important for interpolation are:

• The peaks and valleys of T_n are the smallest of all polynomials of degree n over [-1,1] with T_n(1) = 1.

• On the interval [-1,1], each polynomial oscillates about zero with peaks and valleys all of equal magnitude.

Thus, when {x_1, x_2, ..., x_n} in (1) are chosen to be the roots of T_n(x), the product term in (1) is proportional to T_n, and its maximum magnitude is minimized over all possible choices of interpolation points.

The Chebyshev points are the zeros of the Chebyshev polynomials. They can be found from (2):

    cos(n arccos x) = 0,
    n arccos x = (2k − 1)π/2,
    arccos x = (2k − 1)π/(2n),
    x = cos( (2k − 1)π/(2n) ).

For a given n, the Chebyshev points on the interval [a, b] can be constructed as follows:

1. Pick equally-spaced angles θ_k = (2k + 1)π/(2n), for k = 0, 1, ..., n − 1.

2. The Chebyshev points on the interval [a, b] are given as

    x_k = 0.5 (a + b + (a − b) cos θ_k).
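These two steps can be sketched in vectorized numpy (a sketch only; Exercise 4 below asks you to write your own cheby_points()):

```python
import numpy as np

def cheby_points_sketch(a, b, ndata):
    # Step 1: equally spaced angles theta_k = (2k + 1) pi / (2 ndata).
    k = np.arange(ndata)
    theta = (2 * k + 1) * np.pi / (2 * ndata)
    # Step 2: map the angles to points in [a, b].
    return 0.5 * (a + b + (a - b) * np.cos(theta))

pts = cheby_points_sketch(-1.0, 1.0, 3)
print(pts)   # three points, symmetric about 0
```

On [−1, 1] these points are exactly the zeros of T_3, as formula (2) predicts.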


6 Exercise 4

1. Write a code cheby_points.py with the signature:

```python
def cheby_points(a, b, ndata):
    """xdata = cheby_points(a, b, ndata)

    ... comments
    """
    xdata = ???
    return xdata
```

that returns in the vector xdata the values of the ndata Chebyshev points in the interval [a,b]. You can do this in 3 lines using vector notation:

```python
k = np.arange(ndata)
theta = (vector expression involving k)
xdata = (vector expression involving theta)
```

or you can use a for loop.

2. Create a file exercise4.py for the following experiments.

3. Compute the Chebyshev points over [-1,+1] for ndata=7.

4. Compute the Chebyshev points over [0,+1] for ndata=7.

5. Compute the Chebyshev points over [-5,+5] for ndata=5.

You should observe that the Chebyshev points are not uniformly distributed on the interval but they are

symmetrical about its center. It is easy to see from (2) that this fact is true in general.

7 Exercise 5

1. Create a file exercise5.py which carries out this experiment.

2. Use the Chebyshev points to interpolate the Runge function over the interval [-5,+5], compute the

interpolation error, and fill in the following table.

Runge function, Chebyshev points, [-5,+5]

ndata      err
    5      ________
   11      ________
   21      ________
   41      ________

8 Exercise 6

You will numerically examine the hypothesis that the Chebyshev points are the best possible set of interpolation points. You will do this by running many tests, each one using a randomly-generated set of interpolation

points. If the theory is correct, none of these tests should result in an error smaller than the error obtained

using the Chebyshev points.

1. Create a file interpolate_error.py by copying test_poly_interpolate.py and removing (or commenting out) all the plot statements. This new function should only compute the interpolation error.

2. The function np.random.rand(n) generates a vector filled with n random numbers uniformly distributed in the interval [0, 1]. But we are interested in the interval [-5,+5]. Why will the following statements be useful?

```python
x = 10.0 * (np.random.rand(19) - 0.5)
xdata = np.concatenate(([-5], x, [5]))
```

3. Use the following loop to test a number of different sets of 21 interpolation points. This loop should

take much less than a minute.

```python
err = np.zeros(500)
for k in range(0, 500):
    x = 10.0 * (np.random.rand(19) - 0.5)
    xdata = np.concatenate(([-5], x, [5]))
    err[k] = interpolate_error(runge, xdata)
```

4. Compute and report the largest and smallest observed values of err. How do they compare with the

error you found in the previous exercise for 21 Chebyshev points?

It is, in fact, possible for a randomly-generated set of points to yield a smaller error than the Chebyshev points when computed using test_poly_interpolate(). The Chebyshev points are not necessarily optimal for the Runge function – they are optimal over the set of all smooth functions. It is also possible because test_poly_interpolate() computes the error at only 4001 points.

The larger the number of trials, the more rigorous this kind of testing becomes. It is never, of course, a

proof, but it can be a way of discovering a counterexample.

9 Bracketing

Another approach to interpolation is to break the interval up into smaller pieces, perform an interpolation

over each subinterval, and then piece together the results into a piecewise interpolant.

We are going to consider interpolation using piecewise linear functions instead of polynomial functions. That means that, in order to evaluate the interpolating function, we must first know in which of the subintervals (pieces) x lies, and then compute the value of the function from the endpoints of that subinterval. This section addresses the issue of how to find the left and right endpoints of the subinterval in which x lies.

We will write a utility function to perform this task. It’s one of those things that’s very easy to describe,

and not quite so easy to program.

Suppose we are given a set of n abscissas, in increasing order,

    x_0 < x_1 < ··· < x_{n−1},

with function values y_k for k = 0, 1, ..., n − 1, that together define a piecewise linear function ℓ(x). In order to evaluate ℓ(x) for a given x, you need to know the index k so that x ∈ [x_k, x_{k+1}]. This situation is illustrated below, with n = 4 data values, making 3 subintervals. Our point of interest x ∈ [x_1, x_2], so k = 1.


[Figure: the data points (x_0, y_0), (x_1, y_1), (x_2, y_2), (x_3, y_3) joined by straight line segments, with the query point x marked inside the subinterval [x_1, x_2].]

Denote n by ndata and the x values by the vector xdata. We will assume that the components of xdata are strictly increasing. Now suppose that we are given a value xval and we are asked to find an integer named left_index so that the subinterval [ xdata[left_index], xdata[left_index+1] ] contains xval. The index left_index is that of the left endpoint of the subinterval, and the index left_index+1 is that of its right endpoint.

To be precise, we seek an integer value left_index so that one of the following cases holds.

Case 1: If xval is less than xdata[0], then regard xval as in the first subinterval and left_index=0; or,

Case 2: If xval is greater than or equal to xdata[ndata-1], then regard xval as in the final subinterval and left_index=ndata-2 (read that value carefully!); or,

Case 3: If xval satisfies xdata[k] <= xval and xval < xdata[k+1], then left_index=k.

Eventually, we will want to allow xval to be a vector of values, but we will start by simply assuming that xval is a scalar.
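For reference, the three cases can be cross-checked against numpy's built-in np.searchsorted (a sketch for checking your own code against, not a substitute for writing it):

```python
import numpy as np

def bracket_reference(xdata, xval):
    # side='right' places xval after equal entries, so xdata[k] <= xval < xdata[k+1]
    # yields index k; clipping to [0, ndata-2] handles Cases 1 and 2.
    ndata = len(xdata)
    idx = np.searchsorted(xdata, xval, side='right') - 1
    return int(np.clip(idx, 0, ndata - 2))

xdata = np.linspace(0.0, 4.0, 5)   # 0.0, 1.0, 2.0, 3.0, 4.0
print([bracket_reference(xdata, x) for x in [-1.0, 0.5, 2.0, 2.9, 3.1, 5.8]])
# -> [0, 0, 2, 2, 3, 3]
```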

10 Exercise 7

1. Write a code scalar_bracket.py to do the bracketing operation. The xdata vector lists the data values in ascending order, and xval is a scalar value which we want to bracket between two consecutive xdata values.

```python
def scalar_bracket(xdata, xval):
    """left_index = scalar_bracket(xdata, xval)

    ... comments
    """
    ndata = ???  # "number of xdata points"

    # Case 1
    if ("xval is rather low"):
        left_index = ???
    # Case 2
    elif ("xval is rather high"):
        left_index = ???
    # Case 3
    else:
        for k in range(0, ndata - 1):
            if ("xval does something"):
                left_index = ???
                break
        else:
            raise Exception('scalar_bracket: Failure!')

    return left_index
```

2. Create a file exercise7.py which will test your bracketing code.

3. To simplify things, use ndata=5 equally spaced values between 0.0 and 4.0.

4. Use your bracketing code to report the correct value of left_index for each of the following values: x = [-1.0, 0.5, 2.0, 2.9, 3.1, 5.8].

11 Exercise 8

You will see how the same task can be accomplished using vector statements. The code presented below is

written to be as efficient as possible for the case of very large vectors xval.

1. Copy the following to a file bracket.py.

```python
def bracket(xdata, xval):
    """left = bracket(xdata, xval)

    bracket() brackets a value.

    Input:
      xdata: an ascending array of data
      xval:  one or more values to be bracketed

    Output:
      left: left[i] is the index of the largest xdata less than or
            equal to xval[i].
            If xval[i] is smaller than the smallest xdata, left[i] is set to 0.
    """
    import numpy as np

    nval = len(xval)
    left = np.zeros(nval, dtype=int)

    i = np.argwhere(xval < xdata[1])
    left[i] = 0

    ndata = len(xdata)
    for k in range(1, ndata - 1):
        i = np.argwhere(xdata[k] <= xval)
        left[i] = k

    return left
```

2. Suppose that the letters a,b,c,...,z represent an ascending set of numbers. Suppose xval=[b,m,f,g,h,a]. Then we can ask, for every entry of xval, whether a certain condition is True or False. So what would be the vector of values resulting from the following?

```python
indices = (g <= xval) & (xval < p)
```


3. The following computation may seem confusing at first. But the first line sets all the entries of left_indices to 0, and the second line resets some of those entries. What are the values stored in left_indices after the following statements?

```python
left_indices = np.zeros(xval.size)
left_indices[(g <= xval) & (xval < p)] = 4
```

4. In bracket() there is the statement beginning "if ( np.any...". Explain how this statement is used to signal that something is wrong.

5. In bracket() the loop uses range(1, ndata-1), but scalar_bracket() uses range(0, ndata-1). Why do the two functions give the same results?

6. Write a code exercise8.py which calls bracket() to repeat the bracketing test of the previous exercise, but this time computing all the results with a single call. Summarize your results in a table such as this one.

xdata   =  0.0  1.0  2.0  3.0  4.0
xval    = -1.0  0.5  2.0  2.9  3.1  5.8
scalar  =    0    0    2    2    3    3
bracket = ____ ____ ____ ____ ____ ____

12 Piecewise Linear Interpolation

Now we are ready to consider piecewise linear interpolation. Our interpolating function will be defined by piecing together linear interpolants that go through each consecutive pair of data points. So the resulting interpolant is continuous but not everywhere differentiable. To handle values below the first data point, we'll just extend the first linear interpolant all the way to the left. Similarly, we'll extend the linear function in the last interval all the way to the right.

The graph of a piecewise linear function may not be smooth, but one thing is certain: it will never

oscillate between the data values. And the interpolation error is probably going to involve the size of the

intervals, and the norm of the second derivative of the function, both of which are usually easy to estimate.

As the number of points gets bigger, the interval size will get smaller, and the norm of the second derivative

won’t change, so we have good reason to hope that we can assert a convergence result:

Given an interval [a, b] and a function f(x) with a continuous second derivative over that interval, and any sequence ℓ_n(x) of piecewise linear interpolants to f with the property that h_n, the size of the maximum subinterval, goes to zero as n increases, the interpolants ℓ_n converge to f both pointwise and uniformly.

If C is a bound for the maximum absolute value of the second derivative of the function over the interval, then the pointwise interpolation error and the L∞ norm are bounded by C·h_n², while the L¹ norm is bounded by (b − a)·C·h_n², and the L² norm is bounded by √(b − a)·C·h_n².

Uniform convergence is convergence in the L∞ norm, and is a much stronger result than pointwise

convergence.

Thus, if the only thing you know is that your function has a bounded second derivative on [a, b], then you

are guaranteed that the maximum interpolation error you make decreases to zero as you make your intervals

smaller.
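A quick numerical illustration of the h_n² behavior (a sketch using numpy's built-in piecewise linear interpolant np.interp and f(x) = sin x on [0, π], rather than the codes from this lab): halving the subinterval size should roughly quarter the maximum error.

```python
import numpy as np

def max_plin_error(ndata):
    # Max piecewise linear interpolation error for sin on [0, pi],
    # measured on a fine grid of 4001 test points.
    xdata = np.linspace(0.0, np.pi, ndata)
    xval = np.linspace(0.0, np.pi, 4001)
    yval = np.interp(xval, xdata, np.sin(xdata))
    return np.max(np.abs(np.sin(xval) - yval))

e1 = max_plin_error(11)   # h = pi/10
e2 = max_plin_error(21)   # h = pi/20
print(e1 / e2)            # close to 4 = 2^2, consistent with C h^2
```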

Stop and think: The convergence result for piecewise linear interpolation is so easy to get. Is it really better than polynomial interpolation? Why did we have such problems with polynomial interpolation before? One reason is that the error result for polynomial interpolation couldn't be turned into a convergence result. Using 10 data points, we had an error estimate in terms of the 10-th derivative. Then using 20 points, we had another error estimate in terms of the 20-th derivative. These two quantities can't be compared easily, and for non-polynomial functions, successively higher derivatives can easily get successively and endlessly bigger in norm. The piecewise linear interpolation error bound involves one particular fixed derivative, so it was easy to drive the error to zero, because the other factors in the error term depended only on the subinterval size.

To evaluate a piecewise linear interpolant function at a point xval, we need to:

1. Determine the subinterval [ xdata[left_index], xdata[left_index+1] ] containing xval;

2. Determine the equation of the line which passes through the points (xdata[left_index], ydata[left_index]) and (xdata[left_index+1], ydata[left_index+1]);

3. Evaluate that line at xval.

The bracket() function from the previous exercise does the first of these tasks, but what of the second and third?

13 Exercise 9

Write a code eval_plin.py to evaluate piecewise linear interpolation, performing steps 2 and 3 above. Before you begin, first write down a general formula for the linear function y = f(x) which passes through the points (x_0, y_0) and (x_1, y_1).
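One standard form of that line is y = y_0 + (y_1 − y_0)(x − x_0)/(x_1 − x_0), sketched below (the helper name line_through is just for illustration):

```python
def line_through(x0, y0, x1, y1, x):
    # Value at x of the straight line through (x0, y0) and (x1, y1).
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(line_through(0.0, 1.0, 2.0, 5.0, 1.0))   # -> 3.0
```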

1. Write a file called eval_plin.py starting with the signature

```python
def eval_plin(xdata, ydata, xval):
    """yval = eval_plin(xdata, ydata, xval)

    ... comments
    """
    left_indices = ???
    yval = ???
    return yval
```

2. Use bracket() to find the vector left_indices.

3. Using data at left_indices and left_indices+1, evaluate yval, using the general formula for the linear interpolant.

4. Write a code exercise9.py which tests eval_plin() on the piecewise linear function f(x) = 3|x| + 1 on the interval [−2, 2] using the following sequence of commands.

```python
xdata = np.linspace(-1.0, 1.0, 7)
ydata = 3.0 * abs(xdata) + 1.0
xval = np.linspace(-2.0, 2.0, 4001)
yval = eval_plin(xdata, ydata, xval)
plt.plot(xdata, ydata, '*')
plt.plot(xval, yval)
```

This test procedure is very similar to the test_poly_interpolate() function we wrote earlier. Your plot should pass through the seven data points and consist of two straight lines forming a "V". Please send me a copy of your plot.

A piecewise linear function was chosen for testing in the last part of this exercise for both theoretical and

practical reasons. You are fitting a piecewise linear function to a function that is already piecewise linear

and both the original and the fitted functions have breaks at x = 0, so the two functions will agree. This

makes it easy to see that the interpolation results are correct.
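The same idea can be sanity-checked with numpy's built-in np.interp in place of your eval_plin() (restricted to [−1, 1], since np.interp clamps rather than extrapolates outside the data):

```python
import numpy as np

xdata = np.linspace(-1.0, 1.0, 7)       # includes the break point x = 0
ydata = 3.0 * np.abs(xdata) + 1.0

xval = np.linspace(-1.0, 1.0, 1001)
yval = np.interp(xval, xdata, ydata)

# f is itself piecewise linear with its break at a data point, so the
# interpolant reproduces it exactly, up to roundoff.
err = np.max(np.abs(yval - (3.0 * np.abs(xval) + 1.0)))
print(err)
```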


14 Exercise 10

You will examine convergence of piecewise linear interpolants to the Runge function we have considered

before.

1. Copy the file test_poly_interpolate.py to a file called test_plin_interpolate.py. Change the signature to be

```python
def test_plin_interpolate(func, xdata):
    """err = test_plin_interpolate(func, xdata)

    ... comments
    """
    err = ???
    return err
```

and change from using polynomial interpolation with eval_lag() to piecewise linear interpolation with eval_plin().

2. Consider the same Runge function we used above. Over the interval [-5,5], use ndata=5 equally spaced data points, and use test_plin_interpolate() to plot the interpolant and evaluate the interpolation error. Please send me this plot.

3. Repeat the evaluation for progressively finer meshes and fill in the following table. You do not need

to send me the plots that are generated. Warning: The final few of these tests should take a few

seconds, but if you have written code that happens to be inefficient they can take much longer. You

will not be graded on the efficiency of your code.

Runge function, Piecewise linear, uniformly-spaced points, [-5,+5]

ndata      error
    5      ________
   11      ________
   21      ________
   41      ________
   81      ________
  161      ________
  321      ________
  641      ________

15 Error rates

The errors in the table decrease as ndata increases, as they should. But decreasing error is only half the story: we are interested in the rate at which the errors decrease. You know from the lectures that, generally, errors tend to be bounded by C·h^p, where C is a constant, often related to a derivative of the function being interpolated, h is approximately proportional to the subinterval size (in the uniform case, h = 1/ndata), and p is an integer, often a small integer.

One way to estimate the rate errors decrease is to plot log(error) versus log h. From the general bound,

log(error) ≈ log C + p log h for small enough h, so log(error) versus log h should yield a curve that becomes

straight as h becomes small. Furthermore, plotting log(error) versus log h is the same thing as plotting error

versus h as a log-log plot.

It is possible to visually estimate the rate of error decrease from a log-log plot. To do so, simply plot

lines with known slope q until you see one that is roughly parallel to the error plot. You will try this in the

following exercise. You could also pick points on the plot and estimate the slope of the line directly, but it

is easier to use visual comparison.


Another way to estimate the rate of error decrease is to successively halve the subinterval lengths. You may note that the ndata values chosen above result in subintervals halving in length. If the error is roughly proportional to C·h^p, halving h should reduce the error by a factor of (1/2)^p. Thus, |Max Error(321)| / |Max Error(641)| should be roughly 2^p for some integer p.
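The halving idea can be sketched with synthetic errors manufactured to follow err = C·h² exactly (illustration only; your measured errors from the previous exercise go in their place):

```python
import numpy as np

h = 2.0 ** -np.arange(1, 7)     # subinterval sizes, each half the previous
err = 0.3 * h ** 2              # synthetic errors obeying err = C h^2

ratios = err[:-1] / err[1:]     # each halving multiplies the error by (1/2)^p
p_est = np.log2(ratios)         # so log2 of the ratio recovers p

print(p_est)                    # -> [2. 2. 2. 2. 2.]
```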

16 Exercise 11

This exercise investigates the behavior of the errors in the table in the previous exercise.

1. Create a file exercise11.py which will be used to analyze the relationship between interval size h = 10/(ndata − 1) and maximum error e, as computed in the previous exercise.

2. Compute the ratios e(5)/e(11), e(11)/e(21), e(21)/e(41), and so on. Do these ratios appear to approach 2^p for some integer p? If so, what is your estimate of the value of p?

3. Use the plt.loglog() function to create a log-log plot of the values of h (x axis) versus e (y axis). Your points should roughly fall along a straight line on the plot. Specify that the line is to be drawn in red, so your command might be

```python
plt.loglog(hdata, edata, 'r-')
```

4. As part of this plot, plot three additional lines, and specify the colors of the lines:

   • y = h, 'c-' (cyan line)
   • y = h^2, 'g-' (green line)
   • y = h^3, 'm-' (magenta line)

5. Which of the three additional lines most closely matches the slope of your error plot, particularly where

h is the smallest? Does this correspond to the error analysis discussed earlier?
