## Description

1. (Modified from problem 6.23(a)). Let $X_1, \ldots, X_n$ be a random sample from a uniform distribution on the interval $(\theta, 2\theta)$, $\theta > 0$. That is, the $X_i$'s are iid with pdf $f_\theta(x) = \frac{1}{\theta}\,\mathbf{1}\{\theta < x < 2\theta\}$ for $\theta > 0$.

   (a) Find a minimal sufficient statistic for $\theta$.

   (b) Is the minimal sufficient statistic in part (a) complete? Justify your answers.

2. (Modified from Ex 6.5 of our text; also see Problem 4 of HW#6). Assume that $X_1, \ldots, X_n$ are independent random variables with pdfs
$$f(x_i \mid \theta) = \begin{cases} \dfrac{1}{3i\theta}, & \text{if } -i(\theta - 1) < x_i < i(2\theta + 1), \\[4pt] 0, & \text{otherwise}, \end{cases} \qquad i = 1, \ldots, n,$$
where $\theta > 0$. Let $T(X)$ be the (one-dimensional) minimal sufficient statistic for $\theta$ that you found in HW#6 (also see the solution set of Problem 4 of HW#6). Is this minimal sufficient statistic $T(X)$ complete? Justify your answers.

3. (Modified from problem 7.37 of our text). Let $X_1, \ldots, X_n$ be a random sample from a uniform distribution on the interval $(-\theta, 2\theta)$, $\theta > 0$. That is, the $X_i$'s are iid with pdf $f_\theta(x) = \frac{1}{3\theta}\,\mathbf{1}\{-\theta < x < 2\theta\}$ for $\theta > 0$.

   (a) Find a minimal sufficient statistic for $\theta$.

   (b) Is the minimal sufficient statistic in part (a) complete? Justify your answers.

4. (6.20(b)-(d)). For each of the following pdfs, let $X_1, \ldots, X_n$ be iid observations. Find a complete sufficient statistic, or show that one does not exist. For parts (b)-(d), please feel free to use Theorem 6.2.25 on page 288 of our text.

   (b) $f_\theta(x) = \dfrac{\theta}{(1+x)^{1+\theta}}$, $\ 0 < x < \infty$, $\ \theta > 0$

   (c) $f_\theta(x) = \dfrac{(\log \theta)\,\theta^x}{\theta - 1}$, $\ 0 < x < 1$, $\ \theta > 1$

   (d) $f_\theta(x) = e^{-(x-\theta)} \exp\!\left(-e^{-(x-\theta)}\right)$, $\ -\infty < x < \infty$, $\ -\infty < \theta < \infty$

5. (Motivated from problems 6.30 and 7.55(b) of our text). Let $X_1, \ldots, X_n$ be a random sample from the pdf $f_\theta(x) = e^{-(x-\theta)}$ for $x > \theta$, where $-\infty < \theta < \infty$.

   (a) Show that $X_{(1)} = \min_i X_i$ is a complete sufficient statistic.

   (b) Use Basu's Theorem to show that $X_{(1)}$ and $S^2$ are independent. Recall that $S^2 = \dfrac{\sum_{i=1}^{n} (X_i - \bar{X}_n)^2}{n-1}$.

6. (Modified from problem 6.3 of our text). Let $X_1, \ldots, X_n$ be a random sample from the pdf
$$f(x \mid \mu, \sigma) = \frac{1}{\sigma}\, e^{-(x-\mu)/\sigma}, \qquad \mu < x < \infty,\ \ 0 < \sigma < \infty.$$
Find a (two-dimensional) minimal sufficient statistic $T(X) = (T_1, T_2)$ for $(\mu, \sigma)$ such that $T_1 = T_1(X_1, \ldots, X_n)$ and $T_2 = T_2(X_1, \ldots, X_n)$ are independent.


## Hints

If you have already thought about each problem for at least 30 minutes, then please feel free to look at the hints. Otherwise, please try the problem first, as getting help from the hints takes away most of the fun.

Problem 1: Can you find two constants $C_1$ and $C_2$ (they might depend on $n$, but not on $\theta$) such that
$$E_\theta\!\left( \frac{1}{C_1} X_{(1)} - \frac{1}{C_2} X_{(n)} \right) = 0$$
for all $\theta$?
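If you want a quick numerical feel for this hint (purely illustrative, not part of the assignment), the Monte Carlo sketch below estimates $E_\theta[X_{(1)}]/\theta$ and $E_\theta[X_{(n)}]/\theta$ for Uniform$(\theta, 2\theta)$ data. Both ratios come out free of $\theta$, which is exactly what makes constants $C_1, C_2$ depending only on $n$ possible. The choices $n = 5$ and the two $\theta$ values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

for theta in (1.0, 4.0):
    # iid Uniform(theta, 2*theta) samples, reps rows of n observations
    x = rng.uniform(theta, 2 * theta, size=(reps, n))
    r1 = x.min(axis=1).mean() / theta   # estimates E[X_(1)] / theta
    rn = x.max(axis=1).mean() / theta   # estimates E[X_(n)] / theta
    print(f"theta={theta}: E[X(1)]/theta ~ {r1:.3f}, E[X(n)]/theta ~ {rn:.3f}")
# The printed ratios match across the two theta values: theoretically they are
# (n+2)/(n+1) and (2n+1)/(n+1), independent of theta.
```

Since the ratios do not depend on $\theta$, dividing each order statistic by its own ratio yields an expression with mean zero for every $\theta$, which is the starting point for the completeness question.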

Problem 2: To check completeness of $T(X)$, derive its distribution by noting that
$$\begin{aligned}
P_\theta(T(X) \le t)
&= P_\theta\left( \max\left\{ 1 - \min_{1 \le i \le n} \frac{X_i}{i},\ \frac{1}{2}\left( \max_{1 \le i \le n} \frac{X_i}{i} - 1 \right) \right\} \le t \right) \\
&= P_\theta\left( 1 - \min_{1 \le i \le n} \frac{X_i}{i} \le t \ \text{ and } \ \frac{1}{2}\left( \max_{1 \le i \le n} \frac{X_i}{i} - 1 \right) \le t \right) \\
&= P_\theta\left( \min_{1 \le i \le n} \frac{X_i}{i} \ge 1 - t \ \text{ and } \ \max_{1 \le i \le n} \frac{X_i}{i} \le 2t + 1 \right) \\
&= P_\theta\left( 1 - t \le \frac{X_i}{i} \le 2t + 1 \ \text{ for all } i = 1, \ldots, n \right) \\
&= \prod_{i=1}^{n} P_\theta\left( -(t - 1) \le \frac{X_i}{i} \le 2t + 1 \right) \\
&= \prod_{i=1}^{n} P_\theta\left( -i(t - 1) \le X_i \le i(2t + 1) \right)
\end{aligned}$$
for $t > 0$. What happens if $0 \le t \le \theta$? How about if $t < 0$ or if $t > \theta$? Do you see any connections with the problem in which $X_1, \ldots, X_n$ are iid Uniform$(0, \theta)$?
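As an optional numerical check of this derivation (an illustrative sketch; the values of $n$, $\theta$, and $t$ below are arbitrary), the simulation draws each $X_i$ from Uniform$(-i(\theta-1),\, i(2\theta+1))$, forms $T(X) = \max\{1 - \min_i X_i/i,\ \tfrac{1}{2}(\max_i X_i/i - 1)\}$, and compares the empirical CDF with the product formula.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 4, 100_000
i = np.arange(1, n + 1)

# X_i ~ Uniform(-i(theta-1), i(2*theta+1)): independent, not identically distributed
lo, hi = -i * (theta - 1), i * (2 * theta + 1)
x = rng.uniform(lo, hi, size=(reps, n))

ratio = x / i
t_stat = np.maximum(1 - ratio.min(axis=1), 0.5 * (ratio.max(axis=1) - 1))

for t in (0.5, 1.0, 1.5):
    empirical = (t_stat <= t).mean()
    # product of P(-i(t-1) <= X_i <= i(2t+1)), each factor a uniform interval overlap
    a = np.maximum(-i * (t - 1), lo)
    b = np.minimum(i * (2 * t + 1), hi)
    analytic = np.prod(np.clip(b - a, 0, None) / (hi - lo))
    print(f"t={t}: empirical {empirical:.3f} vs product formula {analytic:.3f}")
# For 0 <= t <= theta each factor reduces to t/theta, so the CDF is (t/theta)^n,
# matching the CDF of the maximum of n iid Uniform(0, theta) variables.
```

The closing comment is the connection the hint is pointing at: on $[0, \theta]$, $T(X)$ has the same distribution as $\max$ of $n$ iid Uniform$(0, \theta)$ observations.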

Problem 3: This problem is very different from Problem #1, as the minimal sufficient statistic turns out to be one-dimensional, as in Problem #2! When proving completeness, you need to first derive its probability density function, as in Problem #2.

Problem 4: For parts (b)-(d), please feel free to use Theorem 6.2.25 on page 288 of our text to find the complete sufficient statistic.
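For a concrete feel for part (b) (an illustrative sketch only; you should still verify the exponential-family reduction by hand with Theorem 6.2.25): writing $f_\theta(x) = \theta(1+x)^{-(1+\theta)}$ in exponential-family form points to the natural statistic $\log(1+X)$, and $\log(1+X)$ turns out to be Exponential with rate $\theta$, which the simulation below checks numerically via inverse-CDF sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n = 2.0, 500_000

# Inverse-CDF sampling for (b): F(x) = 1 - (1+x)^(-theta) on 0 < x < infinity,
# so X = (1 - U)^(-1/theta) - 1 for U ~ Uniform(0, 1)
u = rng.uniform(size=n)
x = (1 - u) ** (-1 / theta) - 1

# Natural statistic from the exponential-family form: log(1 + X),
# which should be Exponential(rate theta), i.e. mean 1/theta
y = np.log1p(x)
print("mean of log(1+X) ~", y.mean(), "(theory:", 1 / theta, ")")
```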

Problem 5: In part (a), what is the distribution of $T = X_{(1)}$? In part (b), use Basu's theorem.
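An optional simulation for Problem 5 (illustrative only, not a substitute for the Basu argument; the values of $\theta$ and $n$ are arbitrary): draw shifted-exponential samples, check that $X_{(1)}$ averages to $\theta + 1/n$ (consistent with $X_{(1)} = \theta + \mathrm{Exp}(n)$), and check that $X_{(1)}$ and $S^2$ are numerically uncorrelated. Basu's theorem of course gives full independence, not merely zero correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 100_000

# X_i = theta + Exp(1), so the minimum is theta + Exp(n)
x = theta + rng.exponential(1.0, size=(reps, n))
x_min = x.min(axis=1)
s2 = x.var(axis=1, ddof=1)   # sample variance S^2: location-invariant, hence ancillary

print("E[X(1)] ~", x_min.mean(), "(theory:", theta + 1 / n, ")")
print("corr(X(1), S^2) ~", np.corrcoef(x_min, s2)[0, 1])
# The correlation should be near 0: X_(1) is complete sufficient for theta
# while S^2 is ancillary, so Basu's theorem makes them independent.
```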

Problem 6: First, find a minimal sufficient statistic; any one-to-one function of it is also minimal sufficient. Second, you can guess the desired $T_1$ and $T_2$ by assuming for a moment that the parameter $\sigma$ is fixed and known, and by finding a complete sufficient statistic and an ancillary statistic for $\mu$ (note that they should be a one-to-one function of the minimal sufficient statistic you have found). Alternatively, $T_1$ and $T_2$ can be used to construct reasonable estimators (read: maximum likelihood estimators) of $\mu$ and $\sigma$, respectively. Third, assume, for a moment, that $\sigma$ is known; you can let $Z_i = X_i/\sigma$ and use Basu's theorem to show that your proposed $T_1$ and $T_2$ are independent. Since this holds for any $\sigma > 0$, you can conclude that this independence carries over even if $\sigma$ is unknown, as knowledge of $\sigma$ has no bearing on the distributions. Also see problem 6.31 of our text for more applications of Basu's theorem.
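If you want to sanity-check your eventual answer to Problem 6 numerically, the sketch below uses the hypothetical candidates $T_1 = X_{(1)}$ and $T_2 = \bar{X} - X_{(1)}$ (the maximum likelihood estimators of $\mu$ and $\sigma$ in this model; your own $T_1, T_2$ may differ by a one-to-one transformation) and verifies near-zero correlation across different values of $\sigma$, consistent with the Basu argument sketched above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 1.0, 8, 100_000

# Hypothetical candidates: T1 = X_(1) (MLE of mu), T2 = Xbar - X_(1) (MLE of sigma)
for sigma in (0.5, 2.0):
    # pdf (1/sigma) e^{-(x-mu)/sigma} on x > mu, i.e. mu + sigma * Exp(1)
    x = mu + sigma * rng.exponential(1.0, size=(reps, n))
    t1 = x.min(axis=1)
    t2 = x.mean(axis=1) - t1
    c = np.corrcoef(t1, t2)[0, 1]
    print(f"sigma={sigma}: corr(T1, T2) ~ {c:.4f}")
# Near-zero correlation for every sigma is consistent with Basu's theorem:
# with sigma known, T1 is complete sufficient for mu and T2 is ancillary for mu.
```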