ISyE 6412A Theoretical Statistics HW 5 solution


1. Let Y1, . . . , Yn be an iid random sample from a population with unknown mean θ and a finite
variance σ² > 0. In the problem of estimating θ ∈ Ω = R¹ (real numbers) with D = R¹ and
L(θ, d) = (θ − d)² (squared error loss), consider a general statistical procedure of the form
δa,b = aȲ + b, where a and b are general constants.

(a) Calculate the risk function of δa,b = aȲ + b.

(b) Show that aȲ + b is inadmissible whenever (i) a > 1; or (ii) a < 0; or (iii) a = 1, b ≠ 0.
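
For reference, a standard variance plus squared-bias decomposition (a sketch of the computation part (a) asks for, using Var(Ȳ) = σ²/n) gives

\[
R(\theta, \delta_{a,b}) = \mathrm{Var}_\theta(a\bar Y + b) + \bigl[E_\theta(a\bar Y + b) - \theta\bigr]^2
= \frac{a^2 \sigma^2}{n} + \bigl[(a-1)\theta + b\bigr]^2 .
\]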

2. In Problem 1, assume further that Yi ∼ N(θ, σ²), where σ² > 0 is known. Show that aȲ + b
is admissible if 0 ≤ a < 1.

Remark: Combining Problems 1 and 2 settles the admissibility of aȲ + b for the
normal distribution in all cases except Ȳ itself, which will be shown to be admissible in class.
Therefore, under the normality assumption, the procedure δa,b = aȲ + b is admissible if and
only if (i) 0 ≤ a < 1 or (ii) a = 1, b = 0.

3. In Problem 1, assume further that Yi ∼ Bernoulli(θ), i.e., Pθ(Yi = 1) = 1 − Pθ(Yi = 0) =
θ. Assume that the statistician decides to restrict consideration to procedures of the form
δa,b = aȲ + b and wants them to always yield decisions in D = [0, 1] regardless of the observed
value Y = (Y1, · · · , Yn). Show that the pair (a, b) has to be chosen to satisfy 0 ≤ b ≤ 1 and
−b ≤ a ≤ 1 − b.

4. In Problem 3 for the Bernoulli distribution, when 0 < b < 1 and −b < a < 0, is the procedure
δa,b = aȲ + b admissible? Note that the variance σ² = θ(1 − θ) depends on θ here.

5. In Problem 3 for the Bernoulli distribution, show that when 0 < b < 1 and 0 ≤ a < 1 − b, the
procedure δa,b = aȲ + b is admissible.

Remark: For completeness, for the Bernoulli distribution, it can be shown that
aȲ + b is admissible in the closed triangle {(a, b) : a ≥ 0, b ≥ 0, a + b ≤ 1}, and it is inadmissible
for the remaining values of a and b.

6. Let Y1, · · · , Yn be i.i.d. according to a N(0, σ²) density, and let S² = Y1² + · · · + Yn². We are
interested in estimating θ = σ² under the squared error loss L(θ, d) = (θ − d)² = (σ² − d)²
using the linear estimator δa,b = aS² + b, where a and b are constants.

Show that
(a) The risk of δa,b is given by

Rδa,b(σ²) = Eσ[σ² − (aS² + b)]² = 2na²σ⁴ + [(an − 1)σ² + b]².

(b) The constant estimator δa=0,b=0 = 0 is inadmissible.
Remark: this exercise illustrates the fact that constant estimators are not necessarily admissible.
Hints for problem 1 (b): Find a better procedure than δa,b = aȲ + b: try δ1,0 = Ȳ for case (i)
(a > 1) or case (iii) (a = 1 and b ≠ 0); and try some constant estimators for case (ii) (a < 0).

In particular, when a < 0, we have 1 − a > 1, and thus

[(a − 1)θ + b]² = [(1 − a)θ − b]² = (1 − a)²[θ − b/(1 − a)]² ≥ [θ − b/(1 − a)]².
From this, can you guess the desired constant estimator?

Hints for problem 2: Show that δa,b is a Bayes procedure if 0 < a < 1: can you find µ, τ²
(in terms of a, b) so that δa,b is Bayes with respect to the prior distribution θ ∼ N(µ, τ²)?
Meanwhile, when a = 0, note that δa=0,b is the only estimator with zero risk at θ = b, and
have we done similar questions in part (h) of HW#1?
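
One way to do the matching this hint asks for (a sketch, not the graded solution): under squared error loss the Bayes procedure is the posterior mean, which for a normal prior on a normal mean is linear in Ȳ, so its coefficients can be equated with a and b:

\[
\delta_\pi(Y) = \frac{n\tau^2}{n\tau^2 + \sigma^2}\,\bar Y + \frac{\sigma^2}{n\tau^2 + \sigma^2}\,\mu ,
\qquad\text{so}\qquad
\tau^2 = \frac{a\sigma^2}{n(1-a)}, \quad \mu = \frac{b}{1-a},
\]

which are legitimate prior parameters exactly when 0 < a < 1.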

Hints for problem 3: It suffices to make sure that δa,b ∈ [0, 1] when (Y1, · · · , Yn) = (0, · · · , 0)
or (1, · · · , 1); why? Hint: where does a linear function achieve its minimum and maximum
values?
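
Spelling this out slightly (a sketch): a linear function of Ȳ is monotone, so over Ȳ ∈ [0, 1] its extreme values are taken at the endpoints, and requiring both endpoint values to lie in [0, 1] is exactly the stated constraint:

\[
\delta_{a,b}\big|_{\bar Y = 0} = b \in [0,1]
\quad\text{and}\quad
\delta_{a,b}\big|_{\bar Y = 1} = a + b \in [0,1]
\quad\Longleftrightarrow\quad
0 \le b \le 1,\ \ -b \le a \le 1 - b .
\]
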
Hints for problem 4: Here the variance σ² = θ(1 − θ), and it is a special case of problem #1(b).
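
Following that pointer (a sketch, not the graded solution): when 0 < b < 1 and −b < a < 0, the constant b/(1 − a) lies in (0, b) ⊂ [0, 1], so it is an allowed procedure, and the inequality from the problem 1 hint shows it dominates δa,b:

\[
R(\theta, \delta_{a,b}) = \frac{a^2\theta(1-\theta)}{n} + \bigl[(a-1)\theta + b\bigr]^2
\;\ge\; \Bigl[\theta - \frac{b}{1-a}\Bigr]^2 = R\Bigl(\theta, \frac{b}{1-a}\Bigr),
\]

with strict inequality for every θ ∈ (0, 1), so δa,b is inadmissible.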

Hints for problem 5: If a = 0, what is the risk at θ = b? If 0 < a < 1 − b, what is the Bayes
solution relative to the prior distribution π(θ) = Beta(α, β) with α > 0 and β > 0?
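
For the second question (a sketch, using the α, β of the hint): the Bayes procedure under squared error loss is the posterior mean, which for a Beta(α, β) prior and Bernoulli data is linear in Ȳ and can be matched to δa,b:

\[
\delta_\pi(Y) = \frac{\alpha + n\bar Y}{\alpha + \beta + n}
= \frac{n}{\alpha+\beta+n}\,\bar Y + \frac{\alpha}{\alpha+\beta+n},
\qquad\text{so}\qquad
\alpha = \frac{nb}{a}, \quad \beta = \frac{n(1-a-b)}{a},
\]

and both are strictly positive exactly when a > 0, b > 0, and a + b < 1.
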
Hints for problem 6: In part (a), let Yi = σZi, where Zi ∼ N(0, 1). For the standard normal
distribution, we have

E(Zi) = 0, E(Zi²) = 1, E(Zi³) = 0, E(Zi⁴) = 3.

From this, can you find the mean and variance of Wi = Yi²? The question can be reduced to
the linear estimator δa,b = a(W1 + · · · + Wn) + b when estimating θ = Eθ(Wi) = σ² under the
squared error loss function.
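
Filling in the moment calculation this points to (a sketch): Wi = Yi² = σ²Zi², so

\[
E(W_i) = \sigma^2, \qquad
\mathrm{Var}(W_i) = \sigma^4\bigl[E(Z_i^4) - E(Z_i^2)^2\bigr] = 2\sigma^4,
\]

hence E(S²) = nσ² and Var(S²) = 2nσ⁴, and the variance plus squared-bias decomposition gives the risk formula in part (a).
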
In part (b), let b = 0, find the a that minimizes the risk function, and such an a will yield a better
procedure.
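
Carrying that minimization out (a sketch): with b = 0 the risk from part (a) is σ⁴[2na² + (an − 1)²], which is minimized at a = 1/(n + 2), and that choice already dominates the constant estimator 0:

\[
R_{\delta_{1/(n+2),\,0}}(\sigma^2) = \frac{2\sigma^4}{n+2} \;<\; \sigma^4 = R_{\delta_{0,0}}(\sigma^2)
\qquad\text{for all } \sigma^2 > 0 .
\]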