Suppose that [tex]E(\hat \theta_1) = E(\hat \theta_2) = \theta[/tex], [tex]V(\hat \theta_1) = \sigma_1^2[/tex], and [tex]V(\hat \theta_2) = \sigma_2^2[/tex]. Consider the estimator [tex]\hat \theta_3 = a \hat \theta_1 + (1-a) \hat \theta_2[/tex].

a. Show that [tex]\hat \theta_3[/tex] is an unbiased estimator for [tex]\theta[/tex].

b. If [tex]\hat \theta_1[/tex] and [tex]\hat \theta_2[/tex] are independent, how should the constant a be chosen in order to minimize the variance of [tex]\hat \theta_3[/tex]?

Answer:

Step-by-step explanation:

Given that:

[tex]E( \hat \theta _1) = \theta \ \ \ \ E( \hat \theta _2) = \theta \ \ \ \ V( \hat \theta _1) = \sigma_1^2 \ \ \ \ V(\hat \theta_2) = \sigma_2^2[/tex]

If we are to consider the estimator [tex]\hat \theta _3 = a \hat \theta_1 + (1-a) \hat \theta_2[/tex]

a. To show that [tex]\hat \theta_3[/tex] is unbiased, take its expectation and use the linearity of expectation:

[tex]E ( \hat \theta_3) = E ( a \hat \theta_1+ (1-a) \hat \theta_2)[/tex]

[tex]E ( \hat \theta_3) = aE ( \hat \theta_1) + (1-a) E ( \hat \theta_2)[/tex]

[tex]E ( \hat \theta_3) = a \theta + (1-a) \theta = \theta[/tex]

Since [tex]E(\hat \theta_3) = \theta[/tex] for every choice of a, [tex]\hat \theta_3[/tex] is an unbiased estimator of [tex]\theta[/tex].
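As a quick numerical sanity check (not part of the proof), here is a short Monte Carlo sketch in Python. The values of theta, sigma1, sigma2, and a are arbitrary illustrative choices, and the two estimators are simulated as independent normal draws purely for convenience:

import numpy as np

# Hypothetical values, chosen only for illustration
rng = np.random.default_rng(0)
theta, sigma1, sigma2, a = 5.0, 1.0, 2.0, 0.3
n = 1_000_000

# Simulate two independent unbiased estimators of theta
theta1 = rng.normal(theta, sigma1, n)
theta2 = rng.normal(theta, sigma2, n)

# Combined estimator: theta_hat_3 = a*theta_hat_1 + (1 - a)*theta_hat_2
theta3 = a * theta1 + (1 - a) * theta2

print(theta3.mean())  # close to theta = 5.0, consistent with unbiasedness

The average stays near [tex]\theta[/tex] for any choice of a; only the spread of [tex]\hat \theta_3[/tex] depends on a, which is what part b optimizes.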

b. If [tex]\hat \theta_1[/tex] and [tex]\hat \theta_2[/tex] are independent, the covariance term vanishes, so:

[tex]V(\hat \theta _3) = V (a \hat \theta_1+ (1-a) \hat \theta_2)[/tex]

[tex]V(\hat \theta _3) = a ^2 V ( \hat \theta_1) + (1-a)^2 V ( \hat \theta_2)[/tex]

Thus, to choose the constant a that minimizes the variance of [tex]\hat \theta_3[/tex], write the variance as a function of a:

[tex]V( \hat \theta_3) = a^2 \sigma_1^2 + (1-a)^2 \sigma^2_2[/tex]

Differentiating with respect to a and setting the derivative equal to zero:

[tex]\dfrac{d}{da} V( \hat \theta_3) = 2a \sigma_1^2 - 2(1-a) \sigma_2^2 = 0[/tex]

[tex]a (\sigma_1^2 + \sigma_2^2) = \sigma^2_2[/tex]

[tex]\hat a = \dfrac{\sigma^2_2}{\sigma^2_1+\sigma^2_2}[/tex]

The second-derivative test confirms that this critical point is a minimum:

[tex]\dfrac{d^2}{da^2} V( \hat \theta_3) = 2 \sigma_1^2 + 2 \sigma_2^2 > 0[/tex]

So [tex]V( \hat \theta_3)[/tex] is minimized when [tex]\hat a = \dfrac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}[/tex], i.e. when each estimator is weighted in inverse proportion to its variance.

In particular, [tex]a = \dfrac{1}{2}[/tex] when [tex]\sigma_1^2 = \sigma_2^2[/tex], so two equally precise estimators are weighted equally.
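To double-check the closed form, a small Python sketch (with hypothetical variances [tex]\sigma_1^2 = 1[/tex] and [tex]\sigma_2^2 = 4[/tex], chosen only for illustration) minimizes [tex]V(\hat \theta_3) = a^2 \sigma_1^2 + (1-a)^2 \sigma_2^2[/tex] over a grid of a values:

import numpy as np

# Hypothetical variances, chosen only for illustration
sigma1_sq, sigma2_sq = 1.0, 4.0
a_hat = sigma2_sq / (sigma1_sq + sigma2_sq)  # closed form: 4/5 = 0.8

# V(theta_hat_3) as a function of a, assuming independence
def var(a):
    return a**2 * sigma1_sq + (1 - a)**2 * sigma2_sq

# Grid search over a in [0, 1]
grid = np.linspace(0.0, 1.0, 100_001)
a_star = grid[np.argmin(var(grid))]

print(a_hat, a_star)  # both approximately 0.8

The grid minimizer agrees with [tex]\hat a = \dfrac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}[/tex], consistent with the derivation above.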