Answer:
Step-by-step explanation:
Given that:
[tex]E( \hat \theta _1) = \theta \ \ \ \ E( \hat \theta _2) = \theta \ \ \ \ V( \hat \theta _1) = \sigma_1^2 \ \ \ \ V(\hat \theta_2) = \sigma_2^2[/tex]
Consider the estimator [tex]\hat \theta _3 = a \hat \theta_1 + (1-a) \hat \theta_2[/tex], where a is a constant.
a. For [tex]\hat \theta_3[/tex] to be an unbiased estimator, we need [tex]E(\hat \theta_3) = \theta[/tex]. Indeed:
[tex]E ( \hat \theta_3) = E ( a \hat \theta_1+ (1-a) \hat \theta_2)[/tex]
[tex]E ( \hat \theta_3) = aE ( \hat \theta_1) + (1-a) E ( \hat \theta_2)[/tex]
[tex]E ( \hat \theta_3) = a \theta + (1-a) \theta = \theta[/tex]
so [tex]\hat \theta_3[/tex] is unbiased for every choice of the constant a.
b) If [tex]\hat \theta_1[/tex] and [tex]\hat \theta_2[/tex] are independent, then:
[tex]V(\hat \theta _3) = V (a \hat \theta_1+ (1-a) \hat \theta_2)[/tex]
[tex]V(\hat \theta _3) = a ^2 V ( \hat \theta_1) + (1-a)^2 V ( \hat \theta_2)[/tex]
Thus, the constant a that minimizes the variance of [tex]\hat \theta_3[/tex] can be determined from:
[tex]V( \hat \theta_3) = a^2 \sigma_1^2 + (1-a)^2 \sigma^2_2[/tex]
Using differentiation:
[tex]\dfrac{d}{da}V( \hat \theta_3) = 0 \implies 2a \sigma_1^2 - 2(1-a) \sigma_2^2 = 0[/tex]
⇒
[tex]a (\sigma_1^2 + \sigma_2^2) = \sigma^2_2[/tex]
[tex]\hat a = \dfrac{\sigma^2_2}{\sigma^2_1+\sigma^2_2}[/tex]
The second derivative is positive, which confirms this critical point is a minimum:
[tex]\dfrac{d^2}{da^2}V( \hat \theta_3) = 2 \sigma_1^2 + 2 \sigma_2^2 > 0[/tex]
So, [tex]V( \hat \theta_3)[/tex] is minimized when [tex]\hat a = \dfrac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}[/tex]
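For readers who want to double-check the calculus, here is a minimal symbolic sketch using sympy (the symbol names are my own, chosen purely for illustration):

```python
import sympy as sp

# a is the weight; s1, s2 stand for sigma_1 and sigma_2 (positive constants).
a, s1, s2 = sp.symbols('a sigma_1 sigma_2', positive=True)

# Variance of theta3_hat as derived above: a^2*sigma_1^2 + (1-a)^2*sigma_2^2
V = a**2 * s1**2 + (1 - a)**2 * s2**2

a_hat = sp.solve(sp.Eq(sp.diff(V, a), 0), a)[0]
print(a_hat)             # sigma_2**2/(sigma_1**2 + sigma_2**2)
print(sp.diff(V, a, 2))  # 2*sigma_1**2 + 2*sigma_2**2, positive => minimum
```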
In particular, [tex]\hat a = \dfrac{1}{2}[/tex] if [tex]\sigma_1^2 = \sigma_2^2[/tex], i.e. the simple average is optimal when the two estimators are equally precise.
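As an extra sanity check (not part of the solution above), a short Monte Carlo simulation illustrates both parts at once: the mean of [tex]\hat \theta_3[/tex] stays near [tex]\theta[/tex] for every a, and the variance is smallest at [tex]\hat a[/tex]. The normal sampling distributions and the values of [tex]\theta, \ \sigma_1, \ \sigma_2[/tex] below are arbitrary choices for illustration:

```python
import numpy as np

# Arbitrary illustrative values: theta, sigma_1, sigma_2 are not specified
# in the problem, and normality is an assumption made only for simulation.
theta, s1, s2, n = 5.0, 2.0, 1.0, 200_000
rng = np.random.default_rng(0)
t1 = rng.normal(theta, s1, n)       # draws of theta1_hat
t2 = rng.normal(theta, s2, n)       # independent draws of theta2_hat

a_opt = s2**2 / (s1**2 + s2**2)     # optimal weight; 0.2 for these values
for a in (0.25, a_opt, 0.75):
    t3 = a * t1 + (1 - a) * t2
    print(f"a = {a:.2f}: mean = {t3.mean():.3f}, var = {t3.var():.3f}")
# The mean stays near theta = 5 for every a (unbiasedness), while the
# variance is smallest at a_opt (about 0.80 versus 0.81 and 2.31 here).
```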