ASSIGNMENT 2
For each part of each question below, state whether the answer given by ChatGPT is correct or incorrect. To be correct, both the final answer and all workings/reasoning used to obtain it must be provided and correct. Otherwise the answer is incorrect.
If you decide that ChatGPT’s answer is correct, simply write “Correct”. For example, if you
think question 1 part (a) is correct, write
1.(a) Correct.
If you decide that ChatGPT’s answer is incorrect, identify the exact location where
ChatGPT makes its first error and write 1-2 sentences to explain why. Then provide the
correct answer, including all workings/reasoning. For example, if you think question 1 part
(a) is incorrect, write
1.(a) Incorrect. In the first equality ($E[X] = \int_{-\infty}^{\infty} x F_X(x)\,dx$), ChatGPT’s answer incorrectly uses the distribution function $F_X(x)$ instead of the density function $f_X(x)$. From that point, we obtain the correct answer as follows:
$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_0^1 x \times 1\,dx = \left[\frac{x^2}{2}\right]_0^1 = \frac{1}{2} - 0 = \frac{1}{2}.$$
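For readers who want to double-check this kind of integral numerically, here is a minimal Monte Carlo sketch in Python (our illustration, not part of the assignment): for $X \sim \mathrm{Uniform}(0, 1)$ the simulated mean should land near $1/2$.

```python
# Hypothetical sanity check: E[X] for X ~ Uniform(0, 1) is 1/2.
import numpy as np

rng = np.random.default_rng(0)             # seed fixed only for reproducibility
x = rng.uniform(0.0, 1.0, size=1_000_000)  # draws from Uniform(0, 1)
print(x.mean())                            # approximately 0.5
```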
To submit your assignment you should scan/photograph your answers and then upload to
Blackboard using the link provided. Answers may be handwritten or typed. Please make sure
that your answers are legible prior to submission.
1. Suppose that we have a random sample $X_1, X_2, \ldots, X_n$ from a distribution with mean $\mu$ and variance $\sigma^2$, and let $n > 5$, $m = n - 5$. Given the estimators
$$\hat{\mu}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \hat{\mu}_m = \frac{1}{m}\sum_{i=1}^{m} X_i,$$
we have
$$\frac{n(\hat{\mu}_n - \mu)^2}{\sigma^2} + \frac{m(\hat{\mu}_m - \mu)^2}{\sigma^2} \xrightarrow{d} \chi^2_2.$$
True or false? Why? (10 marks)
This statement is false.
To see why, let’s first consider the fact that the sum of squares of independent standard
normal random variables follows a chi-squared distribution with the degrees of freedom
equal to the number of variables being summed.
Given the estimators:
$$\hat{\mu}_n = \frac{1}{n}\sum_{i=1}^{n} X_i, \qquad \hat{\mu}_m = \frac{1}{m}\sum_{i=1}^{m} X_i$$
We want to evaluate the expression:
$$\frac{n(\hat{\mu}_n - \mu)^2}{\sigma^2} + \frac{m(\hat{\mu}_m - \mu)^2}{\sigma^2}$$
Notice that this expression resembles the sum of two terms, each representing the square of the difference between the sample mean and the population mean, scaled by $n$ and $m$ respectively, and then normalized by the variance $\sigma^2$.
However, the issue here is that $\hat{\mu}_n$ and $\hat{\mu}_m$ are not independent, as they share some common observations. Thus, we cannot directly apply the result for sums of squares of independent standard normal random variables.
Therefore, the expression does not converge in distribution to a chi-squared distribution with 2 degrees of freedom ($\chi^2_2$).
In summary, the statement is false because the provided expression does not have the convergence property stated.
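To see the dependence problem concretely, here is a minimal Monte Carlo sketch (our illustration, not part of the assignment; it assumes standard normal data, so $\mu = 0$ and $\sigma^2 = 1$, and the simulation sizes are arbitrary). A $\chi^2_2$ variable has variance 4, whereas the simulated statistic, whose two terms share $m = n - 5$ observations, has variance close to 8, matching $2Z^2$ for a single standard normal $Z$.

```python
# Hypothetical simulation sketch: draws are standard normal (mu = 0, sigma^2 = 1).
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1000, 20_000
m = n - 5
x = rng.standard_normal((reps, n))       # each row is one sample X_1, ..., X_n
mu_hat_n = x.mean(axis=1)                # mean of all n observations
mu_hat_m = x[:, :m].mean(axis=1)         # mean of the first m observations
t = n * mu_hat_n**2 + m * mu_hat_m**2    # the statistic, with mu = 0, sigma^2 = 1
print(t.mean(), t.var())                 # mean near 2, but variance near 8, not 4
```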
2. Given a random sample $X_1, X_2, \ldots, X_n$ from a uniform distribution with mean $\mu$ and variance $\sigma^2$ and
$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i,$$
we have $2\sqrt{n}(\bar{X} - \mu) \xrightarrow{d} N(0, 4\sigma^2)$. True or false? Why? (10 marks)
This statement is true.
Given a random sample $X_1, X_2, \ldots, X_n$ from a uniform distribution with mean $\mu$ and variance $\sigma^2$, the central limit theorem (CLT) tells us that the sample mean $\bar{X}$ converges in distribution to a normal distribution as $n$ approaches infinity.
Specifically, we have:
$$\sqrt{n}(\bar{X} - \mu) \xrightarrow{d} N(0, \sigma^2)$$
Multiplying both sides by 2 yields:
$$2\sqrt{n}(\bar{X} - \mu) \xrightarrow{d} N(0, 4\sigma^2)$$
Therefore, the statement is true. The sample mean $\bar{X}$ converges in distribution to a normal distribution with mean 0 and variance $4\sigma^2$, as the sample size $n$ grows.
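As a quick empirical check of this scaling (our illustration, not part of the assignment; Uniform(0, 1) data give $\mu = 1/2$ and $\sigma^2 = 1/12$, and the simulation sizes are arbitrary), the variance of $2\sqrt{n}(\bar{X} - \mu)$ should come out near $4\sigma^2 = 1/3$:

```python
# Hypothetical simulation sketch: Uniform(0, 1) data, so mu = 1/2, sigma^2 = 1/12.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1000, 20_000
x = rng.uniform(0.0, 1.0, size=(reps, n))
z = 2.0 * np.sqrt(n) * (x.mean(axis=1) - 0.5)   # 2 * sqrt(n) * (X_bar - mu)
print(z.var(), 4.0 / 12.0)                      # both approximately 0.333
```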
3. Consider the regression model
$$Y_i = \beta_1 + \beta_2 X_i + U_i, \qquad E[U_i \mid X_i] = 0, \qquad E[U_i^2 \mid X_i] = \sigma^2,$$
and suppose that you have a bivariate random sample of $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$.
(a) The OLS estimator of $\beta_2$ is
$$\hat{\beta}_2 = \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2}.$$
Let
$$\tilde{\beta}_2 = \frac{\sum_i (X_i - \bar{X})\, Y_i}{\sum_i (X_i - \bar{X})^2}.$$
Show that $\hat{\beta}_2 = \tilde{\beta}_2$ (3 marks).
To show that $\hat{\beta}_2 = \tilde{\beta}_2$, we’ll start by expressing both estimators and then show they are equivalent.
1. OLS estimator $\hat{\beta}_2$:
$$\hat{\beta}_2 = \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2}$$
2. Alternative estimator $\tilde{\beta}_2$:
$$\tilde{\beta}_2 = \frac{\sum_i (X_i - \bar{X})\, Y_i}{\sum_i (X_i - \bar{X})^2}$$
Now, let’s manipulate $\tilde{\beta}_2$ to match $\hat{\beta}_2$:
$$\begin{aligned}
\tilde{\beta}_2 &= \frac{\sum_i (X_i - \bar{X})\, Y_i}{\sum_i (X_i - \bar{X})^2} \\
&= \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y} + \bar{Y})}{\sum_i (X_i - \bar{X})^2} && \text{(adding and subtracting } \bar{Y}\text{)} \\
&= \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y}) + \bar{Y} \sum_i (X_i - \bar{X})}{\sum_i (X_i - \bar{X})^2} && \text{(pulling out } \bar{Y}\text{)} \\
&= \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2} + \bar{Y}\, \frac{\sum_i (X_i - \bar{X})}{\sum_i (X_i - \bar{X})^2} && \text{(dividing through by } \textstyle\sum_i (X_i - \bar{X})^2\text{)} \\
&= \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2} + \bar{Y}\, \frac{0}{\sum_i (X_i - \bar{X})^2} && \text{(since } \textstyle\sum_i (X_i - \bar{X}) = 0\text{)} \\
&= \frac{\sum_i (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_i (X_i - \bar{X})^2}
\end{aligned}$$
Hence, we have shown that $\tilde{\beta}_2 = \hat{\beta}_2$. Therefore, both estimators are equivalent.
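The algebra can also be confirmed numerically. The sketch below (our illustration, with an arbitrary data generating process) evaluates both formulas on the same simulated data and obtains identical values.

```python
# Hypothetical numeric check: the two slope formulas agree on any data set.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)   # arbitrary choice: beta_1 = 1, beta_2 = 2
xd = x - x.mean()                          # deviations X_i - X_bar
beta2_hat = (xd * (y - y.mean())).sum() / (xd**2).sum()
beta2_tilde = (xd * y).sum() / (xd**2).sum()
print(beta2_hat, beta2_tilde)              # identical up to rounding
```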
(b) Show that $\hat{\beta}_1$ is BLUE (12 marks).
To show that $\hat{\beta}_1$ is the Best Linear Unbiased Estimator (BLUE), we need to demonstrate two properties:
1. Linearity: $\hat{\beta}_1$ is a linear function of the observed values $Y_i$.
2. Unbiasedness: $\hat{\beta}_1$ has an expected value equal to the true parameter $\beta_1$.
Let’s address each property:
1. Linearity: The OLS estimator $\hat{\beta}_1$ is given by:
$$\hat{\beta}_1 = \bar{Y} - \hat{\beta}_2 \bar{X}$$
Both $\bar{Y}$ and $\hat{\beta}_2$ are linear functions of the observed values $Y_i$ and $X_i$ respectively. Therefore, $\hat{\beta}_1$ is also a linear function of the observed values $Y_i$, satisfying the linearity property.
2. Unbiasedness: We want to show that $E[\hat{\beta}_1] = \beta_1$.
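As a complementary numerical illustration of the unbiasedness claim (our sketch, not part of the assignment; the design, error distribution, and simulation sizes are arbitrary choices consistent with $E[U_i \mid X_i] = 0$), averaging $\hat{\beta}_1$ over many simulated samples should recover the true $\beta_1$:

```python
# Hypothetical simulation sketch: the average of beta_1_hat across replications
# should be close to the true intercept beta_1 = 1.
import numpy as np

rng = np.random.default_rng(4)
beta1, beta2 = 1.0, 2.0                       # arbitrary true parameters
n, reps = 50, 20_000
estimates = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    y = beta1 + beta2 * x + rng.normal(size=n)
    xd = x - x.mean()
    b2 = (xd * (y - y.mean())).sum() / (xd**2).sum()
    estimates[r] = y.mean() - b2 * x.mean()   # beta_1_hat = Y_bar - beta_2_hat * X_bar
print(estimates.mean())                       # approximately 1.0
```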