ECMT6006/ECON4949/ECON4998 Mid-semester Test Suggested Solutions
Problem 1. [12 marks] Let $P_t$ be the price of a stock on date $t$, and assume the stock pays no dividend. Let $R_{t+1}$ and $r_{t+1}$ be, respectively, the simple gross return and log return of this stock from time $t$ to $t+1$. Answer the following questions.

(i) Show that a forecast of $P_{t+1}$, denoted as $\hat{P}_{t+1} = E_t(P_{t+1})$, can be derived from a forecast of $R_{t+1}$. [1 mark]

Solution. The forecast of $R_{t+1}$ is given by
\[
E_t(R_{t+1}) = E_t\left(\frac{P_{t+1}}{P_t}\right) = \frac{E_t(P_{t+1})}{P_t} = \frac{\hat{P}_{t+1}}{P_t},
\]
where the second equality holds because $P_t$ is known, and hence constant, conditional on the time-$t$ information set. Therefore, the forecast of the price $\hat{P}_{t+1}$ can be obtained from the forecast of $R_{t+1}$ as $\hat{P}_{t+1} = P_t E_t(R_{t+1})$.

(ii) Let $R_{t\to t+k}$ be the $k$-period gross return. Show that $R_{t\to t+k}$ can be written as a function of 1-period gross returns. [1 mark]

Solution. The $k$-period gross return is
\[
R_{t\to t+k} = \frac{P_{t+k}}{P_t} = \frac{P_{t+k}}{P_{t+k-1}} \cdot \frac{P_{t+k-1}}{P_{t+k-2}} \cdots \frac{P_{t+1}}{P_t} = \prod_{j=1}^{k} \frac{P_{t+j}}{P_{t+j-1}} = \prod_{j=1}^{k} R_{t+j}, \tag{1}
\]
where $R_{t+j}$, $j = 1, \dots, k$, are 1-period gross returns.
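As a quick numerical illustration of (1) (not part of the original solution), the sketch below compounds 1-period gross returns on a made-up price path and checks that the product equals $P_{t+k}/P_t$; the numbers are purely illustrative.

```python
import numpy as np

# Hypothetical price path P_t, ..., P_{t+k} (illustrative numbers only)
prices = np.array([100.0, 102.0, 101.5, 104.0, 103.2])

gross_returns = prices[1:] / prices[:-1]      # 1-period gross returns R_{t+j}
k_period_direct = prices[-1] / prices[0]      # P_{t+k} / P_t
k_period_compounded = np.prod(gross_returns)  # product of 1-period gross returns

print(k_period_direct, k_period_compounded)   # both equal 1.032
assert np.isclose(k_period_direct, k_period_compounded)
```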
(iii) Let $r_{t\to t+k}$ be the $k$-period log return defined as $r_{t\to t+k} = \ln(R_{t\to t+k})$. Show that $r_{t\to t+k}$ can be written as a function of 1-period log returns. [1 mark]

Solution. Taking logs on both sides of (1) yields
\[
r_{t\to t+k} = \ln(R_{t\to t+k}) = \sum_{j=1}^{k} \ln(R_{t+j}) = \sum_{j=1}^{k} r_{t+j},
\]
where $r_{t+j}$, $j = 1, \dots, k$, are 1-period log returns.

(iv) When is the value of the log return close to that of the arithmetic net return, and why? [1 mark]

Solution. Let $r^a_{t+1}$ be the arithmetic net return from time $t$ to $t+1$. We have
\[
r_{t+1} = \ln(R_{t+1}) = \ln(1 + r^a_{t+1}) \approx r^a_{t+1}
\]
when $r^a_{t+1}$ is small. This is due to the Taylor series expansion
\[
\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots,
\]
where the higher-order terms are dominated by the first term $x$ when $x$ is small. For example, for a 1% net return ($r^a_{t+1} = 0.01$), the log return is $\ln(1.01) \approx 0.00995$.

(v) Explain how to conduct a robust joint test for autocorrelation in the log return $r_{t+1}$ using a regression-based approach. Be explicit about the regression you run, the test statistic you construct, and how you make testing decisions. [3 marks]

Solution. The robust test is conducted by running a linear regression of the daily log return $r_t$ on $L$ of its lagged values:
\[
r_t = \beta_0 + \beta_1 r_{t-1} + \cdots + \beta_L r_{t-L} + \varepsilon_t, \tag{2}
\]
and then testing the joint null hypothesis
\[
H_0: \beta_1 = \beta_2 = \cdots = \beta_L = 0
\]
against the alternative hypothesis $H_1: \beta_j \neq 0$ for some $j = 1, \dots, L$. To carry out such a test, we may consider the Wald test statistic
\[
W = T(R\hat{\beta} - b)'(R\hat{V}R')^{-1}(R\hat{\beta} - b),
\]
where $\hat{\beta}$ is the OLS estimate of $\beta := (\beta_0, \dots, \beta_L)'$, $b$ is the $L$-dimensional zero vector, and $R$ is the $L \times (L+1)$ restriction matrix
\[
R = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{pmatrix}.
\]
Moreover, $\hat{V}/T$ is the estimator of the asymptotic variance of $\hat{\beta}$ and should be constructed from robust standard errors, e.g., White or Newey-West standard errors, to accommodate potential heteroskedasticity and/or autocorrelation in the data. Under the null hypothesis $H_0$, the test statistic satisfies $W \to_d \chi^2_L$ as $T \to \infty$, where $\chi^2_L$ denotes the chi-squared distribution with $L$ degrees of freedom. Therefore, the testing decision is made by comparing the test statistic with the critical value obtained from the $\chi^2_L$ distribution, rejecting $H_0$ when $W$ exceeds it.
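To make this recipe concrete, here is a minimal Python sketch (not part of the original solutions) that estimates regression (2) with Newey-West standard errors and forms the Wald statistic directly; the function name, the choice of $L = 5$ lags, the HAC bandwidth, and the simulated returns are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def robust_autocorrelation_test(returns, L=5, hac_lags=5):
    """Estimate regression (2) and jointly test beta_1 = ... = beta_L = 0 with a
    Wald statistic built on Newey-West (HAC) robust standard errors."""
    r = pd.Series(returns, name="r")
    X = pd.DataFrame({f"lag{j}": r.shift(j) for j in range(1, L + 1)}).dropna()
    y = r.loc[X.index]
    fit = sm.OLS(y, sm.add_constant(X)).fit(cov_type="HAC",
                                            cov_kwds={"maxlags": hac_lags})

    # R picks out the L lag coefficients (drops the intercept); b is the zero vector
    R = np.hstack([np.zeros((L, 1)), np.eye(L)])
    Rb = R @ fit.params.values
    # fit.cov_params() is the robust estimate of Var(beta_hat), i.e. V_hat / T,
    # so no extra factor of T is needed here
    W = Rb @ np.linalg.solve(R @ fit.cov_params().values @ R.T, Rb)
    p_value = stats.chi2.sf(W, df=L)
    return W, p_value

# Illustrative call on simulated white-noise "returns" (no real data assumed)
rng = np.random.default_rng(0)
W, p = robust_autocorrelation_test(rng.normal(0.0, 0.01, size=2500))
print(f"W = {W:.2f}, p-value = {p:.3f}")  # typically a large p-value for white noise
```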
(vi) Suppose you could not find significant autocorrelation in $r_{t+1}$ by the above test. What can you say about the predictability of this stock return? [2 marks]

Solution. This means that we could not use a linear regression as in (2) to predict the mean level of future stock returns using past returns. [1 mark] However, we might still be able to predict the variance of the stock return by exploiting the dependence in the higher moments of the return distribution, for example using ARCH/GARCH volatility models. [1 mark]

(vii) The table below shows the summary statistics of daily returns on the S&P 500 index from 1999 to 2009 and daily returns on US 3-month T-bill rates from 1989 to 2009. Please interpret the table and compare these two return series. [3 marks]

Solution. From the summary statistics of daily returns on the S&P 500 index and the T-bill rates, we can see that:

(i) The stock index has a higher mean return and also a higher return standard deviation than the interest rates.

(ii) The skewness of the stock index returns is negative, meaning that the distribution of the returns is left-skewed, i.e., has a longer left tail, whereas the skewness of the interest rate returns is positive, which implies a longer right tail for that return distribution. Therefore, the stock index returns are more likely to have extreme negative realizations than the interest rate returns.

(iii) We can see from the minimum and maximum of both return series that the stock index returns are much more spread out. Also, the most adverse realization of the stock return is $-22.90$, which has a much greater magnitude than its most favorable realization of $10.96$, so the distribution of stock index returns is clearly asymmetric.

(iv) Both return series show excess kurtosis relative to the normal distribution, indicating heavy tails in their distributions. In fact, the Jarque-Bera tests for both series have p-values very close to zero, suggesting that the return distributions are far from normal.

Problem 2. [8 marks] Consider the following weakly stationary AR(1)-ARCH(1) model for stock returns
\[
R_t = \phi_0 + \phi_1 R_{t-1} + \varepsilon_t, \tag{3}
\]
where $\varepsilon_t \mid \mathcal{F}_{t-1} \sim N(0, \sigma^2_t)$ and $\sigma^2_t = \omega + \alpha\varepsilon^2_{t-1}$. Assume $|\phi_1| < 1$, $\omega > 0$, $\alpha \geq 0$. Answer the following.

(i) Show that $\{\varepsilon_t\}$ is a white noise process. [2 marks]

Solution. Since $\varepsilon_t \mid \mathcal{F}_{t-1} \sim N(0, \sigma^2_t)$, we have $E_{t-1}(\varepsilon_t) = 0$. We can use this property to show that $\{\varepsilon_t\}$ is a white noise process, i.e., that it has no serial correlation. First, by the law of iterated expectations (LIE),
\[
E(\varepsilon_t) = E(E_{t-1}(\varepsilon_t)) = 0.
\]
That is, the unconditional mean of $\varepsilon_t$ is zero for any $t$. This implies that to show $\operatorname{Cov}(\varepsilon_{t+k}, \varepsilon_t) = 0$ for any $k \neq 0$, it suffices to show that $E(\varepsilon_{t+k}\varepsilon_t) = 0$. Now note that for $k = 1$,
\[
E(\varepsilon_{t+1}\varepsilon_t) = E(E_t(\varepsilon_{t+1}\varepsilon_t)) = E(\varepsilon_t E_t(\varepsilon_{t+1})) = E(\varepsilon_t \cdot 0) = 0.
\]
And for $k = 2$, by using the LIE repeatedly we have
\[
E(\varepsilon_{t+2}\varepsilon_t) = E(E_t(\varepsilon_{t+2}\varepsilon_t)) = E(\varepsilon_t E_t(\varepsilon_{t+2})) = E(\varepsilon_t E_t(E_{t+1}(\varepsilon_{t+2}))) = E(\varepsilon_t \cdot 0) = 0.
\]
By the same logic, $E(\varepsilon_{t+k}\varepsilon_t) = 0$ for any nonzero integer $k$. This completes the proof.

(ii) Show that $\{\varepsilon^2_t\}$ is an AR(1) process. What restriction should we make on $\alpha$ to guarantee that $\{\varepsilon^2_t\}$ is weakly stationary? [2 marks]

Solution. To show that $\{\varepsilon^2_t\}$ is an AR(1) process, we decompose $\varepsilon^2_t = E_{t-1}(\varepsilon^2_t) + \eta_t$, where $\eta_t := \varepsilon^2_t - E_{t-1}(\varepsilon^2_t)$ is a martingale difference sequence and hence a white noise process. Moreover, since
\[
E_{t-1}(\varepsilon^2_t) = \operatorname{Var}_{t-1}(\varepsilon_t) + [E_{t-1}(\varepsilon_t)]^2 = \sigma^2_t + 0 = \omega + \alpha\varepsilon^2_{t-1},
\]
we can immediately deduce that
\[
\varepsilon^2_t = \omega + \alpha\varepsilon^2_{t-1} + \eta_t,
\]
where $\eta_t$ is a white noise process. This means $\{\varepsilon^2_t\}$ is an AR(1) process. We need $\alpha < 1$ to guarantee that $\{\varepsilon^2_t\}$ is weakly stationary.

(iii) What are the conditional mean $E_t(R_{t+1})$ and unconditional mean $E(R_t)$? [2 marks]

Solution. We have
\[
E_t(R_{t+1}) = E_t(\phi_0 + \phi_1 R_t + \varepsilon_{t+1}) = \phi_0 + \phi_1 R_t,
\]
since $E_t(\varepsilon_{t+1}) = 0$. As for the unconditional mean, taking expectations on both sides of the above equation gives $E(R_{t+1}) = \phi_0 + \phi_1 E(R_t)$, and since stationarity implies $E(R_{t+1}) = E(R_t)$, we can deduce that $E(R_t) = \frac{\phi_0}{1-\phi_1}$.

(iv) What are the conditional variance $\operatorname{Var}_t(R_{t+1})$ and unconditional variance $\operatorname{Var}(R_t)$? [2 marks]

Solution. Note that
\[
\operatorname{Var}_t(R_{t+1}) = E_t\big[(R_{t+1} - E_t(R_{t+1}))^2\big] = E_t(\varepsilon^2_{t+1}) = \sigma^2_{t+1} = \omega + \alpha\varepsilon^2_t.
\]
Next, denote the unconditional variance $\sigma^2_r := \operatorname{Var}(R_t)$ for all $t$. Given that $\{R_t\}$ is weakly stationary, taking the variance on both sides of (3) yields
\[
\sigma^2_r = \operatorname{Var}(\phi_0 + \phi_1 R_{t-1} + \varepsilon_t) = \phi^2_1\operatorname{Var}(R_{t-1}) + \operatorname{Var}(\varepsilon_t) + 2\phi_1\operatorname{Cov}(R_{t-1}, \varepsilon_t) = \phi^2_1\sigma^2_r + \frac{\omega}{1-\alpha} + 0, \tag{4}
\]
where $\operatorname{Var}(\varepsilon_t) = E(\varepsilon^2_t) = \omega/(1-\alpha)$ is the unconditional mean of the AR(1) process for $\varepsilon^2_t$ derived in (ii), and $\operatorname{Cov}(R_{t-1}, \varepsilon_t) = 0$ because $E_{t-1}(\varepsilon_t) = 0$ implies that $\varepsilon_t$ is uncorrelated with all the variables in $\mathcal{F}_{t-1}$, including $R_{t-1}$. Then solving equation (4) yields
\[
\sigma^2_r = \frac{\omega/(1-\alpha)}{1-\phi^2_1} = \frac{\omega}{(1-\alpha)(1-\phi^2_1)}.
\]
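As a numerical sanity check on the moment formulas in (iii) and (iv), the following simulation sketch uses made-up parameter values ($\phi_0 = 0.1$, $\phi_1 = 0.5$, $\omega = 0.2$, $\alpha = 0.3$); it is an illustration, not part of the test.

```python
import numpy as np

# Simulate the AR(1)-ARCH(1) model in (3); parameter values are illustrative only
phi0, phi1, omega, alpha = 0.1, 0.5, 0.2, 0.3
T = 500_000
rng = np.random.default_rng(42)

R = np.zeros(T)
eps_prev = 0.0
R_prev = phi0 / (1 - phi1)          # start at the unconditional mean
for t in range(T):
    sigma2 = omega + alpha * eps_prev**2        # conditional variance
    eps = np.sqrt(sigma2) * rng.standard_normal()
    R[t] = phi0 + phi1 * R_prev + eps
    R_prev, eps_prev = R[t], eps

print("sample mean:", R.mean(), " theory:", phi0 / (1 - phi1))
print("sample var :", R.var(),  " theory:", omega / ((1 - alpha) * (1 - phi1**2)))
```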
Problem 3. [15 marks] Consider a two-period model for the returns $R_t$, $t = 1, 2$, of an asset. Let $\varepsilon_0 = 1$, and
\[
R_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t\nu_t, \qquad \sigma_t = |\varepsilon_{t-1}|,
\]
where $\mu = 1$, and $\nu_1, \nu_2$ are independent and identically distributed as
\[
\nu_t = \begin{cases} 1, & \text{with probability } 2/3, \\ -2, & \text{with probability } 1/3, \end{cases}
\]
for $t = 1, 2$. Let $\mathcal{F}_t$ be the information set available at time $t$. Please answer the following questions.

(i) What is the probability distribution of $R_1$? [2 marks]

Solution. Note that
\[
R_1 = \mu + \varepsilon_1 = \mu + \sigma_1\nu_1 = 1 + \nu_1,
\]
where we used the fact that $\mu = 1$ and $\sigma_1 = |\varepsilon_0| = 1$. Given the distribution of $\nu_1$ specified in the question, we can deduce that the probability distribution of $R_1$ is simply
\[
R_1 = \begin{cases} 2, & \text{with probability } 2/3 \ (\text{when } \nu_1 = 1), \\ -1, & \text{with probability } 1/3 \ (\text{when } \nu_1 = -2). \end{cases}
\]

(ii) What is the probability distribution of $R_2$? [2 marks]

Solution. Note that
\[
R_2 = \mu + \varepsilon_2 = \mu + \sigma_2\nu_2 = 1 + |\varepsilon_1|\nu_2 = 1 + |\sigma_1\nu_1|\nu_2 = 1 + |\nu_1|\nu_2,
\]
where again we used the fact that $\mu = 1$ and $\sigma_1 = |\varepsilon_0| = 1$. Given the distributions of $\nu_1$ and $\nu_2$ specified in the question, we can deduce that the probability distribution of $R_2$ is
\[
R_2 = \begin{cases}
1 + |1| \times 1 = 2, & \text{with probability } 2/3 \times 2/3 = 4/9, \\
1 + |1| \times (-2) = -1, & \text{with probability } 2/3 \times 1/3 = 2/9, \\
1 + |-2| \times 1 = 3, & \text{with probability } 1/3 \times 2/3 = 2/9, \\
1 + |-2| \times (-2) = -3, & \text{with probability } 1/3 \times 1/3 = 1/9.
\end{cases}
\]

(iii) Compute the conditional mean $E_1(R_2) := E(R_2 \mid \mathcal{F}_1)$. [2 marks]

Solution. Note that $\mathcal{F}_1$ contains the information on the realization of $R_1$. If $R_1 = 2$, then $\nu_1 = 1$ (which occurs with probability 2/3); and if $R_1 = -1$, then $\nu_1 = -2$ (which occurs with probability 1/3). Conditional on $R_1 = 2$,
\[
R_2 \mid \{R_1 = 2\} = \begin{cases} 1 + |1| \times 1 = 2, & \text{with conditional probability } 2/3, \\ 1 + |1| \times (-2) = -1, & \text{with conditional probability } 1/3, \end{cases}
\]
so $E(R_2 \mid \{R_1 = 2\}) = 2 \times 2/3 - 1 \times 1/3 = 1$. Similarly,
\[
R_2 \mid \{R_1 = -1\} = \begin{cases} 1 + |-2| \times 1 = 3, & \text{with conditional probability } 2/3, \\ 1 + |-2| \times (-2) = -3, & \text{with conditional probability } 1/3, \end{cases}
\]
so $E(R_2 \mid \{R_1 = -1\}) = 3 \times 2/3 - 3 \times 1/3 = 1$ also. Therefore, the conditional mean of $R_2$ given $\mathcal{F}_1$ is $E_1(R_2) = E(R_2 \mid \mathcal{F}_1) = 1$ with probability 1. (It is acceptable if the student writes $E_1(R_2) = 1$ without specifying the probability.)

(iv) Compute the conditional variance $\operatorname{Var}_1(R_2) = \operatorname{Var}(R_2 \mid \mathcal{F}_1)$. [2 marks]

Solution. Using the conditional distribution of $R_2$ given $\mathcal{F}_1$ in (iii), we can compute the conditional variance as follows:
\[
\operatorname{Var}(R_2 \mid \{R_1 = 2\}) = \tfrac{2}{3}(2-1)^2 + \tfrac{1}{3}(-1-1)^2 = \tfrac{2}{3} + \tfrac{4}{3} = 2,
\]
\[
\operatorname{Var}(R_2 \mid \{R_1 = -1\}) = \tfrac{2}{3}(3-1)^2 + \tfrac{1}{3}(-3-1)^2 = \tfrac{8}{3} + \tfrac{16}{3} = 8,
\]
which implies
\[
\operatorname{Var}_1(R_2) = \operatorname{Var}(R_2 \mid \mathcal{F}_1) = \begin{cases} 2, & \text{with probability } 2/3, \\ 8, & \text{with probability } 1/3. \end{cases}
\]

(v) Verify the law of iterated expectations $E(R_2) = E[E_1(R_2)]$ using the numbers given in this problem. [2 marks]

Solution. Using the unconditional distribution of $R_2$ derived in (ii), we can directly compute the unconditional expectation as
\[
E(R_2) = 2 \times \tfrac{4}{9} - 1 \times \tfrac{2}{9} + 3 \times \tfrac{2}{9} - 3 \times \tfrac{1}{9} = 1.
\]
By the result in (iii), we know that $E_1(R_2) = 1$ with probability 1, from which we can deduce that $E[E_1(R_2)] = 1$. Therefore, $E(R_2) = E[E_1(R_2)]$.
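Because this model has only four $(\nu_1, \nu_2)$ paths, the answers in (ii)-(v) can be checked by brute-force enumeration. Here is a small sketch of that check (variable names are mine; probabilities are kept as exact fractions), which is not part of the original solutions.

```python
from itertools import product
from fractions import Fraction as F

mu, eps0 = 1, 1
nu_dist = {1: F(2, 3), -2: F(1, 3)}   # value -> probability for nu_1 and nu_2

# Enumerate all (nu1, nu2) paths with their joint probabilities
outcomes = []
for (nu1, p1), (nu2, p2) in product(nu_dist.items(), repeat=2):
    eps1 = abs(eps0) * nu1            # sigma_1 = |eps_0| = 1
    R1 = mu + eps1
    eps2 = abs(eps1) * nu2            # sigma_2 = |eps_1|
    R2 = mu + eps2
    outcomes.append((R1, R2, p1 * p2))

# Unconditional mean of R2: the E(R2) = E[E_1(R2)] check in (v)
E_R2 = sum(p * R2 for _, R2, p in outcomes)
print("E(R2) =", E_R2)                # 1

# Conditional means and variances given R1, as in (iii) and (iv)
for r1 in [2, -1]:
    cond = [(R2, p) for R1, R2, p in outcomes if R1 == r1]
    p_r1 = sum(p for _, p in cond)
    m = sum(p * R2 for R2, p in cond) / p_r1
    v = sum(p * (R2 - m) ** 2 for R2, p in cond) / p_r1
    print(f"E(R2 | R1={r1}) = {m},  Var(R2 | R1={r1}) = {v}")   # 1, 2 and 1, 8
```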
(vi) What are the one-period-ahead return point forecasts $\hat{R}_t$ for $t = 1, 2$? [2 marks]

Solution. The one-period-ahead return point forecasts are given by the conditional expectations. Therefore, we have
\[
\hat{R}_1 = E(R_1 \mid \mathcal{F}_0) = 2 \times 2/3 - 1 \times 1/3 = 1, \qquad \hat{R}_2 = E(R_2 \mid \mathcal{F}_1) = 1.
\]

(vii) What are the two-standard-deviation one-period-ahead return interval forecasts for $t = 1, 2$? [3 marks]

Solution. The two-standard-deviation one-period-ahead interval forecasts are $\hat{R}_t \pm 2\sqrt{\operatorname{Var}_{t-1}(R_t)}$ for $t = 1, 2$. For $t = 1$, $\hat{R}_1 = 1$ and
\[
\operatorname{Var}_0(R_1) = \tfrac{2}{3}(2-1)^2 + \tfrac{1}{3}(-1-1)^2 = 2,
\]
so the interval forecast for $R_1$ is $\big[1 - 2\sqrt{2},\ 1 + 2\sqrt{2}\big]$. As for $t = 2$, $\hat{R}_2 = 1$ for both $R_1 = 2$ and $R_1 = -1$, but as shown in (iv), $\operatorname{Var}_1(R_2) = 2$ if $R_1 = 2$ and $\operatorname{Var}_1(R_2) = 8$ if $R_1 = -1$. So the one-step-ahead interval forecast for $R_2$ is
\[
\big[1 - 2\sqrt{2},\ 1 + 2\sqrt{2}\big] \ \text{if } R_1 = 2, \qquad \big[1 - 2\sqrt{8},\ 1 + 2\sqrt{8}\big] \ \text{if } R_1 = -1.
\]
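For reference, a short snippet (my own numerical check, using the point forecasts and variances found above) prints decimal endpoints of these interval forecasts.

```python
import math

# (point forecast, conditional variance) for each one-step-ahead forecast
forecasts = {
    "R1":           (1, 2),   # Var_0(R_1) = 2
    "R2 | R1 = 2":  (1, 2),   # Var_1(R_2) = 2 when R_1 = 2
    "R2 | R1 = -1": (1, 8),   # Var_1(R_2) = 8 when R_1 = -1
}
for label, (point, var) in forecasts.items():
    half_width = 2 * math.sqrt(var)
    print(f"{label}: [{point - half_width:.3f}, {point + half_width:.3f}]")
# R1:           [-1.828, 3.828]
# R2 | R1 = 2:  [-1.828, 3.828]
# R2 | R1 = -1: [-4.657, 6.657]
```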