Generalized Least Squares (GLS)
The Black-Litterman (BL) model is based on the Theil mixed estimator, which is a kind of GLS estimator.
Since the BL model combines two regressions (market + views), each with its own variance term, the combined BL regression model naturally has two different variance terms. Unlike OLS, which assumes a constant (homogeneous) variance term, when variance terms are not constant (heterogeneous) it is natural to apply the GLS estimator to obtain the estimated parameters.
The starting point is a linear model.
Linear model
\[\begin{align} y &= X \beta + u, \quad u \sim N(0,\Sigma)\\ \\ \text{E}[u|X] &= 0 \\ \text{Var}[u|X] & = \text{E}[u u^{\top}|X] = \Sigma \text{ : determined below} \end{align}\]
OLS estimator
As is familiar, the OLS estimator has the following form, with a constant variance term across all observations.
\[\begin{align} \Sigma = \sigma^2 I_n \end{align}\] Hence, the OLS estimator is given by the following formula.
\[\begin{align} \hat{\beta}_{OLS} &= (X^{\top} X)^{-1}X^{\top} y \\ Var(\hat{\beta}_{OLS}) &= (X^{\top} X)^{-1}\sigma^2 \end{align}\]
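As a quick numerical check, the OLS formula above can be applied directly with NumPy. This is a minimal sketch on simulated data; the design matrix, coefficients, and noise scale are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 2

# Simulated data with homoscedastic errors (Sigma = sigma^2 I_n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# OLS estimator: beta_hat = (X'X)^{-1} X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Var(beta_hat) = sigma^2 (X'X)^{-1}, with sigma^2 replaced by its usual estimate
resid = y - X @ beta_ols
sigma2_hat = resid @ resid / (n - k)
var_ols = sigma2_hat * np.linalg.inv(X.T @ X)
```

Using `np.linalg.solve` on the normal equations avoids forming an explicit inverse for the estimate itself; the inverse is only needed for the variance matrix.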
GLS estimator
Our focus is on the GLS estimator for the linear regression model with non-constant variance terms (\(\Sigma = \sigma^2 \Omega\)) across observations, which means that the residuals are heteroscedastic and/or serially dependent.
\[\begin{align} \Sigma = \sigma^2 \Omega \end{align}\]
The GLS estimator is given by the following formula.
\[\begin{align} \hat{\beta}_{GLS} &= (X^{\top} \color{red}{\Omega^{-1}} X)^{-1}X^{\top} \color{red}{\Omega^{-1}} y \\ Var(\hat{\beta}_{GLS}) &= (X^{\top} \color{red}{\Omega^{-1}} X)^{-1}\sigma^2 \end{align}\] or \[\begin{align} \hat{\beta}_{GLS} &= (X^{\top} \color{red}{\Sigma^{-1}} X)^{-1}X^{\top} \color{red}{\Sigma^{-1}} y \\ Var(\hat{\beta}_{GLS}) &= (X^{\top} \color{red}{\Sigma^{-1}} X)^{-1} \end{align}\]
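The GLS formula can be checked numerically. The sketch below uses a diagonal (purely heteroscedastic) \(\Omega\) with made-up variances; with a diagonal \(\Omega\), GLS reduces to weighted least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

X = np.column_stack([np.ones(n), rng.normal(size=n)])
omega_diag = np.linspace(0.5, 3.0, n)     # non-constant variances (illustrative)
u = rng.normal(size=n) * np.sqrt(omega_diag)
y = X @ np.array([1.0, -1.0]) + u

# GLS estimator: (X' Omega^{-1} X)^{-1} X' Omega^{-1} y
Oinv = np.diag(1.0 / omega_diag)
beta_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
```

For the diagonal case, the same estimate is obtained by rescaling each observation by \(1/\sqrt{\omega_{ii}}\) and running OLS, which is the weighted-least-squares view of GLS.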
Derivation of GLS estimator
When \(\Omega\) is symmetric and positive definite, using the eigen decomposition, \(\Omega\) can be expressed as follows. \[\begin{align} \Omega &= A^{\top} \Lambda A = A^{\top} \Lambda^{1/2} {\Lambda^{1/2}}^{\top} A \\ &= A^{\top} \Lambda^{1/2} (A^{\top} \Lambda^{1/2})^{\top} = PP^{\top} \\\\ \rightarrow P &= A^{\top} \Lambda^{1/2} \end{align}\] Here, \(A\) and \(\Lambda\) are the eigenvector matrix and the diagonal eigenvalue matrix, respectively.
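This factorization can be reproduced with NumPy's `eigh`, which returns \(\Omega = Q \Lambda Q^{\top}\) with orthogonal \(Q\) (i.e. \(A = Q^{\top}\) in the notation above, so \(P = Q \Lambda^{1/2}\)). A small sketch with a randomly generated positive definite \(\Omega\):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# A random symmetric positive definite Omega (illustrative)
B = rng.normal(size=(n, n))
Omega = B @ B.T + n * np.eye(n)

# Omega = Q Lambda Q^T, so P = Q Lambda^{1/2} satisfies Omega = P P^T
lam, Q = np.linalg.eigh(Omega)
P = Q @ np.diag(np.sqrt(lam))
```

In practice a Cholesky factorization (`np.linalg.cholesky`) gives another valid \(P\) with \(\Omega = PP^{\top}\); the square-root factor is not unique.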
Now \(\Omega\) can be transformed into the identity matrix (\(I_n\)) by multiplying by \(P^{-1}\) on both sides: \[\begin{align} P^{-1} \Omega {P^{\top}}^{-1} = I_n \end{align}\] Multiplying \(P^{-1}\) on both sides of \(y = X \beta + u\) results in
\[\begin{align} P^{-1}y &= P^{-1} X \beta + P^{-1} u\\ \rightarrow y^* &= X^* \beta + u^*\\ \\ \text{E}[u^*|X] &= P^{-1} \text{E}[ u |X]= 0 \\ \text{Var}[u^*|X] & = \text{Var}[P^{-1} u|X] \\ & = P^{-1} \text{Var}[u|X] {P^{\top}}^{-1} \\ & = P^{-1} \sigma^2 \Omega {P^{\top}}^{-1} \\ & = \sigma^2 P^{-1} \Omega {P^{\top}}^{-1} \\ & = \sigma^2 I_n \\ \end{align}\] Therefore, the linear regression model above can be rewritten as \[\begin{align} y^* &= X^* \beta + u^*, \quad u^* \sim N(0,\sigma^2 I_n) \end{align}\] Least squares as a minimization problem is then \[\begin{align} &\min_{\beta} {(y^* -X^* \beta)^{\top}(y^* -X^* \beta)} \\ \rightarrow &\min_{\beta} {(y -X \beta)^{\top} {P^{-1}}^{\top} P^{-1}(y -X \beta)} \\ \rightarrow &\min_{\beta} {(y -X \beta)^{\top} \Omega^{-1}(y -X \beta)} \end{align}\] The GLS estimator is
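The equality of the two objective functions, \((y^* - X^*\beta)^{\top}(y^* - X^*\beta) = (y - X\beta)^{\top}\Omega^{-1}(y - X\beta)\), can also be verified numerically. A sketch with arbitrary made-up values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

B = rng.normal(size=(n, n))
Omega = B @ B.T + n * np.eye(n)          # symmetric positive definite
X = rng.normal(size=(n, 2))
y = rng.normal(size=n)
beta = rng.normal(size=2)

# P^{-1} = Lambda^{-1/2} Q^T, from Omega = Q Lambda Q^T
lam, Q = np.linalg.eigh(Omega)
Pinv = np.diag(1.0 / np.sqrt(lam)) @ Q.T

r = y - X @ beta
lhs = (Pinv @ r) @ (Pinv @ r)            # whitened sum of squares
rhs = r @ np.linalg.solve(Omega, r)      # Mahalanobis-style quadratic form
```

Because \({P^{-1}}^{\top} P^{-1} = Q \Lambda^{-1} Q^{\top} = \Omega^{-1}\), the two quadratic forms agree for any \(\beta\).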
\[\begin{align} \hat{\beta}_{GLS} &= ({X^*}^{\top} X^*)^{-1} {X^*}^{\top} y^* \\ &= ({X}^{\top} \Omega^{-1} X)^{-1} X^{\top} \Omega^{-1} y \end{align}\] The GLS estimator can also be expanded by substituting \(y = X\beta + u\):
\[\begin{align} \hat{\beta}_{GLS} &= ({X}^{\top} \Omega^{-1} X)^{-1} X^{\top} \Omega^{-1} y \\ &= ({X}^{\top} \Omega^{-1} X)^{-1} X^{\top} \Omega^{-1} (X\beta + u) \\ &= \beta + ({X}^{\top} \Omega^{-1} X)^{-1} X^{\top} \Omega^{-1} u\\ \end{align}\] Finally, the mean and variance of \(\hat{\beta}_{GLS}\) are as follows. \[\begin{align} \text{E}[\hat{\beta}_{GLS}|X] &= \beta \\ \text{Var}[\hat{\beta}_{GLS}|X] & = \sigma^2 ({X^*}^{\top}{X^*})^{-1} = \sigma^2 (X^{\top}\Omega^{-1}X)^{-1} \end{align}\]
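Putting the pieces together, the GLS estimator computed directly from \((X^{\top}\Omega^{-1}X)^{-1}X^{\top}\Omega^{-1}y\) should coincide with OLS on the whitened data \(y^* = P^{-1}y\), \(X^* = P^{-1}X\). A sketch on simulated heteroscedastic data (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150

X = np.column_stack([np.ones(n), rng.normal(size=n)])
lam = np.linspace(0.2, 2.0, n)            # heteroscedastic variances
y = X @ np.array([0.5, 1.5]) + rng.normal(size=n) * np.sqrt(lam)

# Direct GLS with Omega = diag(lam)
Oinv = np.diag(1.0 / lam)
beta_direct = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)

# OLS on the whitened model, with P = Omega^{1/2} in the diagonal case
Pinv = np.diag(1.0 / np.sqrt(lam))
beta_white = np.linalg.lstsq(Pinv @ X, Pinv @ y, rcond=None)[0]
```

The two routes give the same estimate, which is the content of the derivation above: GLS is just OLS after the whitening transform \(P^{-1}\).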
Concluding Remarks
This post derived the GLS estimator, which will be used when deriving the Black-Litterman model. \(\blacksquare\)