Equivalence of VAR Forecasts
Let \(y_t\) denote a vector of time series, and let \(x_t = M y_t\) be a vector of linear combinations of \(y_t\), where \(M\) is an invertible matrix.
1) VAR model of \(y_t\)
1-1) VAR Estimation of \(y_t\)
\[\begin{align} y_t = A + B y_{t-1} + \epsilon_t \end{align}\]
1-2) Forecast \(y_t\)
\[\begin{align} y_{t+h} = \hat{A} + \hat{B} y_{t+h-1} \end{align}\]
1-3) Recover \(x_t\) forecasts
\[\begin{align} x_{t+h}^{rec} = M y_{t+h} \end{align}\]
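The estimation and forecast recursion above can be sketched as follows. This is a minimal NumPy illustration (Python rather than the post's R, purely for brevity); the data and all names (`var1_forecast`, etc.) are my own stand-ins, not part of the original example.

```python
import numpy as np

# Simulated stand-in data for y_t (any stationary multivariate series would do)
rng = np.random.default_rng(0)
T, k = 200, 3
y = rng.standard_normal((T, k))

# 1-1) OLS estimation of y_t = A + B y_{t-1} + e_t:
# regress y[1:] on an intercept and the first lag y[:-1]
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
A_hat, B_hat = coef[0], coef[1:].T        # A_hat: (k,), B_hat: (k, k)

# 1-2) Iterate the forecast recursion y_{t+h} = A_hat + B_hat y_{t+h-1}
def var1_forecast(y_last, h):
    f = y_last
    for _ in range(h):
        f = A_hat + B_hat @ f
    return f

print(var1_forecast(y[-1], 12))
```

The h-step forecast is obtained by applying the one-step map h times, exactly as in the recursion above; `predict()` in the R example below does the same thing internally.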
2) VAR model for \(x_t (=M \times y_t)\)
2-1) VAR Estimation of \(x_t\)
\[\begin{align} x_t = C + D x_{t-1} + \eta_{t} \end{align}\]
2-2) Forecast \(x_t\) directly
\[\begin{align} x_{t+h} = \hat{C} + \hat{D} x_{t+h-1} \end{align}\]
3) \(x_{t+h}\) and \(x^{rec}_{t+h}\) are identical
\[\begin{align} x^{rec}_{t+h} &= M y_{t+h} \\ &= M\hat{A} + M\hat{B} y_{t+h-1} \\ &= M\hat{A} + \big(M\hat{B} M^{-1}\big) M y_{t+h-1} \\ &= M\hat{A} + M\hat{B} M^{-1} x^{rec}_{t+h-1} \end{align}\]
Because OLS is equivariant under the invertible transformation \(x_t = M y_t\), the estimates of the \(x_t\) model satisfy \(\hat{C} = M\hat{A}\) and \(\hat{D} = M\hat{B}M^{-1}\). Hence \(x^{rec}_{t+h}\) obeys the same recursion as the direct forecast,
\[\begin{align} x^{rec}_{t+h} = \hat{C} + \hat{D} x^{rec}_{t+h-1}, \end{align}\]
and since both recursions start from the same value \(x_t = M y_t\), we have \(x^{rec}_{t+h} = x_{t+h}\) for every horizon \(h\).
R code
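The key step, that the OLS estimates transform as \(\hat{C} = M\hat{A}\) and \(\hat{D} = M\hat{B}M^{-1}\), can be checked numerically. Below is a minimal NumPy sketch (Python rather than the post's R, for brevity); the random \(M\) and the helper `var1_ols` are illustrative assumptions, not part of the original post.

```python
import numpy as np

rng = np.random.default_rng(1)
T, k = 300, 3
y = rng.standard_normal((T, k))          # stand-in data for y_t
M = rng.standard_normal((k, k))          # a random M is invertible almost surely
x = y @ M.T                              # x_t = M y_t, applied row by row

def var1_ols(z):
    """OLS of z_t on an intercept and z_{t-1}; returns (intercept, slope matrix)."""
    X = np.hstack([np.ones((len(z) - 1, 1)), z[:-1]])
    coef, *_ = np.linalg.lstsq(X, z[1:], rcond=None)
    return coef[0], coef[1:].T

A_hat, B_hat = var1_ols(y)               # VAR(1) estimated on y_t
C_hat, D_hat = var1_ols(x)               # VAR(1) estimated on x_t

# Equivariance of OLS under x_t = M y_t
print(np.allclose(C_hat, M @ A_hat))                     # True
print(np.allclose(D_hat, M @ B_hat @ np.linalg.inv(M)))  # True
```

The equality is exact in the algebra (the regressors of the \(x\)-model are an invertible linear transformation of those of the \(y\)-model), so the checks hold up to floating-point tolerance.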
The following simple R example demonstrates the equivalence between \(x_{t+h}\) and \(x^{rec}_{t+h}\).
graphics.off(); rm(list = ls())
library(tsDyn)

# data1: five simulated series (y_t)
set.seed(1)  # added for reproducibility
data1 <- matrix(rnorm(100*5, mean = 0, sd = 2), 100, 5)

# data2: linear combinations of data1 (x_t = M y_t)
data2 <- matrix(NA, 100, 5)
data2[,1] <- data1[,1] + 0.5*data1[,2]
data2[,2] <- 1.5*data1[,1] + data1[,3] + data1[,4] - 2*data1[,5]
data2[,3] <- 2*data1[,4] + 3*data1[,3]
data2[,4] <- data1[,4] - 2*data1[,3] + data1[,5]
data2[,5] <- data1[,5] - data1[,1]

# VAR estimation and forecasting using data1
var_mod <- lineVar(data1, lag = 1)
fcst1 <- predict(var_mod, n.ahead = 36)

# fcst2_rec: recovered from fcst1 by the same linear combinations as data2
fcst2_rec <- matrix(NA, 36, 5)
fcst2_rec[,1] <- fcst1[,1] + 0.5*fcst1[,2]
fcst2_rec[,2] <- 1.5*fcst1[,1] + fcst1[,3] + fcst1[,4] - 2*fcst1[,5]
fcst2_rec[,3] <- 2*fcst1[,4] + 3*fcst1[,3]
fcst2_rec[,4] <- fcst1[,4] - 2*fcst1[,3] + fcst1[,5]
fcst2_rec[,5] <- fcst1[,5] - fcst1[,1]

# VAR estimation and forecasting using data2
var_mod <- lineVar(data2, lag = 1)
fcst2_dir <- predict(var_mod, n.ahead = 36)

# compare fcst2_rec and fcst2_dir
sum(fcst2_rec); sum(fcst2_dir)

# differences are essentially zero
round(fcst2_rec - fcst2_dir, 8)
> # differences are essentially zero
> round(fcst2_rec - fcst2_dir, 8)
    Var1 Var2 Var3 Var4 Var5
101    0    0    0    0    0
102    0    0    0    0    0
103    0    0    0    0    0
...
136    0    0    0    0    0
(all 36 forecast rows are zero to eight decimal places)
The content of this post may seem straightforward, and you are probably already familiar with it. Nevertheless, it is worth confirming in practice: it is sometimes important to verify what we take to be common knowledge (look before you leap).
In particular, the result discussed here is useful in the context of segmented term structure models. Such models typically involve a large number of latent factors, but cross-sectional restrictions can reduce them to a small set of factors. In that case, the forecast equivalence above makes estimation and prediction more straightforward.