R : Diebold-Mariano Test

This post shows how to perform the Diebold-Mariano test using the multDM R package.



Diebold-Mariano test



The Diebold-Mariano (DM) test is the most commonly used tool to evaluate the significance of differences in forecasting accuracy. It is an asymptotic z-test of the hypothesis that the mean of the loss differential series is zero:
\[\begin{align} d_t = L(\epsilon_{1t}) - L(\epsilon_{2t}) \end{align}\] where the loss function \(L(\cdot)\) corresponds to various measures such as the absolute loss, squared loss, and more.

The null hypothesis is that the two forecasts have the same accuracy; the alternative hypothesis is that the two forecasts have different levels of accuracy:
\[\begin{align} &H_0 : E(d_t) = 0 \\ &H_1 : E(d_t) \ne 0 \end{align}\] Under the null hypothesis, the DM test statistic follows an asymptotic \(N(0,1)\) distribution. The \(H_0\) of no difference will be rejected if the DM test statistic falls outside the range of \(−z_{\alpha/2}\) to \(z_{\alpha/2}\), that is if \(|DM| > z_{\alpha/2}\).
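Concretely, the DM statistic is the standardized sample mean of the loss differential, where the variance is a long-run (HAC) variance that accounts for the serial correlation induced by \(h\)-step-ahead forecasts:
\[\begin{align} DM = \frac{\bar{d}}{\sqrt{\left(\hat{\gamma}_0 + 2\sum_{k=1}^{h-1}\hat{\gamma}_k\right)/T}}, \qquad \bar{d} = \frac{1}{T}\sum_{t=1}^{T} d_t \end{align}\] where \(\hat{\gamma}_k\) denotes the lag-\(k\) sample autocovariance of \(d_t\) and \(T\) is the number of forecasts.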



R code


For example, let's consider the 3-month-ahead forecasts of 3-year yields from two models. The two forecasts appear similar, but some discrepancies emerge in the early part of the sample period.

The forecasts of the NS2 model seem to marginally outperform those of the DNS model. To substantiate this claim, we need to determine its statistical significance. For this purpose, we can employ the Diebold-Mariano test.
I use the DM.test() function of the multDM R package together with the rmse() function of the Metrics R package. More specific information about the input arguments is given as comments in the code.
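As a quick sanity check before the full script, rmse() is nothing more than the square root of the mean squared error; the numbers below are made up for illustration:

```r
# RMSE by hand -- toy numbers for illustration only
y    <- c(1.0, 1.2, 0.9)   # actual values
yhat <- c(1.1, 1.0, 1.0)   # forecasts
sqrt(mean((y - yhat)^2))   # 0.1414214, same as Metrics::rmse(y, yhat)
```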


#========================================================#
# Quantitative Financial Econometrics & Derivatives 
# ML/DL using R, Python, Tensorflow by Sang-Heon Lee 
#
# https://shleeai.blogspot.com
#-----------------------------------------------------------
# RMSE and DM test
#========================================================#
 
library(Metrics) # rmse
library(multDM)  # DM.test
 
graphics.off(); rm(list = ls())
 
#-----------------------------------------------------------
# 3-year yields and their 3-month-ahead forecasts
#-----------------------------------------------------------
v_true <- c(
    0.3785, 0.3215, 0.5347, 0.7144, 0.6514, 0.8292, 0.6743,
    0.6224, 0.6064, 0.8306, 0.7256, 0.7286, 0.9157, 0.8904,
    0.8072, 0.8938, 1.0534, 0.9926, 1.1203, 0.9872, 0.9110,
    1.1185, 0.7884, 1.0344, 0.9060, 0.9406, 0.9746, 1.0441,
    1.0218, 1.0907, 0.9378, 1.0687, 1.2408, 1.3501, 0.9984,
    0.9470, 0.9011, 0.9579, 1.0672, 0.7380, 0.7910, 0.9682,
    0.9247, 1.0455, 1.4340, 1.5014, 1.4965, 1.5201, 1.5407,
    1.4813, 1.4557, 1.5719, 1.5258, 1.4565, 1.6455, 1.7537,
    1.9286, 2.0033, 2.3127, 2.4284, 2.3985, 2.6337, 2.5348,
    2.6134, 2.7537, 2.6817, 2.8666, 2.9158, 2.8218, 2.4624,
    2.4203, 2.5041, 2.2260, 2.2320, 1.8940, 1.7164, 1.8397,
    1.4335, 1.5867, 1.5199, 1.6291, 1.6159, 1.2989, 0.8547,
    0.3230, 0.2712, 0.2095, 0.1916, 0.1365, 0.1686, 0.1665,
    0.2153, 0.2050, 0.1780, 0.2025, 0.3433, 0.3967, 0.3779,
    0.3314, 0.4694, 0.3631, 0.4161, 0.5466, 0.7801, 0.8247,
    0.9860, 1.4037, 1.6326, 2.4844, 2.8933, 2.7241, 2.9872,
    2.8356, 3.4367, 4.2307, 4.4006, 4.1147, 4.2295)
 
v_DNS <- c(
    0.6770, 0.7380, 0.6132, 0.7312, 0.5748, 0.7575, 1.1352,
    1.0734, 1.1282, 1.0689, 0.9695, 1.0026, 1.2458, 1.0561,
    1.1092, 1.1870, 1.3845, 1.1116, 1.1648, 1.3560, 1.3500,
    1.2682, 1.3259, 1.0134, 1.3560, 0.8240, 1.0790, 1.0726,
    1.1802, 1.1893, 1.2133, 1.3103, 1.3977, 1.2397, 1.2725,
    1.2601, 1.5717, 1.1556, 1.2177, 1.1761, 1.3321, 1.2765,
    0.8769, 0.9133, 1.2693, 1.0486, 1.2701, 1.5750, 1.7047,
    1.7023, 1.5662, 1.7258, 1.5856, 1.6701, 1.7045, 1.7832,
    1.6120, 1.8260, 1.9067, 1.9714, 2.1330, 2.3188, 2.4173,
    2.4402, 2.7014, 2.5790, 2.6052, 2.8355, 2.6518, 2.7979,
    2.7974, 2.7862, 2.6074, 2.5824, 2.6511, 2.2867, 2.3288,
    2.1271, 1.8461, 1.9618, 1.6491, 1.7387, 1.5485, 1.6753,
    1.7125, 1.4224, 1.0391, 0.5259, 0.3800, 0.3175, 0.2468,
    0.2944, 0.3266, 0.3446, 0.5191, 0.3892, 0.2278, 0.3722,
    0.5393, 0.7271, 0.6481, 0.5966, 0.6929, 0.5764, 0.5078,
    0.6837, 0.8924, 0.8649, 0.9408, 1.3736, 1.4585, 2.1477,
    2.6380, 2.5471, 2.9316, 2.7443, 3.3847, 3.9966)
 
v_NS2 <- c(
    0.3827, 0.4559, 0.3815, 0.3994, 0.3350, 0.6082, 0.7184,
    0.6658, 0.8100, 0.7102, 0.6549, 0.6234, 0.7941, 0.6910,
    0.6848, 0.7810, 0.7588, 0.6997, 0.7630, 0.8079, 0.7405,
    0.8760, 0.8203, 0.7192, 0.9003, 0.5767, 0.8719, 0.7645,
    0.9866, 1.0692, 1.0624, 1.1129, 1.2572, 1.3188, 1.4228,
    1.5660, 1.6499, 1.1488, 1.2208, 1.1902, 1.2233, 1.3102,
    0.9593, 0.9769, 1.1270, 1.1177, 1.2852, 1.6721, 1.7223,
    1.6823, 1.6731, 1.7723, 1.7262, 1.4828, 1.5965, 1.5551,
    1.5299, 1.8299, 1.9085, 2.0343, 2.0980, 2.2974, 2.4676,
    2.4417, 2.6011, 2.5549, 2.5633, 2.6989, 2.6527, 2.8365,
    2.9090, 2.8723, 2.4777, 2.4154, 2.5153, 2.2231, 2.2399,
    1.9187, 1.7019, 1.8043, 1.4124, 1.5601, 1.4979, 1.6286,
    1.6356, 1.2806, 0.8406, 0.3371, 0.3026, 0.1891, 0.1703,
    0.1167, 0.1327, 0.1374, 0.2050, 0.1995, 0.1532, 0.1787,
    0.3039, 0.3416, 0.3434, 0.3010, 0.4621, 0.3734, 0.4177,
    0.5057, 0.7163, 0.7670, 1.0366, 1.3943, 1.5748, 2.5524,
    2.9818, 2.6853, 3.5169, 2.4736, 3.5101, 4.2882)
 
x11(width=7, height=4.5)
matplot(cbind(v_true, v_DNS, v_NS2), type="l", col=2:4,
        lty=1, lwd=4, ylab = "Yields (Percent)")
legend("topleft", pch=rep(16,3), cex = 0.9, col=2:4,
       legend=c("True", "DNS", "NS2"))
 
#-----------------------------------------------------------
# RMSE and DM Test
#-----------------------------------------------------------
 
#===========================================================
# DM test arguments
#===========================================================
# f1 : vector of the first forecast
# f2 : vector of the second forecast
# y     : vector of the real values of the modeled time-series
#
#-----------------------------------------------------------
# loss.type : method to compute the loss function; 
#             if not specified, loss.type="SE" is used
#-----------------------------------------------------------
# loss.type="SE"   : squared errors, 
# loss.type="AE"   : absolute errors, 
# loss.type="SPE"  : squared proportional error 
#                   (useful if errors are heteroskedastic), 
# loss.type="ASE"  : absolute scaled error, 
#
# if loss.type will be specified as some numeric, then 
# the function of type 
#
# exp(loss.type*errors)-1-loss.type*errors 
#
# will be used (useful when it is more costly 
# to underpredict y than to overpredict), 
#
# if not specified, loss.type="SE" is used
#
#-----------------------------------------------------------
# h    : the forecast h-steps ahead are evaluated, 
#     if not specified h=1 is used
#-----------------------------------------------------------
# c    : if Harvey-Leybourne-Newbold correction 
#     for small samples should be used, 
#     if not specified c=FALSE is used
#-----------------------------------------------------------
# H1 : alternative hypothesis, 
#      if not specified H1="same" is taken
#-----------------------------------------------------------
# H1="same" for "both forecasts have different accuracy", 
# H1="more" for "1st fcst is more accurate than 2nd fcst", 
# H1="less" for "1st fcst is less accurate than 2nd fcst"
#-----------------------------------------------------------
 
# input
v_pred1 <- v_DNS; v_pred2 <- v_NS2
 
# RMSE in bps
rmse1 <- rmse(v_true, v_pred1)*100
rmse2 <- rmse(v_true, v_pred2)*100
 
# DM test
dm <- DM.test(f1 = v_pred1, f2 = v_pred2, y = v_true,
              loss.type = "SE", h = 3, c = FALSE, H1 = "same")
 
# output
cbind(RMSE_DNS = round(rmse1,2), 
      RMSE_NS2 = round(rmse2,2), 
      DM_stat  = round(dm$statistic,4),
      DM_pval  = round(dm$p.value,4))
 


From the following DM test result, we can conclude that, according to the Diebold-Mariano test of equal predictive ability, the null hypothesis of equal accuracy is rejected; since the NS2 forecasts also have the lower RMSE, the forecasts of the NS2 model are more accurate than the forecasts of the DNS model.

However, we cannot conclude that the NS2 model itself is more accurate than the DNS model, as the authors stress that the DM test was not intended for comparing models, only forecasts.

-----------------------------------------------------
              RMSE_DNS  RMSE_NS2  DM_stat  DM_pval
-----------------------------------------------------
   statistic   47.82     44.17     2.1027   0.0355
-----------------------------------------------------
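As a cross-check, the DM statistic can also be computed by hand. The sketch below assumes squared-error loss and the rectangular truncation of the long-run variance at lag h-1 from the original DM formulation; DM.test() may use a slightly different variance estimator, so small numerical differences are possible:

```r
# Hand-rolled DM statistic: squared-error loss, autocovariances up to lag h-1
dm_stat <- function(f1, f2, y, h = 1) {
  d    <- (y - f1)^2 - (y - f2)^2    # loss differential d_t
  T    <- length(d)
  dbar <- mean(d)
  # sample autocovariances of d_t at lags 0, ..., h-1
  gam  <- sapply(0:(h - 1), function(k)
    sum((d[(k + 1):T] - dbar) * (d[1:(T - k)] - dbar)) / T)
  lrv  <- gam[1] + 2 * sum(gam[-1])  # long-run variance of d_t
  dbar / sqrt(lrv / T)               # asymptotically N(0,1) under H0
}

dm_stat(c(1, 2, 1, 2), c(1, 1, 1, 1), c(0, 0, 0, 0))  # toy check: returns 2
```

Applying dm_stat(v_DNS, v_NS2, v_true, h = 3) to the vectors above should land close to the DM_stat reported by DM.test().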
 


Reference

Diebold, F.X. and R.S. Mariano (1995) Comparing predictive accuracy. Journal of Business and Economic Statistics 13, 253-263. \(\blacksquare\)

