Cavaliere, G., Georgiev, I. (2007). Robust inference in autoregressions with multiple outliers.
Robust inference in autoregressions with multiple outliers
Cavaliere, Giuseppe; Georgiev, Iliyan Vladimirov
2007
Abstract
We consider robust methods for estimation and unit root [UR] testing in autoregressions with innovational outliers whose number, size and location can be random and unknown. We show that in this setting standard inference based on OLS estimation of an augmented Dickey-Fuller [ADF] regression may not be reliable, since (i) clusters of outliers may lead to inconsistent estimation of the autoregressive parameters, and (ii) large outliers induce a jump component in the asymptotic null distribution of UR test statistics. In the benchmark case of known outlier location, we discuss why augmenting the ADF regression with appropriate dummy variables not only ensures consistent parameter estimation, but also gives rise to UR tests with significant power gains, growing with the number and the size of the outliers. In the case of unknown outlier location, the dummy-based approach is compared with a robust, mixed Gaussian, Quasi Maximum Likelihood [QML] inference approach, novel in this context. It is proved that, when the ordinary innovations are Gaussian, the QML and the dummy-based approach are asymptotically equivalent, yielding UR tests with the same asymptotic size and power. Moreover, the outlier dates can be consistently estimated as a by-product of QML. When the innovations display tails fatter than Gaussian, the QML approach seems to ensure further power gains over the dummy-based method. A number of Monte Carlo simulations show that the QML ADF-type t-test, in conjunction with standard Dickey-Fuller critical values, yields the best combination of finite-sample size and power.
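As an illustration of the dummy-based approach in the benchmark case of known outlier location, the sketch below fits an ADF regression augmented with impulse dummies at given outlier dates via OLS and returns the t-statistic on the lagged level, to be compared with standard Dickey-Fuller critical values. This is a minimal sketch, not the authors' implementation: the function name dummy_adf_tstat, the lag order p, and the indexing of outlier dates are assumptions made for illustration only.

```python
import numpy as np
import statsmodels.api as sm

def dummy_adf_tstat(y, outlier_dates, p=1):
    """Illustrative sketch (not from the paper): t-statistic on y[t-1] in a
    regression of dy[t] on y[t-1], p lagged differences, a constant, and
    impulse dummies at the given (assumed known) outlier dates."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                        # dy[i] = y[i+1] - y[i]
    # Regressand: differences for t = p+1, ..., n-1
    Y = dy[p:]
    # Regressors: lagged level and p lagged differences
    cols = [y[p:-1]]                       # y_{t-1}
    for j in range(1, p + 1):
        cols.append(dy[p - j:-j])          # dy_{t-j}
    X = np.column_stack(cols)
    # Impulse dummies: 1 at each outlier date, indexed in the original series
    D = np.zeros((Y.shape[0], len(outlier_dates)))
    for k, d in enumerate(outlier_dates):
        idx = d - (p + 1)                  # row of Y corresponding to date d (assumed convention)
        if 0 <= idx < Y.shape[0]:
            D[idx, k] = 1.0
    X = sm.add_constant(np.column_stack([X, D]))
    fit = sm.OLS(Y, X).fit()
    # t-statistic on y_{t-1}: the first slope after the constant
    return fit.tvalues[1]
```

For a series y with suspected outliers at, say, dates 30 and 75 (hypothetical), dummy_adf_tstat(y, [30, 75], p=2) would give a statistic to be judged against the usual Dickey-Fuller critical values; the QML approach discussed in the paper instead treats the outlier dates as unknown and estimates them jointly.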