Published online by Cambridge University Press: 09 October 2014
The limit distribution of conventional test statistics for predictability may depend on the degree of persistence of the predictors, so divergent results and conclusions can arise simply because different asymptotic theories are adopted. Using differencing transformations, we introduce a new class of estimators and test statistics for predictive regression models whose Gaussian limit distribution is insensitive to the degree of persistence of the predictors. This desirable feature makes it possible to construct Gaussian confidence intervals for the parameter of interest in stationary, nonstationary, and even locally explosive settings. Beyond the limit distribution, we also study the efficiency and the rate of convergence of the new class of estimators. We show that the rate of convergence is $\sqrt{n}$ in stationary cases, while it can be arbitrarily close to $n$ in nonstationary settings, still preserving the Gaussian limit distribution. Monte Carlo simulations confirm the high reliability and accuracy of our test statistics.
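The motivating problem (that the size of conventional predictability tests depends on predictor persistence) can be illustrated with a small simulation. The sketch below is not the authors' differencing-based estimator; it only reproduces the well-known size distortion of the ordinary OLS t-test in the standard predictive-regression setup $y_t = \alpha + \beta x_{t-1} + u_t$, $x_t = \rho x_{t-1} + v_t$, with correlated innovations. The sample size, number of replications, and innovation correlation are illustrative choices.

```python
import numpy as np

def ols_t_stat(y, x):
    """Conventional OLS t-statistic for the slope (intercept included)."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = resid @ resid / (n - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1] / se

def rejection_rate(rho, n=200, reps=1000, corr=-0.9, seed=0):
    """Empirical size of the nominal-5% two-sided t-test of beta = 0 in
    y_t = alpha + beta*x_{t-1} + u_t, where x_t = rho*x_{t-1} + v_t and
    corr(u, v) = corr drives the distortion under high persistence."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, corr], [corr, 1.0]])
    rejections = 0
    for _ in range(reps):
        shocks = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        u, v = shocks[:, 0], shocks[:, 1]
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + v[t]
        y = u[1:]  # beta = 0: the null of no predictability holds
        if abs(ols_t_stat(y, x[:-1])) > 1.96:
            rejections += 1
    return rejections / reps

rate_stationary = rejection_rate(rho=0.5)   # mildly persistent predictor
rate_near_unit = rejection_rate(rho=0.99)   # highly persistent predictor
```

Under this design the empirical rejection rate stays close to the nominal 5% level when the predictor is mildly persistent, but rises well above it when $\rho$ is near unity, which is exactly the sensitivity to persistence that a persistence-robust procedure is meant to remove.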