The paper examines various tests for assessing whether a time series model requires a slope component. We first consider the simple t-test on the mean of first differences and show that it achieves high power against the alternative hypothesis of a stochastic nonstationary slope and also against a purely deterministic slope. The test may be modified, parametrically or nonparametrically, to deal with serial correlation. Using both local limiting power arguments and finite-sample Monte Carlo results, we compare the t-test with the nonparametric tests of Vogelsang (1998, Econometrica 66, 123–148) and with a modified stationarity test. Overall the t-test seems a good choice, particularly if it is implemented by fitting a parametric model to the data. When standardized by the square root of the sample size, the simple t-statistic, with no correction for serial correlation, has a limiting distribution if the slope is stochastic. We investigate whether it is a viable test for the null hypothesis of a stochastic slope and conclude that its value may be limited by an inability to reject a small deterministic slope.

The second author thanks the Economic and Social Research Council (ESRC) for support as part of a project on Dynamic Common Factor Models for Regional Time Series, grant L138 25 1008. Support from the Bank of Italy is also gratefully acknowledged. Earlier versions of this paper were presented at the meeting on Frontiers in Time Series held in Olbia, Italy, in June 2005 and at the NSF/NBER Time Series conference in Heidelberg, Germany, in September 2005; we are grateful to several participants for helpful comments. We also thank Peter Phillips, Robert Taylor, Jesus Gonzalo, and a number of other participants at the Unit Root and Co-integration Testing meeting in Faro, Portugal, for helpful comments. We are grateful to Paulo Rodrigues and two referees for their comments.
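To fix ideas, a minimal sketch of the basic statistic discussed above: the t-test on the mean of first differences, with no correction for serial correlation. This is an illustrative implementation under simplifying assumptions (no parametric or nonparametric adjustment), not the authors' exact procedure; the function name `slope_t_stat` and the example series are our own.

```python
import math
from statistics import mean, stdev

def slope_t_stat(y):
    """t-statistic for H0: no slope, i.e. E[first difference of y] = 0.

    Computes d_t = y_{t+1} - y_t and returns mean(d) / (sd(d) / sqrt(n)),
    the ordinary one-sample t-statistic applied to the differences.
    No serial-correlation correction is applied (cf. the parametric and
    nonparametric modifications discussed in the paper).
    """
    d = [y[t + 1] - y[t] for t in range(len(y) - 1)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

# Illustrative series (our own choice, not from the paper):
# a deterministic slope plus a bounded oscillation, versus no slope.
trended = [0.3 * t + math.sin(t) for t in range(100)]
level_only = [math.sin(t) for t in range(100)]

print(slope_t_stat(trended))     # large in absolute value: slope detected
print(slope_t_stat(level_only))  # small in absolute value: no slope
```

With a genuine slope the differences have a nonzero mean and the statistic is large; standardizing it by the square root of the sample size, as noted in the abstract, is what yields a limiting distribution under a stochastic slope.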