Econometrica: Sep, 1988, Volume 56, Issue 5
Approximate Power Functions for Some Robust Tests of Regression Coefficients
https://www.jstor.org/stable/1911356
pp. 997-1019
Thomas J. Rothenberg
Asymptotically valid procedures are available for testing hypotheses on the slope coefficients in a linear regression model where the error covariance matrix Σ depends on an unknown parameter vector θ. Two situations can be distinguished. In one case, θ has low dimension and can be well estimated from the data; tests on the slope coefficients are based on generalized least squares using the estimated θ. In the second case, the parameter vector θ has high dimension and cannot be well estimated; tests are based on the ordinary least squares slope coefficients using a robust estimate of their covariance matrix. When the null hypothesis consists of a single constraint, the test statistics in both cases are ratios of estimated slope coefficients to their estimated standard errors and are asymptotically normal as the sample size n tends to infinity. In the present paper, Edgeworth approximations with error o(n⁻¹) are developed for the distribution functions of these test statistics under the assumption that the errors are normal. In both cases, adjustments to the asymptotic critical values are found to insure that the tests have correct size to a second order of approximation. Approximate local power functions are derived for the size-adjusted tests and the power loss due to the estimation of Σ is calculated. The second-order adjustments to the asymptotic distributions are relatively simple, but depend on the particular functional form for Σ(θ). Two examples, one involving heteroscedasticity and the other autocorrelation, are worked out in detail. These examples suggest that the size and power correction terms can be large even in very simple models where asymptotic theory might be expected to work well. It appears that the null rejection probabilities of the commonly proposed robust regression tests are often considerably greater than their nominal level. Moreover, the cost of not knowing Σ can sometimes be substantial.
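As a concrete illustration of the second case described above (not taken from the paper), the sketch below computes a robust t-statistic of the kind the abstract refers to: an OLS slope estimate divided by a standard error formed from the Eicker-White heteroscedasticity-consistent covariance estimator. The function name robust_t_stat and the simulated data are illustrative assumptions only; they do not reproduce Rothenberg's Edgeworth formulas or correction terms.

import numpy as np

def robust_t_stat(y, X, j):
    """OLS estimate of coefficient j with an Eicker-White
    heteroscedasticity-consistent (sandwich) standard error,
    one concrete instance of the 'robust' covariance estimate
    mentioned in the abstract."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                 # OLS slope coefficients
    resid = y - X @ beta                     # OLS residuals
    meat = X.T @ (resid[:, None] ** 2 * X)   # X' diag(e_i^2) X
    V = XtX_inv @ meat @ XtX_inv             # sandwich covariance estimate
    return beta[j] / np.sqrt(V[j, j])        # asymptotically N(0,1) under H0: beta_j = 0

# Toy usage: heteroscedastic errors, testing H0: slope = 0 (true here)
rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 0.0 * x + rng.normal(scale=1 + np.abs(x))  # error variance depends on x
print(robust_t_stat(y, X, j=1))

The abstract's warning applies to exactly this kind of statistic: although it is asymptotically N(0,1) under the null, its actual rejection probability at conventional critical values can considerably exceed the nominal level in samples of moderate size, which is what the second-order size adjustments in the paper are designed to correct.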