Clever Geek Handbook

Newey-West Standard Errors

Heteroskedasticity and autocorrelation consistent standard errors (HAC standard errors), also known as Newey-West standard errors, are an econometric estimate of the covariance matrix (and in particular of the standard errors) of the OLS estimates of the parameters of a linear regression model. Unlike the standard (classical) estimate, it remains consistent under both heteroskedasticity and autocorrelation of the random errors of the model; White standard errors, by contrast, account for heteroskedasticity only.

Contents

  • 1 Essence and Formula
    • 1.1 Note
  • 2 See also
  • 3 Literature

Essence and Formula

The true covariance matrix of the OLS estimates of the parameters of the linear model in the general case is equal to:

V(\hat{b}_{OLS}) = (X^{T}X)^{-1}(X^{T}VX)(X^{T}X)^{-1}

where V is the covariance matrix of the random errors. When there is no heteroskedasticity and no autocorrelation (that is, when V = \sigma^{2}I), the formula simplifies to

\hat{V}(\hat{b}_{OLS}) = \hat{\sigma}^{2}(X^{T}X)^{-1}

Therefore, to estimate the covariance matrix in the classical case it suffices to estimate a single parameter, the variance of the random errors: \hat{\sigma}^{2} = RSS/(n-k), which can be shown to be an unbiased and consistent estimate. In the presence of heteroskedasticity but without autocorrelation, the matrix V is diagonal, and consistent estimates can be obtained by using the squared residuals in place of its diagonal elements (White standard errors). In the general case, autocorrelation of some order may be present in addition to heteroskedasticity, so the off-diagonal elements lying within L positions of the diagonal must also be estimated. Newey and West (1987) showed that the following estimate is consistent:
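The classical and White estimates just described can be sketched in Python with NumPy. The function name and interface below are illustrative, not taken from any particular library:

```python
import numpy as np

def ols_covariances(X, y):
    """Classical and White (heteroskedasticity-robust) covariance
    estimates for OLS coefficients.

    X : (n, k) regressor matrix, y : (n,) response vector.
    """
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y               # OLS estimate
    e = y - X @ b                       # residuals
    sigma2 = e @ e / (n - k)            # RSS/(n-k), unbiased error-variance estimate
    V_classical = sigma2 * XtX_inv      # valid for homoskedastic, uncorrelated errors
    # White: replace sigma^2 I by diag(e_t^2) in the sandwich formula
    meat = X.T @ (e[:, None] ** 2 * X)
    V_white = XtX_inv @ meat @ XtX_inv
    return b, V_classical, V_white
```

Standard errors are the square roots of the diagonal elements of the returned matrices.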

\hat{V}(\hat{b}_{OLS}) = (X^{T}X)^{-1}\left(\sum_{t=1}^{n} e_{t}^{2}x_{t}x_{t}^{T} + \sum_{j=1}^{L}\sum_{t=j+1}^{n} w_{j}e_{t}e_{t-j}(x_{t}x_{t-j}^{T} + x_{t-j}x_{t}^{T})\right)(X^{T}X)^{-1}
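A minimal NumPy sketch of this estimator, assuming the OLS residuals e have already been computed. The function name is illustrative; by default it uses the Bartlett weights discussed below:

```python
import numpy as np

def newey_west_cov(X, e, L, weights=None):
    """Newey-West HAC covariance estimate of the OLS coefficients.

    X : (n, k) regressor matrix, e : (n,) OLS residuals,
    L : lag truncation ("window width"),
    weights : array (w_1, ..., w_L); Bartlett weights by default.
    """
    n, k = X.shape
    if weights is None:
        weights = 1.0 - np.arange(1, L + 1) / (L + 1)   # Bartlett weights
    XtX_inv = np.linalg.inv(X.T @ X)
    # lag-0 term: sum_t e_t^2 x_t x_t^T (the White "meat")
    S = X.T @ (e[:, None] ** 2 * X)
    for j in range(1, L + 1):
        # Gamma = sum_{t=j+1}^{n} e_t e_{t-j} x_{t-j} x_t^T
        Gamma = X[:-j].T @ ((e[j:] * e[:-j])[:, None] * X[j:])
        S += weights[j - 1] * (Gamma + Gamma.T)
    return XtX_inv @ S @ XtX_inv
```

Note that Gamma + Gamma.T reproduces the symmetrized term (x_t x_{t-j}^T + x_{t-j} x_t^T) of the formula.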

As the formula shows, this estimate depends on the chosen "window width" L and on the weight coefficients w_j. The simplest option is to set all weights equal to one; however, this does not guarantee the required positive definiteness of the matrix. A second option is the Bartlett weights w_j = 1 - j/(L+1). The Parzen weights, however, are usually considered preferable:

w_j = \begin{cases} 1 - 6\left(\dfrac{j}{L+1}\right)^{2} + 6\left(\dfrac{j}{L+1}\right)^{3}, & j \leqslant (L+1)/2 \\ 2\left(1 - \dfrac{j}{L+1}\right)^{3}, & j > (L+1)/2 \end{cases}

There is also the problem of choosing the "window width" L. The following rule of thumb is usually recommended: L = [4(n/100)^{2/9}], where the brackets denote the integer part.
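The Parzen weights and this bandwidth rule of thumb can be written as, for example (helper names here are illustrative):

```python
import numpy as np

def parzen_weights(L):
    """Parzen kernel weights w_1, ..., w_L for window width L."""
    x = np.arange(1, L + 1) / (L + 1)
    return np.where(x <= 0.5,
                    1 - 6 * x**2 + 6 * x**3,   # j <= (L+1)/2
                    2 * (1 - x) ** 3)          # j >  (L+1)/2

def rule_of_thumb_bandwidth(n):
    """Common rule of thumb for the lag truncation: L = [4 (n/100)^(2/9)]."""
    return int(np.floor(4 * (n / 100) ** (2 / 9)))
```

For example, a sample of n = 100 observations gives L = 4 under this rule.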

Note

Sometimes the above estimate of the covariance matrix is multiplied by the correction factor n/(n-k). This correction theoretically yields more accurate estimates in small samples; in large samples the corrected and uncorrected estimates are asymptotically equivalent.

See also

  • White standard errors
  • Generalized least squares

Literature

  • Magnus J.R., Katyshev P.K., Peresetsky A.A. Econometrics. Moscow: Delo, 2004. 576 p.
  • Greene, William H. Econometric Analysis. New York: Pearson Education, 2003. 1026 p.
  • Newey W.K., West K.D. A Simple, Positive Semi-definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix // Econometrica. 1987. Vol. 55, No. 3. P. 703-708.
Source: https://ru.wikipedia.org/w/index.php?oldid=85992728


