
Chow test

The Chow test is a procedure used in econometrics to test the stability of the parameters of a regression model and to detect structural breaks in a sample. In essence, the test checks whether the sample is heterogeneous with respect to the regression model.

The true values of the model parameters may in theory differ across samples, since the samples can be heterogeneous. In particular, in the analysis of time series a so-called structural break can occur when the fundamental characteristics of the system under study change over time. This means that the model before the break and the model after it are, in general, different. For example, the economy underwent structural changes in connection with the crises of 1998-1999 and 2008-2009, so the parameters of macroeconomic models may differ before and after these moments.


Chow test for structural change

Let a sample $S$ of size $n$ be given, divided into two subsamples $S_1, S_2$ of sizes $n_1, n_2$ respectively, so that $n = n_1 + n_2$. For time series this usually means that some moment in time is suspected of being a "structural break", and the series is accordingly split into the parts before and after this point.

Consider the regression model $y_t = x_t^T b + \varepsilon_t$, where $b$ is the vector of model parameters (their number is $k$). It is assumed that the subsamples may be heterogeneous. Thus, for the two subsamples there are two models:

$$\begin{cases} y_t = x_t^T b_1 + \varepsilon_t, & t \in S_1 \\ y_t = x_t^T b_2 + \varepsilon_t, & t \in S_2 \end{cases}$$

These two models can be represented as one model using the subsample indicator $d$:

$$d_t = \begin{cases} 1, & t \in S_1 \\ 0, & t \in S_2 \end{cases}$$

Using this variable, the following model is formulated:

$$y_t = x_t^T \bigl(d_t b_1 + (1 - d_t) b_2\bigr) + \varepsilon_t = d_t x_t^T b_1 + (1 - d_t) x_t^T b_2 + \varepsilon_t = z_1^T b_1 + z_2^T b_2 + \varepsilon_t$$

This is the "long model" without restrictions, estimated over the entire sample, with $2k$ parameters. If the restriction $H_0: b_1 = b_2$ is imposed on this model, we obtain the original model $y_t = x_t^T b + \varepsilon_t$ with $k$ parameters, also over the entire sample. This is the "short model", a model with linear restrictions on the parameters of the long model.
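
As an illustration (not part of the original article), the long and short models can be estimated directly; the following is a minimal Python sketch assuming simulated data with $k = 2$ regressors and a known break point, with all variable names hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated example: n1 + n2 observations, k = 2 regressors (intercept and slope).
n1, n2 = 60, 40
n = n1 + n2
x = np.column_stack([np.ones(n), rng.normal(size=n)])
d = np.r_[np.ones(n1), np.zeros(n2)]                  # subsample indicator d_t

# True parameters differ across the subsamples (a structural break).
b1_true, b2_true = np.array([1.0, 2.0]), np.array([0.5, 3.0])
y = np.where(d == 1, x @ b1_true, x @ b2_true) + rng.normal(size=n)

# "Long model": z1 = d*x, z2 = (1-d)*x, i.e. y = z1'b1 + z2'b2 + e with 2k parameters.
Z = np.column_stack([d[:, None] * x, (1 - d)[:, None] * x])
b_long, *_ = np.linalg.lstsq(Z, y, rcond=None)
b1_hat, b2_hat = b_long[:2], b_long[2:]

# "Short model": the restriction b1 = b2 gives a pooled regression with k parameters.
b_short, *_ = np.linalg.lstsq(x, y, rcond=None)
print(b1_hat, b2_hat, b_short)
```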

The test procedure thus reduces to testing this linear restriction. For normally distributed random errors, the standard F-test of $k$ linear restrictions is used. The test statistic is constructed according to the usual principle:

$$F = \frac{(RSS_S - RSS_L)/k}{RSS_L/(n - k_L)} = \frac{(RSS - RSS_1 - RSS_2)/k}{(RSS_1 + RSS_2)/(n - 2k)} \sim F(k,\, n - 2k)$$

Accordingly, if the value of this statistic exceeds the critical value at the chosen significance level, the restriction hypothesis is rejected in favor of the long model: the samples are recognized as heterogeneous and two separate models must be built for them. Otherwise, the sample is homogeneous (the model parameters are stable) and a single model can be built for the whole sample.
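
A minimal sketch of the F-test itself (an illustrative implementation, not taken from the original text): it computes the residual sums of squares of the short model and of the two subsample regressions that make up the long model, and compares the statistic with the $F(k, n-2k)$ distribution. The function and variable names are hypothetical; `scipy` is assumed to be available.

```python
import numpy as np
from scipy import stats

def rss(X, y):
    """Residual sum of squares of an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def chow_test(X1, y1, X2, y2):
    """Chow test of H0: b1 = b2 (stable parameters, equal error variance)."""
    n1, k = X1.shape
    n2 = X2.shape[0]
    rss_short = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))  # pooled model
    rss1, rss2 = rss(X1, y1), rss(X2, y2)                           # long model pieces
    F = ((rss_short - rss1 - rss2) / k) / ((rss1 + rss2) / (n1 + n2 - 2 * k))
    p_value = stats.f.sf(F, k, n1 + n2 - 2 * k)
    return F, p_value
```

With the simulated data from the previous sketch, `chow_test(x[:n1], y[:n1], x[n1:], y[n1:])` should typically reject the stability hypothesis, since the two subsamples were generated with different parameters.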

In addition to the F-test, other tests can be used to test the restriction hypothesis, in particular the LR test. This is especially relevant in the more general case, when not two but several subsamples are distinguished. If the number of subsamples is $m$, then the corresponding LR statistic has a $\chi^2((m-1)k)$ distribution.
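
A hedged sketch of this more general LR variant, assuming normally distributed errors so that the statistic can be computed as $n \ln(RSS_{restricted}/RSS_{unrestricted})$; the helper and argument names are illustrative only:

```python
import numpy as np
from scipy import stats

def _rss(X, y):
    """Residual sum of squares of an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

def chow_lr_test(subsamples):
    """LR version for m subsamples, given as a list of (X, y) pairs."""
    X_all = np.vstack([X for X, _ in subsamples])
    y_all = np.concatenate([y for _, y in subsamples])
    n, k = X_all.shape
    m = len(subsamples)
    rss_restricted = _rss(X_all, y_all)                         # one pooled model
    rss_unrestricted = sum(_rss(X, y) for X, y in subsamples)   # m separate models
    LR = n * np.log(rss_restricted / rss_unrestricted)
    p_value = stats.chi2.sf(LR, (m - 1) * k)
    return LR, p_value
```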

Note

The test assumes that only the parameters of the linear model may differ across the subsamples, but not the distribution parameters of the random error. In particular, the random error is assumed to have the same variance in both subsamples. In the general case, however, this may not hold. In that case, the Wald test is used, with statistic:

$$W = (\hat{b}_1 - \hat{b}_2)^T (\hat{V}_1 + \hat{V}_2)^{-1} (\hat{b}_1 - \hat{b}_2) \xrightarrow{d} \chi^2(k),$$

where $\hat{b}_1, \hat{V}_1, \hat{b}_2, \hat{V}_2$ are the parameter estimates and the estimates of their covariance matrices in the first and second subsamples, respectively.
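
A minimal sketch of this Wald variant, assuming each subsample is estimated by OLS with its own error variance and the classical covariance estimate $\hat{\sigma}^2 (X^T X)^{-1}$; all names are illustrative:

```python
import numpy as np
from scipy import stats

def ols_with_cov(X, y):
    """OLS estimates and their classical covariance matrix for one subsample."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)             # subsample-specific error variance
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, cov

def chow_wald_test(X1, y1, X2, y2):
    """Wald test of b1 = b2 that allows different error variances in the subsamples."""
    b1, V1 = ols_with_cov(X1, y1)
    b2, V2 = ols_with_cov(X2, y2)
    diff = b1 - b2
    W = diff @ np.linalg.solve(V1 + V2, diff)    # quadratic form with (V1 + V2)^(-1)
    p_value = stats.chi2.sf(W, X1.shape[1])
    return W, p_value
```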

Chow Prediction Test

A slightly different approach is taken here. A model is estimated on one of the subsamples, and on the basis of this model the dependent variable is predicted for the second subsample. The greater the difference between the predicted and actual values of the explained variable in the second subsample, the greater the difference between the subsamples. The corresponding F statistic is:

$$F = \frac{(RSS - RSS_1)/n_2}{RSS_1/(n_1 - k)} \sim F(n_2,\, n_1 - k).$$

In this case one can also use the LR statistic, which has an asymptotic $\chi^2(n_2)$ distribution.
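
A sketch of the prediction version under the same illustrative conventions: the model is fitted on the first subsample only, the pooled fit supplies $RSS$ for the numerator, and the function name is hypothetical.

```python
import numpy as np
from scipy import stats

def chow_prediction_test(X1, y1, X2, y2):
    """Chow prediction test: estimate on subsample 1, judge the forecasts for subsample 2."""
    n1, k = X1.shape
    n2 = X2.shape[0]
    # RSS_1: model estimated on the first subsample only.
    beta1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
    resid1 = y1 - X1 @ beta1
    rss1 = resid1 @ resid1
    # RSS: pooled model over the full sample.
    X = np.vstack([X1, X2])
    y = np.concatenate([y1, y2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    rss_pooled = resid @ resid
    F = ((rss_pooled - rss1) / n2) / (rss1 / (n1 - k))
    p_value = stats.f.sf(F, n2, n1 - k)
    return F, p_value
```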

See also

  • CUSUM test

Literature

  • Chow, Gregory C. Tests of Equality Between Sets of Coefficients in Two Linear Regressions // Econometrica. - 1960. - Vol. 28. - P. 591-605. - DOI: 10.2307/1910133.
  • Doran, Howard E. Applied Regression Analysis in Econometrics. - CRC Press, 1989. - P. 146. - ISBN 0-8247-8049-3.
  • Dougherty, Christopher. Introduction to Econometrics. - Oxford University Press, 2007. - P. 194. - ISBN 0-19-928096-7.
  • Kmenta, Jan. Elements of Econometrics. - 2nd ed. - New York: Macmillan, 1986. - P. 412-423. - ISBN 0-472-10886-7.
  • Wooldridge, Jeffrey M. Introduction to Econometrics: A Modern Approach. - 4th ed. - Mason: South-Western, 2009. - P. 243-246. - ISBN 978-0-324-66054-8.
Source - https://ru.wikipedia.org/w/index.php?title=Chow_Test&oldid=88409275

