INTRODUCTION TO STATISTICS AND ECONOMETRICS
Error Components Model
The error components model is useful when we wish to pool time-series and cross-section data. For example, we may want to estimate production functions using data collected on the annual inputs and outputs of many firms, or demand functions using data on quantities and prices collected monthly from many consumers. By pooling time-series and cross-section data, we hope to estimate the parameters of a relationship such as a production function or a demand function more efficiently than by using either set of data separately. Still, we should not treat time-series and cross-section data homogeneously. At the least, we should try to account for the difference by introducing the specific effects of time and cross-section into the error term of the regression, as follows:
(13.1.32)   $y_{it} = \mathbf{x}_{it}'\boldsymbol{\beta} + u_{it}$

and

(13.1.33)   $u_{it} = \mu_i + \lambda_t + \varepsilon_{it}, \quad i = 1, 2, \ldots, N; \; t = 1, 2, \ldots, T,$
where $\mu_i$ and $\lambda_t$ are the cross-section-specific and time-specific components.
In the simplest version of such a model, we assume that the sequences $\{\mu_i\}$, $\{\lambda_t\}$, and $\{\varepsilon_{it}\}$ are i.i.d. random variables with zero mean and are mutually independent, with variances $\sigma_\mu^2$, $\sigma_\lambda^2$, and $\sigma_\varepsilon^2$, respectively. In order to find the variance-covariance matrix $\Sigma$ of this model, we must first decide how to write (13.1.32) in the form of (13.1.1). In defining the vector $y$, for example, it is customary to arrange the observations in the following way: $y' = (y_{11}, y_{12}, \ldots, y_{1T}, y_{21}, y_{22}, \ldots, y_{2T}, \ldots, y_{N1}, y_{N2}, \ldots, y_{NT})$. If we define $X$ and $u$ similarly, we can write (13.1.32) in the form of (13.1.1). To facilitate the derivation of $\Sigma$, we need the following definition.
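The stacking convention above (the cross-section index $i$ varying slowly, the time index $t$ varying fast) can be sketched numerically. The observation values below are hypothetical, chosen only to make the ordering visible:

```python
import numpy as np

# Illustrative stacking of panel observations y_it into the vector y
# described in the text: i varies slowly, t varies fast, so that
# y' = (y_11, ..., y_1T, y_21, ..., y_2T, ..., y_N1, ..., y_NT).
N, T = 2, 3

# Hypothetical observations: Y[i, t] holds y_{i+1, t+1}.
Y = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# A row-major reshape produces exactly this ordering.
y = Y.reshape(N * T)
assert np.array_equal(y, np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]))
```

Arranging the data this way is what makes the patterned matrices $A = I_N \otimes J_T$ and $B = J_N \otimes I_T$ appear below; a different ordering would permute them.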
DEFINITION 13.1.1 Let $A = \{a_{ij}\}$ be a $K \times L$ matrix and let $B$ be an $M \times N$ matrix. Then the Kronecker product $A \otimes B$ is a $KM \times LN$ matrix defined by

$$A \otimes B =
\begin{pmatrix}
a_{11}B & a_{12}B & \cdots & a_{1L}B \\
a_{21}B & a_{22}B & \cdots & a_{2L}B \\
\vdots  & \vdots  &        & \vdots  \\
a_{K1}B & a_{K2}B & \cdots & a_{KL}B
\end{pmatrix}.$$
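The block structure in Definition 13.1.1 can be checked directly with NumPy's `kron`, which implements this product; the matrices $A$ and $B$ here are arbitrary small examples:

```python
import numpy as np

# Kronecker product per Definition 13.1.1: for a K x L matrix A and an
# M x N matrix B, A ⊗ B is the KM x LN block matrix whose (i, j) block
# is a_ij * B.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

K, L = A.shape
M, N = B.shape
AB = np.kron(A, B)
assert AB.shape == (K * M, L * N)

# The upper-left block equals a_11 * B, the upper-right block a_12 * B.
assert np.array_equal(AB[:M, :N], A[0, 0] * B)
assert np.array_equal(AB[:M, N:2 * N], A[0, 1] * B)
```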
Let $J_K$ be the $K \times K$ matrix consisting entirely of ones. Then we have
(13.1.34)   $\Sigma = \sigma_\mu^2 A + \sigma_\lambda^2 B + \sigma_\varepsilon^2 I_{NT},$

where $A = I_N \otimes J_T$ and $B = J_N \otimes I_T$, and

(13.1.35)   $\Sigma^{-1} = \sigma_\varepsilon^{-2}\left(I_{NT} - \gamma_1 A - \gamma_2 B + \gamma_3 J_{NT}\right),$

where

$$\gamma_1 = \sigma_\mu^2(\sigma_\varepsilon^2 + T\sigma_\mu^2)^{-1}, \quad
\gamma_2 = \sigma_\lambda^2(\sigma_\varepsilon^2 + N\sigma_\lambda^2)^{-1},$$
$$\gamma_3 = \gamma_1\gamma_2(2\sigma_\varepsilon^2 + T\sigma_\mu^2 + N\sigma_\lambda^2)(\sigma_\varepsilon^2 + T\sigma_\mu^2 + N\sigma_\lambda^2)^{-1}.$$
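Formulas (13.1.34) and (13.1.35) can be verified numerically for a small panel. The variance values below are illustrative, not taken from the text:

```python
import numpy as np

# Numerical check of (13.1.34)-(13.1.35) for small N, T.
# sigma_mu^2, sigma_lambda^2, sigma_eps^2 are illustrative values.
N, T = 4, 3
s_mu2, s_lam2, s_eps2 = 2.0, 1.5, 1.0

A = np.kron(np.eye(N), np.ones((T, T)))   # I_N ⊗ J_T
B = np.kron(np.ones((N, N)), np.eye(T))   # J_N ⊗ I_T
J_NT = np.ones((N * T, N * T))
I_NT = np.eye(N * T)

# (13.1.34): the variance-covariance matrix of u.
Sigma = s_mu2 * A + s_lam2 * B + s_eps2 * I_NT

# (13.1.35): its inverse, built from gamma_1, gamma_2, gamma_3.
g1 = s_mu2 / (s_eps2 + T * s_mu2)
g2 = s_lam2 / (s_eps2 + N * s_lam2)
g3 = g1 * g2 * (2 * s_eps2 + T * s_mu2 + N * s_lam2) \
            / (s_eps2 + T * s_mu2 + N * s_lam2)

Sigma_inv = (I_NT - g1 * A - g2 * B + g3 * J_NT) / s_eps2

assert np.allclose(Sigma @ Sigma_inv, I_NT)
```

The check relies on the identities $A^2 = TA$, $B^2 = NB$, and $AB = BA = J_{NT}$, which is why the inverse has the same patterned form as $\Sigma$ itself.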
From the above, $\beta$ can be estimated by the GLS estimator (13.1.5) or, more practically, by the FGLS estimator (13.1.11), using consistent estimators of $\gamma_1$, $\gamma_2$, and $\gamma_3$. Alternatively, we can estimate $\beta$ by the so-called transformation estimator
(13.1.36)   $\hat{\beta}_Q = (X'QX)^{-1}X'Qy,$

where

(13.1.37)   $Q = I_{NT} - \frac{1}{T}A - \frac{1}{N}B + \frac{1}{NT}J_{NT}.$
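Why the transformation estimator does not need the $\gamma$'s can be seen from the properties of $Q$: it is a symmetric idempotent matrix that annihilates $A$, $B$, and $J_{NT}$, so premultiplying by $Q$ sweeps out the $\mu_i$ and $\lambda_t$ effects before least squares is applied. A quick numerical check for a small panel:

```python
import numpy as np

# Check that Q in (13.1.37) annihilates the component patterns:
# QA = QB = Q J_NT = 0, and Q is idempotent, so Qu contains only
# the (transformed) epsilon disturbances.
N, T = 4, 3
A = np.kron(np.eye(N), np.ones((T, T)))   # I_N ⊗ J_T
B = np.kron(np.ones((N, N)), np.eye(T))   # J_N ⊗ I_T
J_NT = np.ones((N * T, N * T))

Q = np.eye(N * T) - A / T - B / N + J_NT / (N * T)

assert np.allclose(Q @ A, 0)
assert np.allclose(Q @ B, 0)
assert np.allclose(Q @ J_NT, 0)
assert np.allclose(Q @ Q, Q)   # idempotent projection
```

Note that $Q$ has rank $(N-1)(T-1)$, the degrees of freedom left after removing the individual means, the time means, and the grand mean.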
This estimator is computationally simpler than the FGLS estimator because it does not require estimation of the $\gamma$'s, yet it is consistent and has the same asymptotic distribution as FGLS.
Remember that if we arrange the observations in a different way, we need a different formula for $\Sigma$.