In econometrics, the seemingly unrelated regressions (SUR) or seemingly unrelated regression equations (SURE) model, proposed by Arnold Zellner in 1962, is a generalization of a linear regression model that consists of several regression equations, each having its own dependent variable and potentially different sets of exogenous explanatory variables. Each equation is a valid linear regression on its own and can be estimated separately, which is why the system is called seemingly unrelated, although some authors suggest that the term seemingly related would be more appropriate, since the error terms are assumed to be correlated across the equations.
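As an illustration of this setup, the sketch below (simulated data, NumPy only; the coefficient values and error covariance are arbitrary assumptions made for the demo) builds two equations whose errors are correlated within each observation, and estimates each one separately by OLS:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 500  # observations per equation

# Two "seemingly unrelated" equations with different regressors
X1 = np.column_stack([np.ones(R), rng.normal(size=R)])
X2 = np.column_stack([np.ones(R), rng.normal(size=R)])
beta1, beta2 = np.array([1.0, 2.0]), np.array([-0.5, 0.3])

# Errors are correlated ACROSS equations within each observation
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
eps = rng.multivariate_normal([0.0, 0.0], Sigma, size=R)

y1 = X1 @ beta1 + eps[:, 0]
y2 = X2 @ beta2 + eps[:, 1]

# Each equation is a valid regression on its own: per-equation OLS is consistent
b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
print(b1, b2)  # close to the true coefficients
```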
a k_i-dimensional vector of regressors x_ir. If we stack observations corresponding to the i-th equation into R-dimensional vectors and matrices, then the model can be written in vector form as

y_i = X_i β_i + ε_i,

where y_i and ε_i are R×1 vectors, X_i is an R×k_i matrix, and β_i is a k_i×1 vector. Finally, if we stack these m vector equations on top of each other,
is unbiased in small samples assuming the error terms ε_ir have a symmetric distribution; in large samples it is consistent and asymptotically normal, with limiting distribution

√R (β̂ − β) →d N(0, ((1/R) X′(Σ⁻¹ ⊗ I_R) X)⁻¹).

Other estimation techniques besides FGLS have been suggested for the SUR model: the maximum likelihood (ML) method under the assumption that the errors are normally distributed; the iterative generalized least squares (IGLS), where
is 1 and 0 elsewhere. The determinant of the identity matrix is 1, and its trace is n. The identity matrix is the only idempotent matrix with non-zero determinant; that is, it is the only matrix such that, when multiplied by itself, the result is itself, and all of its rows and columns are linearly independent. The principal square root of an identity matrix is itself, and this is its only positive-definite square root. However, every identity matrix with at least two rows and columns has an infinitude of symmetric square roots. The rank of an identity matrix I_n equals n.
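These properties can be checked numerically; a small NumPy sketch (the choice n = 4 is arbitrary):

```python
import numpy as np

n = 4
I = np.eye(n)

assert np.isclose(np.linalg.det(I), 1.0)   # determinant is 1
assert np.trace(I) == n                    # trace is n
assert np.linalg.matrix_rank(I) == n       # rank of I_n equals n
assert np.array_equal(I @ I, I)            # idempotent, its own principal square root

# A symmetric square root of I_n other than I itself:
# swapping two coordinates is its own inverse
S = np.eye(n)
S[[0, 1]] = S[[1, 0]]
assert np.array_equal(S, S.T) and np.array_equal(S @ S, np.eye(n))
```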
is analogous to multiplying by the number 1. The identity matrix is often denoted by I_n, or simply by I if the size is immaterial or can be trivially determined by the context.

$$I_1 = \begin{bmatrix} 1 \end{bmatrix},\ I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\ I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},\ \dots,\ I_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}.$$

The term unit matrix has also been widely used, but
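In NumPy these matrices correspond to `numpy.eye` (an aside for illustration; `numpy.identity` and a diagonal of ones are equivalent constructions):

```python
import numpy as np

I3 = np.eye(3)   # I_3: ones on the main diagonal, zeros elsewhere
print(I3)
assert np.array_equal(I3, np.identity(3))
assert np.array_equal(I3, np.diag([1.0, 1.0, 1.0]))
```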
Seemingly unrelated regressions

The model can be estimated equation-by-equation using standard ordinary least squares (OLS). Such estimates are consistent; however, they are generally not as efficient as the SUR method, which amounts to feasible generalized least squares with a specific form of
is performed on an equation-by-equation basis, but every equation includes as additional regressors the residuals from the previously estimated equations, in order to account for the cross-equation correlations; the estimation is run iteratively until convergence is achieved. Kmenta and Gilbert (1968) ran a Monte Carlo study and established that all three methods (IGLS, IOLS and ML) yield numerically equivalent results; they also found that
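Such an iterative scheme can be sketched as an IGLS-style loop on simulated data (variable names, tolerance, and the true coefficients are assumptions made for the demo, not part of any reference implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
R, m = 400, 2
X1 = np.column_stack([np.ones(R), rng.normal(size=R)])
X2 = np.column_stack([np.ones(R), rng.normal(size=R)])
eps = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=R)
y1 = X1 @ np.array([1.0, 2.0]) + eps[:, 0]
y2 = X2 @ np.array([-0.5, 0.3]) + eps[:, 1]

# Stacked block-diagonal system
X = np.zeros((m * R, 4))
X[:R, :2], X[R:, 2:] = X1, X2
y = np.concatenate([y1, y2])

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
for _ in range(50):
    resid = (y - X @ beta).reshape(m, R)      # m × R residual matrix
    Sigma_hat = resid @ resid.T / R           # cross-equation covariance
    W = np.kron(np.linalg.inv(Sigma_hat), np.eye(R))  # Ω̂⁻¹ = Σ̂⁻¹ ⊗ I_R
    beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    if np.max(np.abs(beta_new - beta)) < 1e-10:  # iterate until convergence
        break
    beta = beta_new
print(beta)
```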
is the individual observation, and we are taking the transpose of the x_ir column vector. The number of observations R is assumed to be large, so that in the analysis we take R → ∞, whereas the number of equations m remains fixed. Each equation i has a single response variable y_ir, and
the Kronecker delta notation: (I_n)_ij = δ_ij. When A is an m×n matrix, it is a property of matrix multiplication that I_m A = A I_n = A. In particular,
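A quick numerical check of this property (an arbitrary 3×4 example in NumPy):

```python
import numpy as np

A = np.arange(12.0).reshape(3, 4)   # a 3×4 matrix
# Left-multiplying by I_3 and right-multiplying by I_4 both leave A unchanged
assert np.array_equal(np.eye(3) @ A, A)
assert np.array_equal(A @ np.eye(4), A)
# Entry-wise, (I_n)_ij is the Kronecker delta δ_ij
I = np.eye(4)
assert all(I[i, j] == (1.0 if i == j else 0.0)
           for i in range(4) for j in range(4))
```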
230-406: The identity matrix of size n {\displaystyle n} is the n × n {\displaystyle n\times n} square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties, for example when the identity matrix represents a geometric transformation , the object remains unchanged by the transformation. In other contexts, it
253-553: The album Eternity "Sure" (Take That song) , from the album Nobody Else See also [ edit ] Shure Topics referred to by the same term [REDACTED] This disambiguation page lists articles associated with the title Sure . If an internal link led you here, you may wish to change the link to point directly to the intended article. Retrieved from " https://en.wikipedia.org/w/index.php?title=Sure&oldid=1254141222 " Category : Disambiguation pages Hidden categories: Short description
the asymptotic distribution of these estimators is the same as the distribution of the FGLS estimator, whereas in small samples neither of the estimators was superior to the others. Zellner and Ando (2010) developed a direct Monte Carlo method for the Bayesian analysis of the SUR model. There are two important cases when the SUR estimates turn out to be equivalent to the equation-by-equation OLS: when the error terms are uncorrelated across the equations, and when each equation contains exactly the same set of regressors.

Identity matrix

In linear algebra,
299-441: The covariance matrix of the stacked error terms ε will be equal to where I R is the R -dimensional identity matrix and ⊗ denotes the matrix Kronecker product . The SUR model is usually estimated using the feasible generalized least squares (FGLS) method. This is a two-step method where in the first step we run ordinary least squares regression for ( 1 ). The residuals from this regression are used to estimate
322-439: The elements of matrix Σ {\displaystyle \Sigma } : In the second step we run generalized least squares regression for ( 1 ) using the variance matrix Ω ^ = Σ ^ ⊗ I R {\displaystyle \scriptstyle {\hat {\Omega }}\;=\;{\hat {\Sigma }}\,\otimes \,I_{R}} : This estimator
345-413: The generalization of the general linear model where the regressors on the right-hand-side are allowed to be different in each equation. The SUR model can be further generalized into the simultaneous equations model , where the right-hand side regressors are allowed to be the endogenous variables as well. Suppose there are m regression equations Here i represents the equation number, r = 1, …, R
368-409: The identity matrix I n {\displaystyle I_{n}} represents the identity function , for whatever basis was used in this representation. The i {\displaystyle i} th column of an identity matrix is the unit vector e i {\displaystyle e_{i}} , a vector whose i {\displaystyle i} th entry
391-422: The identity matrix serves as the multiplicative identity of the matrix ring of all n × n {\displaystyle n\times n} matrices, and as the identity element of the general linear group G L ( n ) {\displaystyle GL(n)} , which consists of all invertible n × n {\displaystyle n\times n} matrices under
414-514: The identity matrix, standing for "unit matrix" and the German word Einheitsmatrix respectively. In terms of a notation that is sometimes used to concisely describe diagonal matrices , the identity matrix can be written as I n = diag ( 1 , 1 , … , 1 ) . {\displaystyle I_{n}=\operatorname {diag} (1,1,\dots ,1).} The identity matrix can also be written using
437-494: The matrix multiplication operation. In particular, the identity matrix is invertible. It is an involutory matrix , equal to its own inverse. In this group, two square matrices have the identity matrix as their product exactly when they are the inverses of each other. When n × n {\displaystyle n\times n} matrices are used to represent linear transformations from an n {\displaystyle n} -dimensional vector space to itself,
460-441: The residuals from the second step of FGLS are used to recalculate the matrix Σ ^ {\displaystyle \scriptstyle {\hat {\Sigma }}} , then estimate β ^ {\displaystyle \scriptstyle {\hat {\beta }}} again using GLS, and so on, until convergence is achieved; the iterative ordinary least squares (IOLS) scheme, where estimation
483-407: The system will take the form The assumption of the model is that error terms ε ir are independent across observations, but may have cross-equation correlations within observations. Thus, we assume that E[ ε ir ε is | X ] = 0 whenever r ≠ s , whereas E[ ε ir ε jr | X ] = σ ij . Denoting Σ = [ σ ij ] the m×m skedasticity matrix of each observation,
the term identity matrix is now standard. The term unit matrix is ambiguous, because it is also used for a matrix of ones and for any unit of the ring of all n×n matrices. In some fields, such as group theory or quantum mechanics, the identity matrix is sometimes denoted by a boldface one, 𝟏, or called "id" (short for identity). Less frequently, some mathematics books use U or E to represent
529-516: The variance-covariance matrix. Two important cases when SUR is in fact equivalent to OLS are when the error terms are in fact uncorrelated between the equations (so that they are truly unrelated) and when each equation contains exactly the same set of regressors on the right-hand-side. The SUR model can be viewed as either the simplification of the general linear model where certain coefficients in matrix B {\displaystyle \mathrm {B} } are restricted to be equal to zero, or as