
In econometrics, the seemingly unrelated regressions (SUR) or seemingly unrelated regression equations (SURE) model, proposed by Arnold Zellner (1962), is a generalization of a linear regression model that consists of several regression equations, each having its own dependent variable and potentially different sets of exogenous explanatory variables. Each equation is a valid linear regression on its own and can be estimated separately, which is why the system is called seemingly unrelated, although some authors suggest that the term seemingly related would be more appropriate, since the error terms are assumed to be correlated across the equations.



Estimating all equations of a simultaneous system at once is a computationally costly non-linear optimization problem even for the simplest system of linear equations. This situation prompted the development, spearheaded by the Cowles Commission in the 1940s and 1950s, of various techniques that estimate each equation in the model seriatim, most notably limited information maximum likelihood and two-stage least squares.

Each equation i has a single response variable y_ir and a k_i-dimensional vector of regressors x_ir. If we stack observations corresponding to the i-th equation into R-dimensional vectors and matrices, then the model can be written in vector form as

    y_i = X_i β_i + ε_i,

where y_i and ε_i are R×1 vectors, X_i is an R×k_i matrix, and β_i is a k_i×1 vector.


The three-stage least squares estimator can thus be seen as a combination of two-stage least squares (2SLS) with SUR. Across fields and disciplines simultaneous equation models are applied to various observational phenomena. These equations are applied when phenomena are assumed to be reciprocally causal. The classic example is supply and demand in economics. In other disciplines there are examples such as candidate evaluations and party identification, or public opinion and social policy, in political science; road investment and travel demand in geography; and educational attainment and parenthood entry in sociology or demography.

The simultaneous equation model requires a theory of reciprocal causality that includes special features if the causal effects are to be estimated as simultaneous feedback, as opposed to one-sided 'blocks' of an equation where a researcher is interested in the causal effect of X on Y while holding the causal effect of Y on X constant, or where the researcher knows the exact amount of time it takes for each causal effect to take place, i.e., the length of the causal lags.

Simultaneous equations models are a type of statistical model in which the dependent variables are functions of other dependent variables, rather than just independent variables. This means some of the explanatory variables are jointly determined with the dependent variable, which in economics usually is the consequence of some underlying equilibrium mechanism.

This estimator is unbiased in small samples assuming the error terms ε_ir have a symmetric distribution; in large samples it is consistent and asymptotically normal, with limiting distribution

    √R (β̂ − β) →d N(0, ((1/R) X′(Σ⁻¹ ⊗ I_R) X)⁻¹).

Other estimation techniques besides FGLS were suggested for the SUR model: the maximum likelihood (ML) method, under the assumption that the errors are normally distributed; the iterative generalized least squares (IGLS) scheme; and the iterative ordinary least squares (IOLS) scheme, both described below.

Here X_i is a T×k_i matrix of exogenous regressors, and Y_−i is a T×n_i matrix of endogenous regressors on the right-hand side of the i-th equation. Finally, we can move all endogenous variables to the left-hand side and write the m equations jointly in vector form as

    Y Γ = X Β + U.

This representation is known as the structural form. In this equation Y = [y_1 y_2 ... y_m] is the T×m matrix of dependent variables.

Each X_i is a k_i-columned submatrix of X. Matrix Β has size k×m, and each of its columns consists of the components of the vector β_i and zeros, depending on which of the regressors from X were included in or excluded from X_i. Finally, U = [u_1 u_2 ... u_m] is a T×m matrix of the error terms.

The matrix Γ is also assumed to be non-degenerate. Secondly, error terms are assumed to be serially independent and identically distributed: if the t-th row of matrix U is denoted by u_(t), then the sequence of vectors {u_(t)} should be iid, with zero mean and some (unknown) covariance matrix Σ. In particular, this implies that E[U] = 0 and E[U′U] = T Σ. Lastly, assumptions are required for identification.


Identification is also possible using cross-equation restrictions. To illustrate how cross-equation restrictions can be used for identification, consider the following example from Wooldridge:

    y_1 = γ_12 y_2 + δ_11 z_1 + δ_12 z_2 + δ_13 z_3 + u_1
    y_2 = γ_21 y_1 + δ_21 z_1 + δ_22 z_2 + u_2

where the z's are uncorrelated with the u's and the y's are endogenous variables. Without further restrictions, the first equation is not identified, because it contains no excluded exogenous variable. The second equation is just identified if δ_13 ≠ 0, which is assumed to be true for the rest of the discussion.

Now we impose the cross-equation restriction δ_12 = δ_22. Since the second equation is identified, we can treat δ_12 as known for the purpose of identification. Then the first equation becomes

    y_1 − δ_12 z_2 = γ_12 y_2 + δ_11 z_1 + δ_13 z_3 + u_1.

We can then use (z_1, z_2, z_3) as instruments to estimate the coefficients in this equation, since there is one endogenous variable (y_2) and one excluded exogenous variable (z_2) on the right-hand side.
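The following is a minimal sketch of this identification strategy on simulated data, assuming a generic 2SLS helper; the function name iv2sls and all parameter values are illustrative, not from the source. The just-identified second equation is estimated first, then its δ̂_22 is plugged into the transformed first equation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

def iv2sls(y, Z, W):
    """2SLS of y on regressors Z using instrument matrix W (illustrative helper)."""
    WtW_inv_Wt = np.linalg.solve(W.T @ W, W.T)   # (W'W)^(-1) W'
    PZ = W @ (WtW_inv_Wt @ Z)                    # P Z with P = W (W'W)^(-1) W'
    return np.linalg.solve(Z.T @ PZ, PZ.T @ y)

# simulate the system above with hypothetical coefficient values
z1, z2, z3 = rng.normal(size=(3, T))
u1, u2 = rng.normal(size=(2, T))
g12, g21, d11, d12, d13, d21 = 0.5, 0.3, 1.0, -0.8, 0.7, 0.4
d22 = d12                                        # the cross-equation restriction
det = 1 - g12 * g21                              # solve for the reduced form
y1 = ((d11 + g12*d21)*z1 + (d12 + g12*d22)*z2 + d13*z3 + u1 + g12*u2) / det
y2 = ((d21 + g21*d11)*z1 + (d22 + g21*d12)*z2 + g21*d13*z3 + u2 + g21*u1) / det

W = np.column_stack([z1, z2, z3])                # instruments
# equation 2 is just identified (z3 excluded): estimate delta22 first
gamma21, delta21, delta22 = iv2sls(y2, np.column_stack([y1, z1, z2]), W)
# impose delta12 = delta22 and move the known term to the left-hand side
y1_tilde = y1 - delta22 * z2
print(iv2sls(y1_tilde, np.column_stack([y2, z1, z3]), W))  # approx (0.5, 1.0, 0.7)
```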

The SUR model can be estimated equation-by-equation using standard ordinary least squares (OLS). Such estimates are consistent; however, they are generally not as efficient as the SUR method, which amounts to feasible generalized least squares with a specific form of the variance-covariance matrix.

The model rests on the following assumptions. Firstly, the rank of the matrix X of exogenous regressors must be equal to k, both in finite samples and in the limit as T → ∞ (this latter requirement means that in the limit the expression (1/T) X′X should converge to a nondegenerate k×k matrix).

In the iterative ordinary least squares (IOLS) scheme, estimation is performed on an equation-by-equation basis, but every equation includes as additional regressors the residuals from the previously estimated equations in order to account for the cross-equation correlations; the estimation is run iteratively until convergence is achieved. Kmenta and Gilbert (1968) ran a Monte Carlo study and established that all three methods (IGLS, IOLS and ML) yield numerically equivalent results.

The rank condition, a stronger condition which is necessary and sufficient, is that the rank of Π_i0 equals n_i, where Π_i0 is the (k − k_i)×n_i matrix obtained from Π by crossing out those columns which correspond to the excluded endogenous variables and those rows which correspond to the included exogenous variables. In simultaneous equations models, the most common method to achieve identification is by imposing within-equation parameter restrictions.
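The rank condition can be evaluated numerically from an estimate of Π. This is a small sketch under assumed index bookkeeping; the function name and argument conventions are mine:

```python
import numpy as np

def satisfies_rank_condition(Pi, incl_exog_rows, incl_endog_cols):
    """Check the rank condition for equation i.

    Pi              -- k x m reduced-form coefficient matrix
    incl_exog_rows  -- row indices of the exogenous variables included in eq. i
    incl_endog_cols -- column indices of the n_i endogenous regressors of eq. i
    """
    k = Pi.shape[0]
    excl_exog_rows = [r for r in range(k) if r not in incl_exog_rows]
    # Pi_i0: keep rows of excluded exogenous vars, columns of included endogenous vars
    Pi_i0 = Pi[np.ix_(excl_exog_rows, list(incl_endog_cols))]   # (k - k_i) x n_i
    return np.linalg.matrix_rank(Pi_i0) == len(incl_endog_cols)
```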

Here i represents the equation number, r = 1, …, R is the index of the individual observation, and we are taking the transpose of the x_ir column vector. The number of observations R is assumed to be large, so that in the analysis we take R → ∞, whereas the number of equations m remains fixed.

In other words, λ is the smallest solution of the generalized eigenvalue problem; see Theil (1971, p. 503). The LIML is a special case of the K-class estimators

    δ̂ = (Z′(I − κM)Z)⁻¹ Z′(I − κM)y,

with κ the class parameter. Several estimators belong to this class: κ = 0 gives OLS, κ = 1 gives 2SLS, and κ = λ gives the LIML. The three-stage least squares (3SLS) estimator was introduced by Zellner & Theil (1962). It can be seen as a special case of multi-equation GMM where the set of instrumental variables is common to all equations. If all regressors are in fact predetermined, then 3SLS reduces to seemingly unrelated regressions (SUR).
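A compact numpy sketch of the K-class formula above (the function name is mine; forming the T×T annihilator is fine for modest T):

```python
import numpy as np

def k_class(y, Z, X, kappa):
    """K-class estimator: kappa=0 gives OLS, kappa=1 gives 2SLS,
    kappa=lambda gives the LIML."""
    T = len(y)
    M = np.eye(T) - X @ np.linalg.solve(X.T @ X, X.T)  # annihilator of col(X)
    A = np.eye(T) - kappa * M
    return np.linalg.solve(Z.T @ A @ Z, Z.T @ A @ y)
```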

For this, the structural system of equations is transformed into the reduced form first. Once the coefficients are estimated, the model is put back into the structural form. The "limited information" maximum likelihood (LIML) method was suggested by M. A. Girshick in 1947 and formalized by T. W. Anderson and H. Rubin in 1949. It is used when one is interested in estimating a single structural equation at a time (hence its name of limited information), say equation i:

    y_i = Y_−i γ_i + X_i β_i + u_i.

Postmultiplying the structural equation by Γ⁻¹, the system can be written in the reduced form as

    Y = X Β Γ⁻¹ + U Γ⁻¹ = X Π + A.

This is already a simple general linear model, and it can be estimated, for example, by ordinary least squares. Unfortunately, the task of decomposing the estimated matrix Π̂ into the individual factors Β and Γ is quite complicated, and therefore the reduced form is more suitable for prediction than for inference.
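Estimating the reduced form itself is a plain multi-output OLS problem; a minimal sketch (function name mine):

```python
import numpy as np

def reduced_form_ols(Y, X):
    """OLS estimate of Pi in Y = X Pi + A, column by column.
    Y is T x m (dependent variables), X is T x k (all exogenous regressors)."""
    Pi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)  # k x m
    return Pi_hat
```

Recovering Β and Γ from Π̂ (indirect least squares) is only straightforward when each equation is exactly identified.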


The identification conditions require that the system of linear equations be solvable for the unknown parameters. More specifically, the order condition, a necessary condition for identification, is that for each equation k_i + n_i ≤ k, which can be phrased as "the number of excluded exogenous variables is greater than or equal to the number of included endogenous variables".
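The order condition is easy to check mechanically; a trivial sketch (names mine):

```python
def order_condition_holds(k, k_i, n_i):
    # k: exogenous variables in the whole system; k_i: exogenous variables
    # included in equation i; n_i: endogenous regressors in equation i.
    # "excluded exogenous >= included endogenous" is equivalent to k_i + n_i <= k
    return k - k_i >= n_i
```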

The 2SLS estimator of δ_i is then given by

    δ̂_i = (Z_i′ P Z_i)⁻¹ Z_i′ P y_i,

where P = X(X′X)⁻¹X′ is the projection matrix onto the linear space spanned by the exogenous regressors X. Indirect least squares is an approach in econometrics where the coefficients in a simultaneous equations model are estimated from the reduced form model using ordinary least squares.
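To make the formula concrete, here is a minimal simulation (all names and parameter values are illustrative, not from the source) showing that the projection-matrix formula and the literal two-stage procedure coincide:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
# one structural equation y = Z delta + u with Z = [y2, x1]; the system's
# exogenous regressors are X = [x1, x2], so x2 is the excluded instrument
x1, x2, v = rng.normal(size=(3, T))
y2 = 0.6 * x1 + 0.9 * x2 + v                      # endogenous regressor
u = 0.5 * v + rng.normal(size=T)                  # correlated with y2 through v
y = 1.5 * y2 - 0.7 * x1 + u

Z = np.column_stack([y2, x1])
X = np.column_stack([x1, x2])

# projection-matrix form: delta = (Z' P Z)^(-1) Z' P y, P = X (X'X)^(-1) X'
P = X @ np.linalg.solve(X.T @ X, X.T)             # T x T, fine for modest T
delta_formula = np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)

# literal two stages: regress Z on X, then run OLS of y on the fitted values
Z_hat = P @ Z                                     # first-stage fitted values
delta_two_stage = np.linalg.lstsq(Z_hat, y, rcond=None)[0]

print(delta_formula, delta_two_stage)             # both approx (1.5, -0.7)
```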

They also found that the asymptotic distribution of these estimators is the same as the distribution of the FGLS estimator, whereas in small samples none of the estimators was clearly superior to the others. Zellner and Ando (2010) developed a direct Monte Carlo method for the Bayesian analysis of the SUR model. There are two important cases when the SUR estimates turn out to be equivalent to equation-by-equation OLS: when the error terms are uncorrelated across the equations, and when each equation contains exactly the same set of regressors.

Denoting by Σ = [σ_ij] the m×m skedasticity matrix of each observation, the covariance matrix of the stacked error terms ε is equal to

    Ω ≡ E[εε′ | X] = Σ ⊗ I_R,

where I_R is the R-dimensional identity matrix and ⊗ denotes the matrix Kronecker product. The SUR model is usually estimated using the feasible generalized least squares (FGLS) method. This is a two-step method where in the first step we run ordinary least squares regression for ( 1 ). The residuals from this regression are used to estimate the elements of the matrix Σ:

    σ̂_ij = (1/R) ε̂_i′ ε̂_j.

In the second step we run generalized least squares regression for ( 1 ) using the variance matrix Ω̂ = Σ̂ ⊗ I_R:

    β̂ = (X′ Ω̂⁻¹ X)⁻¹ X′ Ω̂⁻¹ y.
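A minimal numpy/scipy sketch of the two FGLS steps; the function name sur_fgls and its interface are mine, and the dense Kronecker product is fine for small problems but wasteful for large mR:

```python
import numpy as np
from scipy.linalg import block_diag

def sur_fgls(y_list, X_list, n_iter=1):
    """FGLS for the SUR model: y_list holds the m R-vectors y_i,
    X_list the m (R x k_i) regressor matrices. n_iter > 1 repeats the
    Sigma-then-GLS recalculation, which is the IGLS scheme."""
    m, R = len(y_list), len(y_list[0])
    X = block_diag(*X_list)                       # stacked block-diagonal regressors
    y = np.concatenate(y_list)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # step 1: equation-by-equation OLS
    for _ in range(n_iter):
        resid = (y - X @ beta).reshape(m, R)      # row i = residuals of equation i
        Sigma = resid @ resid.T / R               # sigma_ij = e_i' e_j / R
        Omega_inv = np.kron(np.linalg.inv(Sigma), np.eye(R))  # (Sigma kron I)^(-1)
        XtOi = X.T @ Omega_inv
        beta = np.linalg.solve(XtOi @ X, XtOi @ y)  # step 2: GLS
    return beta
```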

Suppose there are m regression equations of the form

    y_it = y_−i,t′ γ_i + x_it′ β_i + u_it,    i = 1, …, m,

where i is the equation number and t = 1, ..., T is the observation index. In these equations x_it is the k_i×1 vector of exogenous variables, y_it is the dependent variable, y_−i,t is the n_i×1 vector of all other endogenous variables which enter the i-th equation on the right-hand side, and u_it are the error terms.

The SUR model can be further generalized into the simultaneous equations model, where the right-hand side regressors are allowed to be endogenous variables as well. Suppose there are m regression equations

    y_ir = x_ir′ β_i + ε_ir.

Instead of lagged effects, simultaneous feedback means estimating the simultaneous and perpetual impact of X and Y on each other. This requires a theory that causal effects are simultaneous in time, or so complex that they appear to behave simultaneously; a common example is the moods of roommates. To estimate simultaneous feedback models, a theory of equilibrium is also necessary: that X and Y are in relatively steady states, or are part of a system that is in a relatively stable state.

Each of the matrices Y_−i is in fact an n_i-columned submatrix of this Y. The m×m matrix Γ, which describes the relation between the dependent variables, has a complicated structure. It has ones on the diagonal, and all other elements of each column i are either the components of the vector −γ_i or zeros, depending on which columns of Y were included in the matrix Y_−i. The T×k matrix X contains all exogenous regressors from all equations, but without repetitions (that is, matrix X should be of full rank).

Take the typical supply and demand model: whilst typically one would determine the quantity supplied and demanded as a function of the price set by the market, it is also possible for the reverse to be true, where producers observe the quantity that consumers demand and then set the price. Simultaneity poses challenges for the estimation of the statistical parameters of interest, because the Gauss–Markov assumption of strict exogeneity of the regressors is violated. And while it would be natural to estimate all simultaneous equations at once, this often leads to a computationally costly non-linear optimization problem.


The structural equations for the remaining endogenous variables Y_−i are not specified, and they are given in their reduced form. Notation in this context is different from the simple IV case: here Z_i = [Y_−i X_i] collects the right-hand-side regressors and δ_i = (γ_i′, β_i′)′ the coefficients of equation i. The explicit formula for the LIML is

    δ̂_i = (Z_i′(I − λM)Z_i)⁻¹ Z_i′(I − λM)y_i,

where M = I − X(X′X)⁻¹X′, and λ is the smallest characteristic root of the matrix

    (Y_i⁰′ M_i Y_i⁰)(Y_i⁰′ M Y_i⁰)⁻¹,    Y_i⁰ = [y_i Y_−i],

where, in a similar way, M_i = I − X_i(X_i′X_i)⁻¹X_i′.
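A minimal numpy sketch of this recipe (function name and argument conventions mine; the smallest characteristic root is computed as the smallest eigenvalue of W⁻¹W_i, which shares its spectrum with W_iW⁻¹):

```python
import numpy as np

def liml(y, Y_mi, X_i, X):
    """LIML for one structural equation.

    y    -- T-vector, dependent variable of equation i
    Y_mi -- T x n_i matrix of endogenous regressors (Y_{-i})
    X_i  -- T x k_i matrix of included exogenous regressors
    X    -- T x k matrix of all exogenous regressors in the system
    """
    T = len(y)
    annihilate = lambda A: np.eye(T) - A @ np.linalg.solve(A.T @ A, A.T)
    M, M_i = annihilate(X), annihilate(X_i)
    Y0 = np.column_stack([y, Y_mi])
    # lambda: smallest characteristic root of (Y0' M_i Y0)(Y0' M Y0)^(-1)
    lam = np.min(np.real(np.linalg.eigvals(
        np.linalg.solve(Y0.T @ M @ Y0, Y0.T @ M_i @ Y0))))
    Z = np.column_stack([Y_mi, X_i])
    A = np.eye(T) - lam * M
    return np.linalg.solve(Z.T @ A @ Z, Z.T @ A @ y), lam
```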

In the iterative generalized least squares (IGLS) scheme, the residuals from the second step of FGLS are used to recalculate the matrix Σ̂, β̂ is then estimated again using GLS, and so on, until convergence is achieved.

Therefore, cross-equation restrictions in place of within-equation restrictions can achieve identification. The simplest and most common estimation method for the simultaneous equations model is the so-called two-stage least squares (2SLS) method, developed independently by Theil (1953) and Basmann (1957). It is an equation-by-equation technique, where the endogenous regressors on the right-hand side of each equation are instrumented with the regressors X from all other equations.

The method is called "two-stage" because it conducts estimation in two steps: first, the endogenous regressors Y_−i are regressed on all exogenous variables X to obtain fitted values; second, the equation of interest is estimated by OLS with the endogenous regressors replaced by those fitted values. If the i-th equation in the model is written as

    y_i = Z_i δ_i + u_i,

where Z_i is a T×(n_i + k_i) matrix of both endogenous and exogenous regressors in the i-th equation, and δ_i is an (n_i + k_i)-dimensional vector of regression coefficients, then the 2SLS estimator takes the projection form δ̂_i = (Z_i′ P Z_i)⁻¹ Z_i′ P y_i given earlier.


Finally, if we stack these m vector equations on top of each other, the system will take the form

    y = Xβ + ε,    ( 1 )

where y = (y_1′, …, y_m′)′ and ε = (ε_1′, …, ε_m′)′ are the stacked mR×1 vectors, X = diag(X_1, …, X_m) is block-diagonal, and β = (β_1′, …, β_m′)′. The assumption of the model is that the error terms ε_ir are independent across observations, but may have cross-equation correlations within observations. Thus, we assume that E[ε_ir ε_is | X] = 0 whenever r ≠ s, whereas E[ε_ir ε_jr | X] = σ_ij.

Two important cases when SUR is in fact equivalent to OLS are when the error terms are in fact uncorrelated between the equations (so that they are truly unrelated), and when each equation contains exactly the same set of regressors on the right-hand side. The SUR model can be viewed either as the simplification of the general linear model where certain coefficients in the matrix Β are restricted to be equal to zero, or as the generalization of the general linear model where the regressors on the right-hand side are allowed to be different in each equation.

The "−i" notation indicates that the vector y_−i,t may contain any of the y's except for y_it (since it is already present on the left-hand side). The regression coefficients β_i and γ_i are of dimensions k_i×1 and n_i×1 respectively. Vertically stacking the T observations corresponding to the i-th equation, we can write each equation in vector form as

    y_i = Y_−i γ_i + X_i β_i + u_i,

where y_i and u_i are T×1 vectors.
