
Gauss–Newton algorithm


The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method for solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.


Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations. The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton, and first appeared in Gauss's 1809 work Theoria motus corporum coelestium in sectionibus conicis solem ambientum. Given $m$ functions $\mathbf{r} = (r_1, \ldots, r_m)$ (often called residuals) of $n$ variables $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_n)$, with $m \geq n$,

$$\mathbf{b} \otimes \mathbf{a} = \mathbf{b}\mathbf{a}^\intercal = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} = \begin{bmatrix} b_1 a_1 & b_1 a_2 & b_1 a_3 \\ b_2 a_1 & b_2 a_2 & b_2 a_3 \\ b_3 a_1 & b_3 a_2 & b_3 a_3 \end{bmatrix}.$$ An $n \times n$ matrix $M$ can represent

$$\mathbf{a} \otimes \mathbf{b} = \mathbf{a}\mathbf{b}^\intercal = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix} = \begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix},$$ which

$$\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^\intercal\mathbf{b} = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} = a_1 b_1 + \cdots + a_n b_n.$$ By the symmetry of the dot product, the dot product of two column vectors $\mathbf{a}$, $\mathbf{b}$ is also equal to the matrix product of

a and b and with multiplicative error term U. If we take the logarithm of both sides, this becomes a relation that is linear in the transformed parameters, where $u = \ln(U)$, suggesting estimation of the unknown parameters by a linear regression of $\ln(y)$ on $x$, a computation that does not require iterative optimization. However, use of a nonlinear transformation requires caution. The influences of the data values will change, as will the error structure of
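
Assuming, for concreteness, an exponential model of the form y = a·exp(b·x)·U (the displayed formula is not preserved in this snapshot, so this choice is only illustrative), the transformed problem reduces to a single linear regression of ln(y) on x. A minimal Python sketch with made-up data:

import numpy as np

# Illustrative data only; not taken from the article.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.6, 2.7, 4.4, 7.3, 12.1, 20.0])

# Regress ln(y) on x:  ln(y) = ln(a) + b*x + u
b_hat, ln_a_hat = np.polyfit(x, np.log(y), 1)
a_hat = np.exp(ln_a_hat)
print(a_hat, b_hat)

Note that this fit minimizes squared errors in ln(y), not in y, which is exactly the change of error structure that the surrounding text warns about.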

a backtracking line search such as Armijo line search. Typically, $\alpha$ should be chosen such that it satisfies the Wolfe conditions or the Goldstein conditions. In cases where the direction of the shift vector is such that the optimal fraction $\alpha$ is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm,
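
A minimal sketch of such a backtracking (Armijo-type) choice of the step fraction, assuming residual and Jacobian callables res(beta) and jac(beta) that are not part of the article:

import numpy as np

def armijo_damped_update(res, jac, beta, delta, c=1e-4, shrink=0.5, max_backtracks=30):
    # Backtrack alpha until the sufficient-decrease (Armijo) condition holds.
    r = res(beta)
    S0 = r @ r                      # current sum of squares S(beta)
    grad = 2.0 * jac(beta).T @ r    # gradient of S at beta
    slope = grad @ delta            # directional derivative along the Gauss-Newton step
    alpha = 1.0
    for _ in range(max_backtracks):
        r_new = res(beta + alpha * delta)
        if r_new @ r_new <= S0 + c * alpha * slope:
            break
        alpha *= shrink
    return beta + alpha * delta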

a linear combination of the two $\beta$s. Systematic error may be present in the independent variables but its treatment is outside the scope of regression analysis. If the independent variables are not error-free, this is an errors-in-variables model, also outside this scope. Other examples of nonlinear functions include exponential functions, logarithmic functions, trigonometric functions, power functions, Gaussian functions, and Lorentz distributions. Some functions, such as

a linear map and act on row and column vectors as the linear map's transformation matrix. For a row vector $\mathbf{v}$, the product $\mathbf{v}M$ is another row vector $\mathbf{p}$: $\mathbf{v}M = \mathbf{p}$. Another $n \times n$ matrix $Q$ can act on $\mathbf{p}$: $\mathbf{p}Q = \mathbf{t}$. Then one can write $\mathbf{t} = \mathbf{p}Q = \mathbf{v}MQ$, so

a row vector is a $1 \times n$ matrix for some $n$, consisting of a single row of $n$ entries, $$\mathbf{a} = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix}.$$ (Throughout this article, boldface

a trust region method. The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent, $$\left(\mathbf{J}^{\operatorname{T}}\mathbf{J} + \lambda\mathbf{D}\right)\Delta = -\mathbf{J}^{\operatorname{T}}\mathbf{r},$$ where $\mathbf{D}$


a column and a row vector gives the outer product of two vectors $\mathbf{a}$, $\mathbf{b}$, an example of the more general tensor product. The matrix product of the column vector representation of $\mathbf{a}$ and the row vector representation of $\mathbf{b}$ gives the components of their dyadic product, $\mathbf{a} \otimes \mathbf{b} = \mathbf{a}\mathbf{b}^\intercal$,

a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear. The recurrence relation for Newton's method for minimizing a function $S$ of parameters $\boldsymbol{\beta}$ is $$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \mathbf{H}^{-1}\mathbf{g},$$ where $\mathbf{g}$ denotes

a coordinate space, is equal to the matrix product of the transpose of $\mathbf{a}$ with $\mathbf{b}$, $$\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^\intercal\mathbf{b} = \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix} \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} = a_1 b_1 + \cdots + a_n b_n.$$

a given field (such as the real numbers) forms an $n$-dimensional vector space; similarly, the set of all column vectors with $m$ entries forms an $m$-dimensional vector space. The space of row vectors with $n$ entries can be regarded as the dual space of the space of column vectors with $n$ entries, since any linear functional on the space of column vectors can be represented as the left-multiplication of

a part of the way will decrease the objective function $S$. An optimal value for $\alpha$ can be found by using a line search algorithm, that is, the magnitude of $\alpha$ is determined by finding the value that minimizes $S$, usually using a direct search method in the interval $0 < \alpha < 1$ or

a small local minimum, the Gauss–Newton iteration linearly converges to $\hat{\mathbf{x}}$. The point $\hat{\mathbf{x}}$ is often called a least squares solution of the overdetermined system. In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As

a unique row vector. To simplify writing column vectors in-line with other text, sometimes they are written as row vectors with the transpose operation applied to them: $$\mathbf{x} = \begin{bmatrix} x_1 & x_2 & \dots & x_m \end{bmatrix}^{\rm T}$$ or $$\mathbf{x} = \begin{bmatrix} x_1, x_2, \dots, x_m \end{bmatrix}^{\rm T}.$$ Some authors also use

is a linear least-squares problem, which can be solved explicitly, yielding the normal equations in the algorithm. The normal equations are $n$ simultaneous linear equations in the unknown increments $\Delta$. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of $\mathbf{J_r}$. For large systems, an iterative method, such as
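
For a concrete comparison of the two dense approaches mentioned above, a small Python sketch (the Jacobian J and residual vector r are random placeholders; in practice they come from the problem being solved):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
J = rng.normal(size=(20, 3))     # placeholder m x n Jacobian
r = rng.normal(size=20)          # placeholder residual vector

# Cholesky factorization of the normal-equations matrix J^T J.
delta_chol = cho_solve(cho_factor(J.T @ J), -J.T @ r)

# QR-based route: solve the linear least-squares problem min ||J*delta + r||^2
# directly, which avoids squaring the condition number of J.
delta_qr, *_ = np.linalg.lstsq(J, -r, rcond=None)

print(np.allclose(delta_chol, delta_qr))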

is a direct generalization of Newton's method in one dimension. In data fitting, where the goal is to find the parameters $\boldsymbol{\beta}$ such that a given model function $\mathbf{f}(\mathbf{x}, \boldsymbol{\beta})$ best fits some data points $(x_i, y_i)$,

is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations). In nonlinear regression, a statistical model of the form $\mathbf{y} \sim f(\mathbf{x}, \boldsymbol{\beta})$ relates a vector of independent variables, $\mathbf{x}$, and its associated observed dependent variables, $\mathbf{y}$. The function $f$


is a positive diagonal matrix. Note that when $\mathbf{D}$ is the identity matrix $\mathbf{I}$ and $\lambda \to +\infty$, then $$\lambda\Delta = \lambda\left(\mathbf{J}^{\operatorname{T}}\mathbf{J} + \lambda\mathbf{I}\right)^{-1}\left(-\mathbf{J}^{\operatorname{T}}\mathbf{r}\right) = \left(\mathbf{I} - \mathbf{J}^{\operatorname{T}}\mathbf{J}/\lambda + \cdots\right)\left(-\mathbf{J}^{\operatorname{T}}\mathbf{r}\right) \to -\mathbf{J}^{\operatorname{T}}\mathbf{r},$$ therefore
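
A minimal sketch of the damped step described here, taking D to be the identity matrix as in the limit above; J and r are assumed to be the current Jacobian and residual vector:

import numpy as np

def damped_increment(J, r, lam):
    # Solve (J^T J + lam*I) delta = -J^T r for the Levenberg-Marquardt-style increment.
    n = J.shape[1]
    A = J.T @ J + lam * np.eye(n)    # D = I here; a problem-specific diagonal D could be used
    return np.linalg.solve(A, -J.T @ r)

# As lam grows, the increment turns toward the steepest-descent direction -J^T r
# (up to the 1/lam scaling), matching the limit shown above.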

is an $m \times 1$ matrix consisting of a single column of $m$ entries, for example, $$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}.$$ Similarly,

is an effective method for solving overdetermined systems of equations in the form of $\mathbf{f}(\mathbf{x}) = \mathbf{0}$ with $$\mathbf{f}(\mathbf{x}) = \begin{bmatrix} f_1(x_1, \ldots, x_n) \\ \vdots \\ f_m(x_1, \ldots, x_n) \end{bmatrix}$$ and $m > n$, where $J(\mathbf{x})^\dagger$

is at $\beta = 0$. (Actually the optimum is at $\beta = -1$ for $\lambda = 2$, because $S(0) = 1^2 + (-1)^2 = 2$, but $S(-1) = 0$.) If $\lambda = 0$, then

is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in $S$. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, when the Marquardt parameter can be set to zero; the minimization of $S$ then becomes a standard Gauss–Newton minimization.

Non-linear regression

In statistics, nonlinear regression

is nonlinear in the components of the vector of parameters $\beta$, but otherwise arbitrary. For example, the Michaelis–Menten model for enzyme kinetics has two parameters and one independent variable, related by $f$: $$f([S], \boldsymbol{\beta}) = \frac{\beta_1 [S]}{\beta_2 + [S]}.$$ This function, which is a rectangular hyperbola, is nonlinear because it cannot be expressed as

is not known analytically, but needs to be linearly approximated from $n+1$, or more, known values (where $n$ is the number of estimators), the best estimator is obtained directly from the Linear Template Fit as $$\hat{\boldsymbol{\beta}} = \left((\mathbf{Y\tilde{M}})^{\mathsf{T}}\boldsymbol{\Omega}^{-1}\mathbf{Y\tilde{M}}\right)^{-1}(\mathbf{Y\tilde{M}})^{\mathsf{T}}\boldsymbol{\Omega}^{-1}(\mathbf{d} - \mathbf{Y\bar{m}})$$ (see also linear least squares). The linear approximation introduces bias into

is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by $$H_{jk} \approx 2\sum_{i=1}^m J_{ij}J_{ik},$$ where $J_{ij} = \partial r_i/\partial\beta_j$ are entries of

is that the model can be approximated by a linear function, namely a first-order Taylor series: $$f(x_i, \boldsymbol{\beta}) \approx f(x_i, \boldsymbol{\beta}^{(0)}) + \sum_j J_{ij}\left(\beta_j - \beta_j^{(0)}\right),$$ where $J_{ij} = \frac{\partial f(x_i, \boldsymbol{\beta})}{\partial\beta_j}$ are Jacobian matrix elements. It follows from this that

is the Moore–Penrose inverse (also known as the pseudoinverse) of the Jacobian matrix $J(\mathbf{x})$ of $\mathbf{f}(\mathbf{x})$. It can be considered an extension of Newton's method and enjoys the same local quadratic convergence toward isolated regular solutions. If


is the transpose of the matrix product of the column vector representation of $\mathbf{b}$ and the row vector representation of $\mathbf{a}$, $$\mathbf{b} \otimes \mathbf{a} = \mathbf{b}\mathbf{a}^\intercal = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix} = \begin{bmatrix} b_1 a_1 & b_1 a_2 & b_1 a_3 \\ b_2 a_1 & b_2 a_2 & b_2 a_3 \\ b_3 a_1 & b_3 a_2 & b_3 a_3 \end{bmatrix}.$$

is the left pseudoinverse of $\mathbf{J_f}$. The assumption $m \geq n$ in the algorithm statement is necessary, as otherwise the matrix $\mathbf{J_r}^{\rm T}\mathbf{J_r}$ is not invertible and the normal equations cannot be solved (at least uniquely). The Gauss–Newton algorithm can be derived by linearly approximating

is to employ a fraction $\alpha$ of the increment vector $\Delta$ in the updating formula: $$\boldsymbol{\beta}^{s+1} = \boldsymbol{\beta}^{s} + \alpha\Delta.$$ In other words, the increment vector is too long, but it still points "downhill", so going just

is used for both row and column vectors.) The transpose (indicated by T) of any row vector is a column vector, and the transpose of any column vector is a row vector: $$\begin{bmatrix} x_1 & x_2 & \dots & x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}$$ and $$\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{bmatrix}^{\rm T} = \begin{bmatrix} x_1 & x_2 & \dots & x_m \end{bmatrix}.$$ The set of all row vectors with $n$ entries in
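
For illustration, the same conventions in NumPy, where a column vector is an m × 1 array and a row vector is 1 × n:

import numpy as np

x = np.array([[1.0], [2.0], [3.0]])   # 3 x 1 column vector
row = x.T                             # transpose: 1 x 3 row vector
col_again = row.T                     # transposing back recovers the column vector
print(row.shape, col_again.shape)     # (1, 3) (3, 1)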

is very sensitive to data error and is strongly biased toward fitting the data in a particular range of the independent variable, [S], its use is strongly discouraged. For error distributions that belong to the exponential family, a link function may be used to transform the parameters under the generalized linear model framework. The independent or explanatory variable (say X) can be split up into classes or segments and linear regression can be performed per segment. Segmented regression with confidence analysis may yield

the conjugate gradient method, may be more efficient. If there is a linear dependence between columns of $\mathbf{J_r}$, the iterations will fail, as $\mathbf{J_r}^{\rm T}\mathbf{J_r}$ becomes singular. When $\mathbf{r}$ is complex, $\mathbf{r} : \mathbb{C}^n \to \mathbb{C}$,

the direction of $\Delta$ approaches the direction of the negative gradient $-\mathbf{J}^{\operatorname{T}}\mathbf{r}$. The so-called Marquardt parameter $\lambda$ may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time $\lambda$

the gradient vector of $S$, and $\mathbf{H}$ denotes the Hessian matrix of $S$. Since $S = \sum_{i=1}^m r_i^2$, the gradient is given by $$g_j = 2\sum_{i=1}^m r_i\frac{\partial r_i}{\partial\beta_j}.$$ Elements of

the matrix product transformation $MQ$ maps $\mathbf{v}$ directly to $\mathbf{t}$. Continuing with row vectors, matrix transformations further reconfiguring $n$-space can be applied to the right of previous outputs. When a column vector is transformed to another column vector under an $n \times n$ matrix action, the operation occurs to the left, $$\mathbf{p}^{\rm T} = M\mathbf{v}^{\rm T}, \quad \mathbf{t}^{\rm T} = Q\mathbf{p}^{\rm T},$$ leading to

the Gauss–Newton algorithm iteratively finds the value of $\boldsymbol{\beta}$ that minimizes the sum of squares $$S(\boldsymbol{\beta}) = \sum_{i=1}^m r_i(\boldsymbol{\beta})^2.$$ Starting with an initial guess $\boldsymbol{\beta}^{(0)}$ for


the Gauss–Newton algorithm can approach quadratic. The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix $\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}$ is ill-conditioned. For example, consider the problem with $m = 2$ equations and $n = 1$ variable, given by $$\begin{aligned} r_1(\beta) &= \beta + 1, \\ r_2(\beta) &= \lambda\beta^2 + \beta - 1. \end{aligned}$$ The optimum

the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and the model's predictions. In a biology experiment studying the relation between substrate concentration $[S]$ and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained. It is desired to find a curve (model function) of the form $$\text{rate} = \frac{V_\text{max}\cdot[S]}{K_M + [S]}$$ that fits best

the Gauss–Newton method is not guaranteed in all instances. The approximation $$\left|r_i\frac{\partial^2 r_i}{\partial\beta_j\,\partial\beta_k}\right| \ll \left|\frac{\partial r_i}{\partial\beta_j}\frac{\partial r_i}{\partial\beta_k}\right|$$ that needs to hold to be able to ignore

the Hessian are calculated by differentiating the gradient elements, $g_j$, with respect to $\beta_k$: $$H_{jk} = 2\sum_{i=1}^m\left(\frac{\partial r_i}{\partial\beta_j}\frac{\partial r_i}{\partial\beta_k} + r_i\frac{\partial^2 r_i}{\partial\beta_j\,\partial\beta_k}\right).$$ The Gauss–Newton method

the Jacobian $\mathbf{J_r}(\hat{\beta})$ is of full column rank, the initial iterate $\beta^{(0)}$ is near $\hat{\beta}$, and

the Jacobian $\mathbf{J_f} = -\mathbf{J_r}$ of the function $\mathbf{f}$ as $$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \left(\mathbf{J_f}^{\operatorname{T}}\mathbf{J_f}\right)^{-1}\mathbf{J_f}^{\operatorname{T}}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right).$$ Note that $\left(\mathbf{J_f}^{\operatorname{T}}\mathbf{J_f}\right)^{-1}\mathbf{J_f}^{\operatorname{T}}$

the Jacobian $\mathbf{J_r}$. Note that when the exact Hessian is evaluated near an exact fit we have near-zero $r_i$, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation as $$\mathbf{g} = 2\mathbf{J_r}^{\operatorname{T}}\mathbf{r}, \quad \mathbf{H} \approx 2\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}.$$ These expressions are substituted into

the algorithm converges, then the limit is a stationary point of $S$. For large minimum value $|S(\hat{\beta})|$, however, convergence is not guaranteed, not even local convergence as in Newton's method, or convergence under the usual Wolfe conditions. The rate of convergence of

the best-fitting parameters. Again in contrast to linear regression, there may be many local minima of the function to be optimized and even the global minimum may produce a biased estimate. In practice, estimated values of the parameters are used, in conjunction with the optimization algorithm, to attempt to find the global minimum of a sum of squares. For details concerning nonlinear data modeling see least squares and non-linear least squares. The assumption underlying this procedure

the conjugate form should be used: $\left(\overline{\mathbf{J_r}}^{\operatorname{T}}\mathbf{J_r}\right)^{-1}\overline{\mathbf{J_r}}^{\operatorname{T}}$. In this example,


the convention of writing both column vectors and row vectors as rows, but separating row vector elements with commas and column vector elements with semicolons (see alternative notation 2 in the table below). Matrix multiplication involves the action of multiplying each row vector of one matrix by each column vector of another matrix. The dot product of two column vectors $\mathbf{a}$, $\mathbf{b}$, considered as elements of

the conventional matrix equation of the form $A\mathbf{x} = \mathbf{b}$, which can then be solved by a variety of methods (see Notes). If $m = n$, the iteration simplifies to $$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \left(\mathbf{J_r}\right)^{-1}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),$$ which

the data in the least-squares sense, with the parameters $V_\text{max}$ and $K_M$ to be determined. Denote by $x_i$ and $y_i$ the values of $[S]$ and rate respectively, with $i = 1, \dots, 7$. Let $\beta_1 = V_\text{max}$ and $\beta_2 = K_M$. We will find $\beta_1$ and $\beta_2$ such that

the entries of the Jacobian matrix are $$\left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i\left(\boldsymbol{\beta}^{(s)}\right)}{\partial\beta_j},$$ and

the exponential or logarithmic functions, can be transformed so that they are linear. When so transformed, standard linear regression can be performed but must be applied with caution. See Linearization §Transformation, below, for more details. In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression. Usually numerical optimization algorithms are applied to determine

the functions $r_i$ are the residuals: $$r_i(\boldsymbol{\beta}) = y_i - f\left(x_i, \boldsymbol{\beta}\right).$$ Then, the Gauss–Newton method can be expressed in terms of

the initial estimates of $\beta_1 = 0.9$ and $\beta_2 = 0.2$, after five iterations of the Gauss–Newton algorithm, the optimal values $\hat{\beta}_1 = 0.362$ and $\hat{\beta}_2 = 0.556$ are obtained. The sum of squares of residuals decreased from

the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters with the observed data. The Gauss–Newton iteration is guaranteed to converge toward a local minimum point $\hat{\beta}$ under four conditions: the functions $r_1, \ldots, r_m$ are twice continuously differentiable in an open convex set $D \ni \hat{\beta}$,

the least squares estimators are given by the standard formulas of linear least squares; compare generalized least squares with covariance matrix proportional to the unit matrix. The nonlinear regression statistics are computed and used as in linear regression statistics, but using $J$ in place of $X$ in the formulas. When the function $f(x_i, \boldsymbol{\beta})$ itself

the local minimum value $|S(\hat{\beta})|$ is small. The convergence is quadratic if $|S(\hat{\beta})| = 0$. It can be shown that the increment $\Delta$ is a descent direction for $S$, and, if


the minimum, the method proceeds by the iterations $$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \left(\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}\right)^{-1}\mathbf{J_r}^{\operatorname{T}}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),$$ where, if $\mathbf{r}$ and $\boldsymbol{\beta}$ are column vectors,

the model and the interpretation of any inferential results. These may not be desired effects. On the other hand, depending on what the largest source of error is, a nonlinear transformation may distribute the errors in a Gaussian fashion, so the choice to perform a nonlinear transformation must be informed by modeling considerations. For Michaelis–Menten kinetics, the linear Lineweaver–Burk plot of 1/v against 1/[S] has been much used. However, since it

the previous equation in the following two steps. With substitutions $A = \mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}$, $\mathbf{b} = -\mathbf{J_r}^{\operatorname{T}}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right)$, and $\mathbf{x} = \Delta$, this turns into

the problem is in fact linear and the method finds the optimum in one iteration. If $|\lambda| < 1$, then the method converges linearly and the error decreases asymptotically with a factor $|\lambda|$ at every iteration. However, if $|\lambda| > 1$, then the method does not even converge locally. The Gauss–Newton iteration $$\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - J(\mathbf{x}^{(k)})^\dagger\,\mathbf{f}(\mathbf{x}^{(k)}), \quad k = 0, 1, \ldots$$
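
A minimal sketch of this pseudoinverse iteration in Python, assuming user-supplied callables f(x) and jac(x) for the system and its Jacobian (the names are chosen here purely for illustration):

import numpy as np

def gauss_newton_system(f, jac, x0, n_iter=20):
    # Iterate x <- x - pinv(J(x)) f(x) for an overdetermined system f(x) = 0.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        J = jac(x)                        # m x n Jacobian with m > n
        x = x - np.linalg.pinv(J) @ f(x)  # Moore-Penrose pseudoinverse step
    return x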

the reciprocal of the variance of the observation, or the reciprocal of the dependent variable to some power in the outlier case, but weights may be recomputed on each iteration, in an iteratively weighted least squares algorithm. Some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation. For example, consider the nonlinear regression problem with parameters

the recurrence relation above to obtain the operational equations $$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \Delta; \quad \Delta = -\left(\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}\right)^{-1}\mathbf{J_r}^{\operatorname{T}}\mathbf{r}.$$ Convergence of
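
As a rough illustration of these operational equations, a bare-bones Gauss–Newton loop in Python; residual(beta) and jacobian(beta) are assumed callables supplied by the user and are not defined in the article:

import numpy as np

def gauss_newton(residual, jacobian, beta0, n_iter=10):
    # Repeat beta <- beta + delta, where delta solves the normal equations.
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = residual(beta)               # m-vector of residuals r_i(beta)
        J = jacobian(beta)               # m x n Jacobian dr_i/dbeta_j
        # Solving min ||J*delta + r||^2 by least squares is equivalent to the
        # normal equations (J^T J) delta = -J^T r but better conditioned.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        beta = beta + delta
    return beta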

the result that the dependent or response variable (say Y) behaves differently in the various segments. The figure shows that the soil salinity (X) initially exerts no influence on the crop yield (Y) of mustard, until a critical or threshold value (breakpoint), after which the yield is affected negatively.

Column vectors

In linear algebra, a column vector with $m$ elements

the second-order derivative terms may be valid in two cases, for which convergence is to be expected. With the Gauss–Newton method the sum of squares of the residuals $S$ may not decrease at every iteration. However, since $\Delta$ is a descent direction, unless $\boldsymbol{\beta}^{s}$ is a stationary point, it holds that $S\left(\boldsymbol{\beta}^{s} + \alpha\Delta\right) < S\left(\boldsymbol{\beta}^{s}\right)$ for all sufficiently small $\alpha > 0$. Thus, if divergence occurs, one solution

the solution does not exist but the initial iterate $\mathbf{x}^{(0)}$ is near a point $\hat{\mathbf{x}} = (\hat{x}_1, \ldots, \hat{x}_n)$ at which

the statistics. Therefore, more caution than usual is required in interpreting statistics derived from a nonlinear model. The best-fit curve is often assumed to be that which minimizes the sum of squared residuals. This is the ordinary least squares (OLS) approach. However, in cases where the dependent variable does not have constant variance, or there are some outliers, a sum of weighted squared residuals may be minimized; see weighted least squares. Each weight should ideally be equal to

the sum of squares $$\sum_{i=1}^m |f_i(x_1, \ldots, x_n)|^2 \equiv \|\mathbf{f}(\mathbf{x})\|_2^2$$ reaches

the sum of squares of the residuals $$r_i = y_i - \frac{\beta_1 x_i}{\beta_2 + x_i}, \quad (i = 1, \dots, 7)$$ is minimized. The Jacobian $\mathbf{J_r}$ of

the sum of squares of the right-hand side; i.e., $$\min\left\|\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta\right\|_2^2,$$
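
In practice this linearized subproblem and the outer loop are usually delegated to a library routine; for example, SciPy's scipy.optimize.least_squares minimizes the same sum of squares (its default 'trf' method is a trust-region variant rather than the plain iteration described here). A short sketch with an illustrative residual function, not data from the article:

import numpy as np
from scipy.optimize import least_squares

# Made-up observations for an exponential model y ~ a*exp(b*x).
def residual(beta):
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.7, 7.4, 20.1, 54.6])
    return y - beta[0] * np.exp(beta[1] * x)

result = least_squares(residual, x0=[1.0, 1.0])   # method='trf' by default; method='lm' uses MINPACK
print(result.x)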

the symbol $^{\operatorname{T}}$ denotes the matrix transpose. At each iteration, the update $\Delta = \boldsymbol{\beta}^{(s+1)} - \boldsymbol{\beta}^{(s)}$ can be found by rearranging

the transpose of $\mathbf{b}$ with $\mathbf{a}$, $$\mathbf{b} \cdot \mathbf{a} = \mathbf{b}^\intercal\mathbf{a} = \begin{bmatrix} b_1 & \cdots & b_n \end{bmatrix} \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} = a_1 b_1 + \cdots + a_n b_n.$$ The matrix product of
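
For illustration, the dot product and the dyadic (outer) product as matrix products in NumPy:

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

col = a.reshape(-1, 1)    # column vector representation of a (3 x 1)
row = b.reshape(1, -1)    # row vector representation of b (1 x 3)

dot = a @ b               # a . b = a^T b, a scalar
outer = col @ row         # a (x) b = a b^T, a 3 x 3 matrix
print(dot, outer.shape)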

the vector of functions $r_i$. Using Taylor's theorem, we can write at every iteration: $$\mathbf{r}(\boldsymbol{\beta}) \approx \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta$$ with $\Delta = \boldsymbol{\beta} - \boldsymbol{\beta}^{(s)}$. The task of finding $\Delta$ minimizing

the vector of residuals $r_i$ with respect to the unknowns $\beta_j$ is a $7 \times 2$ matrix with the $i$-th row having the entries $$\frac{\partial r_i}{\partial\beta_1} = -\frac{x_i}{\beta_2 + x_i}; \quad \frac{\partial r_i}{\partial\beta_2} = \frac{\beta_1\cdot x_i}{\left(\beta_2 + x_i\right)^2}.$$ Starting with
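
A sketch of this fit in Python, reusing the residuals and Jacobian entries above. Because the article's data table is not reproduced in this snapshot, the arrays below are synthetic values generated from the model at the reported optimum (0.362, 0.556), so the loop recovers exactly those parameters; the article reports convergence to the same values in five iterations on its actual observations.

import numpy as np

# Synthetic stand-in for the article's seven ([S], rate) observations.
x = np.array([0.05, 0.20, 0.40, 0.60, 1.25, 2.50, 3.75])
y = 0.362 * x / (0.556 + x)

def residual(beta):
    return y - beta[0] * x / (beta[1] + x)

def jacobian(beta):
    return np.column_stack([-x / (beta[1] + x),
                            beta[0] * x / (beta[1] + x) ** 2])

beta = np.array([0.9, 0.2])   # initial estimates from the text
for _ in range(10):           # a few Gauss-Newton iterations
    delta, *_ = np.linalg.lstsq(jacobian(beta), -residual(beta), rcond=None)
    beta = beta + delta
print(beta)                   # approaches (0.362, 0.556)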
