
Gauss–Newton algorithm

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

The Gauss–Newton algorithm is used to solve non-linear least squares problems, which is equivalent to minimizing a sum of squared function values. It is an extension of Newton's method for finding a minimum of a non-linear function. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximate zeroes of the components of the sum, and thus minimize the sum. In this sense, the algorithm is also an effective method for solving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.


Non-linear least squares problems arise, for instance, in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations. The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton, and first appeared in Gauss's 1809 work Theoria motus corporum coelestium in sectionibus conicis solem ambientum. Given $m$ functions $\mathbf{r} = (r_1, \ldots, r_m)$ (often called residuals) of $n$ variables $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_n)$, with $m \geq n$,

A backtracking line search such as the Armijo line search. Typically, $\alpha$ should be chosen such that it satisfies the Wolfe conditions or the Goldstein conditions. In cases where the direction of the shift vector is such that the optimal fraction $\alpha$ is close to zero, an alternative method for handling divergence is the use of the Levenberg–Marquardt algorithm,

A trust region method. The normal equations are modified in such a way that the increment vector is rotated towards the direction of steepest descent: $\left(\mathbf{J}^{\operatorname{T}}\mathbf{J} + \lambda \mathbf{D}\right)\Delta = -\mathbf{J}^{\operatorname{T}}\mathbf{r},$ where $\mathbf{D}$
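As a sketch of this modification in NumPy, the damped increment can be computed as below; the helper name `lm_increment` and the default `D = I` are illustrative choices, not prescribed by the article (Marquardt himself suggested scaling by the diagonal of $\mathbf{J}^{\operatorname{T}}\mathbf{J}$):

```python
import numpy as np

def lm_increment(J, r, lam, D=None):
    """Levenberg-Marquardt increment: solve (J^T J + lam * D) delta = -J^T r.

    J: (m, n) Jacobian of the residuals, r: (m,) residual vector,
    lam: non-negative damping parameter, D: positive diagonal matrix.
    """
    n = J.shape[1]
    if D is None:
        D = np.eye(n)  # common default; diag(J.T @ J) is another standard choice
    return np.linalg.solve(J.T @ J + lam * D, -J.T @ r)
```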

A consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear. The recurrence relation for Newton's method for minimizing a function $S$ of parameters $\boldsymbol{\beta}$ is $\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \mathbf{H}^{-1}\mathbf{g},$ where $\mathbf{g}$ denotes

A part of the way will decrease the objective function $S$. An optimal value for $\alpha$ can be found by using a line search algorithm, that is, the magnitude of $\alpha$ is determined by finding the value that minimizes $S$, usually using a direct search method in the interval $0 < \alpha < 1$ or
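A minimal backtracking (Armijo-type) search for $\alpha$ might look like the following sketch; the sufficient-decrease constant `c` and the shrink factor are conventional choices, not values from the article:

```python
import numpy as np

def backtracking_alpha(S, beta, delta, g, alpha=1.0, shrink=0.5, c=1e-4):
    """Shrink alpha until the Armijo sufficient-decrease condition holds.

    S: objective function, beta: current parameters, delta: increment,
    g: gradient of S at beta.
    """
    S0 = S(beta)
    slope = g @ delta  # negative, since delta is a descent direction
    while S(beta + alpha * delta) > S0 + c * alpha * slope:
        alpha *= shrink
        if alpha < 1e-12:  # give up; the step is effectively rejected
            break
    return alpha
```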

A small local minimum, the Gauss–Newton iteration linearly converges to $\hat{\mathbf{x}}$. The point $\hat{\mathbf{x}}$ is often called a least squares solution of the overdetermined system. In what follows, the Gauss–Newton algorithm will be derived from Newton's method for function optimization via an approximation. As

Is a linear least-squares problem, which can be solved explicitly, yielding the normal equations in the algorithm. The normal equations are $n$ simultaneous linear equations in the unknown increments $\Delta$. They may be solved in one step, using Cholesky decomposition, or, better, the QR factorization of $\mathbf{J_r}$. For large systems, an iterative method, such as
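The two direct approaches differ only in how the linear subproblem is factored; a sketch of both (the function names are illustrative). Forming $\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}$ for Cholesky squares the condition number of the Jacobian, which is why QR is the safer choice for ill-conditioned problems:

```python
import numpy as np

def increment_cholesky(J, r):
    """Solve the normal equations via Cholesky (fast, but squares cond(J))."""
    L = np.linalg.cholesky(J.T @ J)   # J^T J = L L^T
    y = np.linalg.solve(L, -J.T @ r)
    return np.linalg.solve(L.T, y)

def increment_qr(J, r):
    """Solve min ||r + J delta|| via the QR factorization of J."""
    Q, R = np.linalg.qr(J)            # J = Q R
    return np.linalg.solve(R, -Q.T @ r)
```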

Is a direct generalization of Newton's method in one dimension. In data fitting, where the goal is to find the parameters $\boldsymbol{\beta}$ such that a given model function $\mathbf{f}(\mathbf{x}, \boldsymbol{\beta})$ best fits some data points $(x_i, y_i)$,

Is a positive diagonal matrix. Note that when $\mathbf{D}$ is the identity matrix $\mathbf{I}$ and $\lambda \to +\infty$, then $\lambda\Delta = \lambda\left(\mathbf{J}^{\operatorname{T}}\mathbf{J} + \lambda\mathbf{I}\right)^{-1}\left(-\mathbf{J}^{\operatorname{T}}\mathbf{r}\right) = \left(\mathbf{I} - \mathbf{J}^{\operatorname{T}}\mathbf{J}/\lambda + \cdots\right)\left(-\mathbf{J}^{\operatorname{T}}\mathbf{r}\right) \to -\mathbf{J}^{\operatorname{T}}\mathbf{r}$; therefore

Is an effective method for solving overdetermined systems of equations in the form of $\mathbf{f}(\mathbf{x}) = \mathbf{0}$ with $\mathbf{f}(\mathbf{x}) = \begin{bmatrix} f_1(x_1, \ldots, x_n) \\ \vdots \\ f_m(x_1, \ldots, x_n) \end{bmatrix}$ and $m > n$, where $J(\mathbf{x})^{\dagger}$

Is at $\beta = 0$. (Actually the optimum is at $\beta = -1$ for $\lambda = 2$, because $S(0) = 1^2 + (-1)^2 = 2$, but $S(-1) = 0$.) If $\lambda = 0$, then



Is changed. A more efficient strategy is this: when divergence occurs, increase the Marquardt parameter until there is a decrease in $S$. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, at which point the Marquardt parameter can be set to zero; the minimization of $S$ then becomes a standard Gauss–Newton minimization.
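A sketch of that strategy follows; the growth/decay factor of 10 and the cut-off value are illustrative choices, and `residuals` and `jacobian` are placeholders for problem-specific callables:

```python
import numpy as np

def lm_adaptive(residuals, jacobian, beta0, lam=1e-2, cutoff=1e-8, max_iter=100):
    """Increase lam until S decreases; carry lam over and decay it when possible."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        r = residuals(beta)
        J = jacobian(beta)
        S_old = r @ r
        while True:  # grow lam until the step yields a decrease in S
            delta = np.linalg.solve(J.T @ J + lam * np.eye(beta.size), -J.T @ r)
            if np.linalg.norm(delta) < 1e-12:
                return beta  # stationary point: no useful step remains
            r_new = residuals(beta + delta)
            if r_new @ r_new < S_old:
                break
            lam = 10 * max(lam, cutoff)
        beta = beta + delta
        lam = 0.0 if lam < cutoff else lam / 10  # below cut-off: plain Gauss-Newton
    return beta
```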

Is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated by $H_{jk} \approx 2\sum_{i=1}^{m} J_{ij} J_{ik},$ where $J_{ij} = \partial r_i / \partial \beta_j$ are entries of

Is the Moore–Penrose inverse (also known as the pseudoinverse) of the Jacobian matrix $J(\mathbf{x})$ of $\mathbf{f}(\mathbf{x})$. It can be considered an extension of Newton's method and enjoys the same local quadratic convergence toward isolated regular solutions. If
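A direct transcription of this pseudoinverse iteration as a sketch; `f` and `jac` are placeholders for the problem-specific system and its Jacobian:

```python
import numpy as np

def gauss_newton_pinv(f, jac, x0, max_iter=50, tol=1e-12):
    """Iterate x <- x - pinv(J(x)) @ f(x) for an overdetermined system f(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.pinv(jac(x)) @ f(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x
```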

Is the left pseudoinverse of $\mathbf{J_f}$. The assumption $m \geq n$ in the algorithm statement is necessary, as otherwise the matrix $\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}$ is not invertible and the normal equations cannot be solved (at least uniquely). The Gauss–Newton algorithm can be derived by linearly approximating

Is to employ a fraction $\alpha$ of the increment vector $\Delta$ in the updating formula: $\boldsymbol{\beta}^{s+1} = \boldsymbol{\beta}^{s} + \alpha\Delta.$ In other words, the increment vector is too long, but it still points "downhill", so going just

The conjugate gradient method, may be more efficient. If there is a linear dependence between columns of $\mathbf{J_r}$, the iterations will fail, as $\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}$ becomes singular. When $\mathbf{r}$ is complex, $\mathbf{r}\colon \mathbb{C}^{n} \to \mathbb{C}$,

The direction of $\Delta$ approaches the direction of the negative gradient $-\mathbf{J}^{\operatorname{T}}\mathbf{r}$. The so-called Marquardt parameter $\lambda$ may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every time $\lambda$

The gradient vector of $S$, and $\mathbf{H}$ denotes the Hessian matrix of $S$. Since $S = \sum_{i=1}^{m} r_i^2$, the gradient is given by $g_j = 2\sum_{i=1}^{m} r_i \frac{\partial r_i}{\partial \beta_j}.$ Elements of

The Gauss–Newton algorithm iteratively finds the value of $\boldsymbol{\beta}$ that minimizes the sum of squares $S(\boldsymbol{\beta}) = \sum_{i=1}^{m} r_i(\boldsymbol{\beta})^2.$ Starting with an initial guess $\boldsymbol{\beta}^{(0)}$ for

The Gauss–Newton algorithm can approach quadratic. The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrix $\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}$ is ill-conditioned. For example, consider the problem with $m = 2$ equations and $n = 1$ variable, given by $r_1(\beta) = \beta + 1, \quad r_2(\beta) = \lambda\beta^2 + \beta - 1.$ The optimum



The Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and the model's predictions. In a biology experiment studying the relation between substrate concentration [S] and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.

i      1      2      3      4      5       6       7
[S]    0.038  0.194  0.425  0.626  1.253   2.500   3.740
rate   0.050  0.127  0.094  0.2122 0.2729  0.2665  0.3317

It is desired to find a curve (model function) of the form $\text{rate} = \frac{V_\text{max} \cdot [S]}{K_M + [S]}$ that fits best

The Gauss–Newton method is not guaranteed in all instances. The approximation $\left| r_i \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right| \ll \left| \frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k} \right|$ that needs to hold to be able to ignore

The Hessian are calculated by differentiating the gradient elements, $g_j$, with respect to $\beta_k$: $H_{jk} = 2\sum_{i=1}^{m} \left( \frac{\partial r_i}{\partial \beta_j} \frac{\partial r_i}{\partial \beta_k} + r_i \frac{\partial^2 r_i}{\partial \beta_j \partial \beta_k} \right).$ The Gauss–Newton method

The Jacobian $\mathbf{J_r}(\hat{\beta})$ is of full column rank, the initial iterate $\beta^{(0)}$ is near $\hat{\beta}$, and

The Jacobian $\mathbf{J_f} = -\mathbf{J_r}$ of the function $\mathbf{f}$ as $\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \left(\mathbf{J_f}^{\operatorname{T}}\mathbf{J_f}\right)^{-1}\mathbf{J_f}^{\operatorname{T}}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right).$ Note that $\left(\mathbf{J_f}^{\operatorname{T}}\mathbf{J_f}\right)^{-1}\mathbf{J_f}^{\operatorname{T}}$

The Jacobian $\mathbf{J_r}$. Note that when the exact Hessian is evaluated near an exact fit we have near-zero $r_i$, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation as $\mathbf{g} = 2\mathbf{J_r}^{\operatorname{T}}\mathbf{r}, \quad \mathbf{H} \approx 2\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}.$ These expressions are substituted into
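In matrix form these quantities are one line each; a trivial helper for reference (a sketch, assuming `J` and `r` have already been evaluated at the current iterate):

```python
import numpy as np

def grad_and_approx_hessian(J, r):
    """Return g = 2 J^T r and the Gauss-Newton Hessian approximation 2 J^T J."""
    return 2 * J.T @ r, 2 * J.T @ J
```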

The algorithm converges, then the limit is a stationary point of $S$. For large minimum value $|S(\hat{\beta})|$, however, convergence is not guaranteed, not even local convergence as in Newton's method, or convergence under the usual Wolfe conditions. The rate of convergence of

The conjugate form should be used: $\left(\overline{\mathbf{J_r}}^{\operatorname{T}}\mathbf{J_r}\right)^{-1}\overline{\mathbf{J_r}}^{\operatorname{T}}$. In this example,

The conventional matrix equation of form $A\mathbf{x} = \mathbf{b}$, which can then be solved by a variety of methods (see Notes). If $m = n$, the iteration simplifies to $\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \left(\mathbf{J_r}\right)^{-1}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),$ which
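In this square case no least-squares machinery is needed: the increment solves $\mathbf{J_r}\,\Delta = -\mathbf{r}$ directly, which is Newton's method for the system $\mathbf{r}(\boldsymbol{\beta}) = \mathbf{0}$. A one-line sketch:

```python
import numpy as np

def newton_step(J, r):
    """m = n case: the Gauss-Newton step reduces to solving J delta = -r."""
    return np.linalg.solve(J, -r)
```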

The data in the least-squares sense, with the parameters $V_\text{max}$ and $K_M$ to be determined. Denote by $x_i$ and $y_i$ the values of [S] and rate respectively, with $i = 1, \ldots, 7$. Let $\beta_1 = V_\text{max}$ and $\beta_2 = K_M$. We will find $\beta_1$ and $\beta_2$ such that


The entries of the Jacobian matrix are $\left(\mathbf{J_r}\right)_{ij} = \frac{\partial r_i\left(\boldsymbol{\beta}^{(s)}\right)}{\partial \beta_j},$ and

The functions $r_i$ are the residuals: $r_i(\boldsymbol{\beta}) = y_i - f\left(x_i, \boldsymbol{\beta}\right).$ Then, the Gauss–Newton method can be expressed in terms of

The initial estimates of $\beta_1 = 0.9$ and $\beta_2 = 0.2$, after five iterations of the Gauss–Newton algorithm, the optimal values $\hat{\beta}_1 = 0.362$ and $\hat{\beta}_2 = 0.556$ are obtained. The sum of squares of residuals decreased from

The initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters, together with the observed data. The Gauss–Newton iteration is guaranteed to converge toward a local minimum point $\hat{\beta}$ under four conditions: the functions $r_1, \ldots, r_m$ are twice continuously differentiable in an open convex set $D \ni \hat{\beta}$,

The local minimum value $|S(\hat{\beta})|$ is small. The convergence is quadratic if $|S(\hat{\beta})| = 0$. It can be shown that the increment $\Delta$ is a descent direction for $S$, and, if

The minimum, the method proceeds by the iterations $\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} - \left(\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}\right)^{-1}\mathbf{J_r}^{\operatorname{T}}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right),$ where, if $\mathbf{r}$ and $\boldsymbol{\beta}$ are column vectors,

The previous equation in the following two steps: with substitutions $A = \mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}$, $\mathbf{b} = -\mathbf{J_r}^{\operatorname{T}}\mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right)$, and $\mathbf{x} = \Delta$, this turns into

The problem is in fact linear and the method finds the optimum in one iteration. If $|\lambda| < 1$, then the method converges linearly and the error decreases asymptotically with a factor $|\lambda|$ at every iteration. However, if $|\lambda| > 1$, then the method does not even converge locally. The Gauss–Newton iteration $\mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} - J(\mathbf{x}^{(k)})^{\dagger}\,\mathbf{f}(\mathbf{x}^{(k)}), \quad k = 0, 1, \ldots$
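This convergence behavior is easy to check numerically on the example $r_1(\beta) = \beta + 1$, $r_2(\beta) = \lambda\beta^2 + \beta - 1$; a sketch:

```python
import numpy as np

def gn_scalar(lam, beta=1.0, iters=15):
    """Gauss-Newton on r1 = beta + 1, r2 = lam*beta^2 + beta - 1; returns iterates."""
    out = []
    for _ in range(iters):
        r = np.array([beta + 1.0, lam * beta**2 + beta - 1.0])
        J = np.array([[1.0], [2.0 * lam * beta + 1.0]])   # dr_i/dbeta
        beta += np.linalg.lstsq(J, -r, rcond=None)[0][0]  # least-squares increment
        out.append(beta)
    return out

# lam = 0:   converges to beta = 0 in a single step (the problem is linear).
# lam = 0.5: iterates shrink toward 0 roughly by a factor 0.5 per step.
# lam = 2:   iterates fail to settle at beta = 0 (no local convergence).
```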

The recurrence relation above to obtain the operational equations $\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \Delta; \quad \Delta = -\left(\mathbf{J_r}^{\operatorname{T}}\mathbf{J_r}\right)^{-1}\mathbf{J_r}^{\operatorname{T}}\mathbf{r}.$ Convergence of
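These operational equations translate directly into code; a minimal sketch, where `residuals` and `jacobian` are placeholders for problem-specific functions:

```python
import numpy as np

def gauss_newton(residuals, jacobian, beta0, max_iter=50, tol=1e-10):
    """Minimize S(beta) = sum_i r_i(beta)^2 by the Gauss-Newton iteration."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        r = residuals(beta)   # residual vector, shape (m,)
        J = jacobian(beta)    # J[i, j] = dr_i / dbeta_j, shape (m, n)
        delta = np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations
        beta = beta + delta
        if np.linalg.norm(delta) < tol:
            break
    return beta
```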

The second-order derivative terms may be valid in two cases, for which convergence is to be expected: With the Gauss–Newton method the sum of squares of the residuals $S$ may not decrease at every iteration. However, since $\Delta$ is a descent direction, unless $S\left(\boldsymbol{\beta}^{s}\right)$ is a stationary point, it holds that $S\left(\boldsymbol{\beta}^{s} + \alpha\Delta\right) < S\left(\boldsymbol{\beta}^{s}\right)$ for all sufficiently small $\alpha > 0$. Thus, if divergence occurs, one solution


The solution does not exist but the initial iterate $\mathbf{x}^{(0)}$ is near a point $\hat{\mathbf{x}} = (\hat{x}_1, \ldots, \hat{x}_n)$ at which

The sum of squares $\sum_{i=1}^{m} |f_i(x_1, \ldots, x_n)|^2 \equiv \|\mathbf{f}(\mathbf{x})\|_2^2$ reaches

The sum of squares of the residuals $r_i = y_i - \frac{\beta_1 x_i}{\beta_2 + x_i}, \quad (i = 1, \ldots, 7)$ is minimized. The Jacobian $\mathbf{J_r}$ of

The sum of squares of the right-hand side; i.e., $\min \left\| \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta \right\|_2^2,$

The symbol $^{\operatorname{T}}$ denotes the matrix transpose. At each iteration, the update $\Delta = \boldsymbol{\beta}^{(s+1)} - \boldsymbol{\beta}^{(s)}$ can be found by rearranging

The vector of functions $r_i$. Using Taylor's theorem, we can write at every iteration: $\mathbf{r}(\boldsymbol{\beta}) \approx \mathbf{r}\left(\boldsymbol{\beta}^{(s)}\right) + \mathbf{J_r}\left(\boldsymbol{\beta}^{(s)}\right)\Delta$ with $\Delta = \boldsymbol{\beta} - \boldsymbol{\beta}^{(s)}$. The task of finding $\Delta$ minimizing

The vector of residuals $r_i$ with respect to the unknowns $\beta_j$ is a $7 \times 2$ matrix with the $i$-th row having the entries $\frac{\partial r_i}{\partial \beta_1} = -\frac{x_i}{\beta_2 + x_i}; \quad \frac{\partial r_i}{\partial \beta_2} = \frac{\beta_1 \cdot x_i}{\left(\beta_2 + x_i\right)^2}.$ Starting with
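Putting the pieces of this example together, the five iterations can be reproduced with a short NumPy script; a sketch using the data from the table above:

```python
import numpy as np

# Substrate concentration [S] and measured rate from the table above.
x = np.array([0.038, 0.194, 0.425, 0.626, 1.253, 2.500, 3.740])
y = np.array([0.050, 0.127, 0.094, 0.2122, 0.2729, 0.2665, 0.3317])

beta = np.array([0.9, 0.2])  # initial estimates (beta1 = Vmax, beta2 = KM)
for _ in range(5):
    r = y - beta[0] * x / (beta[1] + x)                    # residuals
    J = np.column_stack([-x / (beta[1] + x),               # dr_i/dbeta1
                         beta[0] * x / (beta[1] + x)**2])  # dr_i/dbeta2
    beta += np.linalg.lstsq(J, -r, rcond=None)[0]          # Gauss-Newton step

r = y - beta[0] * x / (beta[1] + x)
print(beta, r @ r)  # beta ~ [0.362, 0.556], sum of squares ~ 0.00784
```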
