In numerical analysis, the Dormand–Prince (RKDP) method or DOPRI method is an embedded method for solving ordinary differential equations (ODEs). The method is a member of the Runge–Kutta family of ODE solvers. More specifically, it uses six function evaluations to calculate fourth- and fifth-order accurate solutions. The difference between these solutions is then taken to be the error of the (fourth-order) solution. This error estimate is very convenient for adaptive stepsize integration algorithms. Other similar integration methods are Fehlberg (RKF) and Cash–Karp (RKCK).
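As a usage illustration (an addition to the article text): the Dormand–Prince pair is what SciPy's `scipy.integrate.solve_ivp` uses under the name "RK45". A minimal sketch, assuming the simple test problem y' = -2y chosen here for demonstration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Right-hand side of the test ODE y' = -2y (an assumed example problem).
def f(t, y):
    return -2.0 * y

# "RK45" is SciPy's Dormand-Prince 5(4) implementation; rtol/atol control
# the adaptive step-size error test built on the embedded error estimate.
sol = solve_ivp(f, (0.0, 2.0), [1.0], method="RK45", rtol=1e-6, atol=1e-9)

print(sol.t[-1], sol.y[0, -1])            # numerical value at t = 2
print(abs(sol.y[0, -1] - np.exp(-4.0)))   # error vs. exact solution e^{-2t}
```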
The Dormand–Prince method has seven stages, but it uses only six function evaluations per step because it has the "First Same As Last" (FSAL) property: the last stage is evaluated at the same point as the first stage of the next step. Dormand and Prince chose the coefficients of their method to minimize the error of the fifth-order solution. This is the main difference from the Fehlberg method, which
$$\lambda^{k} - a_{1}\lambda^{k-1} - a_{2}\lambda^{k-2} - \cdots - a_{k-1}\lambda - a_{k} = 0$$ to obtain the latter's $k$ solutions, which are
$${\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3-{\sqrt {3}}}{6}}&0&0&0\\{\frac {3+{\sqrt {3}}}{6}}&{\frac {2+{\sqrt {3}}}{12}}&0&0\\{\frac {3-{\sqrt {3}}}{6}}&0&-{\frac {\sqrt {3}}{6}}&0\\\hline {\bar {b}}_{i}&{\frac {5+3{\sqrt {3}}}{24}}&{\frac {3-{\sqrt {3}}}{12}}&{\frac {1-{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3+2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3-2{\sqrt {3}}}{12}}\end{array}}$$

Initial conditions

In mathematics and particularly in dynamical systems, an initial condition, in some contexts called
$$\lambda^{k} + a_{k-1}\lambda^{k-1} + \cdots + a_{1}\lambda + a_{0} = 0,$$ whose solutions are the characteristic values $\lambda_{1},\dots,\lambda_{k}$; these are used in
$${\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &{\bar {b}}_{1}&{\bar {b}}_{2}&\dots &{\bar {b}}_{s}\\&b_{1}&b_{2}&\dots &b_{s}\end{array}}={\begin{array}{c|c}\mathbf {c} &\mathbf {A} \\\hline &\mathbf {\bar {b}} ^{\top }\\&\mathbf {b} ^{\top }\end{array}}$$ Two fourth-order explicit RKN methods are given by
a seed value, is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension n (that is, with n different evolving variables, which together can be denoted by an n-dimensional coordinate vector), generally nk initial conditions are needed in order to trace
a Butcher table with the form
a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of simultaneous nonlinear equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta. The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method". Let an initial value problem be specified as follows: $$\frac{dy}{dt} = f(t, y), \qquad y(t_{0}) = y_{0}.$$ Here $y$
a minimum of 9 and 11 stages, respectively. An example of an explicit method of order 6 with 7 stages can be found in Ref. Explicit methods of order 7 with 9 stages and explicit methods of order 8 with 11 stages are also known. See Refs. for a summary. The RK4 method falls in this framework. Its tableau is $${\begin{array}{c|cccc}0&&&&\\1/2&1/2&&&\\1/2&0&1/2&&\\1&0&0&1&\\\hline &1/6&1/3&1/3&1/6\end{array}}$$ A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and
a step with the higher-order method. During the integration, the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, the step is repeated with a smaller step size; if the error is much smaller than the threshold, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time on finding an appropriate step size.
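As an illustrative sketch (not from the original article), the embedded error estimate is commonly turned into a step-size controller with the standard rule h_new = h * (tol/err)^(1/(q+1)) plus a safety factor, where q is the order of the lower-order method; the function name and default parameters below are this sketch's own assumptions:

```python
def adapt_step(h, err, tol, q=4, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Propose the next step size from an embedded error estimate.

    h   : current step size
    err : norm of the estimated local error
    tol : user-defined error tolerance
    q   : order of the lower-order method (4 for Dormand-Prince 5(4))
    """
    if err == 0.0:
        return h * fac_max  # error negligible: grow the step as much as allowed
    factor = safety * (tol / err) ** (1.0 / (q + 1))
    # Clamp the growth/shrink factor so the step size changes gradually.
    return h * min(fac_max, max(fac_min, factor))

# A step is accepted when err <= tol; otherwise it is retried with the new h.
```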
a step-size h > 0 and define, for n = 0, 1, 2, 3, ...: $$y_{n+1} = y_{n} + \frac{h}{6}\left(k_{1} + 2k_{2} + 2k_{3} + k_{4}\right), \qquad t_{n+1} = t_{n} + h,$$ using $$\begin{aligned}k_{1} &= f(t_{n},\, y_{n}),\\k_{2} &= f\!\left(t_{n} + \tfrac{h}{2},\; y_{n} + h\tfrac{k_{1}}{2}\right),\\k_{3} &= f\!\left(t_{n} + \tfrac{h}{2},\; y_{n} + h\tfrac{k_{2}}{2}\right),\\k_{4} &= f\!\left(t_{n} + h,\; y_{n} + hk_{3}\right).\end{aligned}$$ (Note: the above equations have different but equivalent definitions in different texts.) Here $y_{n+1}$ is the RK4 approximation of $y(t_{n+1})$.
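A minimal sketch of one RK4 step in Python, directly transcribing the formulas above (the function and variable names are this sketch's own choices):

```python
def rk4_step(f, t, y, h):
    """Advance y(t) by one step of size h using the classic RK4 scheme."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    # Weighted average of the four slopes; the midpoint slopes get double weight.
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
```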
In this family, $\alpha = \tfrac{1}{2}$ gives the midpoint method, $\alpha = 1$ is Heun's method, and $\alpha = \tfrac{2}{3}$ is Ralston's method.
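As an illustrative sketch (an addition to the article text), the whole α-family can be implemented with one function, using the second-order conditions b₂ = 1/(2α) and b₁ = 1 − b₂:

```python
def rk2_alpha_step(f, t, y, h, alpha=0.5):
    """One step of the two-stage second-order family: alpha=0.5 is the
    midpoint method, alpha=1.0 Heun's method, alpha=2/3 Ralston's method."""
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    b2 = 1.0 / (2.0 * alpha)   # order-2 condition b2*c2 = 1/2 with c2 = alpha
    b1 = 1.0 - b2              # consistency condition b1 + b2 = 1
    return y + h * (b1 * k1 + b2 * k2)
```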
Here the dimension is n = 1 and the order is k, so the necessary number of initial conditions to trace the system through time, either iteratively or via closed form solution, is nk = k. Again the initial conditions do not affect the qualitative nature of the variable's long-term evolution. The solution of this equation is found by using its characteristic equation
is a generalization of the RK4 method mentioned above. It is given by $$y_{n+1} = y_{n} + h\sum_{i=1}^{s} b_{i}k_{i},$$ where $$k_{i} = f\!\left(t_{n} + c_{i}h,\; y_{n} + h\sum_{j=1}^{i-1} a_{ij}k_{j}\right), \qquad i = 1, \ldots, s.$$ To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients $a_{ij}$ (for $1 \le j < i \le s$), $b_{i}$ (for $i = 1, 2, \ldots, s$) and $c_{i}$ (for $i = 2, 3, \ldots, s$). The matrix $[a_{ij}]$ is called the Runge–Kutta matrix, while the $b_{i}$ and $c_{i}$ are known as the weights and the nodes.
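A sketch (added here for illustration) of a generic explicit Runge–Kutta step driven by an arbitrary tableau (A, b, c); plugging in the classic RK4 coefficients reproduces the method above. The function name is this sketch's own:

```python
import numpy as np

def explicit_rk_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step defined by a Butcher tableau (A, b, c).
    Written for a scalar unknown y for simplicity."""
    s = len(b)
    k = np.zeros(s)
    for i in range(s):
        # Explicit method: stage i only uses already-computed stages j < i.
        y_stage = y + h * sum(A[i][j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * h, y_stage)
    return y + h * sum(b[i] * k[i] for i in range(s))

# Example: the classic RK4 tableau.
A = [[0, 0, 0, 0], [1/2, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]
```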
is an unknown function (scalar or vector) of time $t$, which we would like to approximate; we are told that $\frac{dy}{dt}$, the rate at which $y$ changes, is a function of $t$ and of $y$ itself. At
is called the 3/8-rule. The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is $${\begin{array}{c|cccc}0&&&&\\1/3&1/3&&&\\2/3&-1/3&1&&\\1&1&-1&1&\\\hline &1/8&3/8&3/8&1/8\end{array}}$$ However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula $y_{n+1} = y_{n} + hf(t_{n}, y_{n})$. This
The next value $y_{n+1}$ is determined by the present value $y_{n}$ plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation. In averaging the four slopes, greater weight is given to the slopes at the midpoint. If $f$
is for an explicit Runge–Kutta method to have order $p$. Some values which are known are: $$\begin{array}{c|cccccccc}p&1&2&3&4&5&6&7&8\\\hline \min s&1&2&3&4&6&7&9&11\end{array}$$ The provable bounds above imply that we cannot find methods of orders $p = 1, 2, \ldots, 6$ that require fewer stages than the methods we already know for these orders. The work of Butcher also proves that 7th and 8th order methods have
The lower-order step is given by $$y_{n+1}^{*} = y_{n} + h\sum_{i=1}^{s} b_{i}^{*}k_{i},$$ where the $k_{i}$ are the same as for the higher-order method. Then the error is $$e_{n+1} = y_{n+1} - y_{n+1}^{*} = h\sum_{i=1}^{s}\left(b_{i} - b_{i}^{*}\right)k_{i},$$ which is $O(h^{p})$. The Butcher tableau for this kind of method is extended to give the values of $b_{i}^{*}$: $${\begin{array}{c|cccc}0&&&&\\c_{2}&a_{21}&&&\\\vdots &\vdots &\ddots &&\\c_{s}&a_{s1}&\cdots &a_{s,s-1}&\\\hline &b_{1}&b_{2}&\cdots &b_{s}\\&b_{1}^{*}&b_{2}^{*}&\cdots &b_{s}^{*}\end{array}}$$ The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is: $${\begin{array}{c|cccccc}0&&&&&&\\1/4&1/4&&&&&\\3/8&3/32&9/32&&&&\\12/13&1932/2197&-7200/2197&7296/2197&&&\\1&439/216&-8&3680/513&-845/4104&&\\1/2&-8/27&2&-3544/2565&1859/4104&-11/40&\\\hline &16/135&0&6656/12825&28561/56430&-9/50&2/55\\&25/216&0&1408/2565&2197/4104&-1/5&0\end{array}}$$
is independent of $y$, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule. The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of $O(h^{5})$, while the total accumulated error is on the order of $O(h^{4})$.
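The fourth-order behavior can be checked numerically; the short sketch below (an illustration added to the text, using the assumed test problem y' = y with exact solution e^t) halves the step size and watches the global error shrink by roughly a factor of 16:

```python
import math

def rk4_solve(f, t0, y0, t_end, n):
    """Integrate y' = f(t, y) from t0 to t_end with n equal RK4 steps."""
    h, t, y = (t_end - t0) / n, t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: y                 # y' = y, exact solution y = e^t
for n in (10, 20, 40):
    err = abs(rk4_solve(f, 0.0, 1.0, 1.0, n) - math.e)
    print(n, err)                  # error drops ~16x per halving of h
```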
is no explicit method with $s = p + 1$ stages. Butcher also proved that for $p > 7$, there is no explicit Runge–Kutta method with $p + 2$ stages. In general, however, it remains an open problem what the precise minimum number of stages $s$
The Dormand–Prince pair is the default method in the ode45 solver for MATLAB and GNU Octave and is the default choice for Simulink's model explorer solver. It is an option in Python's SciPy ODE integration library and in Julia's ODE solvers library. Implementations for the languages Fortran, Java, and C++ are also available.

Butcher tableau

In numerical analysis, the Runge–Kutta methods (English: /ˈrʊŋə ˈkʊtɑː/ RUUNG-ə-KUUT-tah) are
is the dimension n of the system times the order k = 1 of the system, or n. The initial conditions do not affect the qualitative behavior (stable or unstable) of the system. A single k-th order linear equation in a single variable x is $$\frac{d^{k}x}{dt^{k}} + a_{k-1}\frac{d^{k-1}x}{dt^{k-1}} + \cdots + a_{1}\frac{dx}{dt} + a_{0}x = 0.$$ Here the number of initial conditions necessary for obtaining a closed form solution is the dimension n = 1 times the order k, or simply k. In this case
is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is $${\begin{array}{c|c}0&\\\hline &1\end{array}}$$ An example of a second-order method with two stages is provided by the explicit midpoint method: $$y_{n+1} = y_{n} + hf\!\left(t_{n} + \tfrac{h}{2},\; y_{n} + \tfrac{h}{2}f(t_{n}, y_{n})\right).$$ The corresponding tableau is $${\begin{array}{c|cc}0&&\\1/2&1/2&\\\hline &0&1\end{array}}$$ The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula $$y_{n+1} = y_{n} + h\left(\left(1 - \tfrac{1}{2\alpha}\right)f(t_{n}, y_{n}) + \tfrac{1}{2\alpha}\,f\!\left(t_{n} + \alpha h,\; y_{n} + \alpha hf(t_{n}, y_{n})\right)\right).$$ Its Butcher tableau is $${\begin{array}{c|cc}0&&\\\alpha &\alpha &\\\hline &1-{\tfrac {1}{2\alpha }}&{\tfrac {1}{2\alpha }}\end{array}}$$
$${\begin{cases}g_{i}=y_{m}+c_{i}h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}a_{ij}f(g_{j}),&i=1,2,\cdots ,s\\y_{m+1}=y_{m}+h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}{\bar {b}}_{j}f(g_{j})\\{\dot {y}}_{m+1}={\dot {y}}_{m}+h\sum _{j=1}^{s}b_{j}f(g_{j})\end{cases}}$$ which forms
the $c_{i},\,i = 1, 2, \ldots, s$ are distinct. Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations. A general Runge–Kutta–Nyström method for a second-order ODE system $\ddot{y}_{i} = f_{i}(y_{1}, y_{2}, \cdots, y_{n})$ with order $s$ has the form given above.
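As an illustration (added to the text), one step of this scheme for a scalar equation ÿ = f(y) can be transcribed directly from the formulas above; the coefficients are taken from the first of the fourth-order Butcher tables quoted in this article, and the function names are this sketch's own:

```python
import math

# Coefficients of one fourth-order explicit RKN method (s = 3 stages),
# as quoted in the Butcher tables of this article.
r3 = math.sqrt(3.0)
c     = [(3 + r3) / 6, (3 - r3) / 6, (3 + r3) / 6]
a     = [[0, 0, 0],
         [(2 - r3) / 12, 0, 0],
         [0, r3 / 6, 0]]
b_bar = [(5 - 3 * r3) / 24, (3 + r3) / 12, (1 + r3) / 24]
b     = [(3 - 2 * r3) / 12, 0.5, (3 + 2 * r3) / 12]

def rkn_step(f, y, ydot, h):
    """One Runge-Kutta-Nystrom step for the scalar second-order ODE y'' = f(y)."""
    g = [0.0] * 3
    for i in range(3):
        # Explicit scheme: stage i only uses the already-computed stages j < i.
        g[i] = y + c[i] * h * ydot + h * h * sum(a[i][j] * f(g[j]) for j in range(i))
    fg = [f(gi) for gi in g]
    y_new    = y + h * ydot + h * h * sum(bj * fj for bj, fj in zip(b_bar, fg))
    ydot_new = ydot + h * sum(bj * fj for bj, fj in zip(b, fg))
    return y_new, ydot_new
```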
the characteristic values $\lambda_{1},\dots,\lambda_{k}$, for use in the solution equation $$x_{t} = c_{1}\lambda_{1}^{t} + \cdots + c_{k}\lambda_{k}^{t}.$$ Here the constants $c_{1},\dots,c_{k}$ are found by solving a system of k different equations based on this equation, each using one of k different values of t for which the specific initial condition $x_{t}$ is known.
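For illustration (not part of the article), the k constants can be obtained by solving this linear system numerically given k known values x_0, ..., x_{k-1}; a minimal numpy sketch, with assumed characteristic values and initial data:

```python
import numpy as np

# Hypothetical example with k = 2: assumed characteristic values and the
# known initial values x_t at t = 0 and t = 1.
lam = np.array([0.9, -0.5])
x_known = np.array([1.0, 0.2])

# Row t of the matrix is [lam_1^t, lam_2^t], so M @ c = x_known.
t = np.arange(len(lam))
M = lam[np.newaxis, :] ** t[:, np.newaxis]
c = np.linalg.solve(M, x_known)

# Reconstruct the trajectory x_t = c_1*lam_1^t + c_2*lam_2^t at any time.
x = lambda tt: (c * lam ** tt).sum()
```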
the initial value problem. A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons. A linear matrix difference equation of the homogeneous (having no constant term) form $X_{t+1} = AX_{t}$ has closed form solution $X_{t} = A^{t}X_{0}$.
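An illustrative numpy sketch (added here): iterating the difference equation and checking stability via the eigenvalues of A, which the surrounding text ties to the qualitative behavior. The matrix and initial condition are assumed example values:

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])           # assumed example matrix
X = np.array([1.0, 1.0])             # initial condition X_0

# Iterate X_{t+1} = A X_t for a few periods.
for t in range(10):
    X = A @ X

# Stability depends on the eigenvalues of A, not on X_0:
# here all |lambda| < 1, so every trajectory decays to the origin.
print(np.abs(np.linalg.eigvals(A)).max(), X)
```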
the k initial pieces of information will typically not be different values of the variable x at different points in time, but rather the values of x and its first k – 1 derivatives, all at some point in time such as time zero. The initial conditions do not affect the qualitative nature of the system's behavior. The characteristic equation of this dynamic equation is
These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher): A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if $$\sum_{i=1}^{s} b_{i} = 1.$$ There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is $O(h^{p+1})$. These can be derived from
the definition of the truncation error itself. For example, a two-stage method has order 2 if $b_{1} + b_{2} = 1$, $b_{2}c_{2} = 1/2$, and $b_{2}a_{21} = 1/2$. Note that a popular condition for determining coefficients is $$\sum_{j=1}^{i-1} a_{ij} = c_{i}, \qquad i = 2, \ldots, s.$$ This condition alone, however, is neither sufficient nor necessary for consistency. In general, if an explicit $s$-stage Runge–Kutta method has order $p$, then it can be proven that
the evolution of the variables exhibits sensitive dependence on initial conditions: the iterated values of any two very nearby points on the same strange attractor, while each remaining on the attractor, will diverge from each other over time. Thus even on a single attractor the precise values of the initial conditions make a substantial difference for the future positions of the iterates. This feature makes accurate simulation of future values difficult, and impossible over long horizons, because stating
the following Butcher tables: $${\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3+{\sqrt {3}}}{6}}&0&0&0\\{\frac {3-{\sqrt {3}}}{6}}&{\frac {2-{\sqrt {3}}}{12}}&0&0\\{\frac {3+{\sqrt {3}}}{6}}&0&{\frac {\sqrt {3}}{6}}&0\\\hline {\bar {b}}_{i}&{\frac {5-3{\sqrt {3}}}{24}}&{\frac {3+{\sqrt {3}}}{12}}&{\frac {1+{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3-2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3+2{\sqrt {3}}}{12}}\end{array}}$$ and
In many practical applications the function $f$ is independent of $t$ (a so-called autonomous system, or time-invariant system, especially in physics), and the increments of $t$ are then not computed at all and not passed to the function $f$, with only the final formula for $t_{n+1}$ used. The family of explicit Runge–Kutta methods
the initial conditions can affect whether the system diverges to infinity or whether it converges to one or another attractor of the system. Each attractor, a (possibly disconnected) region of values that some dynamic paths approach but never leave, has a (possibly disconnected) basin of attraction such that state variables with initial conditions in that basin (and nowhere else) will evolve toward that attractor. Even nearby initial conditions could be in basins of attraction of different attractors (see for example Newton's method#Basins of attraction). Moreover, in those nonlinear systems showing chaotic behavior,
the initial conditions with exact precision is seldom possible and because rounding error is inevitable after even only a few iterations from an exact initial condition. Every empirical law has the disquieting quality that one does not know its limitations. We have seen that there are regularities in the events in the world around us which can be formulated in terms of mathematical concepts with an uncanny accuracy. There are, on
the initial time $t_{0}$ the corresponding $y$ value is $y_{0}$. The function $f$ and the initial conditions $t_{0}$, $y_{0}$ are given. Now we pick
the local truncation error of a single Runge–Kutta step. This is done by having two methods, one with order $p$ and one with order $p-1$. These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to
the number of stages must satisfy $s \geq p$, and if $p \geq 5$, then $s \geq p + 1$. However, it is not known whether these bounds are sharp in all cases. In some cases, it is proven that the bound cannot be achieved. For instance, Butcher proved that for $p > 6$, there is
the number of time lags in the system. The initial conditions in this linear system do not affect the qualitative nature of the future behavior of the state variable X; that behavior is stable or unstable based on the eigenvalues of the matrix A but not based on the initial conditions. Alternatively, a dynamic process in a single variable x having multiple time lags is $$x_{t} = a_{1}x_{t-1} + a_{2}x_{t-2} + \cdots + a_{k}x_{t-k}.$$
However, the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is: $${\begin{array}{c|cc}0&&\\1&1&\\\hline &1/2&1/2\\&1&0\end{array}}$$ Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4). A Runge–Kutta method is said to be nonconfluent if all
the solution equation $$x(t) = c_{1}e^{\lambda_{1}t} + \cdots + c_{k}e^{\lambda_{k}t}.$$ This equation and its first k – 1 derivatives form a system of k equations that can be solved for the k parameters $c_{1},\dots,c_{k}$, given the known initial conditions on x and its k – 1 derivatives' values at some time t. Nonlinear systems can exhibit a substantially richer variety of behavior than linear systems can. In particular,
A differential equation system of the first order with n variables stacked in a vector X is $$\dot{X} = AX.$$ Its behavior through time can be traced with a closed form solution conditional on an initial condition vector $X_{0}$. The number of required initial pieces of information
the system's variables forward through time. In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time. In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called
As an example, consider the two-stage second-order Runge–Kutta method with α = 2/3, also known as the Ralston method. It is given by the tableau $${\begin{array}{c|cc}0&&\\2/3&2/3&\\\hline &1/4&3/4\end{array}}$$ with the corresponding equations $$\begin{aligned}k_{1} &= f(t_{n},\, y_{n}),\\k_{2} &= f\!\left(t_{n} + \tfrac{2}{3}h,\; y_{n} + \tfrac{2}{3}hk_{1}\right),\\y_{n+1} &= y_{n} + h\left(\tfrac{1}{4}k_{1} + \tfrac{3}{4}k_{2}\right).\end{aligned}$$ This method is used to solve the initial-value problem with step size h = 0.025, so the method needs to take four steps. The method proceeds as follows: The numerical solutions correspond to the underlined values. Adaptive methods are designed to produce an estimate of
This solution is predicated on the vector $X_{0}$ of initial conditions on the individual variables that are stacked into the vector; $X_{0}$ is called the vector of initial conditions or simply the initial condition, and contains nk pieces of information, n being the dimension of the vector X and k = 1 being
was constructed so that the fourth-order solution has a small error. For this reason, the Dormand–Prince method is more suitable when the higher-order solution is used to continue the integration, a practice known as local extrapolation. The Butcher tableau is: $${\begin{array}{c|ccccccc}0&&&&&&&\\1/5&1/5&&&&&&\\3/10&3/40&9/40&&&&&\\4/5&44/45&-56/15&32/9&&&&\\8/9&19372/6561&-25360/2187&64448/6561&-212/729&&&\\1&9017/3168&-355/33&46732/5247&49/176&-5103/18656&&\\1&35/384&0&500/1113&125/192&-2187/6784&11/84&\\\hline &35/384&0&500/1113&125/192&-2187/6784&11/84&0\\&5179/57600&0&7571/16695&393/640&-92097/339200&187/2100&1/40\end{array}}$$ The first row of b coefficients gives the fifth-order accurate solution, and the second row gives the fourth-order accurate solution.
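A sketch (added for illustration) of a single Dormand–Prince step with its embedded error estimate, transcribing the tableau above for a scalar unknown; the function and variable names are this sketch's own:

```python
import numpy as np

# Dormand-Prince 5(4) coefficients, as in the tableau above.
C = np.array([0, 1/5, 3/10, 4/5, 8/9, 1, 1])
A = np.zeros((7, 7))
A[1, :1] = [1/5]
A[2, :2] = [3/40, 9/40]
A[3, :3] = [44/45, -56/15, 32/9]
A[4, :4] = [19372/6561, -25360/2187, 64448/6561, -212/729]
A[5, :5] = [9017/3168, -355/33, 46732/5247, 49/176, -5103/18656]
A[6, :6] = [35/384, 0, 500/1113, 125/192, -2187/6784, 11/84]
B5 = np.array([35/384, 0, 500/1113, 125/192, -2187/6784, 11/84, 0])
B4 = np.array([5179/57600, 0, 7571/16695, 393/640, -92097/339200, 187/2100, 1/40])

def dopri_step(f, t, y, h):
    """One Dormand-Prince step: 5th-order solution, error estimate, last stage."""
    k = np.zeros(7)
    for i in range(7):
        k[i] = f(t + C[i] * h, y + h * (A[i, :i] @ k[:i]))
    y5 = y + h * (B5 @ k)        # fifth-order solution (used to continue)
    err = h * ((B5 - B4) @ k)    # fifth-minus-fourth embedded error estimate
    return y5, err, k[6]         # k[6] is reusable as k[0] of the next step (FSAL)
```

Note how the last stage is evaluated at (t + h, y5), so it can serve as the first stage of the following step; this is the FSAL property described earlier.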