
Cond


In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given $f(x) = y$, one is solving for $x$, and thus the condition number of the (local) inverse must be used.
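As an illustration, here is a minimal Python sketch (the helper name `rel_cond` and the choice of test function are illustrative, not from the article) that estimates the relative condition number $|x f'(x)/f(x)|$ of a one-variable function by finite differences:

```python
import numpy as np

def rel_cond(f, x, h=1e-7):
    """Estimate the relative condition number |x * f'(x) / f(x)|
    of a scalar function f at x via a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

# tan is very sensitive near pi/2, so solving tan(x) = y for x there
# (the inverse problem) is correspondingly insensitive: in one variable,
# the condition number of the local inverse is the reciprocal.
x = 1.57
print(rel_cond(np.tan, x))       # large: tan is ill-conditioned here
print(1 / rel_cond(np.tan, x))   # small: the local inverse is well-conditioned
```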


Cond may refer to: Condition number, in numerical analysis; cond, a conditional expression in LISP; or Cond, a variant spelling of conn, to control a ship's movements at sea.

If an absolute error in a temperature measurement given on the Celsius scale is 1 °C and the true value is 2 °C, the relative error is 0.5. But if the exact same approximation is made on the Kelvin scale, a 1 K absolute error with the same true value of 275.15 K = 2 °C gives a relative error of only 3.63 × 10⁻³. Statements about relative errors are sensitive to addition of constants, but not to multiplication by constants; for absolute errors, the opposite is true.
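A quick standalone check of the arithmetic above (illustrative code, not part of the article):

```python
# Same 1-degree absolute error, two different scales.
true_c, measured_c = 2.0, 1.0          # Celsius
true_k, measured_k = 275.15, 274.15    # the same temperatures in Kelvin

print(abs(true_c - measured_c) / abs(true_c))  # 0.5
print(abs(true_k - measured_k) / abs(true_k))  # ~0.00363, i.e. 3.63e-3
```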

Among the causes of approximation error are computing machine precision and measurement error (e.g. the length of a piece of paper is 4.53 cm, but the ruler only allows you to estimate it to the nearest 0.1 cm, so you measure it as 4.5 cm). In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates the extent to which errors in the input of the algorithm will lead to large errors in the output.

The condition number of $f$ at a point $x$ (specifically, its relative condition number) is defined to be the maximum ratio of the fractional change in $f(x)$ to any fractional change in $x$, in the limit where the change $\delta x$ in $x$ becomes infinitesimally small:

$$\operatorname{cond}(f,x) = \lim_{\varepsilon \to 0^{+}} \sup_{\|\delta x\| \leq \varepsilon} \frac{\|f(x+\delta x) - f(x)\| / \|f(x)\|}{\|\delta x\| / \|x\|},$$

where $\|\cdot\|$ is a norm on the domain/codomain of $f$.

Evaluated at a point $x$, this is

$$\left|\frac{x\,f'(x)}{f(x)}\right|.$$

Note that this is the absolute value of the elasticity of a function in economics. Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of $f$, which is $(\log f)' = f'/f$, to the logarithmic derivative of $x$.
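For instance (a small sketch of our own, with derivatives supplied by hand): $e^x$ has condition number $|x|$, while $\ln x$ is ill-conditioned near its zero at $x = 1$:

```python
import numpy as np

def cond(f, fprime, x):
    """Relative condition number |x * f'(x) / f(x)| of f at x."""
    return abs(x * fprime(x) / f(x))

print(cond(np.exp, np.exp, 3.0))            # 3.0, i.e. |x|
print(cond(np.log, lambda x: 1/x, 1.001))   # ~1000: log has a zero at x = 1
```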

More directly, given a small change $\Delta x$ in $x$, the relative change in $x$ is $[(x+\Delta x) - x]/x = (\Delta x)/x$, while the relative change in $f(x)$ is $[f(x+\Delta x) - f(x)]/f(x)$.

The reverse implication is usually not true. But if we assume that some positive lower bound on $|v|$ can be computed in polynomial time, say $|v| > b > 0$, and that $v$ is polynomially computable with absolute error (by some algorithm called ABS), then it is also polynomially computable with relative error, since we can simply call ABS with absolute error $\varepsilon = \eta b$. An algorithm that, for every rational number $\eta > 0$, computes

a rational number $v_{\text{approx}}$ that approximates $v$ with relative error $\eta$, in time polynomial in the size of the input and $1/\eta$ (rather than $\log(1/\eta)$), is called an FPTAS. In most indicating instruments, the accuracy is guaranteed to a certain percentage of full-scale reading; the limits of these deviations from the specified values are known as limiting errors or guarantee errors. The definitions can be extended to the case where $v$ and $v_{\text{approx}}$ are vectors, replacing the absolute value with a norm.
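The ABS-to-REL direction described above is mechanical enough to sketch in code (a toy illustration with our own naming, assuming a known rational lower bound $b$ on $|v|$):

```python
from fractions import Fraction

def rel_from_abs(abs_approx, b, eta):
    """Given abs_approx(eps) returning a rational within absolute error
    eps of a value v with |v| > b > 0, approximate v with relative
    error eta via a single call with eps = eta * b."""
    return abs_approx(eta * b)

# Toy ABS oracle for v = 2/3: round the binary expansion.
v = Fraction(2, 3)
def abs_approx(eps):
    n = 1
    while Fraction(1, 2**n) > eps:      # stop once 2^-n <= eps
        n += 1
    return Fraction(round(v * 2**n), 2**n)

r = rel_from_abs(abs_approx, b=Fraction(1, 2), eta=Fraction(1, 1000))
assert abs(v - r) <= Fraction(1, 1000) * abs(v)
```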

Analogously, $v$ is polynomially computable with relative error if, for every rational number $\eta > 0$, it is possible to compute a rational number $v_{\text{approx}}$ that approximates $v$ with relative error $\eta$, in time polynomial in the size of the input and the encoding size of $\eta$. If $v$ is polynomially computable with relative error (by some algorithm called REL), then it is also polynomially computable with absolute error.

If $f$ is differentiable, this is equivalent to

$$\operatorname{cond}(f,x) = \frac{\|J(x)\|\,\|x\|}{\|f(x)\|},$$

where $J(x)$ denotes the Jacobian matrix of partial derivatives of $f$ at $x$, and $\|J(x)\|$ is the matrix norm induced by the vector norms on the domain and codomain.

For an ill-conditioned problem, the correct solution/answer to the equation becomes hard to find. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve it, that is, to calculate the solution. Some algorithms have a property called backward stability; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms.


Conditioning is a property of the matrix, not of the algorithm or the floating-point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution $x$ will change with respect to a change in $b$. Thus, if the condition number is large, even a small error in $b$ may cause a large error in $x$; if the condition number is small, then the error in $x$ will not be much bigger than the error in $b$.

Proof. Let $\varepsilon > 0$ be the desired absolute error. First, use REL with relative error $\eta = 1/2$ to find a rational number $r_1$ such that $|v - r_1| \leq |v|/2$, and hence $|v| \leq 2|r_1|$. If $r_1 = 0$, then $v = 0$ and we are done. Since REL is polynomial, the encoding length of $r_1$ is polynomial in the input. Now run REL again with relative error $\eta = \varepsilon/(2|r_1|)$. This yields a rational number $r_2$ that satisfies $|v - r_2| \leq \varepsilon|v|/(2|r_1|) \leq \varepsilon$, so $r_2$ has absolute error $\varepsilon$, as desired.
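The two-call structure of this proof translates directly into code (a sketch with illustrative names; the toy oracle below always returns the worst answer its contract allows, to exercise the bound):

```python
from fractions import Fraction

def abs_from_rel(rel_approx, eps):
    """Turn a relative-error oracle rel_approx(eta), guaranteeing
    |v - r| <= eta * |v|, into an absolute-error approximation."""
    r1 = rel_approx(Fraction(1, 2))        # |v - r1| <= |v|/2, so |v| <= 2|r1|
    if r1 == 0:
        return Fraction(0)                 # then v itself must be 0
    return rel_approx(Fraction(eps) / (2 * abs(r1)))

v = Fraction(10, 3)
rel_approx = lambda eta: v * (1 - eta)     # worst allowed relative error

r = abs_from_rel(rel_approx, eps=Fraction(1, 100))
assert abs(v - r) <= Fraction(1, 100)
```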

If $\|\cdot\|$ is the matrix norm induced by the $L^{\infty}$ (vector) norm and $A$ is lower triangular and non-singular (i.e. $a_{ii} \neq 0$ for all $i$), then

$$\kappa(A) \geq \frac{\max_i |a_{ii}|}{\min_i |a_{ii}|},$$

recalling that the eigenvalues of any triangular matrix are simply the diagonal entries.
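A small numpy check of this bound (the example matrix is our own):

```python
import numpy as np

A = np.array([[4.0, 0.0, 0.0],
              [1.0, 0.1, 0.0],
              [2.0, 3.0, 1.0]])     # lower triangular, non-singular

kappa_inf = np.linalg.cond(A, np.inf)    # ||A||_inf * ||A^-1||_inf
diag_ratio = np.abs(np.diag(A)).max() / np.abs(np.diag(A)).min()
print(kappa_inf, diag_ratio)             # kappa_inf >= 40 = diag_ratio
assert kappa_inf >= diag_ratio
```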

Alternatively, the condition number can be defined as $\kappa(A) = \|A\|\,\|A^{\dagger}\|$, where $A^{\dagger}$ is the Moore–Penrose pseudoinverse. For square matrices, this unfortunately makes the condition number discontinuous, but it is a useful definition for rectangular matrices, which are never invertible but are still used to define systems of equations.
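For example (an illustrative sketch; numpy's `cond` computes the same singular-value ratio):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3x2: rectangular, hence not invertible

kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
print(kappa)                        # ||A||_2 * ||A^+||_2
print(np.linalg.cond(A))            # same value, via sigma_max / sigma_min
```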

There are two features of relative error that should be kept in mind. First, relative error is undefined when the true value is zero, since zero then appears in the denominator (see below). Second, relative error only makes sense when measured on a ratio scale (i.e. a scale which has a true, meaningful zero); otherwise it is sensitive to the measurement units, as in the Celsius/Kelvin temperature example above.

Absolute error

The approximation error in a data value is the discrepancy between an exact value and some approximation to it. This error can be expressed as an absolute error (the numerical amount of the discrepancy) or as a relative error (the absolute error divided by the data value). An approximation error can occur for a variety of reasons, among them computing machine precision and measurement error.

The logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative $f'$ scaled by the value of $f$. Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change.

If $\|\cdot\|$ is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the $L^2$ norm and typically denoted as $\|\cdot\|_2$), then

$$\kappa(A) = \frac{\sigma_{\text{max}}(A)}{\sigma_{\text{min}}(A)},$$

where $\sigma_{\text{max}}(A)$ and $\sigma_{\text{min}}(A)$ are the maximal and minimal singular values of $A$ respectively. Hence, for example, $\kappa(A) = 1$ whenever $A$ is unitary. The condition number with respect to $L^2$ arises so often in numerical linear algebra that it is given a name, the condition number of a matrix.
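In code (the near-singular example matrix is our own, chosen to make the point):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.001]])                 # nearly rank-deficient

s = np.linalg.svd(A, compute_uv=False)
print(s[0] / s[-1])                          # sigma_max / sigma_min: large
print(np.linalg.cond(A))                     # the same computation

Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
print(np.linalg.cond(Q))                     # ~1.0 for an orthogonal matrix
```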

More generally, condition numbers can be defined for non-linear functions in several variables. A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables), there is a large change in the answer or dependent variable.

The relative error of an algorithm $\tilde{f}$ for a problem $f$ is $\|\delta f(x)\|/\|f(x)\| = \left\|f(x) - \tilde{f}(x)\right\|/\|f(x)\|$. In this context, the absolute condition number of a problem $f$ is

$$\hat{\kappa}(f) = \lim_{\varepsilon \to 0^{+}} \sup_{\|\delta x\| \leq \varepsilon} \frac{\|\delta f(x)\|}{\|\delta x\|}$$

and the relative condition number is

$$\kappa(f) = \lim_{\varepsilon \to 0^{+}} \sup_{\|\delta x\| \leq \varepsilon} \frac{\|\delta f(x)\| / \|f(x)\|}{\|\delta x\| / \|x\|}.$$

For example, the condition number associated with the linear equation $Ax = b$ gives a bound on how inaccurate the solution $x$ will be after approximation. Note that this is before the effects of round-off error are taken into account.
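To see why the conditioning of a problem and the error of an algorithm are different things, here is a standalone illustration (our own, not from the article): the problem $f(x) = \sqrt{1+x} - 1$ is well-conditioned near $x = 0$ (relative condition number about $1/2$), yet a naive algorithm for it loses accuracy to cancellation, while an algebraically equivalent rewrite does not:

```python
import math

x = 1e-12
naive = math.sqrt(1 + x) - 1            # catastrophic cancellation
stable = x / (math.sqrt(1 + x) + 1)     # same function, rearranged

reference = x/2 - x*x/8                 # Taylor series, ample accuracy here
print(abs(naive - reference) / reference)   # ~1e-4: many digits lost
print(abs(stable - reference) / reference)  # ~1e-16: near machine epsilon
```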

Given a problem $f$ and an algorithm $\tilde{f}$ with input $x$, the error is $\delta f(x) := f(x) - \tilde{f}(x)$, the absolute error is $\|\delta f(x)\| = \left\|f(x) - \tilde{f}(x)\right\|$, and the relative error is $\|\delta f(x)\|/\|f(x)\|$.

However, this does not mean that the algorithm will converge rapidly to the solution, just that it will not diverge arbitrarily because of inaccuracy in the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. The condition number may also be infinite, but this implies that the problem is ill-posed (it does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution.

Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function, or over the domain of the question, as an overall condition number, while in other cases the condition number at a particular point is of more interest.

However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm; it generally just bounds it with an estimate (whose computed value depends on the choice of the norm used to measure the inaccuracy). Given a problem $f$ and an algorithm $\tilde{f}$ with an input $x$ and output $\tilde{f}(x)$, the error, the absolute error, and the relative error are defined as above.

If the condition number is not significantly larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or of the solution of a linear system of equations, is prone to large numerical errors. A matrix that is not invertible is often said to have a condition number equal to infinity.

The condition number is defined more precisely to be the maximum ratio of the relative error in $x$ to the relative error in $b$. Let $e$ be the error in $b$. Assuming that $A$ is a nonsingular matrix, the error in the solution $A^{-1}b$ is $A^{-1}e$. The ratio of the relative error in the solution to the relative error in $b$ is

$$\frac{\|A^{-1}e\| / \|A^{-1}b\|}{\|e\| / \|b\|} = \frac{\|A^{-1}e\|}{\|e\|} \cdot \frac{\|b\|}{\|A^{-1}b\|}.$$
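A quick numerical illustration (our own example matrix) of how this amplification plays out when solving $Ax = b$:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])               # kappa_2(A) ~ 4e4
b = np.array([2.0, 2.0001])
x = np.linalg.solve(A, b)

e = 1e-8 * np.random.default_rng(1).normal(size=2)   # small error in b
x_err = np.linalg.solve(A, b + e)

rel_b = np.linalg.norm(e) / np.linalg.norm(b)
rel_x = np.linalg.norm(x_err - x) / np.linalg.norm(x)
print(rel_x / rel_b)          # amplification actually observed
print(np.linalg.cond(A))      # upper bound on that amplification
```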

As a rule of thumb, if the condition number $\kappa(A) = 10^{k}$, then you may lose up to $k$ digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.
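The classic demonstration is the Hilbert matrix family (a sketch of our own; the printed figures are approximate):

```python
import numpy as np

n = 10
i, j = np.indices((n, n))
H = 1.0 / (i + j + 1)                 # Hilbert matrix, notoriously ill-conditioned

x_true = np.ones(n)
b = H @ x_true
x = np.linalg.solve(H, b)

print(np.linalg.cond(H))              # ~1e13, so expect to lose ~13 digits
print(np.abs(x - x_true).max())       # ~1e-4: only a few of 16 digits survive
```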

Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative. A few important ones, each following directly from $|x f'(x)/f(x)|$, are given below:

- $e^{x}$: condition number $|x|$
- $\ln x$: condition number $1/\left|\ln x\right|$
- $x^{n}$: condition number $|n|$
- $\sqrt{x}$: condition number $1/2$
- $\sin x$: condition number $|x \cot x|$

Condition numbers can be defined for any function $f$ mapping its data from some domain (e.g. an $m$-tuple of real numbers $x$) into some codomain (e.g. an $n$-tuple of real numbers $f(x)$), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues.


The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods).

The condition number of a differentiable function $f$ in one variable, as a function of $x$, is $\left|x f'/f\right|$.

Condition number

The logarithmic derivative of $x$ is $(\log x)' = x'/x = 1/x$, yielding the ratio $x f'/f$.

Absolute errors, in contrast, are sensitive to multiplication by constants, but not to addition of constants. We say that a real value $v$ is polynomially computable with absolute error from an input if, for every rational number $\varepsilon > 0$, it is possible to compute a rational number $v_{\text{approx}}$ that approximates $v$ with absolute error $\varepsilon$, in time polynomial in the size of the input and the encoding size of $\varepsilon$, which is $O(\log(1/\varepsilon))$.

Numerically stable algorithms do not yield a significant error in the output for a small error in the input, and vice versa. Given some value $v$, we say that $v_{\text{approx}}$ approximates $v$ with absolute error $\varepsilon > 0$ if

$$|v - v_{\text{approx}}| \leq \varepsilon,$$

where the vertical bars denote the absolute value. We say that $v_{\text{approx}}$ approximates $v$ with relative error $\eta > 0$ if $|v - v_{\text{approx}}| \leq \eta \cdot |v|$. If $v \neq 0$, the smallest such relative error is $|v - v_{\text{approx}}|/|v| = |1 - v_{\text{approx}}/v|$. The percent error (an expression of the relative error) is this quantity multiplied by 100%.

The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3; in the first case the relative error is 0.003, while in the second it is only 0.000003.

The definition of the condition number depends on the choice of norm, as can be illustrated by two examples.

Taking the ratio of the relative change in $f(x)$ to the relative change in $x$ yields

$$\frac{[f(x+\Delta x) - f(x)]/f(x)}{\Delta x / x} = \frac{x}{f(x)} \cdot \frac{f(x+\Delta x) - f(x)}{\Delta x}.$$

The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative.

The maximum value (for nonzero $b$ and $e$) is then seen to be the product of the two operator norms:

$$\kappa(A) = \max_{e,\,b \neq 0} \left\{ \frac{\|A^{-1}e\|}{\|e\|} \cdot \frac{\|b\|}{\|A^{-1}b\|} \right\} = \|A^{-1}\| \, \|A\|.$$

The same definition is used for any consistent norm, i.e. one that satisfies

$$\kappa(A) = \|A^{-1}\|\,\|A\| \geq \|A^{-1}A\| = 1.$$

When the condition number is exactly one (which can only happen if $A$ is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data.


An error bound is an upper limit on the relative or absolute size of an approximation error. As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.1/50 = 0.002 = 0.2%. As a practical example, when measuring a 6 mL beaker, the value read was 5 mL. The correct reading being 6 mL, this means the absolute error is 1 mL and the percent error is, rounded, 16.7%.
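These definitions are one-liners in code (an illustrative helper of our own):

```python
def errors(exact, approx):
    """Absolute, relative, and percent error (exact must be nonzero)."""
    abs_err = abs(exact - approx)
    rel_err = abs_err / abs(exact)
    return abs_err, rel_err, 100 * rel_err

print(errors(50, 49.9))   # (0.1, 0.002, 0.2)
print(errors(6, 5))       # (1, 0.1667..., 16.67...)
```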

The condition number is derived from the theory of propagation of uncertainty, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix.
