
Convolution

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions $f$ and $g$ that produces a third function $f*g$. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other.
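To make the definition concrete, here is a minimal sketch of the discrete analogue, where the integral becomes a sum; the function name is ours, not from the article, and the loop mirrors the "reflect and shift" description directly.

```python
import numpy as np

def convolve_direct(f, g):
    """Discrete convolution (f * g)[n] = sum_m f[m] g[n - m], full length."""
    f, g = np.asarray(f, dtype=float), np.asarray(g, dtype=float)
    n_out = len(f) + len(g) - 1
    out = np.zeros(n_out)
    for n in range(n_out):
        for m in range(len(f)):
            if 0 <= n - m < len(g):
                out[n] += f[m] * g[n - m]  # g is "reflected and shifted"
    return out

f = [1.0, 2.0, 3.0]
g = [0.0, 1.0, 0.5]
assert np.allclose(convolve_direct(f, g), np.convolve(f, g))  # matches NumPy
```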


Some features of convolution are similar to cross-correlation: for real-valued functions of a continuous or discrete variable, convolution ($f*g$) differs from cross-correlation ($f\star g$) only in that either $f(x)$ or $g(x)$

A cubic function with a local maximum and a local minimum has three branches. These considerations are particularly important for defining the inverses of trigonometric functions. For example, the sine function is not one-to-one, since $\sin(x+2\pi)=\sin(x)$ for every real $x$ (and more generally $\sin(x+2\pi n)=\sin(x)$ for every integer $n$). However, the sine is one-to-one on

a discrete-time process or a real number for a continuous-time process). Then $X_t$ is the value (or realization) produced by a given run of the process at time $t$. Suppose that the process has means $\mu_X(t)$ and $\mu_Y(t)$ and variances $\sigma_X^2(t)$ and $\sigma_Y^2(t)$ at time $t$, for each $t$. Then

a finite impulse response), a finite summation may be used: When a function $g_N$ is periodic, with period $N$, then for functions $f$ such that $f*g_N$ exists,

a]). The convolution of $f$ and $g$ exists if $f$ and $g$ are both Lebesgue integrable functions in $L^1(\mathbb{R}^d)$, and in this case $f*g$ is also integrable (Stein & Weiss 1971, Theorem 1.3). This is a consequence of Tonelli's theorem. This is also true for functions in $\ell^1$, under the discrete convolution, or more generally for the convolution on any group. Likewise, if $f\in L^1(\mathbb{R}^d)$ and $g\in L^p(\mathbb{R}^d)$ where $1\le p\le\infty$, then $f*g\in L^p(\mathbb{R}^d)$, and $\|f*g\|_p \le \|f\|_1 \|g\|_p$ (Young's convolution inequality).

Cross-correlation

In signal processing, cross-correlation

a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for

a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. The resulting waveform (not shown here) is the convolution of functions $f$ and $g$. If $f(t)$

a feature in $f$ at $t$ also occurs later in $g$ at $t+\tau$, hence $g$ could be described to lag $f$ by $\tau$. If $f$ and $g$ are both continuous periodic functions of period $T$,

a function $f\colon X\to X$ with itself is called iteration. If $f$ is applied $n$ times, starting with the value $x$, then this is written as $f^n(x)$; so $f^2(x)=f(f(x))$, etc. Since $f^{-1}(f(x))=x$, composing $f^{-1}$ and $f^n$ yields $f^{n-1}$, "undoing" the effect of one application of $f$. While the notation $f^{-1}(x)$ might be misunderstood, $(f(x))^{-1}$ certainly denotes

a neighborhood of a point $p$ as long as the Jacobian matrix of $f$ at $p$ is invertible. In this case, the Jacobian of $f^{-1}$ at $f(p)$ is the matrix inverse of the Jacobian of $f$ at $p$. Even if a function $f$ is not one-to-one, it may be possible to define a partial inverse of $f$ by restricting the domain. For example, the function $f(x)=x^2$ is not one-to-one, since $x^2=(-x)^2$. However,

a positive scalar. Normalized correlation is one of the methods used for template matching, a process used for finding instances of a pattern or object within an image. It is also the 2-dimensional version of the Pearson product-moment correlation coefficient. NCC is similar to ZNCC with the only difference of not subtracting the local mean value of intensities: $$\frac{1}{n}\sum_{x,y}\frac{1}{\sigma_f\sigma_t}f(x,y)\,t(x,y)$$ Caution must be applied when using cross-correlation for nonlinear systems. In certain circumstances, which depend on
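A small illustrative sketch of the zero-normalized cross-correlation (ZNCC) score described above, for a template and an equally sized patch; the function name is ours, and it assumes the two inputs have the same shape.

```python
import numpy as np

def zncc(f, t):
    """Zero-normalized cross-correlation of two equal-size patches; in [-1, 1]."""
    f = np.asarray(f, dtype=float)
    t = np.asarray(t, dtype=float)
    F = f - f.mean()      # subtract local means (mu_f, mu_t)
    T = t - t.mean()
    n = f.size
    sigma_f = f.std()     # population std, matching the 1/n normalization
    sigma_t = t.std()
    return (F * T).sum() / (n * sigma_f * sigma_t)

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(zncc(patch, 2 * patch + 5), 1.0)  # invariant to gain and offset
```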


is $$\frac{1}{n}\sum_{x,y}\frac{1}{\sigma_f\sigma_t}\left(f(x,y)-\mu_f\right)\left(t(x,y)-\mu_t\right)$$ where $n$

is injective, and the condition $f(g(y))=y$ for all $y\in Y$ implies that $f$ is surjective. The inverse function $f^{-1}$ to $f$ can be explicitly described as the function sending each $y\in Y$ to the unique $x\in X$ such that $f(x)=y$. Recall that if $f$ is an invertible function with domain $X$ and codomain $Y$, then Using

is the standard deviation of $f$. In functional analysis terms, this can be thought of as the dot product of two normalized vectors. That is, if $F(x,y)=f(x,y)-\mu_f$ and $T(x,y)=t(x,y)-\mu_t$ then

is a $3\times 2$ matrix whose $(i,j)$-th entry is $\operatorname{E}[X_iY_j]$. If $\mathbf{Z}=(Z_1,\ldots,Z_m)$ and $\mathbf{W}=(W_1,\ldots,W_n)$ are complex random vectors, each containing random variables whose expected value and variance exist,
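As an illustrative sketch (not from the article), the entries $\operatorname{E}[X_iY_j]$ can be estimated from paired samples by averaging outer products; the data-generating model here is our own assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = 100_000
X = rng.normal(size=(samples, 3))                   # realizations of X = (X1, X2, X3)
Y = X[:, :2] + 0.1 * rng.normal(size=(samples, 2))  # Y = (Y1, Y2), correlated with X

# Sample estimate of R_XY = E[X Y^T]; an m-by-n (here 3-by-2) matrix.
R_XY = X.T @ Y / samples
print(R_XY.shape)  # (3, 2); entry (i, j) approximates E[X_i Y_j]
```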

is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology. The cross-correlation

is a unit impulse, the result of this process is simply $g(t)$. Formally: $\int_{-\infty}^{\infty}\delta(\tau)\,g(t-\tau)\,d\tau = g(t)$. One of the earliest uses of the convolution integral appeared in d'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754. Also, an expression of the type: is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which
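A quick numerical check of the impulse property, under the assumption that a discrete unit impulse stands in for $\delta(t)$:

```python
import numpy as np

g = np.array([0.2, 0.5, 0.3, -0.1])
delta = np.array([1.0])                        # discrete unit impulse
assert np.allclose(np.convolve(delta, g), g)   # convolving with delta returns g
```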

is a consequence of the implication that for $f$ to be invertible it must be bijective. The involutory nature of the inverse can be concisely expressed by $\left(f^{-1}\right)^{-1}=f$. The inverse of a composition of functions is given by $(g\circ f)^{-1}=f^{-1}\circ g^{-1}$. Notice that the order of $g$ and $f$ has been reversed; to undo $f$ followed by $g$, we must first undo $g$, and then undo $f$. For example, let $f(x)=3x$ and let $g(x)=x+5$. Then

is a scalar random variable which is realized repeatedly in a time series, then the correlations of the various temporal instances of $\mathbf{X}$ are known as autocorrelations of $\mathbf{X}$, and the cross-correlations of $\mathbf{X}$ with $\mathbf{Y}$ across time are temporal cross-correlations. In probability and statistics,

is a vector of kernel functions $k(\cdot,\cdot)\colon\mathbb{C}^M\times\mathbb{C}^M\to\mathbb{R}$ and $T_i(\cdot)\colon\mathbb{C}^M\to\mathbb{C}^M$

is also compactly supported and continuous (Hörmander 1983, Chapter 1). More generally, if either function (say $f$) is compactly supported and the other is locally integrable, then the convolution $f*g$ is well-defined and continuous. Convolution of $f$ and $g$ is also well defined when both functions are locally square integrable on $\mathbb{R}$ and supported on an interval of the form $[a,+\infty)$ (or both supported on $(-\infty,$


is an affine transform. Specifically, $T_i(\cdot)$ can be a circular translation transform, a rotation transform, or a scale transform, etc. The kernel cross-correlation extends cross-correlation from linear space to kernel space. Cross-correlation is equivariant to translation; kernel cross-correlation is equivariant to any affine transform, including translation, rotation, and scaling. As an example, consider two real-valued functions $f$ and $g$ differing only by an unknown shift along

is called the (positive) square root function and is denoted by $x\mapsto\sqrt{x}$. The following table shows several standard functions and their inverses: Many functions given by algebraic formulas possess a formula for their inverse. This is because the inverse $f^{-1}$ of an invertible function $f\colon\mathbb{R}\to\mathbb{R}$ has an explicit description as This allows one to easily determine inverses of many functions that are given by algebraic formulas. For example, if $f$

is either strictly increasing or decreasing (with no local maxima or minima). For example, the function $f(x)=x^3+x$ is invertible, since the derivative $f'(x)=3x^2+1$ is always positive. If the function $f$ is differentiable on an interval $I$ and $f'(x)\ne 0$ for each $x\in I$, then the inverse $f^{-1}$ is differentiable on $f(I)$. If $y=f(x)$, the derivative of

is equal to $g(-\tau)$ that slides or is shifted toward the left (toward $-\infty$) by the amount of $|t|$. For functions $f$, $g$ supported on only $[0,\infty)$ (i.e., zero for negative arguments),

is equal to $\operatorname{id}_X$. Such a function is called an involution. If $f$ is invertible, then the graph of the function $y=f^{-1}(x)$ is the same as the graph of the equation $x=f(y)$. This is identical to the equation $y=f(x)$ that defines the graph of $f$, except that the roles of $x$ and $y$ have been reversed. Thus the graph of $f^{-1}$ can be obtained from the graph of $f$ by switching the positions of

is equivalent to $(f*g)(t-t_0)$, but $f(t-t_0)*g(t-t_0)$ is in fact equivalent to $(f*g)(t-2t_0)$. Given two functions $f(t)$ and $g(t)$ with bilateral Laplace transforms (two-sided Laplace transforms) $F(s)$ and $G(s)$ respectively,

is equivalent to $$(f\star g)(\tau)\ \triangleq \int_{t_0}^{t_0+T}\overline{f(t-\tau)}\,g(t)\,dt$$ Similarly, for discrete functions,

is equivalent to $$(f\star g)(\tau)\ \triangleq \int_{-\infty}^{\infty}\overline{f(t-\tau)}\,g(t)\,dt$$ where $\overline{f(t)}$ denotes

is equivalent to: $$(f\star g)[n]\ \triangleq \sum_{m=0}^{N-1}\overline{f[(m-n)_{\text{mod}~N}]}\,g[m]$$ For finite discrete functions $f\in\mathbb{C}^N$, $g\in\mathbb{C}^M$,

is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations. For jointly wide-sense stationary stochastic processes, the cross-correlation function has the following symmetry property: $$\operatorname{R}_{XY}(t_1,t_2)=\overline{\operatorname{R}_{YX}(t_2,t_1)}$$ Respectively for jointly WSS processes: $$\operatorname{R}_{XY}(\tau)=\overline{\operatorname{R}_{YX}(-\tau)}$$ Cross-correlations are useful for determining


is invertible, then there is exactly one function $g$ satisfying this property. The function $g$ is called the inverse of $f$, and is usually denoted as $f^{-1}$, a notation introduced by John Frederick William Herschel in 1813. The function $f$ is invertible if and only if it is bijective. This is because the condition $g(f(x))=x$ for all $x\in X$ implies that $f$

is itself a complex-valued function on $\mathbb{R}^d$, defined by: and is well-defined only if $f$ and $g$ decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in $g$ at infinity can be easily offset by sufficiently rapid decay in $f$. The question of existence thus may involve different conditions on $f$ and $g$: If $f$ and $g$ are compactly supported continuous functions, then their convolution exists, and

is known as a circular or cyclic convolution of $f$ and $g$. And if the periodic summation above is replaced by $f_T$, the operation is called a periodic convolution of $f_T$ and $g_T$. For complex-valued functions $f$ and $g$ defined on

is known as a circular convolution of $f$ and $g$. When the non-zero durations of both $f$ and $g$ are limited to the interval $[0,N-1]$, $f*g_N$ reduces to these common forms: The notation $f*_N g$ for cyclic convolution denotes convolution over

is not well-defined for all time series or processes, because the mean or variance may not exist. Let $(X_t,Y_t)$ represent a pair of stochastic processes that are jointly wide-sense stationary. Then the cross-covariance function and the cross-correlation function are given as follows.

$$\operatorname{R}_{XY}(\tau)\triangleq\ \operatorname{E}\left[X_t\overline{Y_{t+\tau}}\right] \quad\text{or equivalently}\quad \operatorname{R}_{XY}(\tau)=\operatorname{E}\left[X_{t-\tau}\overline{Y_t}\right]$$

$$\operatorname{K}_{XY}(\tau)\triangleq\ \operatorname{E}\left[\left(X_t-\mu_X\right)\overline{\left(Y_{t+\tau}-\mu_Y\right)}\right] \quad\text{or equivalently}\quad \operatorname{K}_{XY}(\tau)=\operatorname{E}\left[\left(X_{t-\tau}-\mu_X\right)\overline{\left(Y_t-\mu_Y\right)}\right]$$

where $\mu_X$ and $\sigma_X$ are

is reflected about the y-axis in convolution; thus it is a cross-correlation of $g(-x)$ and $f(x)$, or of $f(-x)$ and $g(x)$. For complex-valued functions,
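For real sequences this reflection relation is easy to verify numerically; a minimal sketch (np.correlate computes the sliding dot product used here):

```python
import numpy as np

f = np.array([1.0, 2.0, 0.5])
g = np.array([0.0, 1.0, 3.0])

# Cross-correlation of real f and g equals convolution with f reflected:
xcorr = np.correlate(g, f, mode="full")   # (f ⋆ g)[n] = Σ_m f[m] g[m+n]
conv_flipped = np.convolve(f[::-1], g)    # (f(-·) * g)[n]
assert np.allclose(xcorr, conv_flipped)
```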

is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy. In probability and statistics, the term cross-correlations refers to the correlations between the entries of two random vectors $\mathbf{X}$ and $\mathbf{Y}$, while

is the $L^2$ norm. The Cauchy–Schwarz inequality then implies that ZNCC has a range of $[-1,1]$. Thus, if $f$ and $t$ are real matrices, their normalized cross-correlation equals the cosine of the angle between the unit vectors $F$ and $T$, being thus $1$ if and only if $F$ equals $T$ multiplied by

is the expected value operator. Note that this expression may not be defined. Subtracting the mean before multiplication yields the cross-covariance between times $t_1$ and $t_2$: $$\operatorname{K}_{XY}(t_1,t_2)\triangleq\ \operatorname{E}\left[\left(X_{t_1}-\mu_X(t_1)\right)\overline{\left(Y_{t_2}-\mu_Y(t_2)\right)}\right]$$ Note that this expression

is the bilateral Laplace transform of $(f*g)(t)$. A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform). The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for


is the function then $f$ is a bijection, and therefore possesses an inverse function $f^{-1}$. The formula for this inverse has an expression as an infinite sum: Since a function is a special type of binary relation, many of the properties of an inverse function correspond to properties of converse relations. If an inverse function exists for a given function $f$, then it is unique. This follows since

is the function $f(x)=(2x+8)^3$, then to determine $f^{-1}(y)$ for a real number $y$, one must find the unique real number $x$ such that $(2x+8)^3=y$. This equation can be solved: $x=\frac{\sqrt[3]{y}-8}{2}$. Thus the inverse function $f^{-1}$ is given by the formula $f^{-1}(y)=\frac{\sqrt[3]{y}-8}{2}$. Sometimes, the inverse of a function cannot be expressed by a closed-form formula. For example, if $f$

is the last of 3 volumes of the encyclopedic series: Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800. Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral. Yet it appears as early as 1903, though

is the number of pixels in $t(x,y)$ and $f(x,y)$, $\mu_f$ is the average of $f$, and $\sigma_f$

is the set $Y$. Then $f$ is invertible if there exists a function $g$ from $Y$ to $X$ such that $g(f(x))=x$ for all $x\in X$ and $f(g(y))=y$ for all $y\in Y$. If $f$

the $\tau$-axis toward the right (toward $+\infty$) by the amount of $t$, while if $t$ is a negative value, then $g(t-\tau)$

the complex conjugate of $f(t)$, and $\tau$ is called displacement or lag. For highly correlated $f$ and $g$ which have a maximum cross-correlation at a particular $\tau$,

the composition of functions, this statement can be rewritten to the following equations between functions: $f^{-1}\circ f=\operatorname{id}_X$ and $f\circ f^{-1}=\operatorname{id}_Y$, where $\operatorname{id}_X$ is the identity function on the set $X$; that is, the function that leaves its argument unchanged. In category theory, this statement is used as the definition of an inverse morphism. Considering function composition helps to understand the notation $f^{-1}$. Repeatedly composing

the conjugate of $f$ ensures that aligned peaks (or aligned troughs) with imaginary components will contribute positively to the integral. In econometrics, lagged cross-correlation is sometimes referred to as cross-autocorrelation. For random vectors $\mathbf{X}=(X_1,\ldots,X_m)$ and $\mathbf{Y}=(Y_1,\ldots,Y_n)$, each containing random elements whose expected value and variance exist,

the correlations of a random vector $\mathbf{X}$ are the correlations between the entries of $\mathbf{X}$ itself, those forming the correlation matrix of $\mathbf{X}$. If each of $\mathbf{X}$ and $\mathbf{Y}$


the cross-correlation matrix of $\mathbf{X}$ and $\mathbf{Y}$ is defined by $$\operatorname{R}_{\mathbf{X}\mathbf{Y}}\triangleq\ \operatorname{E}\left[\mathbf{X}\mathbf{Y}^{\mathsf{T}}\right]$$ and has dimensions $m\times n$. Written component-wise: $$\operatorname{R}_{\mathbf{X}\mathbf{Y}}={\begin{bmatrix}\operatorname{E}[X_1Y_1]&\operatorname{E}[X_1Y_2]&\cdots&\operatorname{E}[X_1Y_n]\\\operatorname{E}[X_2Y_1]&\operatorname{E}[X_2Y_2]&\cdots&\operatorname{E}[X_2Y_n]\\\vdots&\vdots&\ddots&\vdots\\\operatorname{E}[X_mY_1]&\operatorname{E}[X_mY_2]&\cdots&\operatorname{E}[X_mY_n]\end{bmatrix}}$$ The random vectors $\mathbf{X}$ and $\mathbf{Y}$ need not have

the cyclic group of integers modulo $N$. Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm. In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques (Knuth 1997, §4.3.3.C; von zur Gathen & Gerhard 2003, §8.2). Eq.1 requires $N$ arithmetic operations per output value and $N^2$ operations for $N$ outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce

the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at DTFT § Properties.) A discrete convolution can be defined for functions on the set of integers. Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing. Computing

the inverse of the convolution operation is known as deconvolution. The convolution of $f$ and $g$ is written $f*g$, denoting the operator with the symbol $*$. It is defined as the integral of the product of

the inverse function of a function $f$ (also called the inverse of $f$) is a function that undoes the operation of $f$. The inverse of $f$ exists if and only if $f$ is bijective, and if it exists, is denoted by $f^{-1}$. For a function $f\colon X\to Y$, its inverse $f^{-1}\colon Y\to X$ admits an explicit description: it sends each element $y\in Y$ to

the multiplicative inverse of $f(x)$ and has nothing to do with the inverse function of $f$. The notation $f^{\langle -1\rangle}$ might be used for the inverse function to avoid ambiguity with the multiplicative inverse. In keeping with the general notation, some English authors use expressions like $\sin^{-1}(x)$ to denote

the $x$ and $y$ axes. This is equivalent to reflecting the graph across the line $y=x$. By the inverse function theorem, a continuous function of a single variable $f\colon A\to\mathbb{R}$ (where $A\subseteq\mathbb{R}$) is invertible on its range (image) if and only if it

the (circular) cross-correlation is defined as: $$(f\star g)[n]\ \triangleq \sum_{m=0}^{N-1}\overline{f[m]}\,g[(m+n)_{\text{mod}~N}]$$ which
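A sketch of this circular cross-correlation for equal-length sequences, checked two ways; the FFT identity used (inverse FFT of $\overline{F}\cdot G$) is standard, but the function name is ours.

```python
import numpy as np

def circular_xcorr(f, g):
    """(f ⋆ g)[n] = Σ_m conj(f[m]) g[(m+n) mod N], computed two ways."""
    f, g = np.asarray(f), np.asarray(g)
    N = len(f)
    direct = np.array([sum(np.conj(f[m]) * g[(m + n) % N] for m in range(N))
                       for n in range(N)])
    via_fft = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g))
    assert np.allclose(direct, via_fft)  # DFT of the correlation is conj(F)·G
    return direct

print(circular_xcorr([1, 2, 3, 4], [4, 3, 2, 1]).real)
```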

the above sum is equal to $\left\langle\frac{F}{\|F\|},\frac{T}{\|T\|}\right\rangle$ where $\langle\cdot,\cdot\rangle$ is the inner product and $\|\cdot\|$

the ambiguity of the $f^{-1}$ notation should be avoided. The function $f\colon\mathbb{R}\to[0,\infty)$ given by $f(x)=x^2$ is not injective because $(-x)^2=x^2$ for all $x\in\mathbb{R}$. Therefore, $f$ is not invertible. If the domain of


the area under the function $f(\tau)$ weighted by the function $g(-\tau)$ shifted by the amount $t$. As $t$ changes, the weighting function $g(t-\tau)$ emphasizes different parts of

the brightness of the image and template can vary due to lighting and exposure conditions, the images can be first normalized. This is typically done at every step by subtracting the mean and dividing by the standard deviation. That is, the cross-correlation of a template $t(x,y)$ with a subimage $f(x,y)$

the composition $g\circ f$ is the function that first multiplies by three and then adds five, $(g\circ f)(x)=3x+5$. To reverse this process, we must first subtract five, and then divide by three, $(g\circ f)^{-1}(x)=\frac{x-5}{3}$. This is the composition $(f^{-1}\circ g^{-1})(x)$. If $X$ is a set, then the identity function on $X$ is its own inverse: $\operatorname{id}_X^{-1}=\operatorname{id}_X$. More generally, a function $f\colon X\to X$ is equal to its own inverse, if and only if the composition $f\circ f$

the convolution is also periodic and identical to : The summation on $k$ is called a periodic summation of the function $f$. If $g_N$ is a periodic summation of another function, $g$, then $f*g_N$

the convolution is also periodic and identical to: where $t_0$ is an arbitrary choice. The summation is called a periodic summation of the function $f$. When $g_T$ is a periodic summation of another function, $g$, then $f*g_T$

the convolution operation $(f*g)(t)$ can be defined as the inverse Laplace transform of the product of $F(s)$ and $G(s)$. More precisely, with the substitution $t=u+v$, one finds that $F(s)\cdot G(s)$

the cost of the convolution to $O(N\log N)$ complexity. The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of
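A minimal sketch of that recipe (zero-extend to the full output length, FFT, multiply pointwise, inverse FFT, discard padding); the power-of-two padding is our choice for FFT efficiency.

```python
import numpy as np

def fft_convolve(f, g):
    """Linear convolution via the circular convolution theorem."""
    n = len(f) + len(g) - 1          # full linear-convolution length
    N = 1 << (n - 1).bit_length()    # next power of two, for FFT efficiency
    spectrum = np.fft.fft(f, N) * np.fft.fft(g, N)   # pointwise product
    return np.fft.ifft(spectrum)[:n].real            # discard the padding

f, g = np.random.rand(1000), np.random.rand(300)
assert np.allclose(fft_convolve(f, g), np.convolve(f, g))
```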

the cross-correlation function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "cross-correlation" and "cross-covariance" are used interchangeably. The definition of the normalized cross-correlation of a stochastic process is $$\rho_{XX}(t_1,t_2)={\frac{\operatorname{K}_{XX}(t_1,t_2)}{\sigma_X(t_1)\sigma_X(t_2)}}={\frac{\operatorname{E}\left[\left(X_{t_1}-\mu_{t_1}\right)\overline{\left(X_{t_2}-\mu_{t_2}\right)}\right]}{\sigma_X(t_1)\sigma_X(t_2)}}$$ If

the cross-correlation is defined as: $$(f\star g)(\tau)\ \triangleq \int_{-\infty}^{\infty}\overline{f(t)}\,g(t+\tau)\,dt$$ which

the cross-correlation is defined as: $$(f\star g)[n]\ \triangleq \sum_{m=-\infty}^{\infty}\overline{f[m]}\,g[m+n]$$ which is equivalent to: $$(f\star g)[n]\ \triangleq \sum_{m=-\infty}^{\infty}\overline{f[m-n]}\,g[m]$$ For finite discrete functions $f,g\in\mathbb{C}^N$,

the cross-correlation matrix of $\mathbf{Z}$ and $\mathbf{W}$ is defined by $$\operatorname{R}_{\mathbf{Z}\mathbf{W}}\triangleq\ \operatorname{E}\left[\mathbf{Z}\mathbf{W}^{\rm{H}}\right]$$ where ${}^{\rm{H}}$ denotes Hermitian transposition. In time series analysis and statistics,

the cross-correlation of $f(t)$ and $g(-t)$) gives the probability density function of the sum $X+Y$. For continuous functions $f$ and $g$,

the cross-correlation of a pair of random processes is the correlation between values of the processes at different times, as a function of the two times. Let $(X_t,Y_t)$ be a pair of random processes, and $t$ be any point in time ($t$ may be an integer for

the cross-correlation operator is the adjoint of the convolution operator. Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations. The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures). For example, periodic functions, such as

the cross-covariance and cross-correlation are independent of $t$ is precisely the additional information (beyond being individually wide-sense stationary) conveyed by the requirement that $(X_t,Y_t)$ are jointly wide-sense stationary. The cross-correlation of a pair of jointly wide-sense stationary stochastic processes can be estimated by averaging

the definition is rather unfamiliar in older uses. The operation: is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913. When a function $g_T$ is periodic, with period $T$, then for functions $f$ such that $f*g_T$ exists,

the definition of correlation always includes a standardising factor in such a way that correlations have values between −1 and +1. If $X$ and $Y$ are two independent random variables with probability density functions $f$ and $g$, respectively, then

the definition of the cross-correlation between times $t_1$ and $t_2$ is $$\operatorname{R}_{XY}(t_1,t_2)\triangleq\ \operatorname{E}\left[X_{t_1}\overline{Y_{t_2}}\right]$$ where $\operatorname{E}$

the function $\rho_{XX}$ is well-defined, its value must lie in the range $[-1,1]$, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation. For jointly wide-sense stationary stochastic processes, the definition is $$\rho_{XY}(\tau)={\frac{\operatorname{K}_{XY}(\tau)}{\sigma_X\sigma_Y}}={\frac{\operatorname{E}\left[\left(X_t-\mu_X\right)\overline{\left(Y_{t+\tau}-\mu_Y\right)}\right]}{\sigma_X\sigma_Y}}$$ The normalization

the function becomes one-to-one if we restrict to the domain $x\ge 0$, in which case $f^{-1}(y)=\sqrt{y}$. (If we instead restrict to the domain $x\le 0$, then the inverse is the negative of the square root of $y$.) Alternatively, there is no need to restrict the domain if we are content with the inverse being a multivalued function: $f^{-1}(y)=\pm\sqrt{y}$. Sometimes, this multivalued inverse is called the full inverse of $f$, and

the function is restricted to the nonnegative reals, that is, we take the function $f\colon[0,\infty)\to[0,\infty);\ x\mapsto x^2$ with the same rule as before, then the function is bijective and so, invertible. The inverse function here

the input function $f(\tau)$. If $t$ is a positive value, then $g(t-\tau)$ is equal to $g(-\tau)$ that slides or is shifted along

the integration from $-\infty$ to $\infty$ is replaced by integration over any interval $[t_0,t_0+T]$ of length $T$: $$(f\star g)(\tau)\ \triangleq \int_{t_0}^{t_0+T}\overline{f(t)}\,g(t+\tau)\,dt$$ which

the integration limits can be truncated, resulting in: For the multi-dimensional formulation of convolution, see domain of definition (below). A common engineering notational convention is: which has to be interpreted carefully to avoid confusion. For instance, $f(t)*g(t-t_0)$

the inverse function must be the converse relation, which is completely determined by $f$. There is a symmetry between a function and its inverse. Specifically, if $f$ is an invertible function with domain $X$ and codomain $Y$, then its inverse $f^{-1}$ has domain $Y$ and image $X$, and the inverse of $f^{-1}$ is the original function $f$. In symbols, for functions $f\colon X\to Y$ and $f^{-1}\colon Y\to X$, $\left(f^{-1}\right)^{-1}=f$. This statement

the inverse is given by the inverse function theorem, $\left(f^{-1}\right)'(y)=\frac{1}{f'(x)}$. Using Leibniz's notation the formula above can be written as $\frac{dx}{dy}=\frac{1}{dy/dx}$. This result follows from the chain rule (see the article on inverse functions and differentiation). The inverse function theorem can be generalized to functions of several variables. Specifically, a continuously differentiable multivariable function $f\colon\mathbb{R}^n\to\mathbb{R}^n$ is invertible in

the inverse of $f$ is the function $f^{-1}\colon\mathbb{R}\to\mathbb{R}$ defined by $f^{-1}(y)={\frac{y+7}{5}}$. Let $f$ be a function whose domain is the set $X$, and whose codomain
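A small numerical check of this worked example (illustrative only; the function names are ours):

```python
def f(x):      # multiply by 5, then subtract 7
    return 5 * x - 7

def f_inv(y):  # undo: add 7, then divide by 5
    return (y + 7) / 5

assert f_inv(f(3.0)) == 3.0    # f(3) = 8, f_inv(8) = 3
assert f(f_inv(13.0)) == 13.0  # f_inv(13) = 4, f(4) = 13
```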

the inverse of the sine function applied to $x$ (actually a partial inverse; see below). Other authors feel that this may be confused with the notation for the multiplicative inverse of $\sin(x)$, which can be denoted as $(\sin(x))^{-1}$. To avoid any confusion, an inverse trigonometric function is often indicated by the prefix "arc" (for Latin arcus). For instance, the inverse of

the kernel cross-correlation is defined as: $$(f\star g)[n]\ \triangleq \sum_{m=0}^{N-1}\overline{f[m]}\,K_g[(m+n)_{\text{mod}~N}]$$ where $K_g=[k(g,T_0(g)),k(g,T_1(g)),\dots,k(g,T_{N-1}(g))]$

the mean and standard deviation of the process $(X_t)$, which are constant over time due to stationarity; and similarly for $(Y_t)$, respectively. $\operatorname{E}[\ ]$ indicates the expected value. That

the most computationally efficient method available. Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method. A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations. The convolution of two complex-valued functions on $\mathbb{R}^d$
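A compact sketch of the overlap–add idea: convolve a long signal with a short FIR filter block by block and sum the overlapping tails. The block size and names are our choices, and np.convolve stands in for the per-block FFT convolution a real implementation would use.

```python
import numpy as np

def overlap_add(x, h, block=256):
    """Convolve long x with short h by summing convolutions of blocks of x."""
    n_out = len(x) + len(h) - 1
    y = np.zeros(n_out)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        # each block's result is len(seg)+len(h)-1 long; tails overlap and add
        y[start:start + len(seg) + len(h) - 1] += np.convolve(seg, h)
    return y

x, h = np.random.rand(10_000), np.random.rand(64)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```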

the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform, use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT. It significantly speeds up 1D, 2D, and 3D convolution. If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not

the partial inverse: $\sin^{-1}(x)=\{(-1)^n\arcsin(x)+\pi n:n\in\mathbb{Z}\}$. Other inverse special functions are sometimes prefixed with "inv", if

the portions (such as $\sqrt{x}$ and $-\sqrt{x}$) are called branches. The most important branch of a multivalued function (e.g. the positive square root) is called the principal branch, and its value at $y$ is called the principal value of $f^{-1}(y)$. For a continuous function on the real line, one branch is required between each pair of local extrema. For example, the inverse of

the probability density of the difference $Y-X$ is formally given by the cross-correlation (in the signal-processing sense) $f\star g$; however, this terminology is not used in probability and statistics. In contrast, the convolution $f*g$ (equivalent to

the product of samples measured from one process and samples measured from the other (and its time shifts). The samples included in the average can be an arbitrary subset of all the samples in the signal (e.g., samples within a finite time window or a sub-sampling of one of the signals). For a large number of samples, the average converges to the true cross-correlation. It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize
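An illustrative estimator along these lines for a single lag $\tau$, averaging products of paired samples; the function name and the shifted-noise demo are our own assumptions.

```python
import numpy as np

def xcorr_at_lag(x, y, tau):
    """Estimate R_XY(tau) = E[X_t conj(Y_{t+tau})] by a sample average."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x) - abs(tau)
    if tau >= 0:
        return np.mean(x[:n] * np.conj(y[tau:tau + n]))
    return np.mean(x[-tau:-tau + n] * np.conj(y[:n]))

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)   # white noise, unit variance
y = np.roll(x, 5)              # y[t] = x[t-5] (circular shift, for the demo)
print(xcorr_at_lag(x, y, 5))   # ~1.0: y lags x by 5 samples
print(xcorr_at_lag(x, y, 0))   # ~0.0 for white noise at the wrong lag
```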

the properties of the input, cross-correlation between the input and output of a system with nonlinear dynamics can be completely blind to certain nonlinear effects. This problem arises because some quadratic moments can equal zero and this can incorrectly suggest that there is little "correlation" (in the sense of statistical dependence) between two signals, when in fact the two signals are strongly related by nonlinear dynamics.

Inverse function

In mathematics,

the same dimension, and either might be a scalar value. Here $\operatorname{E}$ is the expectation operator. For example, if $\mathbf{X}=\left(X_1,X_2,X_3\right)$ and $\mathbf{Y}=\left(Y_1,Y_2\right)$ are random vectors, then $\operatorname{R}_{\mathbf{X}\mathbf{Y}}$

the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences. Thus when $g$ has finite support in the set $\{-M,-M+1,\dots,M-1,M\}$ (representing, for instance,
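The coefficient view can be checked directly with a small illustrative demo:

```python
import numpy as np

# (1 + 2x + 3x^2) * (5 + 7x) = 5 + 17x + 29x^2 + 21x^3
p = [1, 2, 3]             # coefficients of 1 + 2x + 3x^2, lowest degree first
q = [5, 7]                # coefficients of 5 + 7x
print(np.convolve(p, q))  # [ 5 17 29 21 ]  -- the Cauchy product
```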

the set $\mathbb{Z}$ of integers, the discrete convolution of $f$ and $g$ is given by: $$(f*g)[n]=\sum_{m=-\infty}^{\infty}f[m]\,g[n-m]$$ or equivalently (see commutativity) by: $$(f*g)[n]=\sum_{m=-\infty}^{\infty}f[n-m]\,g[m]$$ The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When

the sine function is typically called the arcsine function, written as $\arcsin(x)$. Similarly, the inverse of a hyperbolic function is indicated by the prefix "ar" (for Latin ārea). For instance, the inverse of the hyperbolic sine function is typically written as $\operatorname{arsinh}(x)$. The expressions like $\sin^{-1}(x)$ can still be useful to distinguish the multivalued inverse from

the time delay between two signals, e.g., for determining time delays for the propagation of acoustic signals across a microphone array. After calculating the cross-correlation between the two signals, the maximum (or minimum if the signals are negatively correlated) of the cross-correlation function indicates the point in time where the signals are best aligned; i.e., the time delay between

the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform: $$(f*g)(t)\triangleq\int_{-\infty}^{\infty}f(\tau)\,g(t-\tau)\,d\tau$$ An equivalent definition is (see commutativity): $$(f*g)(t)=\int_{-\infty}^{\infty}f(t-\tau)\,g(\tau)\,d\tau$$ While the symbol $t$ is used above, it need not represent the time domain. At each $t$, the convolution formula can be described as

the two signals is determined by the argument of the maximum, or arg max of the cross-correlation, as in $$\tau_{\mathrm{delay}}={\underset{t\in\mathbb{R}}{\operatorname{arg\,max}}}((f\star g)(t))$$

Terminology in image processing

For image-processing applications in which
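A sketch of this delay estimator for sampled signals; the arg max is taken over the discrete lags of the full cross-correlation (illustrative code, not from the article).

```python
import numpy as np

def estimate_delay(f, g):
    """Lag (in samples) at which g best aligns with f, via arg max of cross-correlation."""
    xcorr = np.correlate(g, f, mode="full")   # (f ⋆ g) over all lags
    lags = np.arange(-(len(f) - 1), len(g))   # lag axis for the 'full' output
    return lags[np.argmax(xcorr)]

rng = np.random.default_rng(2)
f = rng.normal(size=500)
g = np.concatenate([np.zeros(25), f])         # g is f delayed by 25 samples
print(estimate_delay(f, g))                   # 25
```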

the unique element $x\in X$ such that $f(x)=y$. As an example, consider the real-valued function of a real variable given by $f(x)=5x-7$. One can think of $f$ as the function which multiplies its input by 5 then subtracts 7 from the result. To undo this, one adds 7 to the input, then divides the result by 5. Therefore,

the value of $(f\star g)$ is maximized. This is because when peaks (positive areas) are aligned, they make a large contribution to the integral. Similarly, when troughs (negative areas) align, they also make a positive contribution to the integral because the product of two negative numbers is positive. With complex-valued functions $f$ and $g$, taking

the x-axis. One can use the cross-correlation to find how much $g$ must be shifted along the x-axis to make it identical to $f$. The formula essentially slides the $g$ function along the x-axis, calculating the integral of their product at each position. When the functions match,
