The Cauchy–Schwarz inequality (also called the Cauchy–Bunyakovsky–Schwarz inequality) is an upper bound on the absolute value of the inner product of two vectors in an inner product space in terms of the product of their norms. It is considered one of the most important and widely used inequalities in mathematics.
Inner products of vectors can describe finite sums (via finite-dimensional vector spaces), infinite series (via vectors in sequence spaces), and integrals (via vectors in Hilbert spaces). The inequality for sums was published by Augustin-Louis Cauchy (1821). The corresponding inequality for integrals was published by Viktor Bunyakovsky (1859) and Hermann Schwarz (1888). Schwarz gave
{\displaystyle \varphi \left(a^{*}a\right)\geq \varphi (a)\varphi \left(a^{*}\right).} This extends the fact φ(a∗a) ⋅ 1 ≥ φ(a)∗φ(
{\displaystyle \left|\varphi \left(b^{*}a\right)\right|^{2}\leq \varphi \left(b^{*}b\right)\varphi \left(a^{*}a\right).} The next two theorems are further examples in operator algebra. Kadison–Schwarz inequality (named after Richard Kadison) — If φ is
{\displaystyle a_{1},a_{2},\dots ,a_{k},} not all zero, such that {\displaystyle a_{1}\mathbf {v} _{1}+a_{2}\mathbf {v} _{2}+\cdots +a_{k}\mathbf {v} _{k}=\mathbf {0} ,} where {\displaystyle \mathbf {0} } denotes the zero vector. This implies that at least one of the scalars is nonzero, say {\displaystyle a_{1}\neq 0} , and
{\displaystyle a_{i}\neq 0} ), this proves that the vectors {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} are linearly dependent. As a consequence, the zero vector cannot belong to any collection of vectors that is linearly independent. Now consider
{\displaystyle a_{i}=0,} which means that the vectors {\displaystyle \mathbf {v} _{1}=(1,1)} and {\displaystyle \mathbf {v} _{2}=(-3,2)} are linearly independent. In order to determine whether the three vectors in {\displaystyle \mathbb {R} ^{4}} are linearly dependent, form
{\displaystyle a_{j}:=0} so that consequently {\displaystyle a_{j}\mathbf {v} _{j}=0\mathbf {v} _{j}=\mathbf {0} } ). Simplifying {\displaystyle a_{1}\mathbf {v} _{1}+\cdots +a_{k}\mathbf {v} _{k}} gives: Because not all scalars are zero (in particular,
{\displaystyle {\begin{aligned}\varphi (a)^{*}\varphi (a)&\leq \Vert \varphi (1)\Vert \varphi \left(a^{*}a\right),{\text{ and }}\\[5mu]\Vert \varphi \left(a^{*}b\right)\Vert ^{2}&\leq \Vert \varphi \left(a^{*}a\right)\Vert \cdot \Vert \varphi \left(b^{*}b\right)\Vert .\end{aligned}}} Finite sum In mathematics, summation
{\displaystyle \varphi \left(a^{*}a\right)\cdot 1\geq \varphi (a)^{*}\varphi (a)=|\varphi (a)|^{2},} when φ is a linear functional. The case when a is self-adjoint, that is, {\displaystyle a=a^{*},}
a Bernoulli number, and {\displaystyle {\binom {p}{k}}} is a binomial coefficient. In the following summations, a is assumed to be different from 1. There are a great many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are
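As a sanity check on formulas of this kind, a short script can compare a closed form against a brute-force sum for small n. The following sketch (plain Python, assuming version 3.8+ for math.comb; the ranges of n are arbitrary) verifies the sum-of-squares closed form and the basic identity that the binomial coefficients C(n, k) sum to 2^n.

```python
import math

# Brute-force checks of two elementary summation identities (illustrative only).

def sum_of_squares_closed_form(n: int) -> int:
    """Closed form for 1^2 + 2^2 + ... + n^2."""
    return n * (n + 1) * (2 * n + 1) // 6

for n in range(1, 20):
    assert sum(k * k for k in range(1, n + 1)) == sum_of_squares_closed_form(n)

# One of the basic binomial-coefficient identities: sum over k of C(n, k) equals 2^n.
for n in range(0, 15):
    assert sum(math.comb(n, k) for k in range(n + 1)) == 2 ** n

print("summation identities verified for small n")
```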
a finite set of vectors: A finite set of vectors is linearly independent if the sequence obtained by ordering them is linearly independent. In other words, one has the following result that is often useful. A sequence of vectors is linearly independent if and only if it does not contain the same vector twice and the set of its vectors is linearly independent. An infinite set of vectors is linearly independent if every nonempty finite subset
a relation of linear dependence between u and v. The converse was proved at the beginning of this section, so the proof is complete. ◼ Consider an arbitrary pair of vectors u, v. Define
a sequence of vectors is linearly independent if and only if {\displaystyle \mathbf {0} } can be represented as a linear combination of its vectors in a unique way. If a sequence of vectors contains the same vector twice, it is necessarily dependent. The linear dependence of a sequence of vectors does not depend on the order of the terms in the sequence. This allows defining linear independence for
1512-420: A subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space. A sequence of vectors v 1 , v 2 , … , v k {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\dots ,\mathbf {v} _{k}} from a vector space V is said to be linearly dependent , if there exist scalars
a unital positive map, then for every normal element a in its domain, we have {\displaystyle \varphi (a^{*}a)\geq \varphi \left(a^{*}\right)\varphi (a)} and φ(
is {\displaystyle \mathbf {0} } (while the other is non-zero) then exactly one of (1) and (2) is true (with the other being false). The vectors u and v are linearly independent if and only if u
1836-467: Is a common problem to find closed-form expressions for the result. For example, Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article. Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol , ∑ {\textstyle \sum } , an enlarged form of
1944-714: Is a complex number satisfying | α | = 1 {\displaystyle |\alpha |=1} and α ⟨ u , v ⟩ = | ⟨ u , v ⟩ | {\displaystyle \alpha \langle \mathbf {u} ,\mathbf {v} \rangle =|\langle \mathbf {u} ,\mathbf {v} \rangle |} . Such an α {\displaystyle \alpha } exists since if ⟨ u , v ⟩ = 0 {\displaystyle \langle \mathbf {u} ,\mathbf {v} \rangle =0} then α {\displaystyle \alpha } can be taken to be 1. Since
2052-640: Is a direct consequence of the Cauchy–Schwarz inequality, obtained by using the dot product on R n {\displaystyle \mathbb {R} ^{n}} upon substituting u i ′ = u i v i t {\displaystyle u_{i}'={\frac {u_{i}}{\sqrt {v_{i}{\vphantom {t}}}}}} and v i ′ = v i t {\displaystyle v_{i}'={\textstyle {\sqrt {v_{i}{\vphantom {t}}}}}} . This form
2160-2441: Is a generalization of this. In any inner product space , the triangle inequality is a consequence of the Cauchy–Schwarz inequality, as is now shown: ‖ u + v ‖ 2 = ⟨ u + v , u + v ⟩ = ‖ u ‖ 2 + ⟨ u , v ⟩ + ⟨ v , u ⟩ + ‖ v ‖ 2 where ⟨ v , u ⟩ = ⟨ u , v ⟩ ¯ = ‖ u ‖ 2 + 2 Re ⟨ u , v ⟩ + ‖ v ‖ 2 ≤ ‖ u ‖ 2 + 2 | ⟨ u , v ⟩ | + ‖ v ‖ 2 ≤ ‖ u ‖ 2 + 2 ‖ u ‖ ‖ v ‖ + ‖ v ‖ 2 using CS = ( ‖ u ‖ + ‖ v ‖ ) 2 . {\displaystyle {\begin{alignedat}{4}\|\mathbf {u} +\mathbf {v} \|^{2}&=\langle \mathbf {u} +\mathbf {v} ,\mathbf {u} +\mathbf {v} \rangle &&\\&=\|\mathbf {u} \|^{2}+\langle \mathbf {u} ,\mathbf {v} \rangle +\langle \mathbf {v} ,\mathbf {u} \rangle +\|\mathbf {v} \|^{2}~&&~{\text{ where }}\langle \mathbf {v} ,\mathbf {u} \rangle ={\overline {\langle \mathbf {u} ,\mathbf {v} \rangle }}\\&=\|\mathbf {u} \|^{2}+2\operatorname {Re} \langle \mathbf {u} ,\mathbf {v} \rangle +\|\mathbf {v} \|^{2}&&\\&\leq \|\mathbf {u} \|^{2}+2|\langle \mathbf {u} ,\mathbf {v} \rangle |+\|\mathbf {v} \|^{2}&&\\&\leq \|\mathbf {u} \|^{2}+2\|\mathbf {u} \|\|\mathbf {v} \|+\|\mathbf {v} \|^{2}~&&~{\text{ using CS}}\\&={\bigl (}\|\mathbf {u} \|+\|\mathbf {v} \|{\bigr )}^{2}.&&\end{alignedat}}} Taking square roots gives
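As a numerical illustration of the triangle inequality just derived (an added check, not part of the derivation), the following sketch verifies ‖u + v‖ ≤ ‖u‖ + ‖v‖ for randomly chosen real vectors; the dimension and random seed are arbitrary.

```python
import numpy as np

# Check the triangle inequality, which follows from Cauchy-Schwarz as shown above.
rng = np.random.default_rng(3)
u = rng.normal(size=6)
v = rng.normal(size=6)

assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v) + 1e-12
print(np.linalg.norm(u + v), "<=", np.linalg.norm(u) + np.linalg.norm(v))
```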
2268-808: Is a polynomial of degree 2 {\displaystyle 2} (unless u = 0 , {\displaystyle \mathbf {u} =0,} which is a case that was checked earlier). Since the sign of p {\displaystyle p} does not change, the discriminant of this polynomial must be non-positive: Δ = 4 ( | ⟨ u , v ⟩ | 2 − ‖ u ‖ 2 ‖ v ‖ 2 ) ≤ 0. {\displaystyle \Delta =4{\bigl (}\,|\langle \mathbf {u} ,\mathbf {v} \rangle |^{2}-\Vert \mathbf {u} \Vert ^{2}\Vert \mathbf {v} \Vert ^{2}{\bigr )}\leq 0.} The conclusion follows. For
is a positive linear functional on a C*-algebra A, then for all {\displaystyle a,b\in A,} |φ(b∗a)|² ≤ φ(b∗b)φ(
2484-991: Is a scalar multiple of the other. If u = c v {\displaystyle \mathbf {u} =c\mathbf {v} } where c {\displaystyle c} is some scalar then | ⟨ u , v ⟩ | = | ⟨ c v , v ⟩ | = | c ⟨ v , v ⟩ | = | c | ‖ v ‖ ‖ v ‖ = ‖ c v ‖ ‖ v ‖ = ‖ u ‖ ‖ v ‖ {\displaystyle |\langle \mathbf {u} ,\mathbf {v} \rangle |=|\langle c\mathbf {v} ,\mathbf {v} \rangle |=|c\langle \mathbf {v} ,\mathbf {v} \rangle |=|c|\|\mathbf {v} \|\|\mathbf {v} \|=\|c\mathbf {v} \|\|\mathbf {v} \|=\|\mathbf {u} \|\|\mathbf {v} \|} which shows that equality holds in
2592-2431: Is a vector orthogonal to the vector v {\displaystyle \mathbf {v} } (Indeed, z {\displaystyle \mathbf {z} } is the projection of u {\displaystyle \mathbf {u} } onto the plane orthogonal to v . {\displaystyle \mathbf {v} .} ) We can thus apply the Pythagorean theorem to u = ⟨ u , v ⟩ ⟨ v , v ⟩ v + z {\displaystyle \mathbf {u} ={\frac {\langle \mathbf {u} ,\mathbf {v} \rangle }{\langle \mathbf {v} ,\mathbf {v} \rangle }}\mathbf {v} +\mathbf {z} } which gives ‖ u ‖ 2 = | ⟨ u , v ⟩ ⟨ v , v ⟩ | 2 ‖ v ‖ 2 + ‖ z ‖ 2 = | ⟨ u , v ⟩ | 2 ( ‖ v ‖ 2 ) 2 ‖ v ‖ 2 + ‖ z ‖ 2 = | ⟨ u , v ⟩ | 2 ‖ v ‖ 2 + ‖ z ‖ 2 ≥ | ⟨ u , v ⟩ | 2 ‖ v ‖ 2 . {\displaystyle \|\mathbf {u} \|^{2}=\left|{\frac {\langle \mathbf {u} ,\mathbf {v} \rangle }{\langle \mathbf {v} ,\mathbf {v} \rangle }}\right|^{2}\|\mathbf {v} \|^{2}+\|\mathbf {z} \|^{2}={\frac {|\langle \mathbf {u} ,\mathbf {v} \rangle |^{2}}{(\|\mathbf {v} \|^{2})^{2}}}\,\|\mathbf {v} \|^{2}+\|\mathbf {z} \|^{2}={\frac {|\langle \mathbf {u} ,\mathbf {v} \rangle |^{2}}{\|\mathbf {v} \|^{2}}}+\|\mathbf {z} \|^{2}\geq {\frac {|\langle \mathbf {u} ,\mathbf {v} \rangle |^{2}}{\|\mathbf {v} \|^{2}}}.} The Cauchy–Schwarz inequality follows by multiplying by ‖ v ‖ 2 {\displaystyle \|\mathbf {v} \|^{2}} and then taking
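The projection argument above lends itself to a direct numerical illustration. The sketch below, for arbitrary random real vectors, forms z = u − (⟨u, v⟩/⟨v, v⟩)v, confirms that it is orthogonal to v, and checks the resulting inequality; it is an illustration under these assumptions, not a proof.

```python
import numpy as np

# Numerical illustration of the projection / Pythagorean argument above.
rng = np.random.default_rng(0)
u = rng.normal(size=5)
v = rng.normal(size=5)

coeff = np.dot(u, v) / np.dot(v, v)
z = u - coeff * v                      # component of u orthogonal to v

assert abs(np.dot(z, v)) < 1e-12       # z is orthogonal to v
lhs = np.dot(u, v) ** 2
rhs = np.dot(u, u) * np.dot(v, v)
assert lhs <= rhs + 1e-12              # Cauchy-Schwarz
print(lhs, "<=", rhs)
```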
is always a non-negative real number (even if the inner product is complex-valued). By taking the square root of both sides of the above inequality, the Cauchy–Schwarz inequality can be written in its more familiar form in terms of the norm: Moreover, the two sides are equal if and only if u and v are linearly dependent. Sedrakyan's inequality, also known as Bergström's inequality, Engel's form, Titu's lemma (or
2808-467: Is an n × m matrix and Λ is a column vector with m {\displaystyle m} entries, and we are again interested in A Λ = 0 . As we saw previously, this is equivalent to a list of n {\displaystyle n} equations. Consider the first m {\displaystyle m} rows of A {\displaystyle A} , the first m {\displaystyle m} equations; any solution of
is an index (i.e. an element of {\displaystyle \{1,\ldots ,k\}} ) such that {\displaystyle \mathbf {v} _{i}=\mathbf {0} .} Then let {\displaystyle a_{i}:=1} (alternatively, letting {\displaystyle a_{i}} be equal to any other non-zero scalar will also work) and then let all other scalars be 0 (explicitly, this means that for any index j other than i (i.e. for {\displaystyle j\neq i} ), let
3024-572: Is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral. The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions , see list of mathematical series . More generally, one has Faulhaber's formula for p > 1 {\displaystyle p>1} where B k {\displaystyle B_{k}} denotes
3132-404: Is defined up to the addition of a constant, and may be chosen as There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case where f ( n ) = n k {\displaystyle f(n)=n^{k}} and, by linearity , for every polynomial function of n . Many such approximations can be obtained by
3240-606: Is especially helpful when the inequality involves fractions where the numerator is a perfect square . The real vector space R 2 {\displaystyle \mathbb {R} ^{2}} denotes the 2-dimensional plane. It is also the 2-dimensional Euclidean space where the inner product is the dot product . If u = ( u 1 , u 2 ) {\displaystyle \mathbf {u} =(u_{1},u_{2})} and v = ( v 1 , v 2 ) {\displaystyle \mathbf {v} =(v_{1},v_{2})} then
3348-493: Is henceforth assumed that v ≠ 0 . {\displaystyle \mathbf {v} \neq \mathbf {0} .} Let z := u − ⟨ u , v ⟩ ⟨ v , v ⟩ v . {\displaystyle \mathbf {z} :=\mathbf {u} -{\frac {\langle \mathbf {u} ,\mathbf {v} \rangle }{\langle \mathbf {v} ,\mathbf {v} \rangle }}\mathbf {v} .} It follows from
3456-413: Is linearly independent. Conversely, an infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent, or equivalently, if some vector in the set is a linear combination of other vectors in the set. An indexed family of vectors is linearly independent if it does not contain the same vector twice, and if the set of its vectors is linearly independent. Otherwise,
is more commonly used for inverting the difference operator Δ, defined by: where f is a function defined on the nonnegative integers. Thus, given such a function f, the problem is to compute the antidifference of f, a function {\displaystyle F=\Delta ^{-1}f} such that {\displaystyle \Delta F=f} . That is, {\displaystyle F(n+1)-F(n)=f(n).} This function
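A concrete way to see the antidifference relation is to build F by partial sums and check ΔF = f pointwise. The sketch below is a minimal illustration of that relation; the choice f(n) = n² and the range are arbitrary.

```python
from itertools import accumulate

# A discrete antidifference for f defined on the nonnegative integers:
# F(0) = 0 and F(n) = f(0) + f(1) + ... + f(n-1), so that F(n+1) - F(n) = f(n).
def antidifference(f, n_max):
    values = [f(n) for n in range(n_max)]
    return [0] + list(accumulate(values))

f = lambda n: n * n
F = antidifference(f, 10)
assert all(F[n + 1] - F[n] == f(n) for n in range(10))
print(F)
```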
3672-676: Is not a scalar multiple of v {\displaystyle \mathbf {v} } and v {\displaystyle \mathbf {v} } is not a scalar multiple of u {\displaystyle \mathbf {u} } . Three vectors: Consider the set of vectors v 1 = ( 1 , 1 ) , {\displaystyle \mathbf {v} _{1}=(1,1),} v 2 = ( − 3 , 2 ) , {\displaystyle \mathbf {v} _{2}=(-3,2),} and v 3 = ( 2 , 4 ) , {\displaystyle \mathbf {v} _{3}=(2,4),} then
3780-399: Is not ignored, it becomes necessary to add a third vector to the linearly independent set. In general, n linearly independent vectors are required to describe all locations in n -dimensional space. If one or more vectors from a given sequence of vectors v 1 , … , v k {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}}
3888-688: Is only possible if c ≠ 0 {\displaystyle c\neq 0} and v ≠ 0 {\displaystyle \mathbf {v} \neq \mathbf {0} } ; in this case, it is possible to multiply both sides by 1 c {\textstyle {\frac {1}{c}}} to conclude v = 1 c u . {\textstyle \mathbf {v} ={\frac {1}{c}}\mathbf {u} .} This shows that if u ≠ 0 {\displaystyle \mathbf {u} \neq \mathbf {0} } and v ≠ 0 {\displaystyle \mathbf {v} \neq \mathbf {0} } then (1)
3996-472: Is read as "sum of a i , from i = m to n ". Here is an example showing the summation of squares: In general, while any variable can be used as the index of summation (provided that no ambiguity is incurred), some of the most common ones include letters such as i {\displaystyle i} , j {\displaystyle j} , k {\displaystyle k} , and n {\displaystyle n} ;
4104-590: Is sensible, by showing that the right-hand side lies in the interval [−1, 1] and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space . It can also be used to define an angle in complex inner-product spaces , by taking the absolute value or the real part of the right-hand side, as is done when extracting a metric from quantum fidelity . Let X {\displaystyle X} and Y {\displaystyle Y} be random variables . Then
is sometimes known as Kadison's inequality. Cauchy–Schwarz inequality (modified Schwarz inequality for 2-positive maps) — For a 2-positive map φ between C*-algebras, for all a, b in its domain, φ(a)∗φ(
4320-452: Is sufficient information to describe the location, because the geographic coordinate system may be considered as a 2-dimensional vector space (ignoring altitude and the curvature of the Earth's surface). The person might add, "The place is 5 miles northeast of here." This last statement is true , but it is not necessary to find the location. In this example the "3 miles north" vector and
is the addition of a sequence of numbers, called addends or summands; the result is their sum or total. Besides numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined. Summations of infinite sequences are called series. They involve
4536-1131: Is the angle between u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } . The form above is perhaps the easiest in which to understand the inequality, since the square of the cosine can be at most 1, which occurs when the vectors are in the same or opposite directions. It can also be restated in terms of the vector coordinates u 1 {\displaystyle u_{1}} , u 2 {\displaystyle u_{2}} , v 1 {\displaystyle v_{1}} , and v 2 {\displaystyle v_{2}} as ( u 1 v 1 + u 2 v 2 ) 2 ≤ ( u 1 2 + u 2 2 ) ( v 1 2 + v 2 2 ) , {\displaystyle \left(u_{1}v_{1}+u_{2}v_{2}\right)^{2}\leq \left(u_{1}^{2}+u_{2}^{2}\right)\left(v_{1}^{2}+v_{2}^{2}\right),} where equality holds if and only if
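A few concrete number pairs make the coordinate form tangible. The short check below evaluates both sides for sample vectors chosen arbitrarily; equality shows up exactly for the proportional pair.

```python
# Coordinate form of the inequality in the plane, checked on a few sample pairs.
samples = [((3.0, 4.0), (1.0, 2.0)),
           ((1.0, 0.0), (0.0, 5.0)),
           ((2.0, 6.0), (1.0, 3.0))]   # (2, 6) is twice (1, 3): equality case

for (u1, u2), (v1, v2) in samples:
    lhs = (u1 * v1 + u2 * v2) ** 2
    rhs = (u1 ** 2 + u2 ** 2) * (v1 ** 2 + v2 ** 2)
    print(lhs, "<=", rhs, "equality:", abs(lhs - rhs) < 1e-12)
```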
4644-735: Is the pointwise complex conjugate of g . {\displaystyle g.} In this language, the Cauchy–Schwarz inequality becomes | φ ( g ∗ f ) | 2 ≤ φ ( f ∗ f ) φ ( g ∗ g ) , {\displaystyle {\bigl |}\varphi (g^{*}f){\bigr |}^{2}\leq \varphi \left(f^{*}f\right)\varphi \left(g^{*}g\right),} which extends verbatim to positive functionals on C*-algebras: Cauchy–Schwarz inequality for positive functionals on C*-algebras — If φ {\displaystyle \varphi }
4752-1170: Is the field of real numbers R {\displaystyle \mathbb {R} } or complex numbers C . {\displaystyle \mathbb {C} .} Then with equality holding in the Cauchy–Schwarz Inequality if and only if u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } are linearly dependent . Moreover, if | ⟨ u , v ⟩ | = ‖ u ‖ ‖ v ‖ {\displaystyle \left|\langle \mathbf {u} ,\mathbf {v} \rangle \right|=\|\mathbf {u} \|\|\mathbf {v} \|} and v ≠ 0 {\displaystyle \mathbf {v} \neq \mathbf {0} } then u = ⟨ u , v ⟩ ‖ v ‖ 2 v . {\displaystyle \mathbf {u} ={\frac {\langle \mathbf {u} ,\mathbf {v} \rangle }{\|\mathbf {v} \|^{2}}}\mathbf {v} .} In both of
4860-462: Is the sum of f ( x ) {\displaystyle f(x)} over all elements x {\displaystyle x} in the set S {\displaystyle S} , and is the sum of μ ( d ) {\displaystyle \mu (d)} over all positive integers d {\displaystyle d} dividing n {\displaystyle n} . There are also ways to generalize
is the zero vector {\displaystyle \mathbf {0} } then the vectors {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} are necessarily linearly dependent (and consequently, they are not linearly independent). To see why, suppose that i
5076-457: Is the zero vector then u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } are necessarily linearly dependent (for example, if u = 0 {\displaystyle \mathbf {u} =\mathbf {0} } then u = c v {\displaystyle \mathbf {u} =c\mathbf {v} } where c = 0 {\displaystyle c=0} ), so
is true if and only if (2) is true; that is, in this particular case either both (1) and (2) are true (and the vectors are linearly dependent) or else both (1) and (2) are false (and the vectors are linearly independent). If {\displaystyle \mathbf {u} =c\mathbf {v} } but instead {\displaystyle \mathbf {u} =\mathbf {0} } then at least one of c and v must be zero. Moreover, if exactly one of u and v
5292-780: Is true in this particular case. Similarly, if v = 0 {\displaystyle \mathbf {v} =\mathbf {0} } then (2) is true because v = 0 u . {\displaystyle \mathbf {v} =0\mathbf {u} .} If u = v {\displaystyle \mathbf {u} =\mathbf {v} } (for instance, if they are both equal to the zero vector 0 {\displaystyle \mathbf {0} } ) then both (1) and (2) are true (by using c := 1 {\displaystyle c:=1} for both). If u = c v {\displaystyle \mathbf {u} =c\mathbf {v} } then u ≠ 0 {\displaystyle \mathbf {u} \neq \mathbf {0} }
5400-449: Is zero. Explicitly, if v 1 {\displaystyle \mathbf {v} _{1}} is any vector then the sequence v 1 {\displaystyle \mathbf {v} _{1}} (which is a sequence of length 1 {\displaystyle 1} ) is linearly dependent if and only if v 1 = 0 {\displaystyle \mathbf {v} _{1}=\mathbf {0} } ; alternatively,
5508-771: The Cauchy–Schwarz Inequality . The case where v = c u {\displaystyle \mathbf {v} =c\mathbf {u} } for some scalar c {\displaystyle c} follows from the previous case: | ⟨ u , v ⟩ | = | ⟨ v , u ⟩ | = ‖ v ‖ ‖ u ‖ . {\displaystyle |\langle \mathbf {u} ,\mathbf {v} \rangle |=|\langle \mathbf {v} ,\mathbf {u} \rangle |=\|\mathbf {v} \|\|\mathbf {u} \|.} In particular, if at least one of u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} }
the "4 miles east" vector are linearly independent. That is to say, the north vector cannot be described in terms of the east vector, and vice versa. The third "5 miles northeast" vector is a linear combination of the other two vectors, and it makes the set of vectors linearly dependent; that is, one of the three vectors is unnecessary to define a specific location on a plane. Also note that if altitude
5724-616: The Cauchy–Schwarz inequality becomes: ⟨ u , v ⟩ 2 = ( ‖ u ‖ ‖ v ‖ cos θ ) 2 ≤ ‖ u ‖ 2 ‖ v ‖ 2 , {\displaystyle \langle \mathbf {u} ,\mathbf {v} \rangle ^{2}={\bigl (}\|\mathbf {u} \|\|\mathbf {v} \|\cos \theta {\bigr )}^{2}\leq \|\mathbf {u} \|^{2}\|\mathbf {v} \|^{2},} where θ {\displaystyle \theta }
the Cauchy–Schwarz inequality exist. Hölder's inequality generalizes it to {\displaystyle L^{p}} norms. More generally, it can be interpreted as a special case of the definition of the norm of a linear operator on a Banach space (namely, when the space is a Hilbert space). Further generalizations arise in the context of operator theory, e.g. for operator-convex functions and operator algebras, where
5940-454: The Cauchy–Schwarz inequality other than those given below. When consulting other sources, there are often two sources of confusion. First, some authors define ⟨⋅,⋅⟩ to be linear in the second argument rather than the first. Second, some proofs are only valid when the field is R {\displaystyle \mathbb {R} } and not C . {\displaystyle \mathbb {C} .} This section gives two proofs of
6048-1530: The T2 lemma), states that for real numbers u 1 , u 2 , … , u n {\displaystyle u_{1},u_{2},\dots ,u_{n}} and positive real numbers v 1 , v 2 , … , v n {\displaystyle v_{1},v_{2},\dots ,v_{n}} : ( u 1 + u 2 + ⋯ + u n ) 2 v 1 + v 2 + ⋯ + v n ≤ u 1 2 v 1 + u 2 2 v 2 + ⋯ + u n 2 v n , {\displaystyle {\frac {\left(u_{1}+u_{2}+\cdots +u_{n}\right)^{2}}{v_{1}+v_{2}+\cdots +v_{n}}}\leq {\frac {u_{1}^{2}}{v_{1}}}+{\frac {u_{2}^{2}}{v_{2}}}+\cdots +{\frac {u_{n}^{2}}{v_{n}}},} or, using summation notation, ( ∑ i = 1 n u i ) 2 / ∑ i = 1 n v i ≤ ∑ i = 1 n u i 2 v i . {\displaystyle {\biggl (}\sum _{i=1}^{n}u_{i}{\biggr )}^{2}{\bigg /}\sum _{i=1}^{n}v_{i}\,\leq \,\sum _{i=1}^{n}{\frac {u_{i}^{2}}{v_{i}}}.} It
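A quick numerical check of this form is straightforward; the sample values below are arbitrary (the u_i may be any reals, the v_i must be positive).

```python
# Numerical check of the Engel form (Titu's lemma) for arbitrary sample data.
u = [1.0, -2.0, 3.5, 0.5]
v = [2.0, 1.0, 4.0, 0.5]   # must be positive

lhs = sum(u) ** 2 / sum(v)
rhs = sum(ui ** 2 / vi for ui, vi in zip(u, v))
assert lhs <= rhs + 1e-12
print(lhs, "<=", rhs)
```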
the above computation shows that the Cauchy–Schwarz inequality holds in this case. Consequently, the Cauchy–Schwarz inequality needs to be proven only for non-zero vectors, and only the non-trivial direction of the equality characterization must be shown. The special case {\displaystyle \mathbf {v} =\mathbf {0} } was proven above, so it
the above equation can be written as if {\displaystyle k>1,} and {\displaystyle \mathbf {v} _{1}=\mathbf {0} } if {\displaystyle k=1.} Thus, a set of vectors is linearly dependent if and only if one of them is zero or a linear combination of
6372-2093: The bar notation is used for complex conjugation ), then the inequality may be restated more explicitly as follows: | ⟨ u , v ⟩ | 2 = | ∑ k = 1 n u k v ¯ k | 2 ≤ ⟨ u , u ⟩ ⟨ v , v ⟩ = ( ∑ k = 1 n u k u ¯ k ) ( ∑ k = 1 n v k v ¯ k ) = ∑ j = 1 n | u j | 2 ∑ k = 1 n | v k | 2 . {\displaystyle {\bigl |}\langle \mathbf {u} ,\mathbf {v} \rangle {\bigr |}^{2}={\Biggl |}\sum _{k=1}^{n}u_{k}{\bar {v}}_{k}{\Biggr |}^{2}\leq \langle \mathbf {u} ,\mathbf {u} \rangle \langle \mathbf {v} ,\mathbf {v} \rangle ={\biggl (}\sum _{k=1}^{n}u_{k}{\bar {u}}_{k}{\biggr )}{\biggl (}\sum _{k=1}^{n}v_{k}{\bar {v}}_{k}{\biggr )}=\sum _{j=1}^{n}|u_{j}|^{2}\sum _{k=1}^{n}|v_{k}|^{2}.} That is, | u 1 v ¯ 1 + ⋯ + u n v ¯ n | 2 ≤ ( | u 1 | 2 + ⋯ + | u n | 2 ) ( | v 1 | 2 + ⋯ + | v n | 2 ) . {\displaystyle {\bigl |}u_{1}{\bar {v}}_{1}+\cdots +u_{n}{\bar {v}}_{n}{\bigr |}^{2}\leq {\bigl (}|u_{1}|{}^{2}+\cdots +|u_{n}|{}^{2}{\bigr )}{\bigl (}|v_{1}|{}^{2}+\cdots +|v_{n}|{}^{2}{\bigr )}.} For
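The complex coordinate form can likewise be spot-checked numerically. The sketch below uses NumPy with the canonical inner product ⟨u, v⟩ = Σ_k u_k conj(v_k) on random complex vectors; it is an illustration, not a proof.

```python
import numpy as np

# Check the complex form of Cauchy-Schwarz with the canonical inner product.
rng = np.random.default_rng(1)
u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

inner = np.sum(u * np.conj(v))
lhs = abs(inner) ** 2
rhs = np.sum(np.abs(u) ** 2) * np.sum(np.abs(v) ** 2)
assert lhs <= rhs + 1e-12
print(lhs, "<=", rhs)
```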
the collection {\displaystyle \mathbf {v} _{1}} is linearly independent if and only if {\displaystyle \mathbf {v} _{1}\neq \mathbf {0} .} This example considers the special case where there are exactly two vectors u and v from some real or complex vector space. The vectors u and v are linearly dependent if and only if at least one of
6588-404: The concept of limit , and are not considered in this article. The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2 , and results in 9, that is, 1 + 2 + 4 + 2 = 9 . Because addition is associative and commutative , there is no need for parentheses, and the result is the same irrespective of the order of
the condition for linear dependence seeks a set of non-zero scalars such that or Row reduce this matrix equation by subtracting the first row from the second to obtain, Continue the row reduction by (i) dividing the second row by 5, and then (ii) multiplying by 3 and adding to the first row, that is Rearranging this equation allows us to obtain which shows that non-zero {\displaystyle a_{i}} exist such that {\displaystyle \mathbf {v} _{3}=(2,4)} can be defined in terms of {\displaystyle \mathbf {v} _{1}=(1,1)} and {\displaystyle \mathbf {v} _{2}=(-3,2).} Thus,
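The same conclusion can be reached numerically. The sketch below, using NumPy with the vectors from this example, checks the rank of the matrix whose columns are v1, v2, v3 and solves for the coefficients expressing v3 in terms of v1 and v2.

```python
import numpy as np

# Three vectors in R^2 can never be independent; the rank confirms the dependence.
v1, v2, v3 = np.array([1.0, 1.0]), np.array([-3.0, 2.0]), np.array([2.0, 4.0])
A = np.column_stack([v1, v2, v3])

print(np.linalg.matrix_rank(A))          # 2 < 3, so the three vectors are dependent

# Solve for the coefficients expressing v3 in terms of v1 and v2.
coeffs, *_ = np.linalg.lstsq(np.column_stack([v1, v2]), v3, rcond=None)
print(coeffs)                            # approximately [16/5, 2/5]
assert np.allclose(coeffs[0] * v1 + coeffs[1] * v2, v3)
```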
6804-400: The corresponding definite integral. One can therefore expect that for instance since the right-hand side is by definition the limit for n → ∞ {\displaystyle n\to \infty } of the left-hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f : it
6912-403: The covariance inequality is given by: Var ( X ) ≥ Cov ( X , Y ) 2 Var ( Y ) . {\displaystyle \operatorname {Var} (X)\geq {\frac {\operatorname {Cov} (X,Y)^{2}}{\operatorname {Var} (Y)}}.} After defining an inner product on the set of random variables using
7020-1914: The covariance inequality using the Cauchy–Schwarz inequality, let μ = E ( X ) {\displaystyle \mu =\operatorname {E} (X)} and ν = E ( Y ) , {\displaystyle \nu =\operatorname {E} (Y),} then | Cov ( X , Y ) | 2 = | E ( ( X − μ ) ( Y − ν ) ) | 2 = | ⟨ X − μ , Y − ν ⟩ | 2 ≤ ⟨ X − μ , X − μ ⟩ ⟨ Y − ν , Y − ν ⟩ = E ( ( X − μ ) 2 ) E ( ( Y − ν ) 2 ) = Var ( X ) Var ( Y ) , {\displaystyle {\begin{aligned}{\bigl |}\operatorname {Cov} (X,Y){\bigr |}^{2}&={\bigl |}\operatorname {E} ((X-\mu )(Y-\nu )){\bigr |}^{2}\\&={\bigl |}\langle X-\mu ,Y-\nu \rangle {\bigr |}^{2}\\&\leq \langle X-\mu ,X-\mu \rangle \langle Y-\nu ,Y-\nu \rangle \\&=\operatorname {E} \left((X-\mu )^{2}\right)\operatorname {E} \left((Y-\nu )^{2}\right)\\&=\operatorname {Var} (X)\operatorname {Var} (Y),\end{aligned}}} where Var {\displaystyle \operatorname {Var} } denotes variance and Cov {\displaystyle \operatorname {Cov} } denotes covariance . There are many different proofs of
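The derivation can be illustrated empirically by replacing the expectations with sample statistics, which satisfy the same inequality. The sketch below checks the sample version of the covariance inequality on synthetic data; the data-generating choices are arbitrary.

```python
import numpy as np

# Sample covariance and sample variances satisfy the covariance inequality exactly.
rng = np.random.default_rng(2)
X = rng.normal(size=1000)
Y = 0.5 * X + rng.normal(size=1000)

cov_xy = np.cov(X, Y)[0, 1]
var_x = np.var(X, ddof=1)
var_y = np.var(Y, ddof=1)
assert cov_xy ** 2 <= var_x * var_y + 1e-12
print(cov_xy ** 2, "<=", var_x * var_y)
```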
the determinant of A, which is Since the determinant is non-zero, the vectors (1, 1) and (−3, 2) are linearly independent. Otherwise, suppose we have m vectors of n coordinates, with {\displaystyle m<n.} Then A
7236-430: The difference of the right and the left hand side is 1 2 ∑ i = 1 n ∑ j = 1 n ( u i v j − u j v i ) 2 ≥ 0 {\displaystyle {\tfrac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}(u_{i}v_{j}-u_{j}v_{i})^{2}\geq 0} or by considering
7344-1109: The domain and/or range are replaced by a C*-algebra or W*-algebra . An inner product can be used to define a positive linear functional . For example, given a Hilbert space L 2 ( m ) , m {\displaystyle L^{2}(m),m} being a finite measure, the standard inner product gives rise to a positive functional φ {\displaystyle \varphi } by φ ( g ) = ⟨ g , 1 ⟩ . {\displaystyle \varphi (g)=\langle g,1\rangle .} Conversely, every positive linear functional φ {\displaystyle \varphi } on L 2 ( m ) {\displaystyle L^{2}(m)} can be used to define an inner product ⟨ f , g ⟩ φ := φ ( g ∗ f ) , {\displaystyle \langle f,g\rangle _{\varphi }:=\varphi \left(g^{*}f\right),} where g ∗ {\displaystyle g^{*}}
7452-1142: The equality case, notice that Δ = 0 {\displaystyle \Delta =0} happens if and only if p ( t ) = ( t ‖ u ‖ + ‖ v ‖ ) 2 . {\displaystyle p(t)={\bigl (}t\Vert \mathbf {u} \Vert +\Vert \mathbf {v} \Vert {\bigr )}^{2}.} If t 0 = − ‖ v ‖ / ‖ u ‖ , {\displaystyle t_{0}=-\Vert \mathbf {v} \Vert /\Vert \mathbf {u} \Vert ,} then p ( t 0 ) = ⟨ t 0 α u + v , t 0 α u + v ⟩ = 0 , {\displaystyle p(t_{0})=\langle t_{0}\alpha \mathbf {u} +\mathbf {v} ,t_{0}\alpha \mathbf {u} +\mathbf {v} \rangle =0,} and hence v = − t 0 α u . {\displaystyle \mathbf {v} =-t_{0}\alpha \mathbf {u} .} Various generalizations of
7560-581: The expectation of their product, ⟨ X , Y ⟩ := E ( X Y ) , {\displaystyle \langle X,Y\rangle :=\operatorname {E} (XY),} the Cauchy–Schwarz inequality becomes | E ( X Y ) | 2 ≤ E ( X 2 ) E ( Y 2 ) . {\displaystyle {\bigl |}\operatorname {E} (XY){\bigr |}^{2}\leq \operatorname {E} (X^{2})\operatorname {E} (Y^{2}).} To prove
the fact that n vectors in {\displaystyle \mathbb {R} ^{n}} are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. In this case, the matrix formed by the vectors is We may write a linear combination of the columns as We are interested in whether A Λ = 0 for some nonzero vector Λ. This depends on
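For the two vectors used earlier, the determinant test is a one-line computation. The sketch below evaluates the determinant for the independent pair (1, 1), (−3, 2) and, for contrast, for a dependent pair; the dependent pair is an added example, not taken from the text above.

```python
import numpy as np

# Determinant test: nonzero determinant means the column vectors are independent.
A = np.array([[1.0, -3.0],
              [1.0,  2.0]])
print(np.linalg.det(A))      # 5.0 (nonzero, so (1, 1) and (-3, 2) are independent)

# In contrast, (1, 1) and (2, 2) are dependent and the determinant vanishes.
B = np.array([[1.0, 2.0],
              [1.0, 2.0]])
print(np.linalg.det(B))      # ~0.0 (dependent)
```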
the family is said to be linearly dependent. A set of vectors which is linearly independent and spans some vector space forms a basis for that vector space. For example, the vector space of all polynomials in x over the reals has the (infinite) subset {1, x, x^{2}, ...} as a basis. A person describing the location of a certain place might say, "It is 3 miles north and 4 miles east of here." This
7884-519: The first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100 . Otherwise, summation is denoted by using Σ notation , where ∑ {\textstyle \sum } is an enlarged capital Greek letter sigma . For example, the sum of the first n natural numbers can be denoted as ∑ i = 1 n i {\textstyle \sum _{i=1}^{n}i} . For long summations, and summations of variable length (defined with ellipses or Σ notation), it
7992-712: The following quadratic polynomial in x {\displaystyle x} ( u 1 x + v 1 ) 2 + ⋯ + ( u n x + v n ) 2 = ( ∑ i u i 2 ) x 2 + 2 ( ∑ i u i v i ) x + ∑ i v i 2 . {\displaystyle (u_{1}x+v_{1})^{2}+\cdots +(u_{n}x+v_{n})^{2}={\biggl (}\sum _{i}u_{i}^{2}{\biggr )}x^{2}+2{\biggl (}\sum _{i}u_{i}v_{i}{\biggr )}x+\sum _{i}v_{i}^{2}.} Since
8100-466: The following connection between sums and integrals , which holds for any increasing function f : and for any decreasing function f : For more general approximations, see the Euler–Maclaurin formula . For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of
8208-490: The following is true: If u = 0 {\displaystyle \mathbf {u} =\mathbf {0} } then by setting c := 0 {\displaystyle c:=0} we have c v = 0 v = 0 = u {\displaystyle c\mathbf {v} =0\mathbf {v} =\mathbf {0} =\mathbf {u} } (this equality holds no matter what the value of v {\displaystyle \mathbf {v} } is), which shows that (1)
8316-401: The following theorem: Cauchy–Schwarz inequality — Let u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } be arbitrary vectors in an inner product space over the scalar field F , {\displaystyle \mathbb {F} ,} where F {\displaystyle \mathbb {F} }
8424-431: The following. In the following summations, n P k {\displaystyle {}_{n}P_{k}} is the number of k -permutations of n . The following are useful approximations (using theta notation ): Linear independence In the theory of vector spaces , a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of
8532-654: The full list of equations must also be true of the reduced list. In fact, if ⟨ i 1 ,..., i m ⟩ is any list of m {\displaystyle m} rows, then the equation must be true for those rows. Furthermore, the reverse is true. That is, we can test whether the m {\displaystyle m} vectors are linearly dependent by testing whether for all possible lists of m {\displaystyle m} rows. (In case m = n {\displaystyle m=n} , this requires only one determinant, as above. If m > n {\displaystyle m>n} , then it
8640-459: The function p : R → R {\displaystyle p:\mathbb {R} \to \mathbb {R} } defined by p ( t ) = ⟨ t α u + v , t α u + v ⟩ {\displaystyle p(t)=\langle t\alpha \mathbf {u} +\mathbf {v} ,t\alpha \mathbf {u} +\mathbf {v} \rangle } , where α {\displaystyle \alpha }
8748-1856: The inner product is positive-definite, p ( t ) {\displaystyle p(t)} only takes non-negative real values. On the other hand, p ( t ) {\displaystyle p(t)} can be expanded using the bilinearity of the inner product: p ( t ) = ⟨ t α u , t α u ⟩ + ⟨ t α u , v ⟩ + ⟨ v , t α u ⟩ + ⟨ v , v ⟩ = t α t α ¯ ⟨ u , u ⟩ + t α ⟨ u , v ⟩ + t α ¯ ⟨ v , u ⟩ + ⟨ v , v ⟩ = ‖ u ‖ 2 t 2 + 2 | ⟨ u , v ⟩ | t + ‖ v ‖ 2 {\displaystyle {\begin{aligned}p(t)&=\langle t\alpha \mathbf {u} ,t\alpha \mathbf {u} \rangle +\langle t\alpha \mathbf {u} ,\mathbf {v} \rangle +\langle \mathbf {v} ,t\alpha \mathbf {u} \rangle +\langle \mathbf {v} ,\mathbf {v} \rangle \\&=t\alpha t{\overline {\alpha }}\langle \mathbf {u} ,\mathbf {u} \rangle +t\alpha \langle \mathbf {u} ,\mathbf {v} \rangle +t{\overline {\alpha }}\langle \mathbf {v} ,\mathbf {u} \rangle +\langle \mathbf {v} ,\mathbf {v} \rangle \\&=\lVert \mathbf {u} \rVert ^{2}t^{2}+2|\langle \mathbf {u} ,\mathbf {v} \rangle |t+\lVert \mathbf {v} \rVert ^{2}\end{aligned}}} Thus, p {\displaystyle p}
8856-596: The inner product itself. The Cauchy–Schwarz inequality allows one to extend the notion of "angle between two vectors" to any real inner-product space by defining: cos θ u v = ⟨ u , v ⟩ ‖ u ‖ ‖ v ‖ . {\displaystyle \cos \theta _{\mathbf {u} \mathbf {v} }={\frac {\langle \mathbf {u} ,\mathbf {v} \rangle }{\|\mathbf {u} \|\|\mathbf {v} \|}}.} The Cauchy–Schwarz inequality proves that this definition
8964-510: The inner product on the vector space C n {\displaystyle \mathbb {C} ^{n}} is the canonical complex inner product (defined by ⟨ u , v ⟩ := u 1 v 1 ¯ + ⋯ + u n v n ¯ , {\displaystyle \langle \mathbf {u} ,\mathbf {v} \rangle :=u_{1}{\overline {v_{1}}}+\cdots +u_{n}{\overline {v_{n}}},} where
9072-748: The inner product space of square-integrable complex-valued functions , the following inequality holds. | ∫ R n f ( x ) g ( x ) ¯ d x | 2 ≤ ∫ R n | f ( x ) | 2 d x ∫ R n | g ( x ) | 2 d x . {\displaystyle \left|\int _{\mathbb {R} ^{n}}f(x){\overline {g(x)}}\,dx\right|^{2}\leq \int _{\mathbb {R} ^{n}}{\bigl |}f(x){\bigr |}^{2}\,dx\int _{\mathbb {R} ^{n}}{\bigl |}g(x){\bigr |}^{2}\,dx.} The Hölder inequality
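The integral form can be spot-checked by discretization. The sketch below approximates the three integrals over [0, 1] by Riemann sums for two arbitrary real-valued sample functions; since the resulting finite sums satisfy the finite Cauchy–Schwarz inequality exactly, the check always passes.

```python
import numpy as np

# Riemann-sum check of the integral Cauchy-Schwarz inequality on [0, 1].
n = 10_000
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = 1.0 / n
f = np.sin(3 * x)          # arbitrary real-valued sample functions
g = np.exp(-x)

lhs = (np.sum(f * g) * dx) ** 2
rhs = (np.sum(f ** 2) * dx) * (np.sum(g ** 2) * dx)
assert lhs <= rhs
print(lhs, "<=", rhs)
```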
the integers. Given a function f that is defined over the integers in the interval [m, n], the following equation holds: This is known as a telescoping series and is the analogue of the fundamental theorem of calculus in the calculus of finite differences, which states that: where f′ is the derivative of f. An example of application of the above equation is the following: Using the binomial theorem, this may be rewritten as: The above formula
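The telescoping identity is easy to verify directly: summing the forward differences f(i + 1) − f(i) over i = m, …, n collapses to f(n + 1) − f(m). The sketch below checks this for an arbitrary polynomial f and an arbitrary range.

```python
# Telescoping: the sum of forward differences collapses to a difference of endpoints.
f = lambda n: n ** 3 - 2 * n
m, n = 3, 12

telescoped = sum(f(i + 1) - f(i) for i in range(m, n + 1))
assert telescoped == f(n + 1) - f(m)
print(telescoped)
```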
9288-410: The latter is also often used for the upper bound of a summation. Alternatively, index and bounds of summation are sometimes omitted from the definition of summation if the context is sufficiently clear. This applies particularly when the index runs from 1 to n . For example, one might write that: Generalizations of this notation are often used, in which an arbitrary logical condition is supplied, and
9396-1403: The latter polynomial is nonnegative, it has at most one real root, hence its discriminant is less than or equal to zero. That is, ( ∑ i u i v i ) 2 − ( ∑ i u i 2 ) ( ∑ i v i 2 ) ≤ 0. {\displaystyle {\biggl (}\sum _{i}u_{i}v_{i}{\biggr )}^{2}-{\biggl (}\sum _{i}{u_{i}^{2}}{\biggr )}{\biggl (}\sum _{i}{v_{i}^{2}}{\biggr )}\leq 0.} If u , v ∈ C n {\displaystyle \mathbf {u} ,\mathbf {v} \in \mathbb {C} ^{n}} with u = ( u 1 , … , u n ) {\displaystyle \mathbf {u} =(u_{1},\ldots ,u_{n})} and v = ( v 1 , … , v n ) {\displaystyle \mathbf {v} =(v_{1},\ldots ,v_{n})} (where u 1 , … , u n ∈ C {\displaystyle u_{1},\ldots ,u_{n}\in \mathbb {C} } and v 1 , … , v n ∈ C {\displaystyle v_{1},\ldots ,v_{n}\in \mathbb {C} } ) and if
9504-1035: The linearity of the inner product in its first argument that: ⟨ z , v ⟩ = ⟨ u − ⟨ u , v ⟩ ⟨ v , v ⟩ v , v ⟩ = ⟨ u , v ⟩ − ⟨ u , v ⟩ ⟨ v , v ⟩ ⟨ v , v ⟩ = 0. {\displaystyle \langle \mathbf {z} ,\mathbf {v} \rangle =\left\langle \mathbf {u} -{\frac {\langle \mathbf {u} ,\mathbf {v} \rangle }{\langle \mathbf {v} ,\mathbf {v} \rangle }}\mathbf {v} ,\mathbf {v} \right\rangle =\langle \mathbf {u} ,\mathbf {v} \rangle -{\frac {\langle \mathbf {u} ,\mathbf {v} \rangle }{\langle \mathbf {v} ,\mathbf {v} \rangle }}\langle \mathbf {v} ,\mathbf {v} \rangle =0.} Therefore, z {\displaystyle \mathbf {z} }
9612-549: The matrix equation, Row reduce this equation to obtain, Rearrange to solve for v 3 and obtain, This equation is easily solved to define non-zero a i , where a 3 {\displaystyle a_{3}} can be chosen arbitrarily. Thus, the vectors v 1 , v 2 , {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},} and v 3 {\displaystyle \mathbf {v} _{3}} are linearly dependent. An alternative method relies on
9720-448: The modern proof of the integral version. The Cauchy–Schwarz inequality states that for all vectors u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } of an inner product space where ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the inner product . Examples of inner products include
9828-409: The notation of measure and integration theory, a sum can be expressed as a definite integral , where [ a , b ] {\displaystyle [a,b]} is the subset of the integers from a {\displaystyle a} to b {\displaystyle b} , and where μ {\displaystyle \mu } is the counting measure over
9936-559: The others. A sequence of vectors v 1 , v 2 , … , v n {\displaystyle \mathbf {v} _{1},\mathbf {v} _{2},\dots ,\mathbf {v} _{n}} is said to be linearly independent if it is not linearly dependent, that is, if the equation can only be satisfied by a i = 0 {\displaystyle a_{i}=0} for i = 1 , … , n . {\displaystyle i=1,\dots ,n.} This implies that no vector in
10044-712: The proof of the Equality Characterization given above; that is, it proves that if u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } are linearly dependent then | ⟨ u , v ⟩ | = ‖ u ‖ ‖ v ‖ . {\displaystyle {\bigl |}\langle \mathbf {u} ,\mathbf {v} \rangle {\bigr |}=\|\mathbf {u} \|\|\mathbf {v} \|.} By definition, u {\displaystyle \mathbf {u} } and v {\displaystyle \mathbf {v} } are linearly dependent if and only if one
10152-399: The proofs given below, the proof in the trivial case where at least one of the vectors is zero (or equivalently, in the case where ‖ u ‖ ‖ v ‖ = 0 {\displaystyle \|\mathbf {u} \|\|\mathbf {v} \|=0} ) is the same. It is presented immediately below only once to reduce repetition. It also includes the easy part of
10260-707: The real and complex dot product ; see the examples in inner product . Every inner product gives rise to a Euclidean ℓ 2 {\displaystyle \ell _{2}} norm , called the canonical or induced norm , where the norm of a vector u {\displaystyle \mathbf {u} } is denoted and defined by ‖ u ‖ := ⟨ u , u ⟩ , {\displaystyle \|\mathbf {u} \|:={\sqrt {\langle \mathbf {u} ,\mathbf {u} \rangle }},} where ⟨ u , u ⟩ {\displaystyle \langle \mathbf {u} ,\mathbf {u} \rangle }
10368-437: The sequence can be represented as a linear combination of the remaining vectors in the sequence. In other words, a sequence of vectors is linearly independent if the only representation of 0 {\displaystyle \mathbf {0} } as a linear combination of its vectors is the trivial representation in which all the scalars a i {\textstyle a_{i}} are zero. Even more concisely,
10476-433: The special case where the sequence of v 1 , … , v k {\displaystyle \mathbf {v} _{1},\dots ,\mathbf {v} _{k}} has length 1 {\displaystyle 1} (i.e. the case where k = 1 {\displaystyle k=1} ). A collection of vectors that consists of exactly one vector is linearly dependent if and only if that vector
10584-475: The square root. Moreover, if the relation ≥ {\displaystyle \geq } in the above expression is actually an equality, then ‖ z ‖ 2 = 0 {\displaystyle \|\mathbf {z} \|^{2}=0} and hence z = 0 ; {\displaystyle \mathbf {z} =\mathbf {0} ;} the definition of z {\displaystyle \mathbf {z} } then establishes
10692-717: The standard inner product, which is the dot product , the Cauchy–Schwarz inequality becomes: ( ∑ i = 1 n u i v i ) 2 ≤ ( ∑ i = 1 n u i 2 ) ( ∑ i = 1 n v i 2 ) . {\displaystyle {\biggl (}\sum _{i=1}^{n}u_{i}v_{i}{\biggr )}^{2}\leq {\biggl (}\sum _{i=1}^{n}u_{i}^{2}{\biggr )}{\biggl (}\sum _{i=1}^{n}v_{i}^{2}{\biggr )}.} The Cauchy–Schwarz inequality can be proved using only elementary algebra in this case by observing that
10800-427: The sum is intended to be taken over all values satisfying the condition. For example: is an alternative notation for ∑ k = 0 99 f ( k ) , {\textstyle \sum _{k=0}^{99}f(k),} the sum of f ( k ) {\displaystyle f(k)} over all ( integers ) k {\displaystyle k} in the specified range. Similarly,
10908-444: The summands. Summation of a sequence of only one summand results in this summand itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0. Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of
11016-546: The summation notation gives a degenerate result in a special case. For example, if n = m {\displaystyle n=m} in the definition above, then there is only one term in the sum; if n = m − 1 {\displaystyle n=m-1} , then there is none. The phrase 'algebraic sum' refers to a sum of terms which may have positive or negative signs. Terms with positive signs are added, while terms with negative signs are subtracted. Summation may be defined recursively as follows: In
the three vectors are linearly dependent. Two vectors: Now consider the linear dependence of the two vectors {\displaystyle \mathbf {v} _{1}=(1,1)} and {\displaystyle \mathbf {v} _{2}=(-3,2),} and check, or The same row reduction presented above yields, This shows that
11232-414: The triangle inequality: ‖ u + v ‖ ≤ ‖ u ‖ + ‖ v ‖ . {\displaystyle \|\mathbf {u} +\mathbf {v} \|\leq \|\mathbf {u} \|+\|\mathbf {v} \|.} The Cauchy–Schwarz inequality is used to prove that the inner product is a continuous function with respect to the topology induced by
the upright capital Greek letter sigma. This is defined as where i is the index of summation; a_i is an indexed variable representing each term of the sum; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index i is incremented by one for each successive term, stopping when i = n. This
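In code, this notation corresponds directly to a loop or a generator expression over the index range. A minimal Python equivalent, with arbitrary example bounds and summand:

```python
# Sum of i^2 for i running from m up to and including n, mirroring the sigma notation.
m, n = 3, 6
total = sum(i ** 2 for i in range(m, n + 1))
assert total == 9 + 16 + 25 + 36
print(total)   # 86
```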
11448-464: The use of many sigma signs. For example, is the same as A similar notation is used for the product of a sequence , where ∏ {\textstyle \prod } , an enlarged form of the Greek capital letter pi , is used instead of ∑ . {\textstyle \sum .} It is possible to sum fewer than 2 numbers: These degenerate cases are usually only used when
11556-453: The vector ( u 1 , u 2 ) {\displaystyle \left(u_{1},u_{2}\right)} is in the same or opposite direction as the vector ( v 1 , v 2 ) {\displaystyle \left(v_{1},v_{2}\right)} , or if one of them is the zero vector. In Euclidean space R n {\displaystyle \mathbb {R} ^{n}} with
11664-403: The vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent . These concepts are central to the definition of dimension . A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether