
MANIAC I

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

The MANIAC I (Mathematical Analyzer Numerical Integrator and Automatic Computer Model I) was an early computer built under the direction of Nicholas Metropolis at the Los Alamos Scientific Laboratory. It was based on the von Neumann architecture of the IAS, developed by John von Neumann. As with almost all computers of its era, it was a one-of-a-kind machine that could not exchange programs with other computers (even the several other machines based on the IAS). Metropolis chose the name MANIAC in the hope of stopping the rash of silly acronyms for machine names, although von Neumann may have suggested the name to him.


The MANIAC weighed about 1,000 pounds (0.50 short tons; 0.45 t). The first task assigned to the Los Alamos MANIAC was to perform more precise and extensive calculations of the thermonuclear process. In 1953, the MANIAC obtained the first equation of state calculated by modified Monte Carlo integration over configuration space. In 1956, MANIAC I became the first computer to defeat a human being in

{\displaystyle \mathrm {Var} (f)={\frac {\sigma _{a}^{2}(f)}{4N_{a}}}+{\frac {\sigma _{b}^{2}(f)}{4N_{b}}}} It can be shown that this variance is minimized by distributing the points such that

{\displaystyle {\frac {N_{a}}{N_{a}+N_{b}}}={\frac {\sigma _{a}}{\sigma _{a}+\sigma _{b}}}} Hence the smallest error estimate is obtained by allocating sample points in proportion to the standard deviation of the function in each sub-region. The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step.
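As a concrete illustration of this allocation rule, here is a small Python sketch; the integrand x^4 on [0, 1], the split point 0.5, and the sample sizes are assumptions for illustration, not taken from the article. A short pilot run estimates the standard deviation of the integrand in each sub-region, the remaining budget is split in proportion to those standard deviations, and the two sub-region estimates are summed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example integrand: f(x) = x**4 on [0, 1], split into
# sub-region a = [0, 0.5] and sub-region b = [0.5, 1].
f = lambda x: x**4
N = 10_000

# Pilot run: estimate the standard deviation of f in each sub-region.
pilot_a = f(rng.uniform(0.0, 0.5, 500))
pilot_b = f(rng.uniform(0.5, 1.0, 500))
sigma_a, sigma_b = pilot_a.std(), pilot_b.std()

# Allocate the remaining points in proportion to the standard deviations,
# i.e. N_a / (N_a + N_b) = sigma_a / (sigma_a + sigma_b).
N_a = int(N * sigma_a / (sigma_a + sigma_b))
N_b = N - N_a

# Stratified estimate: each sub-region has volume 0.5.
est_a = 0.5 * f(rng.uniform(0.0, 0.5, N_a)).mean()
est_b = 0.5 * f(rng.uniform(0.5, 1.0, N_b)).mean()
print("stratified estimate:", est_a + est_b)                # exact value is 0.2
print("plain MC estimate:  ", f(rng.uniform(0.0, 1.0, N)).mean())
```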

{\displaystyle \mathrm {Var} (Q_{N})={\frac {V^{2}}{N^{2}}}\sum _{i=1}^{N}\mathrm {Var} (f)=V^{2}{\frac {\mathrm {Var} (f)}{N}}=V^{2}{\frac {\mathrm {E} (\sigma _{N}^{2})}{N}}.} Since

A Monte Carlo integration is the estimation of π. Consider the function {\displaystyle H\left(x,y\right)={\begin{cases}1&{\text{if }}x^{2}+y^{2}\leq 1\\0&{\text{else}}\end{cases}}} and

A chess-like game. The chess variant, called Los Alamos chess, was developed for a 6×6 chessboard (no bishops) due to the limited amount of memory and computing power of the machine. The MANIAC ran successfully in March 1952 and was shut down on July 15, 1958. However, it was transferred to the University of New Mexico in bad condition, and was restored to full operation by Dale Sparks, PhD. It

a collection of independent and identically distributed (i.i.d.) samples from a random variable with finite mean, the sample mean converges in probability to the expected value. That is, for any positive number ε, {\displaystyle \lim _{n\to \infty }\Pr \!\left(\,|{\overline {X}}_{n}-\mu |<\varepsilon \,\right)=1.} Interpreting this result,

A number of additional features, and combines both stratified sampling and importance sampling.

Law of large numbers

In probability theory, the law of large numbers (LLN) is a mathematical law that states that the average of the results obtained from a large number of independent random samples converges to the true value, if it exists. More formally, the LLN states that given

A particle filter), and mean-field particle methods. In numerical integration, methods such as the trapezoidal rule use a deterministic approach. Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the final outcome is an approximation of the correct value with respective error bars, and the correct value

Intuitively, this says that if we pick a particular sample twice as much as other samples, we weight it half as much as the other samples. This estimator is naturally valid for uniform sampling, the case where {\displaystyle p({\overline {\mathbf {x} }})} is constant. The Metropolis–Hastings algorithm is one of the most widely used algorithms to generate {\displaystyle {\overline {\mathbf {x} }}} from {\displaystyle p({\overline {\mathbf {x} }})}, thus providing an efficient way of computing integrals. The VEGAS algorithm approximates

A player will eventually be overcome by the parameters of the game. Importantly, the law applies (as the name indicates) only when a large number of observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy). The LLN only applies to



A process of integrating the function {\displaystyle f(x)={\frac {1}{1+\sinh(2x)\log(x)^{2}}}} from {\displaystyle 0.8<x<3} using the Monte Carlo method in Mathematica:

Recursive stratified sampling

A sample of independent and identically distributed values, the sample mean converges to the true mean. The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by

a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of the average of the rolls is: {\displaystyle {\frac {1+2+3+4+5+6}{6}}=3.5}
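A quick Python sketch of this (the sample sizes and random seed are arbitrary choices): the running sample mean of simulated die rolls settles near 3.5 as the number of rolls grows.

```python
import numpy as np

rng = np.random.default_rng(42)
rolls = rng.integers(1, 7, size=100_000)   # fair six-sided die: values 1..6

# Sample mean after increasingly many rolls; by the LLN it approaches 3.5.
for n in (10, 100, 10_000, 100_000):
    print(n, rolls[:n].mean())
```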

Importance sampling provides a very important tool to perform Monte Carlo integration. The main result of importance sampling to this method is that the uniform sampling of {\displaystyle {\overline {\mathbf {x} }}} is a particular case of a more generic choice, in which the samples are drawn from any distribution {\displaystyle p({\overline {\mathbf {x} }})}. The idea

a very small part of them would be significant to the integral. This can be improved by choosing a different distribution from which the samples are drawn, for instance by sampling according to a Gaussian distribution centered at 0, with σ = 1. Of course the "right" choice strongly depends on the integrand. Formally, given a set of samples chosen from a distribution {\displaystyle p({\overline {\mathbf {x} }}):\qquad {\overline {\mathbf {x} }}_{1},\cdots ,{\overline {\mathbf {x} }}_{N}\in V,}

is the standard error of the mean multiplied by {\displaystyle V}. This result does not depend on the number of dimensions of the integral, which is the promised advantage of Monte Carlo integration over most deterministic methods, whose cost depends exponentially on the dimension. It is important to notice that, unlike in deterministic methods, the estimate of the error

is a generalization of one-dimensional adaptive quadratures to multi-dimensional integrals. On each recursion step the integral and the error are estimated using a plain Monte Carlo algorithm. If the error estimate is larger than the required accuracy, the integration volume is divided into sub-volumes and the procedure is recursively applied to sub-volumes. The ordinary 'dividing by two' strategy does not work for multi-dimensions as

is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand on a regular grid, Monte Carlo randomly chooses points at which the integrand is evaluated. This method is particularly useful for higher-dimensional integrals. There are different methods to perform a Monte Carlo integration, such as uniform sampling, stratified sampling, importance sampling, sequential Monte Carlo (also known as

is assumed finite, this variance decreases asymptotically to zero as 1/N. The estimation of the error of Q N is thus {\displaystyle \delta Q_{N}\approx {\sqrt {\mathrm {Var} (Q_{N})}}=V{\frac {\sqrt {\mathrm {Var} (f)}}{\sqrt {N}}},} which decreases as {\displaystyle {\tfrac {1}{\sqrt {N}}}}. This

is because the law of large numbers ensures that {\displaystyle \lim _{N\to \infty }Q_{N}=I.} Given the estimation of I from Q N, the error bars of Q N can be estimated by the sample variance using the unbiased estimate of the variance: {\displaystyle \mathrm {Var} (f)=\mathrm {E} (\sigma _{N}^{2})\equiv {\frac {1}{N-1}}\sum _{i=1}^{N}\mathrm {E} \left[\left(f({\overline {\mathbf {x} }}_{i})-\langle f\rangle \right)^{2}\right].} which leads to



The direction is chosen by examining all d possible bisections and selecting the one which will minimize the combined variance of the two sub-regions. The variance in the sub-regions is estimated by sampling with a fraction of the total number of points available to the current step. The same procedure is then repeated recursively for each of the two half-spaces from the best bisection.
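A toy Python sketch of this recursive bisection strategy follows (a simplified MISER-like routine, not the actual MISER implementation; the exploration fraction, minimum leaf size, test integrand, and recursion depth are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def recursive_stratified(f, lo, hi, n_points, depth):
    """Toy recursive stratified sampling over the box [lo, hi] in d dimensions.

    Spends a fraction of the point budget exploring, bisects along the axis
    that minimizes the summed sub-region variances, splits the remaining
    budget in proportion to the sub-region standard deviations, and recurses."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    volume = np.prod(hi - lo)
    if depth == 0 or n_points < 32:
        x = rng.uniform(lo, hi, size=(max(n_points, 2), lo.size))
        return volume * f(x).mean()          # plain Monte Carlo at the leaves

    n_explore = max(n_points // 10, 16)      # exploration sample for this level
    x = rng.uniform(lo, hi, size=(n_explore, lo.size))
    fx = f(x)

    best = None                              # (score, axis, midpoint, std_left, std_right)
    for axis in range(lo.size):
        mid = 0.5 * (lo[axis] + hi[axis])
        left, right = fx[x[:, axis] < mid], fx[x[:, axis] >= mid]
        if len(left) < 2 or len(right) < 2:
            continue
        score = left.var() + right.var()
        if best is None or score < best[0]:
            best = (score, axis, mid, left.std(), right.std())
    if best is None:
        return volume * fx.mean()

    _, axis, mid, s_left, s_right = best
    remaining = n_points - n_explore
    frac = s_left / (s_left + s_right) if s_left + s_right > 0 else 0.5
    n_left = int(remaining * frac)

    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[axis] = mid                      # left half:  [lo, hi_left]
    lo_right[axis] = mid                     # right half: [lo_right, hi]
    return (recursive_stratified(f, lo, hi_left, n_left, depth - 1)
            + recursive_stratified(f, lo_right, hi, remaining - n_left, depth - 1))

# Assumed test integrand: a localized bump on the unit square.
bump = lambda x: np.exp(-50.0 * ((x[:, 0] - 0.5) ** 2 + (x[:, 1] - 0.5) ** 2))
print(recursive_stratified(bump, [0.0, 0.0], [1.0, 1.0], n_points=40_000, depth=4))
```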

is finite. It does not mean that the associated probability measure is absolutely continuous with respect to Lebesgue measure.) Introductory probability texts often additionally assume identical finite variance {\displaystyle \operatorname {Var} (X_{i})=\sigma ^{2}} (for all {\displaystyle i}) and no correlation between random variables.

is increased, the selection bias remains. The Italian mathematician Gerolamo Cardano (1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof, which

is known as Kolmogorov's strong law, see e.g. Sen & Singer (1993, Theorem 2.3.10). The weak law states that for a specified large n, the average {\displaystyle {\overline {X}}_{n}} is likely to be near μ. Thus, it leaves open the possibility that {\displaystyle |{\overline {X}}_{n}-\mu |>\varepsilon } happens an infinite number of times, although at infrequent intervals. (Not necessarily {\displaystyle |{\overline {X}}_{n}-\mu |\neq 0} for all n.) The strong law shows that this almost surely will not occur. It does not imply that with probability 1, we have that for any ε > 0

is likely to be within those error bars. The problem Monte Carlo integration addresses is the computation of a multidimensional definite integral {\displaystyle I=\int _{\Omega }f({\overline {\mathbf {x} }})\,d{\overline {\mathbf {x} }}} where Ω, a subset of {\displaystyle \mathbb {R} ^{m}}, has volume {\displaystyle V=\int _{\Omega }d{\overline {\mathbf {x} }}}. The naive Monte Carlo approach

is not a strict error bound; random sampling may not uncover all the important features of the integrand, which can result in an underestimate of the error. While the naive Monte Carlo works for simple examples, an improvement over deterministic algorithms can only be accomplished with algorithms that use problem-specific sampling distributions. With an appropriate sample distribution it is possible to exploit

is that {\displaystyle p({\overline {\mathbf {x} }})} can be chosen to decrease the variance of the measurement Q N. Consider the following example where one would like to numerically integrate a Gaussian function, centered at 0, with σ = 1, from −1000 to 1000. Naturally, if the samples are drawn uniformly on the interval [−1000, 1000], only

is that the probability that, as the number of trials n goes to infinity, the average of the observations converges to the expected value, is equal to one. The modern proof of the strong law is more complex than that of the weak law, and relies on passing to an appropriate subsequence. The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem. This view justifies

is to sample points uniformly on Ω: given N uniform samples, {\displaystyle {\overline {\mathbf {x} }}_{1},\cdots ,{\overline {\mathbf {x} }}_{N}\in \Omega ,} I can be approximated by {\displaystyle I\approx Q_{N}\equiv V{\frac {1}{N}}\sum _{i=1}^{N}f({\overline {\mathbf {x} }}_{i})=V\langle f\rangle .} This

is zero, but the expected value does not exist, and indeed the average of n such variables has the same distribution as one such variable. It does not converge in probability toward zero (or any other value) as n goes to infinity. And if the trials embed a selection bias, typical in human economic/rational behaviour, the law of large numbers does not help in solving the bias. Even if the number of trials


the average of the results obtained from repeated trials and claims that this average converges to the expected value; it does not claim that the sum of n results gets close to the expected value times n as n increases. Throughout its history, many mathematicians have refined this law. Today, the LLN is used in many fields including statistics, probability theory, economics, and insurance. For example,

the average of n such variables (assuming they are independent and identically distributed (i.i.d.)) is precisely the relative frequency. For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to 1⁄2. Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips "should be" roughly 1⁄2. In particular,

the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to {\displaystyle n^{2}/\log n}. The variance of the average is therefore asymptotic to {\displaystyle 1/\log n} and goes to zero. There are also examples of

the average of the first n values goes to zero as n goes to infinity. As an example, assume that each random variable in the series follows a Gaussian distribution (normal distribution) with mean zero, but with variance equal to {\displaystyle 2n/\log(n+1)}, which is not bounded. At each stage, the average will be normally distributed (as

According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) will approach 3.5, with the precision increasing as more dice are rolled. It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, and

the average to converge almost surely on something (this can be considered another statement of the strong law), it is necessary that they have an expected value (and then of course the average will converge almost surely on that). If the summands are independent but not identically distributed, then the sample mean, centered at its expectation, converges almost surely to zero, provided that each X k has a finite second moment and {\displaystyle \sum _{k=1}^{\infty }{\frac {1}{k^{2}}}\operatorname {Var} [X_{k}]<\infty .} This statement

the convergence is only weak (in probability). See differences between the weak law and the strong law. The strong law applies to independent identically distributed random variables having an expected value (like the weak law). This was proved by Kolmogorov in 1930. It can also apply in other cases. Kolmogorov also showed, in 1933, that if the variables are independent and identically distributed, then for

the estimator for I is given by {\displaystyle Q_{N}\equiv {\frac {1}{N}}\sum _{i=1}^{N}{\frac {f({\overline {\mathbf {x} }}_{i})}{p({\overline {\mathbf {x} }}_{i})}}}
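A small Python sketch of this estimator applied to the Gaussian example described elsewhere in the article (integrating a standard normal density over [−1000, 1000], whose value is essentially 1). The sampling distribution used here, a normal with σ = 3, and the sample size are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Integrand: standard normal density (the normalization is an assumption here);
# its integral over [-1000, 1000] is essentially 1.
f = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

# Plain Monte Carlo: uniform samples on [-1000, 1000]; almost all of them
# fall where f is negligible, so the estimate is noisy.
x_uni = rng.uniform(-1000.0, 1000.0, N)
plain = 2000.0 * f(x_uni).mean()           # V * <f>

# Importance sampling: draw from p(x) = N(0, 3^2), which covers the peak,
# and average f(x)/p(x). Samples outside [-1000, 1000] are vanishingly
# unlikely under p, so the truncation is ignored in this sketch.
sigma_p = 3.0
x_imp = rng.normal(0.0, sigma_p, N)
p = np.exp(-0.5 * (x_imp / sigma_p)**2) / (sigma_p * np.sqrt(2.0 * np.pi))
importance = (f(x_imp) / p).mean()

print("uniform sampling:   ", plain)
print("importance sampling:", importance)
```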

the exact distribution by making a number of passes over the integration region which creates the histogram of the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like K^d, the probability distribution is approximated by a separable function: {\displaystyle g(x_{1},x_{2},\ldots )=g_{1}(x_{1})g_{2}(x_{2})\ldots } so that

the fact that almost all higher-dimensional integrands are very localized and only a small subspace notably contributes to the integral. A large part of the Monte Carlo literature is dedicated to developing strategies to improve the error estimates. In particular, stratified sampling—dividing the region into sub-domains—and importance sampling—sampling from non-uniform distributions—are two examples of such techniques. A paradigmatic example of


In the figure on the right, the relative error {\displaystyle {\tfrac {Q_{N}-\pi }{\pi }}} is measured as a function of N, confirming the {\displaystyle {\tfrac {1}{\sqrt {N}}}} scaling. Keep in mind that a true random number generator should be used. Made in Python. The code below describes
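The referenced code is not included in this snapshot. As a stand-in, here is a rough Python sketch of the integral described earlier, f(x) = 1/(1 + sinh(2x) log(x)^2) over 0.8 < x < 3, using plain uniform sampling; the original example used Mathematica, and the sample size and the reading of log as the natural logarithm are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Integrand from the article's example: f(x) = 1 / (1 + sinh(2x) * log(x)**2),
# integrated over 0.8 < x < 3 (natural logarithm assumed).
f = lambda x: 1.0 / (1.0 + np.sinh(2.0 * x) * np.log(x)**2)

a, b = 0.8, 3.0
N = 200_000
x = rng.uniform(a, b, N)
fx = f(x)

estimate = (b - a) * fx.mean()                    # Q_N = V * <f>
error = (b - a) * fx.std(ddof=1) / np.sqrt(N)     # delta Q_N ~ V * sqrt(Var(f) / N)
print(f"integral ~ {estimate:.6f} +/- {error:.6f}")
```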

the illustration. The popular MISER routine implements a similar algorithm. The MISER algorithm is based on recursive stratified sampling. This technique aims to reduce the overall integration error by concentrating integration points in the regions of highest variance. The idea of stratified sampling begins with the observation that for two disjoint regions a and b with Monte Carlo estimates of

the inequality {\displaystyle |{\overline {X}}_{n}-\mu |<\varepsilon } holds for all large enough n, since the convergence is not necessarily uniform on the set where it holds. The strong law does not hold in the following cases, but the weak law does. There are extensions of

the integral {\displaystyle E_{a}(f)} and {\displaystyle E_{b}(f)} and variances {\displaystyle \sigma _{a}^{2}(f)} and {\displaystyle \sigma _{b}^{2}(f)},

the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable when sampled repeatedly as the "long-term average". Law 3 is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). However, the weak law is known to hold in certain conditions where the strong law does not hold and then

the law of large numbers to collections of estimators, where the convergence is uniform over the collection; thus the name uniform law of large numbers. Suppose f(x, θ) is some function defined for θ ∈ Θ, and continuous in θ. Then for any fixed θ, the sequence {f(X_1, θ), f(X_2, θ), ...} will be a sequence of independent and identically distributed random variables, such that

the law state that the sample average {\displaystyle {\overline {X}}_{n}={\frac {1}{n}}(X_{1}+\cdots +X_{n})} converges to the expected value: (Lebesgue integrability of X_j means that the expected value E(X_j) exists according to Lebesgue integration and

the number of bins required is only Kd. This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. VEGAS incorporates

the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected difference grows, but at a slower rate than the number of flips. Another good example of the LLN is the Monte Carlo method. These methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The larger

the number of repetitions, the better the approximation tends to be. The reason that this method is important is mainly that, sometimes, it is difficult or impossible to use other approaches. The average of the results obtained from a large number of trials may fail to converge in some cases. For instance, the average of n results taken from the Cauchy distribution or some Pareto distributions (α < 1) will not converge as n becomes larger;



the number of sub-volumes grows far too quickly to keep track of. Instead one estimates along which dimension a subdivision should bring the most dividends and only subdivides the volume along this dimension. The stratified sampling algorithm concentrates the sampling points in the regions where the variance of the function is largest, thus reducing the grand variance and making the sampling more effective, as shown on

which can be used to shorten and simplify the proofs. This assumption of finite variance is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway. Mutual independence of the random variables can be replaced by pairwise independence or exchangeability in both versions of the law. The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables. The weak law of large numbers (also called Khinchin's law) states that given

the proportion of heads after n flips will almost surely converge to 1⁄2 as n approaches infinity. Although the proportion of heads (and tails) approaches 1⁄2, almost surely the absolute difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as

the reason is heavy tails. The Cauchy distribution and the Pareto distribution represent two cases: the Cauchy distribution does not have an expectation, whereas the expectation of the Pareto distribution (α < 1) is infinite. One way to generate the Cauchy-distributed example is where the random numbers equal the tangent of an angle uniformly distributed between −90° and +90°. The median

the sample mean of this sequence converges in probability to E[f(X, θ)]. This is the pointwise (in θ) convergence. A particular example of a uniform law of large numbers states the conditions under which the convergence happens uniformly in θ. If the required regularity conditions hold, then E[f(X, θ)] is continuous in θ, and {\displaystyle \sup _{\theta \in \Theta }\left\|{\frac {1}{n}}\sum _{i=1}^{n}f(X_{i},\theta )-\operatorname {E} [f(X,\theta )]\right\|{\overset {\mathrm {P} }{\rightarrow }}\ 0.} This result

the sequence {\displaystyle \left\{\mathrm {E} (\sigma _{1}^{2}),\mathrm {E} (\sigma _{2}^{2}),\mathrm {E} (\sigma _{3}^{2}),\ldots \right\}} is bounded due to being identically equal to Var(f), as long as this

the series consists of independent identically distributed random variables, it suffices that the expected value exists for the weak law of large numbers to be true. These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below,

the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshev as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshev's proof works so long as the variance of

the set Ω = [−1,1] × [−1,1] with V = 4. Notice that {\displaystyle I_{\pi }=\int _{\Omega }H(x,y)dxdy=\pi .} Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick N random numbers on Ω and compute {\displaystyle Q_{N}=4{\frac {1}{N}}\sum _{i=1}^{N}H(x_{i},y_{i})}
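A minimal Python sketch of this estimate (the sample size and random seed are arbitrary choices); it also reports the sample-variance error estimate, which scales as 1/√N:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Uniform points on the square [-1, 1] x [-1, 1] (volume V = 4).
x = rng.uniform(-1.0, 1.0, N)
y = rng.uniform(-1.0, 1.0, N)
h = (x**2 + y**2 <= 1.0).astype(float)     # indicator H(x, y)

q_n = 4.0 * h.mean()                       # Q_N = 4 * (1/N) * sum H(x_i, y_i)
err = 4.0 * h.std(ddof=1) / np.sqrt(N)     # one-sigma error estimate, ~ 1/sqrt(N)
print(f"pi ~ {q_n:.5f} +/- {err:.5f}  (relative error {(q_n - np.pi) / np.pi:.2e})")
```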

the strong form implies the weak. There are two different versions of the law of large numbers that are described below. They are called the strong law of large numbers and the weak law of large numbers. Stated for the case where X_1, X_2, ... is an infinite sequence of independent and identically distributed (i.i.d.) Lebesgue integrable random variables with expected value E(X_1) = E(X_2) = ... = μ, both versions of



The remaining sample points are allocated to the sub-regions using the formula for N_a and N_b. This recursive allocation of integration points continues down to a user-specified depth, where each sub-region is integrated using a plain Monte Carlo estimate. These individual values and their error estimates are then combined upwards to give an overall result and an estimate of its error. There are a variety of importance sampling algorithms, such as

the variance Var(f) of the combined estimate {\displaystyle E(f)={\tfrac {1}{2}}\left(E_{a}(f)+E_{b}(f)\right)} is given by

In that case, the variance of the average of n random variables is {\displaystyle \operatorname {Var} ({\overline {X}}_{n})=\operatorname {Var} ({\tfrac {1}{n}}(X_{1}+\cdots +X_{n}))={\frac {1}{n^{2}}}\operatorname {Var} (X_{1}+\cdots +X_{n})={\frac {n\sigma ^{2}}{n^{2}}}={\frac {\sigma ^{2}}{n}}.}
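A quick numerical check of this identity (the distribution, sample size, and number of repetitions are arbitrary assumptions): estimate Var(X̄_n) from many repeated sample means and compare it with σ²/n.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, n, reps = 4.0, 50, 20_000

# Draw `reps` independent samples of size n and record each sample mean.
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
means = samples.mean(axis=1)

print("empirical Var(mean):", means.var(ddof=1))   # should be close to sigma^2 / n
print("sigma^2 / n:        ", sigma2 / n)
```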

the weak law applying even though the expected value does not exist. The strong law of large numbers (also called Kolmogorov's law) states that the sample average converges almost surely to the expected value. That is, {\displaystyle \Pr \!\left(\lim _{n\to \infty }{\overline {X}}_{n}=\mu \right)=1.} What this means

the weak law states that for any nonzero margin specified (ε), no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin. As mentioned earlier, the weak law applies in the case of i.i.d. random variables, but it also applies in some other cases. For example, the variance may be different for each random variable in

was featured in at least two UNM Maniac programming dissertations from 1963. It remained in operation until it was retired in 1965. It was succeeded by MANIAC II in 1957. A third version, MANIAC III, was built at the Institute for Computer Research at the University of Chicago in 1964.

Monte Carlo integration

In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It

was known under both names, but the "law of large numbers" is most frequently used. After Bernoulli and Poisson published their efforts, other mathematicians also contributed to the refinement of the law, including Chebyshev, Markov, Borel, Cantelli, Kolmogorov and Khinchin. Markov showed that the law can apply to a random variable that does not have a finite variance under some other weaker assumption, and Khinchin showed in 1929 that if

was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem" but it became generally known as "Bernoulli's theorem". This should not be confused with Bernoulli's principle, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S. D. Poisson further described it under the name "la loi des grands nombres" ("the law of large numbers"). Thereafter, it
