Some examples:

1) If I flip a coin 10,000 times, what is a reasonable range for the number of heads I should expect to get?

This is a binomial distribution with a number of trials N = 10,000 and a probability to win on one trial of p = 1/2. The average number of heads (wins) is therefore

        <n> = Np        (since for a binomial distribution, <n> = Np)

            = (10000)(1/2) = 5000

The standard deviation is

        σ = sqrt(Np(1-p))        (since for a binomial distribution, σ = sqrt(Np(1-p)))

          = sqrt((10000)(1/2)(1/2)) = 50

We have a 95% chance that our result will be within 2 standard deviations of the average, so we can now say that with 95% certainty the number of heads one expects to get in 10,000 flips should lie within the range

5000 ± (2)(50) = 5000 ± 100

or equivalently within the range 4,900 to 5,100.
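A quick sanity check of this range is to simulate the experiment. The sketch below (my own illustration, not part of the original notes) repeats the 10,000-flip experiment many times and counts how often the head count falls inside the 2-standard-deviation band:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def num_heads(n_flips):
    """Count heads in n_flips tosses of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n_flips))

# Repeat the 10,000-flip experiment and record how often the
# head count falls inside the 2-sigma band 5000 +/- 100.
trials = 200
inside = sum(4900 <= num_heads(10_000) <= 5100 for _ in range(trials))
print(inside / trials)  # should come out near 0.95
```

The printed fraction fluctuates from run to run, but it should sit close to the 0.95 coverage the 2-standard-deviation rule predicts.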

2) Suppose we flip a coin N times and we find that we get n heads. We can take the fraction n/N as an experimental "measurement" of the probability p that the coin comes up heads on any single flip. How many times N should we flip in order to be 99% sure that we have measured the probability p to within a relative accuracy of 1%? We keep p an arbitrary number in this calculation, allowing for the possibility that the coin is not fair.

We will denote the true probability to get a head in one flip as p.

We will denote our measurement of this probability in the N flip experiment as "p" = n/N.

The quotes remind us that the measured "p" need not be exactly equal to p, since in any N flips we need not get the exact average of Np heads -- we might get fewer or more due to statistical fluctuations.

As in (1), the average number of heads in N flips is <n> = Np.

The standard deviation is σ = sqrt(Np(1-p)).

We know that the number of heads n found in N flips will, with 99% certainty, lie within 2.6 standard deviations of the average, so

with 99% certainty n is in the range: Np ± 2.6 sqrt(Np(1-p))

Since our measurement of the probability "p" is given by "p" = n/N, we have,

with 99% certainty "p" is in the range: p ± 2.6 sqrt(p(1-p)/N)

The first term is the average value of "p", and the second term is the largest (to 99% certainty) absolute error we are likely to find in any given experiment of N flips.

The relative error is the absolute error divided by the average, so,

with 99% certainty the largest relative error in "p" is: 2.6 sqrt(p(1-p)/N) / p = 2.6 sqrt((1-p)/(pN))

We want a relative accuracy of 1%, so the relative error can be at most 0.01, therefore

2.6 sqrt((1-p)/(pN)) ≤ 0.01,   or,   (1-p)/(pN) ≤ (0.01/2.6)²,   so,

        N ≥ (2.6/0.01)² ((1-p)/p) = 67,600 ((1-p)/p)

For a fair coin, p = 1/2, and the factor in the parenthesis is equal to unity. For a fair coin we therefore need N = 67,600 flips to be 99% certain that the number of heads found, n, divided by the number of flips, N, is within 1% of the true probability to flip a head, p = 1/2.

For N = 67,600 flips, the expected number of heads, with 99% certainty, will lie in the range

        Np ± 2.6 sqrt(Np(1-p)) = 33,800 ± (2.6)(130) = 33,800 ± 338

So the number of heads that appears in 67,600 flips is, with 99% certainty, within the range from 33,462 to 34,138. If we actually did this experiment of flipping 67,600 times, and found that the number of heads did not lie in this range, we would strongly suspect that the coin is not fair.

Note that if p is less than one half, i.e. the coin is biased against heads, then the factor (1-p)/p is larger than one, and we would need to make a larger number N of flips to get the desired accuracy; if p is greater than one half, then the factor (1-p)/p is smaller than one, and we would only need a smaller number N.
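The result of this example can be packaged as a one-line formula. The short sketch below (the function name is my own, not from the notes) evaluates the required N for a given true probability p, relative accuracy, and number of standard deviations:

```python
def flips_needed(p, rel_err=0.01, z=2.6):
    """Number of flips N so that n/N is within a relative error rel_err
    of p, with the confidence set by z standard deviations (z = 2.6 ~ 99%).
    This is N = (z/rel_err)^2 * (1-p)/p from the derivation above."""
    return (z / rel_err) ** 2 * (1 - p) / p

print(flips_needed(0.5))   # about 67,600 for a fair coin
print(flips_needed(0.1))   # a coin biased against heads needs far more flips
```

Evaluating it at p = 0.1 shows the point made above: a head-poor coin (p below one half) requires many more flips for the same relative accuracy.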

3) Suppose you are on a desert island, and your rescue depends on your being able to calculate the number π to 3 decimal places. All you have at your disposal is a square dart board with a circle inscribed in it, touching the midpoints of all four sides, as well as a large supply of darts and a large bottle of whisky. How would you calculate π?


You drink a sufficient amount of whisky, so that the accuracy of your aim is completely destroyed, and then you start throwing the darts at the dart board. Assuming each dart lands at a random point on the dart board, let's compute the probability that a given dart will land inside the circle.

If the length of the side of the dart board is L, its area is L². The inscribed circle has a radius L/2, so its area is π(L/2)² = πL²/4. Hence the fraction of the area of the dart board covered by the circle is π/4. Since the dart is equally likely to land anywhere on the dart board, the probability that it lands within the circle is therefore p = π/4.

If we now throw a large number of darts N, the number n which land inside the circle would on average be <n> = Np = Nπ/4. So the measurement of n gives a "measurement" of the probability "p" = n/N, from which we can then determine π.

The reasoning here is exactly the same as in example (2) above. In both cases the process is binomial (win = dart in circle = head; lose = dart out of circle = tail) and we are seeking to measure the probability p for a "win" in a single play. Thus, as in (2), we have with 99% certainty that the number of darts n that land inside the circle will lie in the range,

n in range: Np ± 2.6 sqrt(Np(1-p))

and our measured value "p" will lie in the range,

"p" = n/N  in range: p ± 2.6 sqrt(p(1-p)/N)

Knowing "p", we can then compute our measured value of π. Since p = π/4, we have π = 4p, so our measured value of π lies, with 99% certainty, in the range,

"π" = 4"p"  in range: π ± 2.6 sqrt(16p(1-p)/N) = π ± 2.6 sqrt(π(4-π)/N)

The maximum likely (to 99% certainty) relative error in our measurement of π is therefore

        2.6 sqrt(π(4-π)/N) / π = 2.6 sqrt((4-π)/(πN))

If we want π to 3 decimal places, i.e. π = 3.141..., we are seeking an absolute error of no more than 0.0005, or equivalently a relative error of no more than 0.0005/π ≈ 0.00016. Rounding down to be safe, we demand a relative error of at most 0.0001, or 0.01%. How large must N be so that the relative error is this small?

We want,

        2.6 sqrt((4-π)/(πN)) ≤ 0.0001,   so,   N ≥ (2.6/0.0001)² ((4-π)/π) = 6.76×10^8 ((4-π)/π)
Of course to compute the value of N above, you need to know the value of π = 3.14159. But if you don't remember this, all you need to use is some estimate for π that is smaller than the true value, and the above will give an N that is a bit bigger than you really need. If we therefore estimate π = 3, we should throw the darts

        N = 6.76×10^8 × (4-3)/3 ≈ 2.25×10^8

times! Assuming that you threw darts at the rate of one per second, this would take you 7.14 years. So I hope you do not plan on being rescued any time soon!
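The arithmetic above is easy to mis-key, so here is a few-line check (my own, not part of the notes) that reproduces both the required number of throws and the 7.14-year figure:

```python
z, rel_err = 2.6, 1e-4        # 99% confidence band, 0.01% relative error
pi_guess = 3.0                # deliberate underestimate of pi -> N errs on the large side
throws = (z / rel_err) ** 2 * (4 - pi_guess) / pi_guess
years = throws / (365.25 * 24 * 3600)   # at one dart per second
print(throws, years)          # about 2.25e8 throws, about 7.1 years
```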

But if you had a computer which can throw darts (i.e. pick random numbers) at a rate of about 2.2×10^5 per second (a typical value for a modern computer workstation such as we have here in the Physics Department), you could estimate π to the desired accuracy in about 17 minutes!

What we have just described above can be viewed as a way to compute the area of a circle using random methods. We could have divided the dart board into finer and finer divisions of many, many tiny squares, and then counted one by one the number of squares contained within the circle - this would be the direct way to compute the area. Instead of this we pick points randomly and determine the area of the circle by the fraction of the random points which lie inside the circle. Such a technique for computing the area of any shape is known as the Monte Carlo method. It has very wide application in physics for computing all sorts of things (not just areas of shapes!) that are very hard to compute by more direct means.
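As a concrete illustration of the Monte Carlo method just described, here is a short Python sketch (my own, using the equivalent quarter-circle geometry: a dart thrown uniformly into the unit square lands inside the quarter circle x² + y² < 1 with probability π/4, just as a dart on the full board lands inside the inscribed circle with probability π/4):

```python
import random

random.seed(1)  # fixed seed for reproducibility

def estimate_pi(n_darts):
    """Throw n_darts at the unit square; the fraction landing inside
    the quarter circle x^2 + y^2 < 1 estimates p = pi/4, so 4p estimates pi."""
    hits = sum(1 for _ in range(n_darts)
               if random.random() ** 2 + random.random() ** 2 < 1.0)
    return 4.0 * hits / n_darts

print(estimate_pi(100_000))  # close to 3.14, but not to 3 decimal places
```

With N = 10^5 darts the 99% error bar is 2.6 sqrt(π(4-π)/N) ≈ 0.014, consistent with the estimate above that roughly 2×10^8 throws are needed to pin down the third decimal place.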