
7.2 The Central Limit Theorem

In the previous section we first saw that we can use frequency distributions to compute probabilities of various events. Then we saw that we could use normal distributions as a convenient shortcut to compute those probabilities. With that technique we were able to compute all kinds of probabilities based only on a computed sample mean and sample standard deviation, together with the assumption that the (unknown) distribution of the variable in question was approximately normal, with the computed mean and standard deviation as its parameters.

But this only works if we assume the original distribution is (approximately) normal, so what we are hoping for is some mathematical justification that says, in effect, that most distributions are - in some sense - "normal". Such a theorem does indeed exist, and it is one of the cornerstones of statistics: the Central Limit Theorem. It has many practical and theoretical implications; in particular, it will provide us with a theoretical justification for using a normal distribution to compute certain probabilities.

In this course we will simply state the theorem without proof. In more advanced courses we would provide a justification and/or a mathematical proof, but for our current purposes it will be enough to understand the theorem and to apply it in subsequent chapters. Colloquially speaking, we have actually already seen the Central Limit Theorem: in the previous chapter we noted that most histograms are (more or less) bell-shaped, which is in fact one way to state it:

Central Limit Theorem, colloquial version 1

Most histograms (frequency distributions) are approximately normal

To state this theorem precisely, we need to specify, among other things, exactly which normal distribution we are talking about, and under what circumstances we are considering samples.

Central Limit Theorem for Means

Suppose x is a variable for a population whose distribution has a mean m and standard deviation s, but whose shape is unknown. Suppose further we repeatedly select random samples of size N from that population and compute the sample mean each time we do this. Finally, we plot the distribution (histogram) of all these sample means.

Then the conclusion is that the distribution of all these sample means is a normal distribution (bell shaped) with mean m (the original mean) and standard deviation s / sqrt(N).
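We can check this claim experimentally with a short simulation. The sketch below (in Python, with hypothetical choices of population, sample size, and number of trials) draws many samples from a deliberately non-normal population - the uniform distribution on [0, 1] - computes each sample's mean, and then verifies that the collection of sample means has roughly the mean m and standard deviation s / sqrt(N) that the theorem predicts.

```python
import math
import random
import statistics

random.seed(1)  # fixed seed so the experiment is reproducible

# A deliberately non-normal population: uniform on [0, 1].
# Its mean is 0.5 and its standard deviation is 1/sqrt(12).
pop_mean = 0.5
pop_sd = 1 / math.sqrt(12)

N = 30           # sample size (an arbitrary illustrative choice)
trials = 10_000  # how many samples we draw and average

# Repeatedly draw a sample of size N and record its mean.
sample_means = [
    statistics.fmean(random.random() for _ in range(N))
    for _ in range(trials)
]

# The theorem predicts:
#   mean of the sample means  ~ pop_mean          (about 0.5)
#   sd of the sample means    ~ pop_sd / sqrt(N)  (about 0.053)
print("mean of sample means:", statistics.fmean(sample_means))
print("sd of sample means:  ", statistics.stdev(sample_means))
print("predicted sd:        ", pop_sd / math.sqrt(N))
```

Plotting a histogram of `sample_means` would also show the bell shape, even though the underlying uniform population is flat, not bell-shaped at all.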

This theorem is perhaps somewhat hard to understand, so here is a more colloquial restatement of the theorem.

Central Limit Theorem, colloquial version 2

No matter what shape a distribution for a population has, the distribution of means computed for samples of size N is bell shaped (normal). Moreover, if we know the mean and standard deviation of the original distribution, the mean for the sample means will be the same as the original one, while the new standard deviation will be the original one divided by the square root of N (the sample size).

The importance of this theorem is that it allows us to start with an arbitrary distribution, yet use a normal distribution, with the appropriate mean and standard deviation, to perform various computations. Since Excel contains the NORMDIST function, we can therefore compute probabilities involving sample means for many distributions, regardless of whether the original distribution is normal or not.
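Outside of Excel, the same computation can be done with Python's `statistics.NormalDist`, which plays the role of NORMDIST here. The numbers below (a population with mean 100 and standard deviation 15, sample size 36, cutoff 103) are assumptions chosen purely for illustration:

```python
import math
from statistics import NormalDist

# Hypothetical population values, assumed for illustration:
m, s = 100.0, 15.0   # population mean and standard deviation
N = 36               # sample size

# By the Central Limit Theorem, the sample mean is approximately
# normal with mean m and standard deviation s / sqrt(N).
sampling_dist = NormalDist(mu=m, sigma=s / math.sqrt(N))

# Probability that the mean of a sample of 36 falls below 103 -
# analogous to Excel's NORMDIST(103, 100, 15/SQRT(36), TRUE).
p = sampling_dist.cdf(103)
print(round(p, 4))  # -> 0.8849
```

Note that the normal model is applied to the *sample mean*, with the shrunken standard deviation s / sqrt(N), not to a single observation from the original distribution.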

For the record, there is an additional Central Limit Theorem for taking sums of samples, but we will not need that in our discussions.