Confidence interval simulation

Video transcript

- [Instructor] The goal of this video is to use this scratch pad on Khan Academy, written by Khan Academy user Charlotte Allen, to get a better intuitive sense of confidence intervals. Here we're dealing with a gumball machine where a certain proportion of the gumballs are going to be green, and we can set that proportion. Let's make 60% of the gumballs green. Now let's say someone else comes along who doesn't actually know the proportion of gumballs that are green, but they can take samples. Let's say they take samples of 50 at a time. They draw a sample, and the sample proportion here just happens to be 0.6. Then they draw another sample, and this time the sample proportion is 0.52, meaning 52% of those 50 gumballs happened to be green.

Now you could say, all right, these are all different estimates. But for any given estimate, how confident are we that a certain range around that estimate actually contains the true population proportion? If we look at this tab right over here, that's what confidence intervals are good for. In a previous video, we talked about how you calculate the confidence interval. What we want to say is that there's a 95% chance, and we get that from this confidence level, 95% being the confidence level people typically use, that whatever our sample proportion is, it's within two standard deviations of the true proportion. Or, equivalently, that the true proportion is going to be contained in an interval that extends two standard deviations on either side of our sample proportion. If you don't know the true proportion, the way you estimate the standard deviation is with the standard error, which we've done in previous videos. So this is two standard errors to the right and two standard errors to the left of our sample proportion, and our confidence interval is this entire interval, going from this left point to this right point. As we draw more samples, you can see that our intervals change depending on what our actual sample proportion is, even if that isn't obvious at first, because we use the sample proportion to calculate the confidence interval; we're assuming whoever is doing the sampling does not actually know the true population proportion.

Now what's interesting about this simulation is that we can see what percentage of the time our confidence interval actually contains the true parameter. Let me draw 25 samples at a time. You can see here that right now, for 93% of our samples, the confidence interval actually contained our population parameter. We can keep sampling, and the more samples we take, the closer we get to 95% of the time where our confidence interval does indeed contain the true parameter. Once again, we did all that math in the previous video, but here you can see that confidence intervals, calculated the way we calculated them, actually do a pretty good job of what they claim to do: if we calculate a confidence interval based on a confidence level of 95%, then roughly 95% of the time the true parameter, the population proportion, will be contained in that interval. And I could just keep drawing more and more samples, and we can actually see that happening.
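The sketch below is not the actual Khan Academy scratch pad code; it is a minimal Python version of the same idea, assuming simple random sampling and the two-standard-error interval described above, p-hat plus or minus 2 times sqrt(p-hat(1 - p-hat)/n). It repeatedly draws samples of 50 gumballs from a machine whose true green proportion is 0.6 and counts how often the interval captures that true proportion.

```python
# A minimal sketch (not the actual scratch pad code) of the simulation described
# above: draw repeated samples, build an interval of two standard errors around
# each sample proportion, and count how often it contains the true proportion.
import random

TRUE_P = 0.60      # true proportion of green gumballs (set by us, unknown to the sampler)
SAMPLE_SIZE = 50   # gumballs drawn per sample
NUM_SAMPLES = 10_000

def one_confidence_interval(n, true_p):
    """Draw one sample and return the interval (p_hat - 2*SE, p_hat + 2*SE)."""
    greens = sum(random.random() < true_p for _ in range(n))
    p_hat = greens / n
    se = (p_hat * (1 - p_hat) / n) ** 0.5   # standard error of the sample proportion
    return p_hat - 2 * se, p_hat + 2 * se

hits = 0
for _ in range(NUM_SAMPLES):
    low, high = one_confidence_interval(SAMPLE_SIZE, TRUE_P)
    if low <= TRUE_P <= high:
        hits += 1

print(f"Intervals containing the true proportion: {hits / NUM_SAMPLES:.1%}")
# Typically prints something close to 95%, matching what the video's simulation shows.
```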
Every now and then, for sure, you get a sample where even when you calculate your confidence interval, the true parameter, the true population proportion, is not contained in it. But that is the exception; it happens very infrequently. Roughly 95% of the time, the true population parameter is contained in the interval. Now another interesting thing to see is that if we increase our sample size, our confidence intervals are going to get narrower. So let's increase our sample size, we'll make it 200, and draw some samples. Notice that our confidence intervals are now narrower, but because the confidence level used to calculate these intervals is still 95%, when we draw a bunch of samples, roughly 95% of the time our confidence intervals will still contain the true population proportion, and roughly 5% of the time they won't.
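To see that last point numerically, here is a small extension of the earlier sketch, again only an illustration under the same assumptions, comparing sample sizes of 50 and 200. The average interval width shrinks, but the coverage stays near 95% in both cases.

```python
# Compare interval width and coverage for two sample sizes. Coverage stays near
# 95% for both, but the intervals from larger samples are narrower.
import random

TRUE_P = 0.60
NUM_SAMPLES = 10_000

def simulate(n):
    widths, hits = [], 0
    for _ in range(NUM_SAMPLES):
        greens = sum(random.random() < TRUE_P for _ in range(n))
        p_hat = greens / n
        se = (p_hat * (1 - p_hat) / n) ** 0.5
        low, high = p_hat - 2 * se, p_hat + 2 * se
        widths.append(high - low)
        hits += low <= TRUE_P <= high
    return sum(widths) / NUM_SAMPLES, hits / NUM_SAMPLES

for n in (50, 200):
    width, coverage = simulate(n)
    print(f"n={n:3d}: average interval width {width:.3f}, coverage {coverage:.1%}")
# Because the standard error scales like 1/sqrt(n), the n=200 intervals come out
# about half as wide as the n=50 intervals, while coverage stays close to 95%.
```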