## Statistics and probability


### Lesson 3: Estimating a population mean

- Introduction to t statistics
- Simulation showing value of t statistic
- Conditions for valid t intervals
- Reference: Conditions for inference on a mean
- Conditions for a t interval for a mean
- Example finding critical t value
- Finding the critical value t* for a desired confidence level
- Example constructing a t interval for a mean
- Calculating a t interval for a mean
- Confidence interval for a mean with paired data
- Making a t interval for paired data
- Interpreting a confidence interval for a mean
- Sample size for a given margin of error for a mean
- Sample size and margin of error in a confidence interval for a mean


# Simulation showing value of t statistic

See why we use t statistics when building confidence intervals for a mean using the sample standard deviation in place of the population standard deviation.

## Want to join the conversation?

- How can I get this simulation? Can someone provide a link, please. (6 votes)
  - Did some digging! The subtitles incorrectly show Charlotte Allen as the user, while it actually is Charlotte Auen. Found by searching Charlotte Auen, then going to her projects page. Link below!
    https://www.khanacademy.org/computer-programming/confidence-intervals-about-a-mean/6334175176491008 (18 votes)

- Why do we have to use the t-table to calculate the confidence interval for a mean, but the z-table for a proportion? Is there any difference? (13 votes)
- When using "z with s", we got a hit rate of approximately 92%, which is not that different from the 95% hit rate when using "z with sigma", so is the difference really significant? (4 votes)
- Okay, so is there an intuitive explanation for why it is better to use the t-statistic instead of z? I have been thinking about this for a long time because I cannot recall it being explained in my lectures. If you have a neat answer for this, please share! (3 votes)
  - I think it's because you have to account for the variability of the sample standard deviation. The t-model does this by having wider tails, which widens the confidence interval, and that makes sense because there is more variability when you have fewer samples. (3 votes)

- If using a t-table and the sample standard deviation is always preferable to using a z-table and the sample standard deviation, then why even teach the method that uses the z-table? (3 votes)
- How can we find and experiment with this simulation? (2 votes)
- Which is better, "t and s" or "z and sigma"? (2 votes)
- Why does using z with the sample standard deviation have a lower capture rate than using t with the sample standard deviation? (1 vote)
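Several of the questions above ask why the t-table is preferable when all you have is the sample standard deviation. A quick way to see the mechanical difference is to compare the critical values themselves: for small samples, t* is noticeably larger than z*, which widens the interval just enough to restore the intended coverage. A minimal sketch, assuming SciPy is available, using the 95% level and n = 12 from the video's simulation:

```python
from scipy import stats

n = 12                      # sample size used in the video's simulation
conf = 0.95
tail = (1 + conf) / 2       # 0.975 for a two-sided 95% interval

z_star = stats.norm.ppf(tail)          # critical value from the z (normal) table
t_star = stats.t.ppf(tail, df=n - 1)   # critical value from the t-table, 11 df

print(f"z* = {z_star:.3f}")  # about 1.960
print(f"t* = {t_star:.3f}")  # about 2.201 -- a wider interval for small n
```

As n grows, the t-distribution approaches the normal distribution and t* shrinks toward z*, which is why the distinction matters most for small samples.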

## Video transcript

- In a previous video we talked about trying to estimate a population mean with a sample mean and then constructing a confidence interval about that sample mean. And we talked about different scenarios. We could use a z-table plus the true population standard deviation, and that actually would construct pretty valid confidence intervals, but the problem is you don't know the population standard deviation. And so you might try to use a z-table to find your critical values plus the sample standard deviation, but what we talked about is that this doesn't actually do a good job of calculating our confidence intervals, and we're gonna see that experimentally in a few seconds.

And so instead we have something called a t-statistic, where if we want our critical value we use a t-table instead of a z-table. And then we use that in conjunction with our sample standard deviation, and then all of a sudden we are actually going to have pretty good confidence intervals.

To make this a little bit more real, let's look at a simulation. So this is a scratch pad on Khan Academy, made by Khan Academy user Charlotte Auen. And the whole point there is to see what our confidence intervals look like with these different scenarios. So let's say we have a true population mean of 2.0; let's say it's the mean number of apples people eat a day. The true population mean is two. That seems high, but maybe it's in a certain country that has a lot of apples. And let's say that we know that the population standard deviation is 0.5. And we're going to create confidence intervals with a goal of having a 95 percent confidence level, and we're gonna take sample sizes of 12.

So first, we can construct our confidence intervals using z and sigma, which is a legitimate way to do it. And so let's just draw a bunch of samples here. And so we do see that it looks like it is roughly 95 percent. When we keep taking these samples and constructing these confidence intervals, 95 percent of the time these confidence intervals contain our true population mean. So these look like good confidence intervals, but what we've talked about is that normally, when you're doing this type of thing, this type of inferential statistics, you don't know the population standard deviation. You don't know sigma.

So instead, you might be tempted to use z with our sample standard deviations. But if you look at that for these exact same samples we just calculated, notice, now when we did it over and over again, we've done this 625 times, in this scenario where we keep calculating the confidence intervals with z and s, the true population mean is contained in the intervals only 92.2 percent of the time. Now we could keep going. So we have a much lower hit rate than we would hope to have if we were actually using z and sigma.

Now what's neat is, if we use t, use a t-table, notice, this is getting much closer, and this is neat! Because with a t-table, and something we can actually get from the sample, the sample standard deviation, we're actually able to have a pretty close hit rate to what we would have if we actually knew the population standard deviation. So that's the value of t and t-statistics, and we're going to give more and more examples, including using a t-table, in future videos.
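The experiment the video runs can be reproduced in a few lines. This is a sketch, not Charlotte Auen's original scratch pad: it draws many samples of size 12 from a normal population with mean 2.0 and standard deviation 0.5, builds a 95% interval two ways ("z with s" and "t with s"), and counts how often each interval captures the true mean. It assumes NumPy and SciPy are installed:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5          # true population mean and standard deviation
n, trials, conf = 12, 10_000, 0.95

tail = (1 + conf) / 2
z_star = stats.norm.ppf(tail)          # critical value from the z-table
t_star = stats.t.ppf(tail, df=n - 1)   # critical value from the t-table

samples = rng.normal(mu, sigma, size=(trials, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)        # sample standard deviation
se = s / np.sqrt(n)

# Fraction of intervals that capture the true mean (the video's "hit rate").
# The interval captures mu exactly when |xbar - mu| <= critical value * se.
hit_z = np.mean(np.abs(xbar - mu) <= z_star * se)  # z with s: too narrow
hit_t = np.mean(np.abs(xbar - mu) <= t_star * se)  # t with s: correct width

print(f"z with s captures mu about {hit_z:.1%} of the time")
print(f"t with s captures mu about {hit_t:.1%} of the time")
```

With enough trials, "z with s" settles near 92% (matching the 92.2% seen in the video) while "t with s" stays close to the intended 95%, because the wider t critical value compensates for the extra variability of s at small n.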