
## Statistics and probability

### Course: Statistics and probability > Unit 12

Lesson 5: More significance testing videos

# Z-statistics vs. T-statistics

Sal breaks down the difference between Z-statistics and T-statistics. Created by Sal Khan.

## Want to join the conversation?

• At , you draw an arrow to the sample standard deviation and say, "if this is small, specifically less than thirty, you're going to have a T-statistic." Shouldn't the arrow be pointed at the n? Isn't it if n is under 30? I was unaware that the standard deviation of the sample had any effect on whether you use a Z-test or a T-test.
• From the author: Yes, it should point to n, not s.
• What do you do if the number of observations is exactly equal to 30? Would it be normal then?
• MyMathLab and other math programs would probably say to use the t-statistic, because technically the rule says greater than 30, not greater than or equal to 30.
• In a problem, how do you know when you need to use the Z chart versus the T table?
• If you know the standard deviation of the population, use the z-table. If you don't but you have a large sample size (traditionally over 30, but some teachers might go up to 100 these days), then assume that the population standard deviation is the same as the sample standard deviation and use the z-table. But if you don't know the population standard deviation and have a relatively small sample size, then you use the t-table for greatest accuracy.
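The decision rule in this answer can be sketched as a tiny function. This is only an illustration of the rule of thumb described above; the function name and the cutoff of 30 are the conventions from this answer, not a fixed standard:

```python
def choose_statistic(population_sigma_known: bool, n: int) -> str:
    """Rule of thumb from the answer above: use z when the population
    standard deviation is known, or when the sample is large enough
    that s is a reasonable stand-in for sigma; otherwise use t.
    The cutoff of 30 is a convention, not a law of nature."""
    if population_sigma_known:
        return "z"
    if n > 30:  # large sample: treat s as if it were sigma
        return "z"
    return "t"

print(choose_statistic(True, 10))    # z  (sigma known, any n)
print(choose_statistic(False, 12))   # t  (sigma unknown, small n)
print(choose_statistic(False, 100))  # z  (sigma unknown, large n)
```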
• Hello, I don't get it... What is the difference between the Z and t statistic? It's the same formula for both, and the graph is not different either. A clue, anyone? Thanks!
• The Z-score and t-score tables themselves contain different numbers, reflecting the fact that you can't have as much confidence in the data with a smaller sample size. You'll get a different probability for Z = 1.382 than for t = 1.382.
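You can see the difference in the tables directly by comparing critical values. A quick sketch, assuming SciPy is available (the choice of df = 9, i.e. n = 10, is just an example):

```python
from scipy import stats

# 95th percentile (one-tailed) from each table
z95 = stats.norm.ppf(0.95)      # standard normal: about 1.645
t95 = stats.t.ppf(0.95, df=9)   # t with 9 degrees of freedom: about 1.833

# The t table demands a larger cutoff because a small sample gives
# less confidence, so the t distribution has fatter tails.
print(round(z95, 3), round(t95, 3))
```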
• Why is the mean of the t distribution zero and the mean of the z distribution equal to the population mean?
• Below is what I need to figure out:
On a test whose distribution is approximately normal with a mean of 50 and a standard deviation of 10, the results for three students were reported as follows:

Student Opie has a T-score of 60.
Student Paul has a z-score of -1.00.
Student Quincy has a z-score of +2.00.

Obtain the z-score and T-score for EACH student.
Who did better on the test?
How many standard deviation units is each score from the mean? Compare the results of the three students.
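Based on the stated mean of 50 and standard deviation of 10, "T-score" in this problem appears to be the educational-testing T-score, defined as T = 50 + 10z — that reading is an assumption, since the problem itself doesn't define it. Under that assumption the conversions are simple arithmetic:

```python
def z_to_T(z: float) -> float:
    # Assumed T-score convention: mean 50, standard deviation 10
    return 50 + 10 * z

def T_to_z(T: float) -> float:
    return (T - 50) / 10

# The three students from the problem above:
print(T_to_z(60))    # Opie:   T = 60 -> z = 1.0
print(z_to_T(-1.0))  # Paul:   z = -1 -> T = 40.0
print(z_to_T(2.0))   # Quincy: z = +2 -> T = 70.0, the best score
```

Each z-score is also the number of standard deviation units from the mean, so Quincy (2 above) did best, then Opie (1 above), then Paul (1 below).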
• What is the difference between a "normal" distribution and a normalized distribution?
• Good question! They are TOTALLY DIFFERENT!

A normal distribution just means the good old bell curve that you know and love. The "standard" normal distribution is the bell curve with mean 0 and standard deviation 1, which lets you use your Z-table.

A normalized distribution means any distribution which has a total area (or probability) under it equal to 1. So of course every probability density function (PDF) should be normalized, but sometimes you make up some new shape for a PDF (say, some function f(x)), and you are happy with the shape, but then you calculate the total area under the curve and it's, say, 13. Well, then you have to take the additional step of dividing your new function by 13, so your normalized PDF would be f(x)/13, which would now have a total area of 1 underneath.

Just to be clear, the standard normal distribution is, of course, normalized.
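The normalization step described above can be done numerically. A minimal sketch, where the shape f is made up purely for illustration and the integral is a simple trapezoid rule over a wide interval:

```python
import math

# A made-up, unnormalized shape (an assumption for illustration only).
# Its exact total area happens to be 6.
def f(x):
    return math.exp(-abs(x)) * (1 + x * x)

def area(g, lo=-50.0, hi=50.0, steps=200_000):
    """Trapezoid-rule integral of g over [lo, hi]."""
    h = (hi - lo) / steps
    total = 0.5 * (g(lo) + g(hi))
    for i in range(1, steps):
        total += g(lo + i * h)
    return total * h

A = area(f)                  # total area under f: about 6
pdf = lambda x: f(x) / A     # divide by the area to normalize
print(round(A, 3))           # -> 6.0
print(round(area(pdf), 3))   # -> 1.0, a proper (normalized) PDF
```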
• The comment that for n > 30, (x̄ − μ)/(s/√n) is normal is not correct. The convergence of the CLT depends on how non-normal the population distribution is. For example, consider a Bernoulli trial. The rule of thumb to use the normal approximation is that nπ > 5 and n(1 − π) > 5. If π = 1%, then n must exceed 500. n = 30 is not large enough.

When n>30 or so the t and the z distribution are approximately equal and textbooks stop giving percentiles of the t distribution in the tables.
• I think that for a non-binomial setting, which has more than two outcomes and deals with averages, you can attain a probability through a Z-statistic so long as n > 30. For a binomial setting, like the Bernoulli trial you give as an example, with only two outcomes and dealing with proportions, the rule of thumb for the normal approximation is indeed np > 5 and n(1 − p) > 5 (other sources use np > 10 and n(1 − p) > 10). So the difference in the rules of thumb for the normal approximation depends on whether you are working with means or with proportions.
I thought z = (x̄ − μ) / σ
You said z = (x̄ − μ) / (σ / √n)

• X and X̅ are standardised slightly differently. In both cases, the denominator is the square root of the variance, like so:
For X, Z = (X-μ) / σ
For X̅, Z = (X̅ - μ) / (σ / √n)

This fits with what we know about the central limit theorem. For X, the variance is σ². For X̅, however, the variance is σ²/n, because we expect that X̅ will have a smaller variance (or tend to be closer to the mean) as n increases.
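The σ/√n claim above is easy to verify by simulation: draw many samples of size n, record each sample mean, and compare the spread of those means against σ/√n. A stdlib-only sketch (the values of μ, σ, and n are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)

mu, sigma, n = 10.0, 4.0, 25
trials = 20_000

# Draw many samples of size n and record each sample mean.
means = []
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.fmean(xs))

observed = statistics.stdev(means)  # empirical spread of the sample means
predicted = sigma / n ** 0.5        # sigma / sqrt(n) = 0.8
print(observed, predicted)
```

The two numbers come out very close, matching the central limit theorem's prediction that X̄ has standard deviation σ/√n.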