Reference: Conditions for inference on a proportion
When we want to carry out inference on one proportion (build a confidence interval or do a significance test), the accuracy of our methods depends on a few conditions. Before doing the actual computations of the interval or test, it's important to check whether these conditions have been met; otherwise, the calculations and conclusions that follow aren't actually valid.
The conditions we need for inference on one proportion are:
- Random: The data needs to come from a random sample or randomized experiment.
- Normal: The sampling distribution of $\hat{p}$ needs to be approximately normal — we need at least $10$ expected successes and $10$ expected failures.
- Independent: Individual observations need to be independent. If sampling without replacement, our sample size shouldn't be more than $10\%$ of the population.
Let's look at each of these conditions in a little more depth.
The random condition
Random samples give us unbiased data from a population. When samples aren't randomly selected, the data usually has some form of bias, so using data that wasn't randomly selected to make inferences about its population can be risky.
More specifically, sample proportions are unbiased estimators of their population proportion. For example, if we have a bag of candy where $50\%$ of the candies are orange and we take random samples from the bag, some samples will have more than $50\%$ orange candies and some will have less. But on average, the proportion of orange candies in each sample will equal $50\%$. We write this property as $\mu_{\hat{p}} = p$, which holds true as long as our sample is random.
This won't necessarily happen if our sample isn't randomly selected though. Biased samples lead to inaccurate results, so they shouldn't be used to create confidence intervals or carry out significance tests.
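To see this property in action, here is a minimal simulation sketch in Python (the values of $p$, $n$, and the number of samples are hypothetical, not from the article): we draw many random samples from a population where a known proportion of candies are orange and check that the sample proportions average out to that proportion.

```python
# Minimal sketch: sample proportions are unbiased estimators of p.
# The values below (p = 0.5, n = 25, 10_000 samples) are illustrative.
import random

random.seed(42)

p = 0.5            # true proportion of orange candies in the population
n = 25             # size of each random sample
num_samples = 10_000

sample_proportions = []
for _ in range(num_samples):
    # Each candy drawn is orange (1) with probability p.
    sample = [1 if random.random() < p else 0 for _ in range(n)]
    sample_proportions.append(sum(sample) / n)

# The average of the sample proportions should be very close to p,
# illustrating that the mean of the sampling distribution of p-hat is p.
print(sum(sample_proportions) / num_samples)  # ≈ 0.5
```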
The normal condition
The sampling distribution of $\hat{p}$ is approximately normal as long as the expected number of successes and the expected number of failures are both at least $10$. This happens when our sample size $n$ is reasonably large. The proof of this is beyond the scope of AP statistics, but our tutorial on sampling distributions can provide some intuition and verification that this condition indeed works.
So we need:
$$np \geq 10 \quad \text{and} \quad n(1-p) \geq 10$$
If we are building a confidence interval, we don't have a value of $p$ to plug in, so we instead count the observed number of successes and failures in the sample data to make sure they are both at least $10$. If we are doing a significance test, we use our sample size $n$ and the hypothesized value of $p$ to calculate our expected numbers of successes and failures.
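Here is a hedged sketch of this check in code; the function names and the example numbers are illustrations of my own, not from the article.

```python
# Sketch of the "at least 10" (large counts) check for the normal condition.
def normal_condition_for_test(n, p0):
    """Significance test: use the sample size and the hypothesized p0."""
    return n * p0 >= 10 and n * (1 - p0) >= 10

def normal_condition_for_interval(successes, failures):
    """Confidence interval: use the observed counts directly."""
    return successes >= 10 and failures >= 10

# Testing H0: p = 0.25 with n = 60 expects 15 successes and 45 failures.
print(normal_condition_for_test(60, 0.25))    # True
# An interval built from 8 successes and 52 failures fails the check.
print(normal_condition_for_interval(8, 52))   # False
```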
The independence condition
To use the formula for the standard deviation of $\hat{p}$, we need individual observations to be independent. When we are sampling without replacement, individual observations aren't technically independent, since removing each item changes the population.
But the $10\%$ condition says that if we sample $10\%$ or less of the population, we can treat individual observations as independent, since removing each observation doesn't significantly change the population as we sample. For instance, if our sample size is $n = 100$, there should be at least $N = 1000$ members in the population.
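In code, the check could look like this small sketch (the function name and the numbers below are my own illustration):

```python
# Sketch of the 10% condition: when sampling without replacement, treat
# observations as independent only if n is at most 10% of the population N.
def meets_ten_percent_condition(n, population_size):
    return n <= 0.10 * population_size

print(meets_ten_percent_condition(100, 1000))  # True: 100 is exactly 10% of 1000
print(meets_ten_percent_condition(100, 500))   # False: 100 is 20% of 500
```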
This allows us to use the formula for the standard deviation of $\hat{p}$:
$$\sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}}$$
In a significance test, we use the sample size $n$ and the hypothesized value of $p$.
If we are building a confidence interval for $p$, we don't actually know what $p$ is, so we substitute $\hat{p}$ as an estimate for $p$. When we do this, we call it the standard error of $\hat{p}$ to distinguish it from the standard deviation.
So our formula for the standard error of $\hat{p}$ is
$$\sigma_{\hat{p}} \approx \sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$
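To put the two formulas side by side, here is an illustrative sketch; the function names and example values are assumptions, not from the article.

```python
# Sketch: standard deviation of p-hat (known or hypothesized p) versus
# standard error of p-hat (sample proportion substituted for p).
import math

def sd_p_hat(p, n):
    """Standard deviation of p-hat, using a known or hypothesized p."""
    return math.sqrt(p * (1 - p) / n)

def se_p_hat(p_hat, n):
    """Standard error of p-hat, substituting the sample proportion."""
    return math.sqrt(p_hat * (1 - p_hat) / n)

# Significance test: hypothesized p0 = 0.5 with n = 100.
print(sd_p_hat(0.5, 100))   # 0.05
# Confidence interval: observed p-hat = 0.45 with n = 100.
print(se_p_hat(0.45, 100))  # ≈ 0.0497
```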
Want to join the conversation?
- Why don't we use the sample standard deviation for the standard error?
At the end, it says the formula for standard error ≈ sqrt(p-hat*(1-p-hat)/n). But since p-hat comes from a sample, why don't we use the sample standard deviation with the n-1 correction to estimate the true standard deviation of the sampling distribution? Shouldn't it be sqrt(p-hat*(1-p-hat)/(n-1))? (18 votes)
- The appearance of n in the expression for the standard deviation of p-hat is not due to sampling, but due to the number of trials n for the binomial random variable X ~ B(n, p), where n is the number of trials and p is the probability of a success in any given trial.
Unfortunately, in this context, the letter p is used for both the probability and the proportion.
So, the random variable p-hat is actually a scaling, by 1/n, of the binomial random variable X ~ B(n, p). That is, p-hat = X/n. That's how we get the proportion of successes: divide the number of successes, X, by the number of trials, n.
So, by the properties of scaling a random variable by the factor 1/n, the expected value E(p-hat)=(1/n)E(X) and the variance V(p-hat)=(1/n^2)V(X).
Thus, the standard deviation for p-hat is given by the square root of (1/n^2)V(X).
Recall that the mean and variance of the binomial random variable are np and np(1-p), respectively. Hence the variance of p-hat is
V(p-hat) = np(1-p)/n^2,
so the standard deviation of p-hat is
sqrt(np(1-p)/n^2) = sqrt(p(1-p)/n), as shown in the video.
Hope this helped,
with kind regards...(12 votes)
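As a quick numeric sanity check of the derivation above (a sketch of my own, not part of the original reply), one can simulate X ~ B(n, p), scale it by 1/n, and compare the empirical standard deviation of p-hat with sqrt(p(1-p)/n):

```python
# Simulate p-hat = X/n for X ~ B(n, p) and compare its empirical standard
# deviation with the formula sqrt(p(1-p)/n). Values below are illustrative.
import math
import random

random.seed(0)
n, p, trials = 50, 0.3, 100_000

p_hats = []
for _ in range(trials):
    x = sum(1 for _ in range(n) if random.random() < p)  # X ~ B(n, p)
    p_hats.append(x / n)                                 # p-hat = X / n

mean = sum(p_hats) / trials
empirical_sd = math.sqrt(sum((v - mean) ** 2 for v in p_hats) / trials)

print(empirical_sd)                # close to the theoretical value below
print(math.sqrt(p * (1 - p) / n))  # sqrt(0.3 * 0.7 / 50) ≈ 0.0648
```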
- I remember another condition where something (sample size, maybe?) had to be equal to or greater than 30. What was that? (7 votes)
- It is the "large enough" condition: when we are calculating for means, we don't have a value of p, so we can't calculate np and nq. Instead, we check whether n ≥ 30. If so, it meets the large enough condition in place of the success/failure condition. (9 votes)
- Here we've approximated the standard deviation of the sample proportion by taking the formula sigma_p_hat = sqrt(p(1-p)/n) and just replacing p by p_hat to get sqrt(p_hat(1-p_hat)/n).
But in one of the videos earlier, we instead used sigma_p_hat = sigma/sqrt(n) and replaced the population standard deviation sigma with the sample standard deviation s to get s/sqrt(n).
These two formulas give different results, because s/sqrt(n) = sqrt(p_hat(1-p_hat)/(n-1)) due to the Bessel correction factor.
Which of these two approximations is best? I'm guessing the second one? (6 votes)
- Can someone show me the proof for the normal condition, or reference a link?
All I can find is information about the 10% rule. (4 votes)
- Would I be able to apply this to video game stat distributions? (3 votes)
- What is the difference between the standard error of the mean (sigma^2/n) and the standard error of the sample proportion mentioned above? Thanks! (3 votes)
- Same difference. For a Bernoulli distribution, sigma^2 = p * (1 - p), so sigma^2/n = p(1 - p)/n. (1 vote)
- It talks about significance tests; those are yet to be explained in this course, right? (3 votes)
- Wouldn't a lack of independence also affect the sampling distribution of the sample mean? (1 vote)
- Yes, it would! I'm fairly sure the CLT assumes that the observations in the samples you're taking the mean of are independent.
Also, the formula for the SD of the sampling distribution of the sample mean would not work if our observations aren't independent. (3 votes)
- If our data does not meet the normality, randomness, and independence conditions for statistical inference, what is the consequence? Can you still technically make inferences if you do not meet one or more of these conditions? (1 vote)
- If the data doesn't meet all the conditions (which would rarely happen in a class), then you basically can't carry out the significance test, confidence interval, etc. (2 votes)
- Any ways of understanding the formula better? (1 vote)