

# Statistical significance of experiment

Sal determines if the results of an experiment about advertising are statistically significant. Created by Sal Khan.

## Want to join the conversation?

• This is more a heads-up than a question. It may be just me, but when I first watched this video I was utterly confused and misled by it. The conclusion that the experiment was significant seemed to me - and to others in the comments, I see - to contradict the point that had been made throughout the video. I couldn't make sense of how results that appeared not to be significant should, just the opposite, be considered significant. It turns out I had it backwards, because it wasn't at all clear to me what a re-randomization is! A re-randomization is all about mixing up at random the results of one and the same experiment. Imagine reshuffling the pages of the Encyclopedia Britannica many times to see how often they just happen to land in the proper order. If a specific pattern observed in an experiment emerges often from re-randomization, this suggests that pure chance alone is often sufficient to produce the same pattern, thus invalidating the experiment. Conversely, the less often a pattern emerges from a re-randomization of the results, the more the results of the experiment are validated (the point being that chance alone proves unlikely to produce said results). The notion and the underlying logic of re-randomization are better explained in the next video.
In short, if this video confuses you, it might make more sense after you watch the next one. Hope this helps.

• What do you mean when you say "significant"? Is it an important study? I don't understand. Please give a couple of very simple examples.

• Let's assume for a moment that watching food commercials makes the kids eat more. Now if we take a treatment group and a control group and perform the experiment a large number of times, shouldn't the mean of the treatment group be higher than that of the control group in most of the cases?
In my opinion, it should be, as long as we distribute the kids randomly between the treatment and control groups every time we perform the experiment.

As an analogy, we know a die is biased if it shows the same number most of the time when we throw it; similarly, if there is a difference between the means of the treatment and control groups in most of the trials, we should say that it is due to the type of commercials.

• Dr C, I've read all your answers in this topic explaining the intricacies of re-randomization, but it seems to me that my brain freezes even more after those answers... to me the situation is rather illogical... Imagine, for example, somebody takes two groups - 250 zebras and 250 lions - for a study, and gives both groups grass (or whatever zebras eat) and meat in the same proportion, and after observation it appears that the zebras have eaten only grass and the lions only meat... Then some statistician says "What the hell! Let's re-randomize this..." and makes 150 mixed groups of zebras and lions just to show that there is no difference between their consumption of grass and meat... Taking that to an extreme, the statistician posits that there is no difference between the eating habits of zebras and lions... Of course, this example is a huge exaggeration, but it seems to me that it's from the same category... So I still don't understand the aim of this video lesson, though I've had almost no problems with any of the others on this site... And, of course, I could be - and probably am - wrong somewhere, but I just don't know where...
• I have a question. I've completely understood why we need to re-randomize the data points between the groups and recalculate our indicator (the difference between the means), to see how often we could get our result just by chance. However, I now find myself wondering: how many simulations are needed to correctly estimate the probability of getting a specific value of the indicator just by chance? In other words, how do we know that 150 simulations are enough? Ideally we would want to recalculate the indicator for every possible arrangement of the data points in the two groups. But that would be 500 choose 250 in our example, if I'm not mistaken, which is roughly 10^149 combinations! So how do we know that 150 simulations are enough?

• You're absolutely right. Once datasets become moderately large, it's not really possible to run all of the permutations. Instead, we randomly select a large number of permutations. The probability (p-value) we calculate is then an estimate of the actual p-value we would have gotten if we had run all of the permutations. One thing we can then do is build a confidence interval for that probability, which will depend on the estimated probability as well as the number of simulations we ran. Once we run enough simulations, this confidence interval will be quite narrow, so we'll be as confident in our result as we need to be.

So how many replications do we really need? It depends on what the p-value is and how certain we need to be. Whenever I do tests of this nature, I usually start off with 1000 or 10000, depending on the complexity of the algorithm - some simulations run very fast, others take a bit longer.
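To make the confidence-interval idea above concrete, here is a minimal Python sketch using the normal approximation for a proportion (the function name `p_value_ci` is my own illustrative choice, and the approximation gets rough when only a handful of simulations are as extreme as the observed result):

```python
import math

def p_value_ci(num_extreme, num_sims, z=1.96):
    """Approximate 95% confidence interval for a p-value estimated
    from num_sims random permutations, of which num_extreme were at
    least as extreme as the observed result (normal approximation)."""
    p_hat = num_extreme / num_sims
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / num_sims)
    return p_hat, max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# 2 extreme results out of 150 simulations: a wide interval around 1.3%
print(p_value_ci(2, 150))
# The same estimated p-value from 100x more simulations: much narrower
print(p_value_ci(200, 15000))
```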
• We should care about these two points, not just one of them, shouldn't we?

• In the video, Sal looks at the probability that there would be a 10 gram difference, and says that there is a 2/150 probability that the results are due to chance, indicating that he is adding the data point from 10 and the point from -10. This makes sense, as both are indicative of a 10 gram difference (regardless of whether it is +10 or -10). But in one of the problems (the autism/diet one), one of the hints only includes the data points from the positive side.

So, if the result of THIS experiment had, in fact, been an 8 gram difference, would it have been significant? Would there be a 4/150 (approx. 2.7%) probability - adding from the negative side only - or a 9/150 (6%) probability - adding from both sides - of getting such results by chance?

• Since it's not entirely clear which way they did the subtraction, my recommendation would be to go with the two-sided test: meaning we add from both directions, so we'd get 9, and hence the probability of an 8 gram difference, assuming the two groups have no real difference, is 6%.

If the question had given us an indication of direction, we could have used that and gotten 4 or 5 (2.7% or 3.3%) instead. That's certainly legitimate, but the television and snacking example doesn't give us that.
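To make the counting concrete, here is a small hypothetical Python helper for the one-sided versus two-sided distinction discussed above (the function name and the example figures in the comments are illustrative, echoing the 4/150 and 9/150 numbers from this thread):

```python
def permutation_p_values(sim_diffs, observed):
    """Estimate one-sided and two-sided p-values from a list of
    re-randomized mean differences, counting simulations at least
    as extreme as the observed difference. The one-sided count
    assumes the observed difference is in the positive direction."""
    n = len(sim_diffs)
    one_sided = sum(d >= observed for d in sim_diffs) / n            # one direction
    two_sided = sum(abs(d) >= abs(observed) for d in sim_diffs) / n  # both directions
    return one_sided, two_sided

# Hypothetical numbers from this thread: with 150 simulated differences
# and an observed 8 gram difference, 4 simulations at or beyond +8 give
# a one-sided p-value of 4/150 (about 2.7%); adding the 5 at or beyond
# -8 gives a two-sided p-value of 9/150 (6%).
```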
• I don't understand why, during the randomization, Sal creates new groups that mix kids who saw food videos with kids who saw game videos.

If we wanted to test for significance, shouldn't we have repeated the experiment 1000 times, keeping the kids who watched the food commercials in one group and the others in their own group, in order to be able to compare the means?

I feel like we're comparing apples with pears, which doesn't make sense.

• It's because we are assuming that the treatment - watching a food commercial versus a non-food commercial - has no effect on how much a person eats. Under that assumption, the groups are equivalent; it was only random chance that we got the results we did, and any random shuffling of the kids between the two groups would have been an equally likely outcome.

Hence, we perform that shuffling, and calculate the difference between the groups each time. In doing so, we get a whole distribution of these differences.

If our assumption is true, then the actual difference that we observed (wasn't it 8 or 10 or something like that?) should be somewhere in the middle of that distribution. It doesn't have to be in the exact middle, but close enough that we don't think the observed result would be unreasonable. On the other hand, if the observed difference is out in the tails of this distribution of differences based on the shuffling, then we would think that our assumption is a very poor one, and therefore we would conclude that the type of commercial does influence how much someone eats.   
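The procedure described in this answer is exactly a permutation (re-randomization) test. Here is a minimal Python sketch of it, assuming two lists of measurements such as grams eaten per child (the function name and the 10,000-shuffle default are my own illustrative choices, not from the video):

```python
import random

def permutation_test(group_a, group_b, num_sims=10_000, seed=0):
    """Re-randomization test for a difference in group means.

    Pools both groups, repeatedly shuffles the pooled values into two
    new groups of the original sizes, and records the difference in
    means each time. The returned p-value is the fraction of shuffles
    whose absolute difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(num_sims):
        rng.shuffle(pooled)
        mean_a = sum(pooled[:n_a]) / n_a
        mean_b = sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(mean_a - mean_b) >= abs(observed):
            extreme += 1
    return observed, extreme / num_sims
```

If the returned p-value is small - that is, the observed difference sits out in the tails of the shuffled distribution - then chance alone is an unlikely explanation, and the result is called statistically significant.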