
## Statistics and probability

Course: Statistics and probability > Unit 9 > Lesson 9: Poisson distribution

# Poisson process 2

More of the derivation of the Poisson Distribution. Created by Sal Khan.

## Want to join the conversation?

• So I understand that if we know we have a binomial distribution, our expected value for any number of trials is np, but how do we know that some of these things even follow a binomial distribution? How do we know that the number of cars passing by in an hour is distributed binomially? Is it because, given any time interval, a car either does or does not pass, and there are no other options?

Edit: I think I understand it... if we separate the hour into an infinite number of intervals, then even though the probability of a car passing in any single one of those intervals is vanishingly small, over the course of an hour there will, on average, be a certain number of exact moments where a car DID pass by, i.e. our expected value.

• Sometimes you know that things cannot be binomial because you know how they work. For instance, if you are counting train carriages instead of cars, you couldn't use a binomial or Poisson distribution, because carriages don't come along at purely random intervals: they come in groups called trains.

The big picture here, though, is that in these early videos Sal is presenting statistics that describe stuff. Armed with that knowledge, in later videos he will build statistical tests of whether hypotheses are true. For example, you can test the hypothesis that the arrival of cars follows a binomial probability distribution. The test I know of is called a Goodness of Fit test, and Sal talks about it in a later video.
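The convergence being discussed above can be checked numerically. Below is a minimal Python sketch (the variable names and the choice of λ = 9 and k = 5 are my own, not from the video) comparing a Binomial(n, λ/n) probability with the Poisson probability as n grows, along with the limits lim (1 - λ/n)^n = e^(-λ) and lim (1 - λ/n)^(-k) = 1 that come up later in the thread:

```python
from math import comb, exp, factorial

lam = 9  # expected cars per hour (lambda)
k = 5    # ask: probability that exactly 5 cars pass

def binom_pmf(n, p, k):
    # P(X = k) for X ~ Binomial(n, p)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    # P(X = k) for X ~ Poisson(lam)
    return lam**k * exp(-lam) / factorial(k)

# Split the hour into n tiny intervals, each a Bernoulli trial with
# success probability lam/n, so that n*p = lam stays fixed.
# As n grows, the binomial probability approaches the Poisson one.
for n in (10, 100, 1000, 100_000):
    print(n, binom_pmf(n, lam / n, k))
print("poisson:", poisson_pmf(lam, k))  # ≈ 0.0607

# The two limits used in the derivation, checked at large n:
n = 1_000_000
print((1 - lam / n) ** n, exp(-lam))  # (1 - lam/n)^n  -> e^(-lam)
print((1 - lam / n) ** (-k))          # (1 - lam/n)^-k -> 1
```

The key point is that p = λ/n shrinks as n grows, which is why p stays a legitimate probability (below 1) once n is large enough, answering the "900% chance" worry raised below.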
• Does this make sense for low n? E.g., if 9 cars pass in an hour, then we use lambda = 9. We say this is equivalent to n*p, where n is the number of trials and p is the probability of a car passing in one trial. So say we only do 1 trial: then p = 9, which would mean a 900% chance of a car passing in one hour. lambda = n*p seems to make more sense when n is large (resulting in p less than 1). So is lambda = np a big assumption we make?

• Why do we need the Poisson process? I am having trouble identifying whether a problem follows a binomial, Poisson, or hypergeometric distribution. I understand the question and have all the data, but I can't work out which distribution to use. Are there any ways to identify them?

• Why could we not conclude that, as n approaches infinity, (1 + a/n)^n is just 1 (rather than e^a)?

• You are able to build all the bridges from our ignorance to your knowledge, and step by step you allow our minds to cross. You are truly great! Thank you for doing what you do best and sharing it!

• Hi Sal... at one point you simplified lim(n->∞) (1 - λ/n)^n as e^(-λ) and lim(n->∞) (1 - λ/n)^(-k) as 1. My question is: if you could simplify (1 - λ/n) as (1 - 0) as n approaches ∞ in lim(n->∞) (1 - λ/n)^(-k), why did you not simplify lim(n->∞) (1 - λ/n)^n as (1 - 0)^n, which would also be 1 as n approaches ∞? That would have left us with the further simplified value λ^k/k!.

• Right, so if n = successes per smaller interval, and we want to work it out as n -> ∞, then surely that means n happens an infinite number of times in the interval?

And also, how can x!/(x-k)! = x(x-1)(x-2)...[x-(k+1)]?

If we take x = 7 and k = 3, that gives 7!/(7-3)! = 210, but 7(7-1)(7-2)(7-3)[7-(3+1)] = 7·6·5·4·3 = 2520. How can they be equal?
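For what it's worth, the product in the derivation stops at (x - k + 1), not at (x - (k+1)): the identity is x!/(x-k)! = x(x-1)...(x-k+1), with exactly k factors, so for x = 7 and k = 3 both sides are 7·6·5 = 210. A quick Python check (a sketch of my own, not from the video):

```python
from math import factorial

def falling_factorial(x, k):
    # x * (x-1) * ... * (x-k+1): exactly k factors, stopping at x-k+1
    result = 1
    for i in range(k):
        result *= x - i
    return result

x, k = 7, 3
lhs = factorial(x) // factorial(x - k)  # 7!/4! = 210
rhs = falling_factorial(x, k)           # 7*6*5 = 210
print(lhs, rhs)  # both print 210
```

The mismatch in the question above comes from multiplying past the stopping factor: the extra terms (7-3) and (7-4) don't belong in the product.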