
Course: Statistics and probability > Unit 9

Lesson 9: Poisson distribution

Poisson process 2

More of the derivation of the Poisson Distribution. Created by Sal Khan.

• So I understand that if we know we have a binomial distribution, our expected value for any number of trials is np, but how do we know that some of these things even follow a binomial distribution? How do we know that the number of cars passing by in an hour is distributed binomially? Is it because, given any time interval, a car either does or does not pass, and there are no other options?

Edit: I think I understand it...if we separate the hour into an infinite number of intervals, even though the probability of a car passing in any single one of those intervals is infinitely low, over the course of an hour there will, on average, be a certain number of exact moments where a car DID pass by, AKA our expected value.
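The intuition in that edit can be checked numerically. Below is a rough simulation sketch (the values of lambda and the interval count are assumed for illustration, not taken from the video): chop one hour into many tiny intervals, give each a tiny probability p = lambda/n of "a car passes", and confirm the average count per hour comes out near n*p = lambda.

```python
import random

random.seed(0)

lam = 9        # assumed expected cars per hour
n = 10_000     # number of tiny sub-intervals in one hour
p = lam / n    # probability a car passes in any single tiny interval

trials = 200   # number of simulated hours
total = 0
for _ in range(trials):
    # count the tiny intervals in which a car passed during this hour
    total += sum(1 for _ in range(n) if random.random() < p)

avg = total / trials
print(avg)  # close to lambda = 9
```

Each simulated hour is a Binomial(n, p) draw built interval by interval, so the long-run average sits near n*p, matching the "expected value" reasoning above.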
• Sometimes you know that things cannot be binomial because you know how they work. For instance, if you are counting train carriages instead of cars, you couldn't use a binomial or Poisson distribution, because carriages don't come along at purely random intervals; they come in groups called trains.

The big picture here, though, is that in these early videos, Sal is presenting statistics that describe stuff. Armed with that knowledge, in the later videos he will create statistical tests for whether hypotheses are true. For example, you can test the hypothesis that the arrival of cars follows a binomial probability distribution. The test I know of is called a Goodness of Fit test, and Sal talks about it in a later video.
• Does this make sense for low n? I.e., if 9 cars pass per hour, then we use lambda = 9. We say this is equivalent to n*p, where n is the number of trials and p is the probability of a car passing in any one trial. So say we only do 1 trial; then p = 9, which would mean a 900% chance of a car passing in one hour. lambda = n*p seems to make a little more sense when n is large (resulting in p less than 1). So is lambda = np a big assumption we make?
• I might be a little late to answer here as well, but I believe that for a small number of trials, you would preferably use an RV with a binomial distribution instead of an RV with a Poisson distribution (since Poisson is the limit of an infinite number of trials, i.e., the interval of time between occurrences is negligible). Is this correct?
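That answer can be made concrete with a quick comparison (the values n = 5 and p = 0.4 here are assumed for illustration, not from the thread): for small n, the binomial pmf and the Poisson pmf with lambda = n*p differ noticeably, because Poisson is only the n → ∞ limit.

```python
from math import comb, exp, factorial

n, p = 5, 0.4      # assumed small-n example
lam = n * p        # lambda = 2.0

for k in range(4):
    # exact binomial probability of k successes in n trials
    binom = comb(n, k) * p**k * (1 - p) ** (n - k)
    # Poisson approximation with the same mean
    poiss = exp(-lam) * lam**k / factorial(k)
    print(k, round(binom, 4), round(poiss, 4))
```

At k = 0, for instance, the binomial gives 0.6^5 ≈ 0.0778 while Poisson gives e^-2 ≈ 0.1353, so the approximation is visibly off at this small n; rerunning with larger n (and p = lam/n) shrinks the gap.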
• Why do we need the Poisson process?? O-o
• Well, besides the traffic application Khan presented, we have several others. One of the most common is the telemarketing (call-center) model, which is basically a Poisson process. Another important example is radioactive decay: at every instant there is a certain probability that some number of atoms decay, which generates a Poisson process.
• I am having trouble identifying whether a problem follows a binomial, Poisson, or hypergeometric distribution. I read the question and have all the data, but I can't figure out which distribution to use. Are there any ways to identify them?
• You are able to build all the bridges from our ignorance to your knowledge, and step by step you allow our minds to cross. You are truly great! Thank you for doing what you do best and sharing it!
• Why couldn't we conclude that as n approaches infinity, (1 + a/n)^n is just 1 (rather than e^a)?
• I got tripped up on this, too. Try plugging in an increasing n:
n=1, (1+(1/1))^1 = 2
n=2, (1+(1/2))^2 = 2.25
n=3, (1+(1/3))^3 ≈ 2.37
You can see that as n increases, the value doesn't approach 1. That's because the n in the exponent outruns the 1/n in the base.
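The table above can be extended by machine to see where the values are heading (a quick numeric sketch, not from the thread):

```python
from math import e

# (1 + 1/n)^n approaches e, not 1, because the growing exponent n
# offsets the shrinking 1/n in the base.
for n in (1, 2, 3, 100, 10_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)

print(e)  # the limit: 2.718281828...
```

By n = 10^6 the value agrees with e to about five decimal places, which is a concrete way to see that the limit is e rather than 1.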
• Hi Sal... you simplified lim(n→∞) (1−λ/n)^n as e^−λ and lim(n→∞) (1−λ/n)^−k as 1. My question is: if you could simplify (1−λ/n) as (1−0) as n approaches ∞ in lim(n→∞) (1−λ/n)^−k, why didn't you simplify lim(n→∞) (1−λ/n)^n as (1−0)^n, which would also be 1 as n approaches ∞? That would have left us with the further simplified value λ^k/k!.
• This is a good question. lim(n→∞) (1+1/n)^n is the definition of the number e, so that limit is e and not 1. The key difference between the two expressions is that in (1−λ/n)^n the exponent grows along with n (a "1^∞" indeterminate form, where base and exponent fight each other), while in (1−λ/n)^−k the exponent k stays fixed, so the base simply tends to 1. As an aside, in 2007 e was calculated (by computers) to 10^11 decimal digits.
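A numeric sketch of why the two limits behave differently (the values of λ and k below are assumed for illustration): with a growing exponent the expression tends to e^−λ, while with a fixed exponent it tends to 1.

```python
from math import exp

lam, k = 3.0, 4  # assumed values for illustration

for n in (10, 1_000, 100_000):
    growing = (1 - lam / n) ** n       # exponent grows with n
    fixed = (1 - lam / n) ** (-k)      # exponent stays at -k
    print(n, growing, fixed)

print(exp(-lam))  # target of the first column: e^(-3) ~ 0.0498
```

The first column settles on e^−3 ≈ 0.0498 while the second column drifts toward 1, which is exactly the distinction the answer describes.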
• Right, so if n = successes per smaller interval,
and we want to work out what happens as n → infinity, then surely that means n happens an infinite number of times in the interval?

and also, how can x!/(x-k)! = x(x-1)(x-2)...[x-(k+1)]?

If we take x=7 and k=3, that means 7!/(7-3)! = 210, but 7(7-1)(7-2)(7-3)[7-(3+1)] = 7*6*5*4*3 = 2520. How can they be equal?
• I'm not sure why he is defining n as the number of successes in an interval. I thought n was the number of intervals in an hour.

As the number of intervals (n) goes to infinity, the probability of success (p) in any given interval goes to zero. This happens in such a way that n times p stays constant (i.e., equal to lambda).
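That limiting behavior can be checked numerically. A sketch under assumed values (λ = 9 and k = 7 are chosen for illustration): hold λ = n*p fixed, let n grow, and watch the binomial pmf at k approach the Poisson pmf λ^k e^−λ / k!. The last comment also addresses the factorial question above.

```python
from math import comb, exp, factorial

lam, k = 9.0, 7  # assumed values for illustration
poisson = lam**k * exp(-lam) / factorial(k)

for n in (20, 200, 20_000):
    p = lam / n  # p shrinks as n grows, keeping n*p = lambda
    binom = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(n, binom, poisson)

# On the factorial question above: x!/(x-k)! is the product of exactly
# k factors x*(x-1)*...*(x-k+1) -- the last factor is x-(k-1), not
# x-(k+1).  E.g. 7!/(7-3)! = 7*6*5 = 210.
```

By n = 20,000 the binomial and Poisson values agree to several decimal places, which is the "n → ∞ with np = λ constant" limit this answer describes.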