
# Algorithmic efficiency

How can we improve the speed of a (deterministic) primality test? Created by Brit Cruise.

## Want to join the conversation?

• Algorithm is just a fancy-schmancy word for the steps you need to follow to get your answer.

Like, how do I get a batter out? Step 1: determine if he (or she) hits a good fly ball or foul? Step 2: if foul, wait, conserve energy, if into outfield, determine where and how fast? Step 3: if into outfield, can I get there fastest or can someone else? If I can, Step 4: be there with mitt up. Make sure it connects with ball.

Each step must be solved in its correct order and will affect the next step you take.
• @ Brit introduced the concept that the root of the number in question is the worst case. Why can we assume that if n/sqrt(n) is not divisible then it is prime? •  Hey James, in the CS program I explain this in detail (Level 1). Realize that every number must have a largest and smallest factor. As you search for a factor starting from 2 eventually you will hit the smallest factor. Well how big can the smallest factor be?
The worse case occurs when the smallest factor EQUALS the largest factor (such as 49=7*7). So given any number n, the smallest a factor can be is sqrt(n) OR ELSE it is prime.

Does this make sense? It's important!
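The sqrt(n) bound described above can be sketched as trial division in Python (the function name `is_prime` is my own, not taken from the program in the video):

```python
import math

def is_prime(n):
    """Trial division: try divisors from 2 up to floor(sqrt(n))."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False  # found the smallest factor, which is <= sqrt(n)
    return True

print(is_prime(49))  # False: worst case, smallest factor 7 equals sqrt(49)
print(is_prime(53))  # True: no factor up to floor(sqrt(53)) = 7
```

If no divisor up to sqrt(n) is found, the loop can stop: any larger factor would have to pair with a smaller one we already checked.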
• Using known properties of prime numbers to limit the search space is one way to improve algorithmic efficiency; wouldn't changing the primary instruction set of the processor be another? • What if you could calculate all of the prime values up to a certain limit and cache them in an array, and when you want to check whether a number is prime, run a for loop to check if the test number is equal to any of the prime numbers in the array? If it is, then it's prime; otherwise it is composite. I think this is impractical and slow, but I can't point out why. • There are a couple of problems with this approach:
-It becomes a chicken-and-egg problem: how did you calculate the list of prime numbers in your array in the first place? You would want the most efficient algorithm you could find to generate that array.
-For large primes it can take a long time to determine whether they are prime. So it would take far too much time (many, many years) to test every number from 2 up to a huge number and then put the primes in an array.

Making a list beyond the first trillion primes would become impractical, due to the time it takes to build the list and the space it would require. However, we want to deal with much bigger primes than that, so a list of all the really big primes we want to use is out of reach.

As a side note, instead of keeping the primes in an array and searching with a for loop, if we organized them in a special structure called a tree, we could check whether a number is in our list very quickly. Too bad making the list is not practical.
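For small limits the cached-list idea does work; here is a minimal sketch using a sieve to build the list and a Python set for fast membership checks (similar in spirit to the tree mentioned above; all names here are my own):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: precompute every prime up to limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            # mark every multiple of i as composite
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

prime_set = set(primes_up_to(1000))  # set membership is O(1) on average
print(997 in prime_set)   # True
print(1000 in prime_set)  # False
```

The catch, as the answer above explains, is that both the time to build the sieve and the memory to store it grow with the limit, so this only helps when the numbers you test stay below a modest bound.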
• What does the computer actually do to divide numbers? Does it do long division like a person does?

What I'm wondering is whether using tricks to check divisibility would be faster than dividing and looking at the remainder. Like, if the number ends in 2, 4, 6, 8, or 0, would checking that be faster than checking if it divides by two? (It could be something like "if last_digit in [2, 4, 6, 8, 0]") • Computers divide in binary (ones and zeros), so the last digit in the numbers they see is either 1 or 0. They don't take remainders into account unless they are told to (you would have to store the quotient in a variable that can hold fractions).

However, checking whether a binary number divides by two is very simple: if the last (rightmost) digit is 0, it divides by 2; if it's 1, it doesn't. This property sometimes comes in handy for certain computations.
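In code, that rightmost-bit check is a single bitwise AND (a rough sketch; `n & 1` isolates the last binary digit):

```python
for n in [42, 43, 100]:
    # n & 1 keeps only the rightmost bit: 0 means even, 1 means odd
    parity = "even" if n & 1 == 0 else "odd"
    print(n, bin(n), parity)
# 42 0b101010 even
# 43 0b101011 odd
# 100 0b1100100 even
```

This avoids a division entirely, which is exactly the kind of shortcut the question is asking about.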
• Did anyone else notice just how often the beginning of the number of steps approximated the digits of pi? Seeing 314..., 3142..., etc. several times surprised me. Is that related to how your number was constantly very close to the digits of the square of pi? (i.e. it was in the 987... range for most if not all of the time, and the square of pi is 9.8696...). I'm just guessing here, but if there's a connection there, that's pretty cool. Great videos! They really make me think. • The number of cycles for algorithm A is always equal to the square root of the number being tested, rounded down, plus one. This happens because the variable representing the upper bound of numbers to try, called "upperBound," is set to floor(sqrt(inputNum)).

For example, to find the number of cycles for factoring 1000 using algorithm A, you would find the square root of 1000, which is about 31.62, then round it down to 31, and then add one to get 32.
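That count can be sketched in Python (the function name is my own; `upperBound` is the program variable described above, and the 9870000 input is just an illustrative number in the 987... range the question mentions):

```python
import math

def cycles_for_algorithm_a(n):
    # The program sets upperBound = floor(sqrt(inputNum)), and the
    # total cycle count works out to floor(sqrt(n)) + 1.
    return math.isqrt(n) + 1

print(cycles_for_algorithm_a(1000))     # floor(31.62...) + 1 = 32
print(cycles_for_algorithm_a(9870000))  # 3142 -- leading digits of pi
```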

So I guess it makes sense that if the number being tested is close to pi squared (times a power of 100), the number of cycles would be close to pi (times a power of 10). That was a really good observation though, and it took me a bit to figure it out.