## Computer science

What is our memory limit? How can we save time at the expense of space? Created by Brit Cruise.

## Want to join the conversation?

• In this video, didn't you just assume that all of the prime numbers less than the square root of the max number used 24 bits of storage, even though that was just for the largest prime? Wouldn't the amount of storage actually be less than 14.7 megabytes, because the smaller primes don't take up nearly as much space? •  Isaac,

That's true, but that would create all kinds of problems of its own. If you just assume that each number takes the same amount of space, the computer just has to go to a particular spot in memory and pull out the next (in this case) 24 1's and 0's. If you want to go to the 700th number in the array, and you know every number takes 24 bits, you go to the 700 * 24 = 16,800th bit and pull out that bit plus the next 23. Pretty easy.
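The arithmetic described above can be sketched in a few lines of Python. The 3-byte big-endian layout is an assumption for illustration, not anything specified in the video:

```python
# Sketch: fixed-width storage makes lookup pure arithmetic.
# Assumes every prime is stored in exactly 24 bits (3 bytes), big-endian.

BITS_PER_ENTRY = 24
BYTES_PER_ENTRY = BITS_PER_ENTRY // 8

def pack(primes):
    """Store each number in exactly 3 bytes."""
    return b"".join(p.to_bytes(BYTES_PER_ENTRY, "big") for p in primes)

def lookup(memory, index):
    """Jump straight to index * 3 bytes -- no scanning required."""
    start = index * BYTES_PER_ENTRY
    return int.from_bytes(memory[start:start + BYTES_PER_ENTRY], "big")

mem = pack([2, 3, 5, 7, 11, 13])
print(lookup(mem, 4))  # -> 11
```

The whole point is in `lookup`: finding the 700th entry costs one multiplication, no matter how many entries precede it.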

But let's say that you don't do that. Instead we decide to store each number in the fewest bits required to WRITE that number. So, 0 and 1 each take 1 bit, 2 and 3 take 2 bits, 4, 5, 6, and 7 take 3 bits… et cetera.

So, now we have our list of primes stored in memory, and each number takes only as much space as required. Let's say we know that our memory starts at address 0 and contains the number 2, which is the first prime. So to get that number out, we would have to take out 2 bits, because the number 2 takes 2 bits to encode.

How do we know where, say, the 700th number starts? In order to even compute where that number is in memory, we would have to compute all of the first 699 prime numbers, and THEN determine how many bits each one requires, and THEN add that all up, and THEN we would know where the 700th prime number lives in memory.

Of course, that's a problem, because the whole point of storing all the primes in memory was so that you wouldn't have to compute them on the fly -- but if each one takes a different number of bits, then you have to compute them on the fly anyway just to figure out where in memory the next one lives.

And there is another problem. Let's say we could magically know where in memory each prime number starts. So, we knew for example that the 700th prime number started at memory address 5,302,789. We still wouldn't know how many bits to remove to get the 700th prime number. If we remove too few, the string of 0's and 1's would be a different number that would be much less than the actual 700th prime number (1/2 as large if we dropped off only 1 bit, and 1/2 as large AGAIN for every further bit we forgot). If we pulled out too many digits, we would get part of the NEXT prime number. The only way to pull out the correct number of bits for the 700th prime number would be to know beforehand what the 700th prime number WAS.

And of course, if we already know what the number is… why would we need to look it up in an array in the first place?

So using the same number of bits for each number is really the only way to make this work practically. Certainly you CAN store the primes in less space… but it isn't useful, because it would make it almost impossible to get that information back OUT again.
• Consider the following scenario:
1 - Since the pre-calculated array of primes is immutable, the 14.7 MB could be stored on the flash drive (which has 2 GB of available space; even if we only have a 1% quota for that drive, it would give us around 20.48 MB, more than enough space).
2 - Since we walk the array in only one direction, from beginning to end, one index at a time, instead of loading the whole array into RAM we could break the array into chunks and load each chunk only when we need it (this is called lazy loading).
The code to lazy load a chunk of the array is relatively simple and could be part of the operating system (stored in the ROM).
So... we will need a few additional steps (say 2 to 3) to load the next chunk of the array every N steps, where N is the size of the chunk. Every time we reach the end of a chunk, we load the next one.
And (to be fair) we will also use 1 additional byte to store the number of the chunk we are currently working on, so we know which one to fetch next.
So the amount of RAM required would actually be the size of the chunk times 24 bits.
Say we decide N = 1,048,576 (1024^2). In that case the chunk would occupy exactly 3 MB of RAM, an amount within our limit.
Our maximum index is 5,166,823, so we would need to load a chunk 5 times (5,166,823 / 1,048,576 = 4.92... => 5, rounded up).
That would require 10 to 15 extra steps in total, which, in my opinion, is acceptable in terms of performance and will probably not affect the computation time much.
The size N determines how many times a new chunk needs to be loaded, and thus the number of additional steps.
In our case the optimal size would be N = 3,145,728 (3 * 1024^2), which would occupy 9 MB of RAM (still within our limit) and require only one reload operation.
What do you think about that? • I applaud your technical prowess! Bravo! However, you are assuming that we can do a lazy load, which really just moves memory around. If we suddenly find out that we have to deal with primes up to 100 digits, this method will fail, along with any method based on trial division.

Solving this problem will require some new mathematics. ;-)
• I've been reading a lot recently about Deep Blue's historic win over former world champion Garry Kasparov. Would the same principles of efficiency, time-space tradeoff, and binary memory apply to another extremely difficult calculation with billions of possibilities, like finding a favorable chess position? Also, IBM has said that Deep Blue mainly used brute force to calculate every position possible in, say, 10 moves. Is this even possible? Also, Deep Blue used over 20 chips specially designed to play chess to beat Garry Kasparov in 1997. However, in 2009, a chess engine called Hiarcs 13 had a FIDE-equivalent rating of around 2500, only slightly less than Deep Blue's (still enough to crush you or me mercilessly), but Hiarcs 13 runs on an unmodified HTC smartphone. Is this because of the increased efficiency of the program, or is it because of Moore's Law? • Chess programs look at future moves and evaluate how good each position is for each player. The evaluation of a position involves searching several moves into the future to come up with a score for the position. It is possible, and often likely, that during a game a position will need to be evaluated more than once.

This offers a classic time space trade off. After we evaluate a position, we can store the result, and save the time required to calculate it in the future. This improvement in speed comes at the price of space.
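The trade-off described above can be sketched as a simple cache (what chess engines call a transposition table). The position encoding and the scoring function below are placeholders of my own, not a real engine:

```python
# Sketch of the time-space trade-off: cache each evaluated position
# so the expensive evaluation runs at most once per position.

evaluated = {}  # position -> score: the "space" we spend

def evaluate(position):
    """Stand-in scorer; in a real engine this is a costly tree search."""
    return sum(ord(c) for c in position)  # placeholder cost model

def cached_evaluate(position):
    if position not in evaluated:       # first visit: pay the time
        evaluated[position] = evaluate(position)
    return evaluated[position]          # later visits: just a lookup

score = cached_evaluate("rnbqkbnr")  # repeat calls hit the cache
```

The dictionary grows with every distinct position seen, which is exactly the space being traded for the time saved on repeat evaluations.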

Notably in chess, the number of possibilities at the front and end of a game are relatively limited, in comparison to the explosion of possibilities in the middle part of a game. Thus, 10 moves at the start of a game is not comparable to 10 moves in the middle of a game.

How newer chess programs perform so well on regular computing devices:
-Pruning: eliminate choices that don't look promising from searches early on to save time. (Deep Blue avoided doing a lot of pruning, because pruning can accidentally eliminate good positions too)
-Opening move tables: tables which contain optimal opening move sequences
-End game tables: tables which find positions where a player can force a mate
-Refutation tables: tables which identify promising-looking moves that will, in fact, turn out to be bad moves later on
-Time limits: Deep Blue had to work under tight time constraints, which we typically don't force our programs to work under
-Moore's Law

Hope this makes sense.
• http://en.wikipedia.org/wiki/Consistent_Overhead_Byte_Stuffing We can use this to reduce the space needed to store the data in memory to 26 bits for the largest number and 3 bits for the smallest. • This question is mostly about technology. I just thought of this a while ago. Say you have a time machine that can go only into the future. So you use your time machine to go 100 years into the future. Once there you see floating cars, and you see that you are on a floating city in the clouds. But has the world actually changed that much? Let me explain. As new problems arise, we make something to fix them, right? Yes, I think that's true most of the time. As we go along we keep fixing whatever problems arise. Soon we arrive at floating cars and cities in the clouds. Then we keep fixing more problems. Let me explain again (this is really hard to explain) in a better way. Say you have a computer hard drive. When it gets old and new ones come out, you buy one. It does the same function, right? Yes, yes it does. But soon this drive becomes the new normal. This goes on for thousands of years, until traveling to the edges of the universe becomes the new normal. Yeah, sure, the world has changed, but everything's still normal. So has it really changed all that much? This was very hard for me to put into words. Can someone please answer it? I didn't know exactly where to put this question. • It hasn't changed at all, in terms of normalcy. You are exactly right: as humanity progresses, it redefines 'normal' along the way. Just 50 years ago, someone might have seen a CD player and been astounded at the height of future technology. Now we think that CDs are old-fashioned and use MP3s instead. An iPhone 1 might have been seen as magic one hundred years ago, but now it is just an outdated model.
• Why can't we use bits of different shapes or compositions, so that we are able to code numbers in base 3 or higher? Surely it would be more efficient? • I would argue that the fundamental reasons we use binary, as opposed to a different base, are:
- all the operations performed by our modern computers (by their arithmetic logic units (ALUs)) are made up of logic gates (which are transistors) which perform, AND, OR and NOT operations. These operations take true and false as inputs and return true or false as an output. True and false easily map onto 0 and 1 used in binary, but would not map well onto base 3 or higher. We have been able to efficiently make these logic gates faster and faster and smaller and smaller throughout the years.
-Computers are multipurpose, so in order to perform tasks on a computer beyond just calculations (add, subtract, multiply, divide) we want to use logic gates, because they are able to take any possible set of inputs and produce any possible set of outputs (they are "functionally complete").
-A competing technology (super small and super fast) that takes 2 inputs with more than 2 states and produces an output with more than 2 states has not been developed.
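The "functionally complete" point above can be illustrated with a short sketch of my own: using nothing but AND, OR, and NOT on 0/1 values, any boolean function can be built, for example a half adder that adds two bits:

```python
# Sketch: AND, OR and NOT over 0/1 are enough to build any boolean
# function -- here, a half adder (one column of binary addition).

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):  # built purely from AND/OR/NOT
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two bits: returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # -> (0, 1): 1 + 1 = binary 10
```

Chaining half adders (plus carry logic) gives full multi-bit addition, which is essentially what an ALU does in hardware.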
• I know about RAM (Random Access Memory) and ROM (Read-Only Memory, for system data), but not flash memory. Can someone explain to me what it is? • To the problem with bits... Theoretically, if you could manipulate every atom in the observable universe as a bit, you could store a number up to 2 to the 10-to-the-80th power, which would possibly store all of these primes, I think... Am I wrong? • Quantum "computing" systems may bypass encrypted information as well as eliminating the point-A-to-point-B data pathways. It looks like communications secrecy may become almost impossible as quantum systems are introduced. Entanglement is only the very tip of the iceberg. Something closely related to telepathic communication is where this is headed. Maybe hard to believe, but so was the idea of television 200 years ago. • Quantum computers do not make communications secrecy almost impossible.

For symmetric key ciphers:
-One time pad will be unaffected.
-Symmetric key ciphers only have their effective key size reduced by half (solution: double the key size)

For public key ciphers:
-RSA and Diffie-Hellman will be broken
-Any cipher that relies on prime factoring or discrete logs being hard will be broken
-Public key ciphers that won't be broken are those based on:
-lattice problems (NTRU relies on these)
-error correcting codes
-multivariate polynomials over finite fields
-other hard problems