AP®︎/College Computer Science Principles
How do computers represent data?
When we look at a computer, we see text and images and shapes.
To a computer, all of that is just binary data, 1s and 0s.
The following 1s and 0s represent a tiny GIF:
(An image showing a string of 1s and 0s, 336 digits long.)
This next string of 1s and 0s represents a command to add a number:
(An image showing a 16-digit string of 1s and 0s.)
You might be scratching your head at this point. Why do computers represent information in such a hard-to-read way? And how can 1s and 0s represent so many different things? That's what we'll explore in this lesson.
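One way to see how the same 1s and 0s can represent different things is to interpret a bit string two ways: as numbers, and as text. This is a sketch for illustration only (the bits below are my own example, not the GIF or the add command from above):

```python
# One bit string, two interpretations.
bits = "01001000 01101001"  # two 8-bit groups (example data, not from the article)

# Interpretation 1: each group is a base-2 number.
numbers = [int(group, 2) for group in bits.split()]
print(numbers)  # [72, 105]

# Interpretation 2: each number is an ASCII character code.
text = "".join(chr(n) for n in numbers)
print(text)  # Hi
```

The bits themselves never change; only the rules the computer applies to them decide whether they mean an image, a number, a command, or text.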
To start off, check out the next video from Code.org where engineers from Microsoft and Adafruit introduce the basics of bits and binary data.
Want to join the conversation?
- Why do computers use 1s and 0s specifically? Wouldn't it become very challenging for them to process trillions of 1s and 0s? Using 2s, 3s, 4s, etc. would shorten the input they need to process.(28 votes)
- It really has more to do with the way computers work. At the fundamental level, the transistor is how the computer represents anything (this is where you find binary). A wire either carries an electrical signal or it doesn't (there is no in-between for on and off, after all). This means the representation of a wire's state has only 2 possible values. As such, binary is used.(78 votes)
- 1) How much jargon does one need to know before beginning the course?
2) Is there a jargon dictionary?
3) What is a GIF?(12 votes)
- Good question! There's a lot of jargon in the world of computers, so it's possible that I use jargon that some folks aren't familiar with.
A GIF is a type of image file that's popular on the internet these days, but you're right, "GIF" is jargon. I'd encourage learners to search the internet for jargon that is unfamiliar or ask a question as you've done here. I can then decide whether to reword something to avoid the jargon.
There is a vocabulary review here:
That only goes over the high-level vocabulary covered by the exam; it does not include all the jargon used in the articles and exercises.(17 votes)
- Why do we have to use 1s and 0s? Why not different numbers?(2 votes)
- We use base 10 because we have 10 fingers. Computers at the lowest level only understand On/Off. On is represented by a 1 and off is represented by a 0. So we talk to computers using a series of ons and offs (1s and 0s).(9 votes)
- How do software engineers simplify this coding process? Or do they end up having to enter all of those ones and zeros?(3 votes)
- Why are only 1s and 0s used for binary data?(4 votes)
- Binary numbers (and binary data) are simply numbers represented in base-2 rather than base-10. (Base-10 is what we normally use to do math.) For a base-n, we can only have digits in the range [0, n-1]. So, base-2 numbers can only have digits between 0 and 1.(4 votes)
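The base-2/base-10 relationship described in the answer above can be sketched in Python (an editorial illustration, not part of the original answer):

```python
# Converting between base-10 and base-2.
n = 13
print(bin(n))          # '0b1101' -- 13 written in base-2
print(int("1101", 2))  # 13      -- the base-2 digits read back as base-10

# Each binary digit must be in the range [0, 1], just as each
# decimal digit must be in [0, 9]: 1101 = 1*8 + 1*4 + 0*2 + 1*1 = 13.
```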
- What is Adafruit Industries?(3 votes)
- Adafruit Industries is a company that produces electronic hardware for people to use at home.(4 votes)
- Why is this even an AP class in 2019?(0 votes)
- Why wouldn't it be a class? Computers are an increasingly integral part of our lives.(32 votes)
- How do I write my name in binary code?(3 votes)
- You would need to look up the ASCII binary codes for the letters in your name. Once you know the codes, all you have to do is write each ASCII code in place of the letter it represents.(3 votes)
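The lookup-and-write procedure from the answer above can be sketched in Python. The name "Ada" is just an example, and the 8-bit formatting is a common convention, not something the original answer specifies:

```python
# Write a name in binary by looking up each letter's ASCII code.
name = "Ada"
codes = [ord(c) for c in name]              # ASCII codes: [65, 100, 97]
binary = [format(c, "08b") for c in codes]  # each code as an 8-bit binary string
print(" ".join(binary))  # 01000001 01100100 01100001
```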
- What does "process" mean, precisely?(4 votes)
- Why do computers use 1s and 0s specifically? Wouldn't it become very challenging for them to convert trillions of 1s and 0s so using 2s, 3s, 4s, etc would shorten the input they need to process?(1 vote)
- Computers use 1s and 0s because data is stored as binary numbers. Using a larger number base would allow computers to shorten data representations, but binary data is very fast and easy for the computer to work with.(6 votes)