Computer science theory
- Symbol rate
- Introduction to channel capacity
- Message space exploration
- Measuring information
- Origin of Markov chains
- Markov chain exploration
- A mathematical theory of communication
- Markov text exploration
- Information entropy
- Compression codes
- Error correction
- The search for extraterrestrial intelligence
Introduction to Channel Capacity & Message Space. Created by Brit Cruise.
Want to join the conversation?
- Why didn't they use multiple strings?(50 votes)
- Multiple wires/strings would increase the cost. The cost doesn't increase all that much when the distance is short, say one room to another. But when the distance is longer, say from London to New York, the cost to build and maintain the wire increases significantly. So it makes more sense, economically, to find more efficient ways to pass a message along just one wire than over multiple.(72 votes)
- At 1:32, even without electrical noise, wouldn't a message with over a million possible signals be very difficult to translate into plain English? Because then the receiver would need a list with over 1,000,000 possible signals.(16 votes)
- No, this is an arbitrarily easy task for a computer to accomplish. What you would do is put logic gates on the end of the cable and do automatic calculations to build a truth table and translate it into binary.(27 votes)
- At 1:07, when he was talking about Edison's system, wouldn't it not work very well, because he was using a switch system? I'll use a light switch as my example. If the switch is on, it is impossible to turn it on again without turning it off first, so wouldn't the same thing apply to Edison's system?(15 votes)
- Excellent question. I should include this in the upcoming challenge. One way to think about these types of problems (sending multiple 1's) is to introduce a time division. For example, every second we could measure the state to see if it is on or off. If we leave our light on continuously for 3 seconds, it would represent 1 1 1. Does this help?(29 votes)
- At 2:14, what are the "late effects of the Big Bang" (if I have written the phrase down correctly), and how do they distort the message received?(15 votes)
- Microwave background radiation. It is a source of radio and electrical interference that is coming at us from almost everywhere else in space; everywhere people have listened, using suitable equipment such as a directional radio antenna like a dish. The theory is that it came from a time close to the start of the universe. The theory is supported by several other observations. Radio noise also comes from nuclear fusion in our Sun, but the rest of the universe is much bigger than our Sun. Hope that helps!(18 votes)
- What is considered an acceptable amount of background noise for a commonly used cable? Does shielding on the cables allow us to squeeze more channels onto the cable by lowering the range of "no man's land" we need to put between varying channels?(10 votes)
- I'm pretty sure the acceptable amount of background noise depends on the application the cable is being used for: sending analog phone audio over a telephone wire is different than sending digital computer data over an Ethernet cable. There are also other things to factor in, such as the distance the signal needs to travel. EM background noise is not a concern for transmitting a signal from a modem to a computer a few feet away, but it is a concern for sending a signal long distances, say from Europe to Asia, where you have to consider the signal moving over various sizes and types of cable, amplification of the signal voltage, and many other things that all affect the background noise and the clarity of the signal.
The shielding on wire cables reduces EM noise caused by interference from outside the cable, from things like electric or magnetic fields, and so allows for a clearer signal. However, shielding only protects against interference from outside the wire; there is still EM noise that is unavoidable within any electrical system.
The goal is for a clear signal to travel from point A to point B along a channel (the wire). The clearer the signal is, the more information can travel through the channel at a time.(11 votes)
- I am working my way through "A Mathematical Theory of Communication", C.E. Shannon's 1948 paper, and have hit a snag that I am hoping someone can help me out with. I am usually pretty good at picturing this kind of thing, but in the case below I must be missing something.
In Part I Shannon says that N(t) represents the number of sequences of duration t (sequences in this case being symbols of a defined alphabet.) He goes on to say that N(t) = N(t-t1) + N(t-t2) + ... + N(t-tn) where t1, …, tn represent the transmit time of the symbols S1, …, Sn of the given alphabet. He further states that the total number [N(t)] is equal to the sum of the number of sequences ending in S1, S2, ..., Sn and that these sequences are the N(t-ti) terms given above.
My problem is that I do not understand how N(t) for some arbitrary t could equal the series given above. It seems to me that N(t) would equal a*N(t1) + b*N(t2) + ... + n*N(tn) where the values a, …, n are the (not necessarily unique) constants needed to reach N(t). Furthermore I am confused as to how N(t-ti) represents a sequence ending in Si for some symbol i.
Please understand that I am not criticizing or challenging Shannon, I am trying to understand how the math fits together and feel like I am missing some obvious point.
Any help would be greatly appreciated.
- N(t) is the number of allowed signals of duration t.
Suppose we have only two signals S1 and S2.
S1 requires t1 seconds to transmit and S2 takes t2 seconds to transmit.
We could find N(t) by adding:
- the number of allowed signals of duration t that end with S1
- the number of allowed signals of duration t that end with S2
We could generate all the allowed signals of duration t that end with S1 by:
- finding ALL the signals that take (t-t1) seconds to transmit
- adding our S1 signal (which takes t1 seconds) onto the end of each of those signals
- (t-t1)+t1 = t , so each of these new signals would be the right length
The number of signals that take (t-t1) seconds to transmit is N(t-t1). This is also the number of new signals, ending in S1, that we just generated.
Similarly, we could generate all the allowed signals of duration t that end with S2 by:
- finding ALL the signals that take (t-t2) seconds to transmit
- adding our S2 signal (which takes t2 seconds) onto the end of each of those signals
- (t-t2)+t2 = t , so each of these new signals would be the right length
The number of signals that take (t-t2) seconds to transmit is N(t-t2). This is also the number of new signals, ending in S2, that we just generated.
So:
N(t) = (the number of allowed signals of duration t that end with S1)
+ (the number of allowed signals of duration t that end with S2)
N(t) = N(t-t1) + N(t-t2)
The same logic applies for any number of symbols: N(t) = N(t-t1) + N(t-t2) + ... + N(t-tn)
Hope this makes sense(7 votes)
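The two-symbol recurrence above is easy to check numerically. Here is a minimal Python sketch; the durations t1 = 1 and t2 = 2 are assumed values for illustration (with those durations the counts happen to follow a Fibonacci-like pattern):

```python
from functools import lru_cache

# Durations of the two symbols S1 and S2, in arbitrary time units.
# These particular values are assumed for illustration only.
T1, T2 = 1, 2

@lru_cache(maxsize=None)
def num_signals(t):
    """N(t): the number of allowed signals of total duration exactly t."""
    if t < 0:
        return 0   # no signal can have negative duration
    if t == 0:
        return 1   # the empty signal
    # Every signal of duration t ends in S1 or S2, so:
    # N(t) = N(t - t1) + N(t - t2)
    return num_signals(t - T1) + num_signals(t - T2)

print([num_signals(t) for t in range(1, 8)])  # -> [1, 2, 3, 5, 8, 13, 21]
```

Each count is built exactly as the answer describes: take every signal of duration t-t1 and append S1, take every signal of duration t-t2 and append S2, and add the two tallies.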
- Would it be possible to calculate the channel capacity of spoken words? The transmitting end (the speaker) and the receiving end (the listener) are both limited in message difference and symbol rate, so I'm wondering if it could be calculated.(4 votes)
- What about the message(s)? How would you include that in your calculations?
My first thought is that you would have to use some sort of average...
Really, it depends what the speaker says to the listener!(3 votes)
- Wow, so what does this have to do with Morse code?(3 votes)
- Morse code requires very low channel capacity because it is only sending 1 bit (the lowest unit of information) at a time.(4 votes)
- Why didn't they just make 26 voltages?(2 votes)
- Because it's difficult to read a signal with 26 different voltage levels in a short time: noise would be a major issue, and cable characteristics won't allow you to effectively send 26 distinct voltages over a long distance.(3 votes)
- So, for example, the Baudot telegraph machine.
Its 's' is 2 (pulse and flat), and its 'n' is 5 (5 key possibilities for each character)? Is that correct?
Then what is the difference between capacity and message space?(3 votes)
Voiceover: It also became clear that there was one other way to increase the capacity of a communication system: we can increase the number of different signaling events. For example, with Alice and Bob's string communication system, they soon found that varying the type of plucks allowed them to send their messages faster, for example, hard versus medium versus soft plucks, or high-pitch versus low-pitch plucks made by tightening the cable different amounts. This was an idea implemented by Thomas Edison, which he applied to the Morse code system, and it was based on the idea that you could use weak and strong batteries to produce signals of different strengths. He also used two directions, as Gauss and Weber did, forward versus reverse current, and two intensities. So he had plus three volts, plus one volt, minus one volt, and minus three volts: four different current values which could be exchanged. It enabled Western Union to save money by greatly increasing the number of messages the company could send without building new lines. This is known as the Quadruplex telegraph, and it continued to be used into the 20th century. But again, as we expanded the number of different signaling events, we ran into another problem: why not send a thousand or a million different voltage levels per pulse? Well, as you may expect, fine-grained differences lead to difficulties on the receiving end, and with electrical systems the resolution of these differences is always limited by electrical noise. If we attach a probe to any electrical line and zoom in closely enough, we will always find minute undesired currents. This is an unavoidable result of natural processes such as heat, geomagnetic storms, and even latent effects of the Big Bang. So the differences between signaling events must be great enough that noise cannot randomly bump a signaling event from one type to another.
Clearly now we can step back and begin to define the capacity of a communication system using these two very simple ideas. First, how many symbol transfers per second? This is what we called symbol rate, and today it's known simply as baud, for Émile Baudot. We can call this n, for n symbol transfers per second. And number two, how many differences per symbol? We can think of this as the symbol space: how many symbols can we select from at each point? We can call this s. As we've seen before, these parameters can be thought of as a decision tree of possibilities, because each symbol can be thought of as a decision where the number of branches depends on the number of differences. After n symbols, we have a tree with s to the power of n leaves. And since each path through this tree can represent a message, we can think of the number of leaves as the size of the message space. This is easy to visualize: the message space is simply the width of the base of one of these trees, and it defines the total number of possible messages one could send given a sequence of n symbols. For example, let's say Alice sends Bob a message which consists of two plucks, and they are using hard versus soft plucks as their communication system. This means she has the ability to define one of four possible messages to Bob. If instead they were using a system of hard versus medium versus soft plucks, then with two plucks she has the ability to define one of three to the power of two, equals nine, messages. And with three plucks, this jumps to one of 27 messages. Now if, instead, Alice and Bob were exchanging written notes in class which contain only two letters on a piece of paper, then a single note would contain one of 26 to the power of two, or 676, possible messages. It's important to realize now that we no longer care about the meaning applied to these chains of differences, merely how many different messages are possible.
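The arithmetic in this passage (2 plucks with 2 types gives 4 messages, 3 types gives 9, three plucks gives 27, two letters gives 676) is just the leaf count of the decision tree. A minimal sketch:

```python
def message_space(s, n):
    """Size of the message space: a decision tree with s branches
    per symbol, n symbols deep, has s**n leaves."""
    return s ** n

# Alice and Bob's pluck systems, then two-letter written notes:
print(message_space(2, 2))   # hard vs. soft, two plucks      -> 4
print(message_space(3, 2))   # hard/medium/soft, two plucks   -> 9
print(message_space(3, 3))   # three plucks                   -> 27
print(message_space(26, 2))  # 26 letters, two per note       -> 676
```

Each call simply raises the symbol space s to the number of symbols n, matching the examples in the transcript.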
The resulting sequences could represent numbers, names, feelings, music, or perhaps even some alien alphabet we could never understand. When we look at a communication system now, we can begin to think about its capacity as how many different things you could say, and we can then use the message space to define exactly how many differences are possible in any situation. This simple yet elegant idea forms the basis for how information will later be defined, and it is the final step that brings us to modern information theory, which emerges in the early 20th century.