Source Coding Theorem
The Source Coding Theorem states that the entropy of an alphabet of symbols specifies to within one bit how many bits on the average need to be used to send the alphabet.
The significance of an alphabet's entropy rests in how we can
represent it with a sequence of bits. Bit
sequences form the "coin of the realm" in digital
communications: they are the universal way of representing
symbolic-valued signals. We convert back and forth between
symbols and bit sequences with what is known as a
codebook: a table that associates each symbol with a bit
sequence. In creating this table, we must be able to assign a
unique bit sequence to each symbol so that
we can go between symbols and bit sequences without error.
You may be conjuring the notion of hiding information from
others when we use the name codebook for the
symbol-to-bit-sequence table. There is no relation to
cryptology, which comprises mathematically provable methods of
securing information. The codebook terminology was developed
during the beginnings of information theory just after World
War II.
As we shall explore in some detail elsewhere, digital communication is
the transmission of symbolic-valued signals from one place to
another. When faced with the problem, for example, of sending
a file across the Internet, we must first represent each
character by a bit sequence. Because we want to send the file
quickly, we want to use as few bits as possible. However, we
don't want to use so few bits that the receiver cannot
determine what each character was from the bit sequence. For
example, we could use one bit for every character: file
transmission would be fast but useless because such a codebook
creates errors, since many characters would have to share the same bit sequence. Shannon
proved in his monumental work what we call today the
Source Coding Theorem. Let $B(a_k)$
denote the number of bits used to represent the symbol
$a_k$. The average number of bits
$\bar{B}(A)$ required to represent the entire alphabet equals
$\sum_{k} B(a_k)\Pr[a_k]$. The Source Coding Theorem states that the
average number of bits needed to accurately
represent the alphabet need only to satisfy
$$H(A) \leq \bar{B}(A) < H(A) + 1$$
Thus, the alphabet's entropy specifies to within one bit how
many bits on the average need to be used to send the alphabet.
The smaller an alphabet's entropy, the fewer bits required for
digital transmission of files expressed in that alphabet.
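For readers who want to check the bound numerically, here is a minimal Python sketch; the helper names (entropy, average_bits) and the example probabilities and codeword lengths are illustrative assumptions, not values from the text.

```python
import math

def entropy(probs):
    """H(A) = -sum Pr[a_k] * log2 Pr[a_k], in bits (zero-probability symbols are skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_bits(lengths, probs):
    """Average codeword length B(A) = sum B(a_k) * Pr[a_k]."""
    return sum(b * p for b, p in zip(lengths, probs))

# Illustrative alphabet: probabilities and the codeword lengths of one possible codebook.
probs = [0.4, 0.3, 0.2, 0.1]
lengths = [1, 2, 3, 3]          # lengths of the prefix code 0, 10, 110, 111

H = entropy(probs)
B = average_bits(lengths, probs)
print(f"H(A) = {H:.3f} bits, B(A) = {B:.3f} bits")
# Does this codebook's average length fall within the theorem's one-bit window?
print("H(A) <= B(A) < H(A) + 1:", H <= B < H + 1)
```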
A four-symbol alphabet has the following probabilities
$$\Pr[a_0] = \frac{1}{2},\quad \Pr[a_1] = \frac{1}{4},\quad \Pr[a_2] = \frac{1}{8},\quad \Pr[a_3] = \frac{1}{8}$$
and an entropy
of 1.75 bits. Let's see if we can find a codebook for
this four-letter alphabet that satisfies the Source Coding
Theorem. The simplest code to try is known as the simple
binary code: convert the symbol's index into a binary
number and use the same number of bits for each symbol by
including leading zeros where necessary.
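As a quick illustration, the following sketch builds such a fixed-length code for an alphabet of a given size; the function name simple_binary_code is a hypothetical helper, not something defined in the text.

```python
import math

def simple_binary_code(num_symbols):
    """Assign each symbol index a fixed-width binary codeword, padding with leading zeros."""
    width = math.ceil(math.log2(num_symbols))  # bits needed to distinguish all symbols
    return {k: format(k, f"0{width}b") for k in range(num_symbols)}

# Four-symbol alphabet: indices 0..3 map to 2-bit codewords.
print(simple_binary_code(4))   # {0: '00', 1: '01', 2: '10', 3: '11'}
```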
Whenever the number of symbols in the alphabet is a power of
two (as in this case), the average number of bits
$\bar{B}(A)$ equals $\log_2 K$, where $K$ is the number of symbols,
which equals 2 in this case. Because the entropy equals 1.75 bits, the simple
binary code indeed satisfies the Source Coding Theorem (we are
within one bit of the entropy limit), but you might wonder if
you can do better. If we choose a codebook with differing
number of bits for the symbols, a smaller average number of
bits can indeed be obtained. The idea is to use shorter bit
sequences for the symbols that occur more often. One codebook
like this is
$$a_0 \leftrightarrow 0,\quad a_1 \leftrightarrow 10,\quad a_2 \leftrightarrow 110,\quad a_3 \leftrightarrow 111$$
Now $\bar{B}(A) = 1\cdot\frac{1}{2} + 2\cdot\frac{1}{4} + 3\cdot\frac{1}{8} + 3\cdot\frac{1}{8} = 1.75$ bits. We can reach the entropy limit! The simple
binary code is, in this case, less efficient than the
unequal-length code. Using the efficient code, we can transmit
the symbolic-valued signal having this alphabet 12.5%
faster. Furthermore, we know that no more efficient codebook
can be found because of Shannon's Theorem.
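To tie the numbers together, here is a short sketch comparing the two codebooks' average lengths; it assumes the probabilities and codebooks from the example above, and the symbol labels and helper name are illustrative.

```python
probs = {"a0": 1/2, "a1": 1/4, "a2": 1/8, "a3": 1/8}

# Simple binary code: every symbol gets the same 2-bit codeword length.
simple = {"a0": "00", "a1": "01", "a2": "10", "a3": "11"}

# Unequal-length (prefix) code: shorter codewords for more probable symbols.
efficient = {"a0": "0", "a1": "10", "a2": "110", "a3": "111"}

def average_bits(codebook, probs):
    """B(A) = sum over symbols of codeword length times symbol probability."""
    return sum(len(codebook[s]) * p for s, p in probs.items())

b_simple = average_bits(simple, probs)        # 2.0 bits per symbol
b_efficient = average_bits(efficient, probs)  # 1.75 bits per symbol, the entropy limit
print(b_simple, b_efficient)
print(f"speedup: {(b_simple - b_efficient) / b_simple:.1%}")  # 12.5%
```

The 12.5% figure is simply the relative reduction in average length from 2 bits per symbol to 1.75 bits per symbol.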