===========================================================================
CSC 310                     Lecture 17 Summary                    Fall 2007
===========================================================================

Topics:
- Preview of Shannon's Noisy Coding Theorem
- Example of why we cannot use the BSC beyond capacity
- Linear algebra review
- References: sections 9.6, 9.7, 10.4.


-----------------------------------------
Preview of Shannon's Noisy Coding Theorem
-----------------------------------------

Statement:
  For every channel with capacity C, every desired error probability
  epsilon > 0, and every desired rate R < C, there exists a code of some
  length N having rate at least R such that the probability of error when
  decoding is at most epsilon.

Ideas:
- Theoretically, for every rate R < C and every epsilon > 0, there is some
  code achieving that rate and that error probability.
- The catch is that N might be huge.
- In general, we do not have a stronger theorem saying that there is a
  "practical" code achieving those bounds.

References: sections 9.6, 9.7.


----------------------------------------------------
Example of why we cannot use the BSC beyond capacity
----------------------------------------------------

Idea:
- We saw that the capacity of the BSC with flip probability f is 1 - H_2(f).
- Suppose we have a "good" (N,K) block code for this channel.
- Consider the following source:
  - generate K bits independently, each with probabilities {.5, .5};
  - generate N bits independently, each with probability f of being a 1.
- Given a string (x,y) generated by this source, compress it as follows:
  - use the (N,K) block code to produce the codeword w for the K-bit
    block x;
  - do bit-wise addition modulo 2 of w and y to get the encoding z.
- To decompress z, use the decoder for the (N,K) block code.  Since y acts
  exactly like the "noise" that the channel would introduce (i.e., a bit in
  y is 1 with probability f, and only then do w and z differ in that
  position), and since our code is "good", the probability that we can
  recover w from z is high.  Then derive y as w + z bit-wise modulo 2, and
  derive x from the codeword w.
- The entropy of the source is K*1 + N*H_2(f): K bits for x plus N*H_2(f)
  bits for y.
- The compressed encoding z is N bits long.
- By Shannon's Noiseless Coding Theorem, an (essentially lossless) encoding
  cannot be more than epsilon shorter than the entropy of the source, so
  N > K + N*H_2(f) - epsilon, and thus K/N < 1 - H_2(f) + epsilon/N.
- So the rate K/N of our code cannot exceed the capacity 1 - H_2(f) by much.
  (A numeric check of this bound and a small simulation of the compression
  scheme appear at the end of this summary.)

Reference: section 10.4 contains a slightly different example, but it works
along the same lines.


---------------------
Linear algebra review
---------------------

- Review such terms as: field, vector space, subspace, linear independence,
  basis, dimension.
- We will restrict our attention to channels with binary inputs {0,1}, and
  we will look at the input alphabet (F_2)^N of the N-th extension as an
  N-dimensional vector space over the field F_2.
- An N-bit string is a vector in that space, and we can add vectors by
  doing bit-wise modulo 2 addition, as in the sketch below.
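
Bit-wise modulo 2 addition of N-bit strings is just XOR in each position.
A minimal Python sketch (not from the lecture; the names and values are
only illustrative) of vectors in (F_2)^N and their addition:

    # Vectors in (F_2)^N, represented as lists of 0/1 integers.

    def add_f2(u, v):
        """Add two vectors over F_2: bit-wise addition modulo 2 (XOR)."""
        assert len(u) == len(v)
        return [(a + b) % 2 for a, b in zip(u, v)]

    u = [1, 0, 1, 1, 0]
    v = [0, 1, 1, 0, 0]
    w = add_f2(u, v)                     # [1, 1, 0, 1, 0]

    # Over F_2 every vector is its own additive inverse: u + u = 0 ...
    assert add_f2(u, u) == [0, 0, 0, 0, 0]
    # ... so subtraction is the same operation as addition: (u + v) + v = u.
    assert add_f2(w, v) == u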
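
Returning to the BSC example above, here is a quick numeric check of the
capacity 1 - H_2(f) and of the bound K/N < 1 - H_2(f) + epsilon/N.  The
values of f, N, and K are assumptions chosen only for illustration:

    from math import log2

    def H2(f):
        """Binary entropy function H_2(f), in bits."""
        return -f * log2(f) - (1 - f) * log2(1 - f)

    f = 0.1                        # assumed flip probability of the BSC
    C = 1 - H2(f)                  # capacity: about 0.531 bits per use

    N, K = 1000, 500               # assumed block-code parameters
    source_entropy = K + N * H2(f) # K bits for x plus N*H_2(f) bits for y

    print(f"capacity 1 - H_2(f) = {C:.3f}")
    print(f"rate K/N            = {K / N:.3f}")
    print(f"source entropy      = {source_entropy:.1f} bits, vs. N = {N}")

With these numbers the rate 0.5 is below the capacity (about 0.531), and
the source entropy (about 969 bits) is indeed below the encoding length N,
consistent with the Noiseless Coding Theorem.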
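
Finally, a small end-to-end sketch of the compression scheme itself.  The
lecture assumes an arbitrary "good" (N,K) block code; as a placeholder this
sketch uses a (3,1) repetition code applied bit by bit (an effective
(3K,K) code with rate 1/3), decoded by majority vote:

    import random

    def encode(x):
        """(3,1) repetition code, bit by bit: each bit becomes 3 copies."""
        return [b for b in x for _ in range(3)]

    def decode(z):
        """Majority vote over each group of 3 received bits."""
        return [int(sum(z[i:i + 3]) >= 2) for i in range(0, len(z), 3)]

    random.seed(0)
    f = 0.05                                          # assumed flip prob.
    K = 20
    x = [random.randint(0, 1) for _ in range(K)]      # K uniform bits
    w = encode(x)                                     # codeword, N = 3K bits
    N = len(w)
    y = [int(random.random() < f) for _ in range(N)]  # N Bernoulli(f) bits

    # Compress (x, y) into the N-bit string z = w + y (bit-wise modulo 2).
    z = [(a + b) % 2 for a, b in zip(w, y)]

    # Decompress: decode z to recover x, then y = w + z (bit-wise modulo 2).
    x_hat = decode(z)
    w_hat = encode(x_hat)
    y_hat = [(a + b) % 2 for a, b in zip(w_hat, z)]

    print("recovered x:", x_hat == x)   # True whenever decoding succeeds
    print("recovered y:", y_hat == y)

Because this placeholder code's rate is far below capacity, decoding
almost always succeeds for small f; the point of the argument above is
that no choice of code can make it succeed once K/N exceeds 1 - H_2(f).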