Supplementary_infotheory

Lecture 1 - Data Compression

Contributor - Sahasranand Kodinthirapully Ramanadhan

In the first part of this tutorial, we discuss fixed-length, almost lossless compression. We start by deriving single-shot bounds for the minimum length of block source codes and then extend them to yield the optimal source code rate for a discrete memoryless source. In the second part, we characterize the minimum expected codeword lengths for variable-length, lossless compression using uniquely decodable codes, prefix-free codes, and non-singular codes.
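
For reference, the two central results of this lecture can be summarized as follows (standard statements; the lecture's notation may differ):

```latex
% Optimal rate for fixed-length, almost-lossless compression of a discrete
% memoryless source $X_1, X_2, \ldots \sim P_X$:
R^* = H(X) \quad \text{bits per source symbol.}

% For variable-length, lossless compression with a prefix-free (hence uniquely
% decodable) code, the minimum expected codeword length $L^*$ satisfies
H(X) \le L^* < H(X) + 1,
% where the codeword lengths must obey the Kraft inequality
% $\sum_x 2^{-\ell(x)} \le 1$.
```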

Lecture 2 - ECC

Contributor - Vinay Kumar Bindiganavile Ramadas

In this lecture, we first review Shannon’s channel coding theorem and establish the need for designing codes that can be implemented in practice. A brief history of error-correcting codes and a few practical examples are showcased. This is followed by a description of linear codes, with the spotlight on Hamming codes, and an outline of their decoding algorithms. Random linear codes are shown to achieve the capacity of the BSC. Recent advances in the design of capacity-achieving and capacity-approaching codes, and the techniques used, are highlighted.
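
As a concrete illustration of a linear code and its decoder, here is a minimal sketch of the (7, 4) Hamming code with syndrome decoding; the systematic generator and parity-check matrices below are one standard choice, not necessarily the ones used in the lecture.

```python
import numpy as np

# (7, 4) Hamming code in systematic form: G = [I | P], H = [P^T | I].
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])     # 4x7 generator matrix
H = np.hstack([P.T, np.eye(3, dtype=int)])   # 3x7 parity-check matrix

def encode(msg):
    """Encode a length-4 binary message into a length-7 codeword."""
    return (np.array(msg) @ G) % 2

def decode(received):
    """Correct up to one bit flip via syndrome decoding; return the message bits."""
    r = np.array(received).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():
        # For a single error, the syndrome equals the column of H at the error position.
        for j in range(H.shape[1]):
            if np.array_equal(H[:, j], syndrome):
                r[j] ^= 1
                break
    return r[:4]  # systematic code: the first 4 bits carry the message

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[2] ^= 1               # introduce a single-bit error
print(decode(cw))        # -> [1 0 1 1]
```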

Lecture 3 - Quantization

Contributor - Shubham Kumar Jha

In this lecture, we will study the following:

  1. Scalar quantization: uniform quantizers, performance measures, and high-resolution analysis (see the sketch after this list)
  2. The concept of dithering
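
A minimal numerical sketch of these ideas, assuming a mid-rise uniform quantizer and a source uniform on [-1, 1); both are illustrative choices, not taken from the lecture.

```python
import numpy as np

# Uniform quantization with step size delta, checked against the
# high-resolution approximation MSE ~ delta^2 / 12.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100_000)   # source samples
delta = 2.0 / 64                           # 64 levels over [-1, 1)

def quantize(x, delta):
    """Mid-rise uniform quantizer: map each sample to its cell midpoint."""
    return delta * (np.floor(x / delta) + 0.5)

mse = np.mean((x - quantize(x, delta)) ** 2)
print(mse, delta**2 / 12)                  # the two numbers should be close

# Dithering: add independent noise uniform on [-delta/2, delta/2) before
# quantization and subtract it afterwards; in this subtractive-dither setting
# the quantization error becomes statistically independent of the input.
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)
x_hat = quantize(x + u, delta) - u
print(np.mean((x - x_hat) ** 2))           # again close to delta^2 / 12
```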

Lecture 4 - Information-theoretic lower bounds

Contributor - Aditya Vikram Singh

In this lecture, we discuss information-theoretic lower bounds in statistics, focusing on Fano's method and Assouad's method.
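
For reference, Fano's method is typically stated as the following minimax lower bound (standard form; the lecture's notation may differ):

```latex
% Fano's method: for parameters \theta_1,\dots,\theta_M that are
% 2\delta-separated in a (pseudo)metric d, let V be uniform on \{1,\dots,M\}
% and let X be drawn from P_{\theta_V}. Then every estimator \hat\theta
% based on X satisfies
\max_{j} \, \mathbb{E}_{\theta_j}\!\left[ d(\hat\theta, \theta_j) \right]
  \;\ge\; \delta \left( 1 - \frac{I(V; X) + \log 2}{\log M} \right).
```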

Lecture 5 - Information Theory and Large Deviations

Contributor - Sarath Ampadi Yasodharan

In this lecture, we study the probabilities of large deviations of the empirical measure of n independent and identically distributed (i.i.d.) random variables from their true distribution γ. We first introduce a variational formula for positive functions of i.i.d. random variables in terms of relative entropy with respect to γ. We then use this to prove Sanov’s theorem, i.e., we show that the probability that the empirical measure is close to a distribution μ decays exponentially, at a rate given by the relative entropy D(μ‖γ), as shown in the video.
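
The two results referred to above are usually stated as follows (standard forms; the lecture may use a different but equivalent version of the variational formula):

```latex
% Donsker--Varadhan / Gibbs variational formula: for a bounded measurable
% function g and base distribution \gamma,
\log \mathbb{E}_{\gamma}\!\left[ e^{g} \right]
  \;=\; \sup_{\mu} \left\{ \mathbb{E}_{\mu}[g] - D(\mu \,\|\, \gamma) \right\}.

% Sanov's theorem: the empirical measure L_n of n i.i.d. samples from \gamma
% satisfies a large deviation principle with rate function D(\cdot \,\|\, \gamma);
% informally, for a set of distributions A,
\mathbb{P}\!\left( L_n \in A \right)
  \;\approx\; \exp\!\left( -n \inf_{\mu \in A} D(\mu \,\|\, \gamma) \right).
```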

Lecture 6 - Information Geometry

Contributor - Karthik Periyapatna Narayanaprasad

In this lecture, we will discuss information geometry and its applications to statistics.

Lecture 7 - Concentration Inequalities

Contributor - Lekshmi Ramesh

In this lecture, we will cover the basics of a very useful set of tools that are used to study tail probabilities of random variables. We will start by introducing a basic toolkit for studying sums of independent random variables, including the Chernoff, Hoeffding, and Bernstein bounds, and then cover some techniques for handling more general functions of random variables. We will then see how ideas from information theory can be used to derive more general inequalities, and this will be demonstrated by a brief discussion of the entropy method. Towards the end of the lecture, we will see some applications of these tools.
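
As one representative example from this toolkit, Hoeffding's inequality bounds the upper tail of a sum of bounded independent random variables (standard statement):

```latex
% Hoeffding's inequality: if X_1,\dots,X_n are independent with
% X_i \in [a_i, b_i] almost surely, then for every t > 0,
\mathbb{P}\!\left( \sum_{i=1}^{n} \left( X_i - \mathbb{E}[X_i] \right) \ge t \right)
  \;\le\; \exp\!\left( - \frac{2 t^2}{\sum_{i=1}^{n} (b_i - a_i)^2} \right).
```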

Lecture 8 - Shannon’s Channel Coding Theorem

Contributor - Shubham Kumar Jha

In this lecture, we will study repetition codes and Shannon’s channel coding theorem, and evaluate the capacities of the binary erasure channel (BEC), the binary symmetric channel (BSC), and the AWGN channel.
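
For reference, these capacities have the following standard closed forms, in bits per channel use, with erasure probability ε, crossover probability p, power constraint P, and noise variance N:

```latex
C_{\mathrm{BEC}(\epsilon)} = 1 - \epsilon, \qquad
C_{\mathrm{BSC}(p)} = 1 - h_2(p), \qquad
C_{\mathrm{AWGN}} = \tfrac{1}{2} \log_2\!\left( 1 + \tfrac{P}{N} \right),
% where h_2(p) = -p \log_2 p - (1-p) \log_2 (1-p) is the binary entropy function.
```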