The Information Structuralist

Information flow on graphs

Posted in Information Theory, Models of Complex Stochastic Systems, Probability by mraginsky on May 3, 2014

Models of complex systems built from simple, locally interacting components arise in many fields, including statistical physics, biology, artificial intelligence, and communication networks. The quest to understand and quantify the fundamental limits on the ability of such systems to store and process information has led to a variety of interesting and insightful results that draw upon probability, combinatorics, information theory, and the theory of discrete and continuous dynamical systems. In this post, I would like to focus on a model of distributed storage that was analyzed in 1975 by Donald Dawson in a very nice paper, which deserves to be more widely known.


Two public service announcements

Posted in Information Theory, Public Service Announcements by mraginsky on February 25, 2014

1. The 2014 IEEE North American Summer School on Information Theory will take place June 18-21, 2014 at the Fields Institute in Toronto, Canada.

2. For those of you who use Matlab or Octave, there is a new open-source Information Theoretical Estimators (ITE) toolbox, “capable of estimating many different variants of entropy, mutual information, divergence, association measures, cross quantities, and kernels on distributions.” Some more details are available in the guest post by the toolbox’s creator, Zoltán Szabó, at the Princeton Information Theory b-log.

A graph-theoretic derivation of the Gilbert-Varshamov bound

Posted in Coding Theory, Information Theory, Mathematics by mraginsky on September 23, 2013

Just a quick note for my reference, but it may be of interest to others.

Let {A_q(n,d)} denote the size of the largest code over a {q}-ary alphabet that has blocklength {n} and minimum distance {d}. The well-known Gilbert-Varshamov bound says that

\displaystyle  A_q(n,d+1) \ge \frac{q^n}{V_q(n,d)},

where

\displaystyle  V_q(n,d) = \sum^d_{i=0}{n \choose i}(q-1)^i

is the volume of a Hamming ball of radius {d} in {\{0,\ldots,q-1\}^n}. The usual way of arriving at the GV bound is through a greedy construction: pick an arbitrary codeword {x}, then keep adding codewords that are at Hamming distance at least {d+1} from all codewords that have already been picked. When this procedure terminates, the union of the Hamming balls of radius {d} centered at the chosen codewords must cover all of {\{0,\ldots,q-1\}^n}; otherwise, there would be at least one more word at distance at least {d+1} from the ones already picked, and the procedure could not have terminated. Since covering the whole space requires the number of chosen codewords times {V_q(n,d)} to be at least {q^n}, the resulting code, whose minimum distance is at least {d+1}, has at least {q^n/V_q(n,d)} codewords, which is precisely the GV bound.
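Since the construction is completely elementary, here is a minimal Python sketch of it (brute force, with hypothetical helper names, so only feasible for tiny {q} and {n}); it builds a greedy code and prints its size next to the GV bound:

```python
# Greedy Gilbert-Varshamov construction over all q-ary words of length n.
# Hypothetical helper names; brute force, so only feasible for tiny q and n.
from itertools import product
from math import comb


def hamming_ball_volume(q, n, d):
    """V_q(n, d): number of words within Hamming distance d of a fixed word."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(d + 1))


def greedy_gv_code(q, n, d):
    """Keep adding words at Hamming distance >= d + 1 from everything picked so far."""
    def dist(x, y):
        return sum(a != b for a, b in zip(x, y))

    code = []
    for word in product(range(q), repeat=n):
        if all(dist(word, c) >= d + 1 for c in code):
            code.append(word)
    return code


if __name__ == "__main__":
    q, n, d = 2, 7, 2
    code = greedy_gv_code(q, n, d)
    # The greedy code has minimum distance >= d + 1 and size >= q^n / V_q(n, d).
    print(len(code), q ** n / hamming_ball_volume(q, n, d))
```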

As it turns out, there is another way of deriving the GV bound using graph theory that I have learned from a nice paper by Van Vu and Lei Wu. They use this graph-theoretic interpretation to arrive at an asymptotic improvement of the GV bound. Their result, which I will not go into here, extends an earlier result by Tao Jiang and Alex Vardy for binary codes. As far as I can tell, the graph-theoretic ideas go back to the Jiang-Vardy paper as well.

In order to proceed, we need some definitions and a lemma. Let {G = (V,E)} be an undirected graph. A set {S \subseteq V} of vertices is called independent if no two vertices in {S} are connected by an edge. The independence number of {G}, denoted by {I(G)}, is the cardinality of the largest independent set. The following lemma is folklore in graph theory:

Lemma 1 Suppose that {G} is {D}-regular, i.e., every vertex has exactly {D} neighbors. Then

\displaystyle  	I(G) \ge \frac{|V|}{D+1}. \ \ \ \ \ (1)

Proof: Let {I} be a maximal independent set. Any vertex {v \in V \backslash I} is connected by an edge to at least one {v' \in I}; otherwise, {I \cup \{v\}} would also be independent, contradicting maximality. Therefore, there are at least {|V \backslash I| = |V| - |I|} edges with one endpoint in {I} and the other in {V \backslash I}. On the other hand, because {G} is {D}-regular, there can be at most {D|I|} such edges. This means that

\displaystyle  	D|I| \ge |V| - |I|.

Rearranging gives {|I| \ge |V|/(D+1)}, and since {I(G) \ge |I|}, we get (1). \Box

Now let us construct the following graph (what Jiang and Vardy call the Gilbert graph): associate a vertex to each word in {\{0,\ldots,q-1\}^n}, and connect two vertices by an edge if and only if the Hamming distance between the corresponding words is at most {d}. This graph has {q^n} vertices, and each vertex has degree {D = V_q(n,d)-1}. Moreover, there is a one-to-one correspondence between independent sets of vertices and {q}-ary codes of length {n} and minimum distance at least {d+1}, and the independence number of the Gilbert graph is equal to {A_q(n,d+1)}. The bound (1) is then precisely the GV bound.
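To see the correspondence in action, here is a small brute-force sketch (again with hypothetical function names, and feasible only for very small {q} and {n}) that builds the Gilbert graph, reads off its common degree, and computes its independence number exactly:

```python
# Gilbert graph for tiny parameters: vertices are the q-ary words of length n,
# and two words are adjacent iff their Hamming distance is at most d.
# Hypothetical names; exhaustive search, so only usable for very small q and n.
from itertools import combinations, product


def gilbert_graph(q, n, d):
    def dist(x, y):
        return sum(a != b for a, b in zip(x, y))

    words = list(product(range(q), repeat=n))
    adj = {w: set() for w in words}
    for x, y in combinations(words, 2):
        if dist(x, y) <= d:
            adj[x].add(y)
            adj[y].add(x)
    return words, adj


def independence_number(words, adj):
    """Exact independence number by exhaustive search over vertex subsets."""
    for r in range(len(words), 0, -1):
        for subset in combinations(words, r):
            if all(y not in adj[x] for x, y in combinations(subset, 2)):
                return r
    return 0


if __name__ == "__main__":
    q, n, d = 2, 4, 1
    words, adj = gilbert_graph(q, n, d)
    D = len(adj[words[0]])                   # every degree equals V_q(n, d) - 1
    alpha = independence_number(words, adj)  # equals A_q(n, d + 1)
    print(D, alpha, len(words) / (D + 1))    # alpha is at least the GV bound (1)
```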

Briefly, the Vu-Wu improvement of the GV bound exploits the deep fact that, when the neighborhood of any vertex in a {D}-regular graph is very sparse (in the sense that it has far fewer than {{D \choose 2}} edges), the lower bound (1) can be significantly tightened. Exactly counting the edges in such a neighborhood of a vertex of the Gilbert graph (by symmetry, we may as well look at the neighborhood of the all-zero word) is apparently rather complicated; Vu and Wu instead work in a suitable asymptotic regime, where {n} is large and {d = \alpha n} for some {\alpha \le 1-1/q}, and replace exact combinatorial bounds by entropy bounds.
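For reference, the standard entropy estimate behind this regime (nothing specific to the Vu-Wu paper, just the usual asymptotics of the Hamming ball volume) is

\displaystyle  \frac{1}{n}\log_q V_q(n,\alpha n) \longrightarrow h_q(\alpha) := \alpha\log_q(q-1) - \alpha\log_q\alpha - (1-\alpha)\log_q(1-\alpha)

as {n \rightarrow \infty}, valid for {0 \le \alpha \le 1-1/q}; plugging this into the GV bound shows the existence of codes of rate at least {1 - h_q(\alpha) + o(1)}.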

Lossless source coding at Western Union

Posted in Information Theory by mraginsky on September 19, 2013

… circa 1935. Here is a quote from Single-Story America (translated as Little Golden America), an American travelogue by Ilya Ilf and Yevgeny Petrov:

There is a whole book of readymade telegrams, long and convincing, lavishly composed telegrams for all occasions. Sending such a telegram costs only twenty-five cents. You see, what gets transmitted over the telegraph is not the text of the telegram, but simply the number under which it is listed in the book, and the signature of the sender. This is quite a funny thing, reminiscent of Drugstore Breakfast #2. Everything is served up in a ready form, and the customer is totally freed from the unpleasant necessity to think, and to spend money on top of it.

Typical set encoding, anyone?

ISIT 2013: two plenaries on concentration of measure

Posted in Conference Blogging, Information Theory, Mathematics, Probability by mraginsky on July 29, 2013

Of the five plenary talks at this year’s ISIT, two were about concentration of measure: Katalin Marton’s Shannon lecture on “Distance-divergence inequalities” and Gábor Lugosi’s talk on “Concentration inequalities and the entropy method” the next morning. Since the topic of measure concentration is dear to my heart, I thought I would write down a few unifying themes.


It’s for a good cause!

Posted in Information Theory, Public Service Announcements by mraginsky on May 2, 2013

Endorse the petition to honor Claude Elwood Shannon with a United States Postal Service stamp on the 100th anniversary of his birth.

Stochastic kernels vs. conditional probability distributions

Posted in Control, Feedback, Information Theory, Probability by mraginsky on March 17, 2013

Larry Wasserman‘s recent post about misinterpretation of p-values is a good reminder about a fundamental distinction anyone working in information theory, control or machine learning should be aware of — namely, the distinction between stochastic kernels and conditional probability distributions.


Conditional mutual information and the best Markov approximation

Posted in Information Theory by mraginsky on January 4, 2013

I came across a neat and useful result about conditional mutual information while reading a paper on quantum information theory.


Public Service Announcement: Princeton-Stanford Information Theory b-log

Posted in Information Theory, Public Service Announcements by mraginsky on January 3, 2013

Sergio Verdú has started a brand new Information Theory b-log that should be of interest to the readers of this blog. The ‘About’ page says:

Welcome to the Princeton-Stanford Information Theory b-log! All researchers working on information theory are invited to participate by posting items to the blog. Both original material and pointers to the web are welcome.

Enjoy!

Concentrate, concentrate!

Posted in Information Theory, Mathematics, Narcissism, Papers and Preprints, Probability by mraginsky on December 19, 2012

Igal Sason and I have just posted to arXiv our tutorial paper “Concentration of Measure Inequalities in Information Theory, Communications and Coding”, which was submitted to Foundations and Trends in Communications and Information Theory. Here is the abstract:

This tutorial article is focused on some of the key modern mathematical tools that are used for the derivation of concentration inequalities, on their links to information theory, and on their various applications to communications and coding.

The first part of this article introduces some classical concentration inequalities for martingales, and it also derives some recent refinements of these inequalities. The power and versatility of the martingale approach is exemplified in the context of binary hypothesis testing, codes defined on graphs and iterative decoding algorithms, and some other aspects that are related to wireless communications and coding.

The second part of this article introduces the entropy method for deriving concentration inequalities for functions of many independent random variables, and it also exhibits its multiple connections to information theory. The basic ingredients of the entropy method are discussed first in conjunction with the closely related topic of logarithmic Sobolev inequalities. This discussion is complemented by a related viewpoint based on probability in metric spaces. This viewpoint centers around the so-called transportation-cost inequalities, whose roots are in information theory. Some representative results on concentration for dependent random variables are briefly summarized, with emphasis on their connections to the entropy method.

Finally, the tutorial addresses several applications of the entropy method and related information-theoretic tools to problems in communications and coding. These include strong converses for several source and channel coding problems, empirical distributions of good channel codes with non-vanishing error probability, and an information-theoretic converse for concentration of measure.

There are already many excellent sources on concentration of measure; what makes ours different is the emphasis on information-theoretic aspects, both in the general theory and in applications. Comments, suggestions, thoughts are very welcome.
