The Information Structuralist

A graph-theoretic derivation of the Gilbert-Varshamov bound

Posted in Coding Theory, Information Theory, Mathematics by mraginsky on September 23, 2013

Just a quick note for my reference, but it may be of interest to others.

Let {A_q(n,d)} denote the size of the largest code over a {q}-ary alphabet that has blocklength {n} and minimum distance {d}. The well-known Gilbert-Varshamov bound says that

\displaystyle  A_q(n,d+1) \ge \frac{q^n}{V_q(n,d)},

where

\displaystyle  V_q(n,d) = \sum^d_{i=0}{n \choose i}(q-1)^i

is the volume of a Hamming ball of radius {d} in {\{0,\ldots,q-1\}^n}. The usual way of arriving at the GV bound is through a greedy construction: pick an arbitrary codeword {x}, then keep adding codewords that are at Hamming distance at least {d+1} from all codewords that have already been picked. When this procedure terminates, the Hamming balls of radius {d} around the codewords must cover all of {\{0,\ldots,q-1\}^n}: otherwise, there would be at least one more word at distance at least {d+1} from all of the codewords already picked, and the procedure could not have terminated. Covering the whole space with {M} balls of volume {V_q(n,d)} forces {M \cdot V_q(n,d) \ge q^n}, and since the constructed code has minimum distance at least {d+1}, this is precisely the GV bound.
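
For concreteness, here is a minimal Python sketch of this greedy procedure (it is not part of the original argument, and the parameters at the end are arbitrary small values chosen only so that the check runs quickly):

from itertools import product
from math import comb

def hamming_ball_volume(n, d, q):
    # V_q(n, d): number of words within Hamming distance d of any fixed word
    return sum(comb(n, i) * (q - 1) ** i for i in range(d + 1))

def hamming_distance(x, y):
    return sum(a != b for a, b in zip(x, y))

def greedy_code(n, d, q):
    # keep adding words at distance >= d + 1 from all codewords picked so far
    code = []
    for word in product(range(q), repeat=n):
        if all(hamming_distance(word, c) >= d + 1 for c in code):
            code.append(word)
    return code

# small sanity check of the GV bound: |C| * V_q(n, d) >= q^n
n, d, q = 5, 2, 2
code = greedy_code(n, d, q)
assert len(code) * hamming_ball_volume(n, d, q) >= q ** n
print(len(code), q ** n / hamming_ball_volume(n, d, q))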

As it turns out, there is another way of deriving the GV bound using graph theory that I have learned from a nice paper by Van Vu and Lei Wu. They use this graph-theoretic interpretation to arrive at an asymptotic improvement of the GV bound. Their result, which I will not go into here, extends an earlier result by Tao Jiang and Alex Vardy for binary codes. As far as I can tell, the graph-theoretic ideas go back to the Jiang-Vardy paper as well.

In order to proceed, we need some definitions and a lemma. Let {G = (V,E)} be an undirected graph. A set {S \subseteq V} of vertices is called independent if no two vertices in {S} are connected by an edge. The independence number of {G}, denoted by {I(G)}, is the cardinality of the largest independent set. The following lemma is folklore in graph theory:

Lemma 1 Suppose that {G} is {D}-regular, i.e., every vertex has exactly {D} neighbors. Then

\displaystyle  	I(G) \ge \frac{|V|}{D+1}. \ \ \ \ \ (1)

Proof: Let {I} be a maximal independent set. Any vertex {v \in V\backslash I} is connected by an edge to at least one {v' \in I}, because otherwise {I \cup \{v\}} would be a larger independent set, contradicting maximality. Since distinct vertices of {V \backslash I} contribute distinct such edges, there are at least {|V \backslash I| = |V| - |I|} edges with one endpoint in {I} and the other in {V \backslash I}. On the other hand, because {G} is {D}-regular, there can be at most {D|I|} such edges. This means that

\displaystyle  	D|I| \ge |V| - |I|.

Rearranging gives {|I| \ge |V|/(D+1)}, and since {I(G) \ge |I|}, we get (1). \Box

Now let us construct the following graph (what Jiang and Vardy call the Gilbert graph): associate a vertex to each word in {\{0,\ldots,q-1\}^n}, and connect two vertices by an edge if and only if the Hamming distance between the corresponding words is at most {d}. This graph has {q^n} vertices, and each vertex has degree {D = V_q(n,d)-1}. Moreover, there is a one-to-one correspondence between independent sets of vertices and {q}-ary codes of length {n} and minimum distance at least {d+1}, and the independence number of the Gilbert graph is equal to {A_q(n,d+1)}. The bound (1) is then precisely the GV bound.
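
To make the correspondence concrete, here is a small Python sketch (not from the Vu-Wu or Jiang-Vardy papers, with tiny illustrative parameters) that builds the Gilbert graph and extracts a maximal independent set greedily; the result is a code of minimum distance at least {d+1} whose size satisfies the bound of Lemma 1, i.e., the GV bound:

from itertools import product

def gilbert_graph(n, d, q):
    # vertices: all q-ary words of length n; edges join words at Hamming distance <= d
    words = list(product(range(q), repeat=n))
    def dist(x, y):
        return sum(a != b for a, b in zip(x, y))
    adj = {w: {v for v in words if v != w and dist(v, w) <= d} for w in words}
    return words, adj

def greedy_maximal_independent_set(words, adj):
    # scan the vertices once; pick any vertex not adjacent to a previously picked one
    independent, excluded = [], set()
    for w in words:
        if w not in excluded:
            independent.append(w)
            excluded |= adj[w]
    return independent

n, d, q = 4, 1, 2
words, adj = gilbert_graph(n, d, q)
code = greedy_maximal_independent_set(words, adj)
D = len(adj[words[0]])  # the graph is regular with degree V_q(n, d) - 1
assert len(code) >= len(words) / (D + 1)  # the bound of Lemma 1, i.e., the GV bound
print(len(code), len(words) / (D + 1))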

Briefly, the Vu-Wu improvement of the GV bound exploits the deep fact that, when the neighborhood of any vertex in a {D}-regular graph is very sparse (in the sense that it has far fewer than {{D \choose 2}} edges), the lower bound (1) can be significantly tightened. Exactly counting the number of edges in such a neighborhood of a vertex of the Gilbert graph (by symmetry the graph is vertex-transitive, so we may as well look at the neighborhood of the all-zero word) is apparently rather complicated; Vu and Wu instead look at a suitable asymptotic regime in which {n} is large and {d = \alpha n} for some {\alpha \le 1-1/q}, and replace exact combinatorial bounds by entropy bounds.
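
For reference, the classical GV bound takes the following entropy form in the asymptotic regime just described (this is the standard statement, not the improvement discussed in the paper): using the fact that {V_q(n,\alpha n) = q^{n h_q(\alpha) + o(n)}} for {0 \le \alpha \le 1-1/q}, where

\displaystyle  h_q(\alpha) = \alpha \log_q (q-1) - \alpha \log_q \alpha - (1-\alpha)\log_q (1-\alpha)

is the {q}-ary entropy function, the bound {A_q(n,d+1) \ge q^n/V_q(n,d)} with {d = \alpha n} reads

\displaystyle  \frac{1}{n}\log_q A_q(n,\alpha n + 1) \ge 1 - h_q(\alpha) + o(1).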

ISIT 2013: two plenaries on concentration of measure

Posted in Conference Blogging, Information Theory, Mathematics, Probability by mraginsky on July 29, 2013

Of the five plenary talks at this year’s ISIT, two were about concentration of measure: Katalin Marton’s Shannon lecture on “Distance-divergence inequalities” and Gabor Lugosi’s talk on “Concentration inequalities and the entropy method” the next morning. Since the topic of measure concentration is dear to my heart, I thought I would write down a few unifying themes.

Concentrate, concentrate!

Posted in Information Theory, Mathematics, Narcissism, Papers and Preprints, Probability by mraginsky on December 19, 2012

Igal Sason and I have just posted to arXiv our tutorial paper “Concentration of Measure Inequalities in Information Theory, Communications and Coding”, which was submitted to Foundations and Trends in Communications and Information Theory. Here is the abstract:

This tutorial article is focused on some of the key modern mathematical tools that are used for the derivation of concentration inequalities, on their links to information theory, and on their various applications to communications and coding.

The first part of this article introduces some classical concentration inequalities for martingales, and it also derives some recent refinements of these inequalities. The power and versatility of the martingale approach is exemplified in the context of binary hypothesis testing, codes defined on graphs and iterative decoding algorithms, and some other aspects that are related to wireless communications and coding.

The second part of this article introduces the entropy method for deriving concentration inequalities for functions of many independent random variables, and it also exhibits its multiple connections to information theory. The basic ingredients of the entropy method are discussed first in conjunction with the closely related topic of logarithmic Sobolev inequalities. This discussion is complemented by a related viewpoint based on probability in metric spaces. This viewpoint centers around the so-called transportation-cost inequalities, whose roots are in information theory. Some representative results on concentration for dependent random variables are briefly summarized, with emphasis on their connections to the entropy method.

Finally, the tutorial addresses several applications of the entropy method and related information-theoretic tools to problems in communications and coding. These include strong converses for several source and channel coding problems, empirical distributions of good channel codes with non-vanishing error probability, and an information-theoretic converse for concentration of measure.

There are already many excellent sources on concentration of measure; what makes ours different is the emphasis on information-theoretic aspects, both in the general theory and in applications. Comments, suggestions, thoughts are very welcome.

ISIT 2011: plenaries, the Shannon lecture

Posted in Conference Blogging, Information Theory, Mathematics by mraginsky on August 30, 2011

Better late than never, right? Besides, Anand all but made sure that I would blog about it eventually.

I attended this year’s Shannon lecture and all the plenaries except for Wojtek Szpankowski’s talk on the information theory of algorithms and combinatorics (I bravely fought the jet lag and lost), but you can watch it here. Here are, shall we say, some impressionistic sketches based on the notes I took.

Blackwell’s proof of Wald’s identity

Posted in Mathematics, Probability by mraginsky on April 29, 2011

Every once in a while you come across a mathematical argument of such incredible beauty that you feel compelled to tell the whole world about it. This post is about one such gem: David Blackwell’s 1946 proof of Wald’s identity on the expected value of a randomly stopped random walk. In fact, even forty years after the publication of that paper, in a conversation with Morris DeGroot, Blackwell said: “That’s a paper I’m still very proud of. It just gives me pleasant feelings every time I think about it.”

What have the Romans ever done for us?

Posted in Mathematics, Nuggets of Wisdom by mraginsky on April 20, 2011

In Alfréd Rényi‘s Dialogues on Mathematics, Archimedes says this to King Hieron:

… Mathematics rewards only those who are interested in it not only for its rewards but also for itself. Mathematics is like your daughter, Helena, who suspects every time a suitor appears that he is not really in love with her, but is only interested in her because he wants to be the king’s son-in-law. She wants a husband who loves her for her own beauty, wit and charm, and not for the wealth and power he can get by marrying her. Similarly, mathematics reveals its secrets only to those who approach it with pure love, for its own beauty. Of course, those who do this are also rewarded with results of practical importance. But if somebody asks at each step, “What can I get out of this?” he will not get far. You remember I told you that the Romans would never be really successful in applying mathematics. Well, now you can see why: they are too practical.

I couldn’t help but think of this little gem.