Counting bits with Vapnik and Chervonenkis
Machine learning is about enabling computers to improve their performance on a given task as they get more data. Can we express this intuition quantitatively using information-theoretic techniques? In this post, I will discuss a classic paper by David Haussler, Michael Kearns, and Robert Schapire that (to the best of my knowledge) took the first step in this direction, and describe some of their results, recast in a more explicitly information-theoretic way.
Information flow on graphs
Models of complex systems built from simple, locally interacting components arise in many fields, including statistical physics, biology, artificial intelligence, communication networks, etc. The quest to understand and to quantify the fundamental limits on the ability of such systems to store and process information has led to a variety of interesting and insightful results that draw upon probability, combinatorics, information theory, discrete and continuous dynamical systems, etc. In this post, I would like to focus on a model of distributed storage that was analyzed in 1975 by Donald Dawson in a very nice paper, which deserves to be more widely known.
Two public service announcements
1. The 2014 IEEE North American Summer School on Information Theory will take place June 18-21, 2014 at the Fields Institute in Toronto, Canada.
2. For those of you who use Matlab or Octave, there is a new Information Theoretical Estimators (ITE) toolbox, an open-source toolbox “capable of estimating many different variants of entropy, mutual information, divergence, association measures, cross quantities, and kernels on distributions.” Some more details are available in the guest post by the toolbox’s creator, Zoltán Szabó, at the Princeton Information Theory b-log.
A graph-theoretic derivation of the Gilbert-Varshamov bound
Just a quick note for my reference, but it may be of interest to others.
Let $A_q(n,d)$ denote the size of the largest code over a $q$-ary alphabet that has blocklength $n$ and minimum distance $d$. The well-known Gilbert-Varshamov bound says that

$$A_q(n,d) \ge \frac{q^n}{\mathrm{Vol}_q(n,d-1)},$$

where $\mathrm{Vol}_q(n,r) = \sum_{j=0}^{r}\binom{n}{j}(q-1)^j$ is the volume of a Hamming ball of radius $r$ in $\{0,1,\dots,q-1\}^n$. The usual way of arriving at the GV bound is through a greedy construction: pick an arbitrary codeword $x_1$, then keep adding codewords that are at Hamming distance of at least $d$ from all codewords that have already been picked. When this procedure terminates, the complement of the union of the Hamming balls of radius $d-1$ around each of the codewords should be empty — otherwise, you will have at least one more codeword at distance of at least $d$ from the ones already picked, and this would mean that the procedure could not have terminated.
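To make the greedy construction concrete, here is a minimal Python sketch (my own illustration, not from any of the papers mentioned; the function names and the small example parameters are arbitrary). It scans all $q^n$ words, keeps a word whenever it is at distance at least $d$ from everything kept so far, and compares the resulting code size with the GV lower bound $q^n/\mathrm{Vol}_q(n,d-1)$.

```python
# Illustrative sketch of the greedy GV construction (not from the post).
import itertools
from math import comb

def hamming_distance(x, y):
    """Number of coordinates in which two words differ."""
    return sum(a != b for a, b in zip(x, y))

def greedy_code(q, n, d):
    """Greedily build a q-ary code of blocklength n and minimum distance d:
    scan all q^n words and keep a word whenever it is at Hamming distance
    at least d from every codeword kept so far."""
    code = []
    for word in itertools.product(range(q), repeat=n):
        if all(hamming_distance(word, c) >= d for c in code):
            code.append(word)
    return code

def ball_volume(q, n, r):
    """Vol_q(n, r): number of words within Hamming distance r of a fixed word."""
    return sum(comb(n, j) * (q - 1) ** j for j in range(r + 1))

if __name__ == "__main__":
    q, n, d = 2, 7, 3
    code = greedy_code(q, n, d)
    print(len(code), "codewords; GV bound:", q ** n / ball_volume(q, n, d - 1))
```

Of course, this brute-force scan is only feasible for tiny parameters; it is meant purely as an illustration of the argument above.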
As it turns out, there is another way of deriving the GV bound using graph theory that I have learned from a nice paper by Van Vu and Lei Wu. They use this graph-theoretic interpretation to arrive at an asymptotic improvement of the GV bound. Their result, which I will not go into here, extends an earlier result by Tao Jiang and Alex Vardy for binary codes. As far as I can tell, the graph-theoretic ideas go back to the Jiang-Vardy paper as well.
In order to proceed, we need some definitions and a lemma. Let $G = (V,E)$ be an undirected graph. A set $S \subseteq V$ of vertices is called independent if no two vertices in $S$ are connected by an edge. The independence number of $G$, denoted by $\alpha(G)$, is the cardinality of the largest independent set. The following lemma is folklore in graph theory:
Lemma 1 Suppose that $G$ is $D$-regular, i.e., every vertex has exactly $D$ neighbors. Then

$$\alpha(G) \ge \frac{|V|}{D+1}. \qquad (1)$$
Proof: Let $S \subseteq V$ be a maximal independent set. Any vertex $v \in V \setminus S$ is connected by an edge to at least one $u \in S$, because otherwise $v$ would have to be included in $S$, which would contradict maximality. Therefore, there are at least $|V| - |S|$ edges with one vertex in $S$ and another vertex in $V \setminus S$. On the other hand, because $G$ is $D$-regular, there can be at most $D|S|$ such edges. This means that

$$|V| - |S| \le D|S|.$$

Rearranging, we get $|S| \ge |V|/(D+1)$, and since $\alpha(G) \ge |S|$, this is precisely (1). $\Box$
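As a quick sanity check (my own example, not from the original post): the complete graph on $D+1$ vertices is $D$-regular and has independence number $1 = |V|/(D+1)$, so the bound (1) is tight; for the cycle on $n$ vertices, which is $2$-regular, the lemma gives $\alpha(G) \ge n/3$, while the true value is $\lfloor n/2 \rfloor$.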
Now let us construct the following graph (what Jiang and Vardy call the Gilbert graph): associate a vertex to each word in $\{0,1,\dots,q-1\}^n$, and connect two vertices by an edge if and only if the Hamming distance between the corresponding words is at most $d-1$. This graph has $q^n$ vertices, and each vertex has degree $D = \mathrm{Vol}_q(n,d-1) - 1$. Moreover, there is a one-to-one correspondence between independent sets of vertices and $q$-ary codes of length $n$ and minimum distance at least $d$, and the independence number of the Gilbert graph is equal to $A_q(n,d)$. The bound (1) is then precisely the GV bound.
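For small parameters one can build the Gilbert graph explicitly and check the two facts used above, namely that it is $D$-regular with $D = \mathrm{Vol}_q(n,d-1)-1$ and that Lemma 1 then yields the GV bound. Here is a rough sketch (again my own illustration; the helper names are made up):

```python
# Illustrative sketch of the Gilbert graph (not from the post).
import itertools
from math import comb

def gilbert_graph(q, n, d):
    """Adjacency lists of the Gilbert graph: vertices are the words in
    {0, ..., q-1}^n, with an edge between two distinct words whenever
    their Hamming distance is at most d - 1."""
    words = list(itertools.product(range(q), repeat=n))
    adj = {w: [] for w in words}
    for u, v in itertools.combinations(words, 2):
        if sum(a != b for a, b in zip(u, v)) <= d - 1:
            adj[u].append(v)
            adj[v].append(u)
    return adj

if __name__ == "__main__":
    q, n, d = 2, 5, 3
    adj = gilbert_graph(q, n, d)
    degrees = {len(neighbors) for neighbors in adj.values()}
    vol = sum(comb(n, j) * (q - 1) ** j for j in range(d))  # Vol_q(n, d-1)
    # Every vertex has degree Vol_q(n, d-1) - 1, so Lemma 1 gives
    # alpha(G) >= q^n / Vol_q(n, d-1): exactly the GV bound.
    print("degrees:", degrees, " expected:", vol - 1)
    print("GV lower bound on A_q(n, d):", q ** n / vol)
```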
Briefly, the Vu-Wu improvement of the GV bound exploits the deep fact that, when the neighborhood of any vertex in a $D$-regular graph is very sparse (in the sense that it has a lot fewer than the maximum possible $\binom{D}{2}$ edges), the lower bound (1) can be significantly tightened. Apparently, actually counting the number of edges in such a neighborhood of any vertex of the Gilbert graph (by regularity, we may as well look at the neighborhood of the all-zero word) is rather complicated; Vu and Wu instead look at a suitable asymptotic regime when $n$ is large and $d$ grows linearly with $n$, i.e., $d = \delta n$ for some fixed $\delta$, and replace exact combinatorial bounds by entropy bounds.
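For reference, the standard entropy estimate in this spirit (a textbook fact about Hamming balls, not a claim about the specific bounds used by Vu and Wu) is

$$\mathrm{Vol}_q(n, \delta n) \le q^{\,n h_q(\delta)} \qquad \text{for } 0 \le \delta \le 1 - \tfrac{1}{q},$$

where $h_q(\delta) = \delta \log_q(q-1) - \delta \log_q \delta - (1-\delta)\log_q(1-\delta)$ is the $q$-ary entropy function; combined with the GV bound, it gives the familiar asymptotic statement $A_q(n,\delta n) \ge q^{\,n(1 - h_q(\delta))}$.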
Lossless source coding at Western Union
… circa 1935. Here is a quote from Single-Story America (translated as Little Golden America), an American travelogue by Ilya Ilf and Yevgeny Petrov:
There is a whole book of readymade telegrams, long and convincing, lavishly composed telegrams for all occasions. Sending such a telegram costs only twenty-five cents. You see, what gets transmitted over the telegraph is not the text of the telegram, but simply the number under which it is listed in the book, and the signature of the sender. This is quite a funny thing, reminiscent of Drugstore Breakfast #2. Everything is served up in a ready form, and the customer is totally freed from the unpleasant necessity to think, and to spend money on top of it.
Typical set encoding, anyone?
ISIT 2013: two plenaries on concentration of measure
Of the five plenary talks at this year’s ISIT, two were about concentration of measure: Katalin Marton’s Shannon lecture on “Distance-divergence inequalities” and Gabor Lugosi’s talk on “Concentration inequalities and the entropy method” the next morning. Since the topic of measure concentration is dear to my heart, I thought I would write down a few unifying themes.
It’s for a good cause!
Endorse the petition to honor Claude Elwood Shannon with a United States Postal Service stamp on the 100th anniversary of his birth.
Stochastic kernels vs. conditional probability distributions
Larry Wasserman‘s recent post about misinterpretation of p-values is a good reminder about a fundamental distinction anyone working in information theory, control or machine learning should be aware of — namely, the distinction between stochastic kernels and conditional probability distributions.
Conditional mutual information and the best Markov approximation
I came across a neat and useful result about conditional mutual information while reading a paper on quantum information theory.
Public Service Announcement: Princeton-Stanford Information Theory b-log
Sergio Verdú has started a brand new Information Theory b-log that should be of interest to the readers of this blog. The ‘About’ page says:
Welcome to the Princeton-Stanford Information Theory b-log! All researchers working on information theory are invited to participate by posting items to the blog. Both original material and pointers to the web are welcome.
Enjoy!