The Information Structuralist

Concentrate, concentrate!

Posted in Information Theory, Mathematics, Narcissism, Papers and Preprints, Probability by mraginsky on December 19, 2012

Igal Sason and I have just posted to arXiv our tutorial paper “Concentration of Measure Inequalities in Information Theory, Communications and Coding”, which was submitted to Foundations and Trends in Communications and Information Theory. Here is the abstract:

This tutorial article is focused on some of the key modern mathematical tools that are used for the derivation of concentration inequalities, on their links to information theory, and on their various applications to communications and coding.

The first part of this article introduces some classical concentration inequalities for martingales, and it also derives some recent refinements of these inequalities. The power and versatility of the martingale approach are exemplified in the context of binary hypothesis testing, codes defined on graphs and iterative decoding algorithms, and some other aspects related to wireless communications and coding.

The second part of this article introduces the entropy method for deriving concentration inequalities for functions of many independent random variables, and it also exhibits its multiple connections to information theory. The basic ingredients of the entropy method are discussed first in conjunction with the closely related topic of logarithmic Sobolev inequalities. This discussion is complemented by a related viewpoint based on probability in metric spaces. This viewpoint centers around the so-called transportation-cost inequalities, whose roots are in information theory. Some representative results on concentration for dependent random variables are briefly summarized, with emphasis on their connections to the entropy method.

Finally, the tutorial addresses several applications of the entropy method and related information-theoretic tools to problems in communications and coding. These include strong converses for several source and channel coding problems, empirical distributions of good channel codes with non-vanishing error probability, and an information-theoretic converse for concentration of measure.

There are already many excellent sources on concentration of measure; what makes ours different is the emphasis on information-theoretic aspects, both in the general theory and in applications. Comments, suggestions, thoughts are very welcome.
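
To give a flavor of the two main threads: the prototypical martingale result treated in the first part is the classical Azuma–Hoeffding inequality, which states that if \((M_k)_{k \ge 0}\) is a martingale with bounded differences \(|M_k - M_{k-1}| \le c_k\) almost surely, then for every \(t > 0\)

\[
\mathbb{P}\big( |M_n - M_0| \ge t \big) \le 2 \exp\!\left( -\frac{t^2}{2\sum_{k=1}^n c_k^2} \right);
\]

applied to Doob martingales, this yields McDiarmid's bounded-differences inequality for functions of independent random variables. On the other side, the prototype of a transportation-cost-style bound is Pinsker's inequality,

\[
\| P - Q \|_{\mathrm{TV}} \le \sqrt{\tfrac{1}{2} D(P \,\|\, Q)},
\]

which controls a distance between probability measures by an information-theoretic quantity; Marton's argument turns bounds of this type into concentration inequalities.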

Information theory in economics, Part II: Robustness

Posted in Echoes of Cybernetics, Economics, Games and Decisions, Information Theory by mraginsky on July 20, 2012

As we have seen in Part I, the rational inattention framework of Christopher Sims aims to capture the best a rational agent can do when his capacity for processing information is limited. This rationally inattentive agent, however, has no reason to question his statistical model. In this post we will examine the robustness framework of Thomas Sargent, which deals with the issue of model uncertainty, but does not assume any capacity limitations.
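
As a rough preview (my schematic rendering, not Sargent's exact formulation): in the simplest static version of the robustness framework, the agent entertains a nominal model \(P\) but evaluates an action \(a\) against all models \(Q\) in a relative-entropy neighborhood of \(P\), via the multiplier problem

\[
\max_{a} \ \min_{Q} \ \Big\{ \mathbb{E}_Q[u(a, X)] + \theta \, D(Q \,\|\, P) \Big\},
\]

where the penalty parameter \(\theta > 0\) calibrates how much the agent distrusts his nominal model: letting \(\theta \to \infty\) forces \(Q = P\) and recovers ordinary expected utility.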

Information theory in economics, Part I: Rational inattention

Posted in Echoes of Cybernetics, Economics, Games and Decisions, Information Theory by mraginsky on June 1, 2012

Economic activity involves making decisions. In order to make decisions, agents need information. Thus, the problem of acquiring, transmitting, and using information has been occupying economists’ attention for some time now (there is even a whole subfield of “information economics”). It is not surprising, therefore, that information theory, the brainchild of Claude Shannon, would eventually make its way into economics. In this post and the one to follow, I will briefly describe two specific strands of information-theoretic work in economics: the rational inattention framework of Christopher Sims and the robustness ideas of Thomas Sargent. (As an interesting aside: Sims and Sargent shared the 2011 Nobel Memorial Prize in Economics, although not directly for their information-theoretic work, but rather for their work related to causality.)
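
Schematically (and glossing over many details), the rationally inattentive agent chooses an observation channel \(P_{S|X}\) from the state of the world \(X\) to a signal \(S\), together with an action rule \(a(\cdot)\) based on \(S\), subject to a Shannon mutual information constraint:

\[
\min_{P_{S|X},\, a(\cdot)} \ \mathbb{E}\big[ \ell(X, a(S)) \big] \quad \text{subject to} \quad I(X; S) \le \kappa,
\]

where \(\ell\) is the agent's loss function and \(\kappa\) is the number of bits per decision the agent is able to process.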

Abstract snark

Posted in Academic Snark by mraginsky on May 31, 2012

The abstracts for these two papers (one a classic in robust control theory, the other in mathematical physics) are almost Zen in their simplicity and perfection:

J. C. Doyle, “Guaranteed margins for LQG regulators,” IEEE Transactions on Automatic Control, vol. 23, no. 4, pp. 756-757, 1978

Abstract: There are none.

J. E. Avron, L. Sadun, J. Segert and B. Simon, “Chern numbers, quaternions, and Berry’s phases in Fermi systems,” Communications in Mathematical Physics, vol. 124, pp. 595-627, 1989

Abstract: Yes, but some parts are reasonably concrete.

P.S. (More or less) regular blogging to resume in 3, 2, 1 …

Thomas M. Cover (1938-2012)

Posted in Information Theory by mraginsky on March 28, 2012

This morning, I received an email from Sergio Verdú with the extremely sad news that Tom Cover died on March 26th. He was 73.

Tom’s passing is a great loss. His work has left an indelible mark not only on information theory, but also on machine learning (for instance, his result with Peter Hart showing that the asymptotic error probability of the nearest-neighbor rule is at most twice the Bayes rate is one of the cornerstones of the theory of pattern recognition), statistics, probability, finance, etc. (here is Sergio’s wonderful presentation of Tom’s work across all these fields). His famous textbook, Elements of Information Theory, written with Joy Thomas, was the first to feature not only the standard topics, such as source and channel coding, but also more advanced material, such as multiterminal information theory, Kolmogorov complexity, connections between information theory and statistics, connections between information theory and gambling, and universal source coding, all in a characteristically lucid manner that made them accessible to beginners. Everything Tom did was touched by elegance, simplicity, and grace. He will be missed, but never forgotten.

ITA 2012: some evanescent reflections

Posted in Conference Blogging, Information Theory, Papers and Preprints by mraginsky on February 19, 2012

Getting settled in at my new academic home, with all the attendant administrivia, has sucked up almost two months of missed blogging opportunities. So, here goes.

Lower bounds for passive and active learning

Sasha Rakhlin and I will be presenting our paper “Lower bounds for passive and active learning” at this year’s NIPS, which will be taking place in Granada, Spain from December 12 to December 15. The proofs of our main results rely heavily on information-theoretic techniques, specifically the data processing inequality for f-divergences and a certain type of constant-weight binary code.
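
For the unfamiliar: given a convex function \(f\) with \(f(1) = 0\), the \(f\)-divergence between probability measures \(P\) and \(Q\) is \(D_f(P \| Q) = \int f(dP/dQ)\, dQ\), and the data processing inequality says that it can only shrink under the action of a stochastic kernel \(K\):

\[
D_f(PK \,\|\, QK) \le D_f(P \,\|\, Q).
\]

In lower-bound arguments, the kernel is typically the map from the unknown problem instance to the data seen by the learner, so the inequality limits how well any algorithm can distinguish nearby instances.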

Updates! Get your updates here!

Just a couple of short items, while I catch my breath.

1. First of all, starting January 1, 2012, I will find myself amidst the lovely cornfields of Central Illinois, where I will be an assistant professor in the Department of Electrical and Computer Engineering at UIUC. This will be a homecoming of sorts, since I spent three years there as a Beckman Fellow. My new home will be in the Coordinated Science Laboratory, where I will continue doing (and blogging about) the same things I do (and blog about).

2. Speaking of Central Illinois, last week I was at the Allerton Conference, where I tried my best to preach Uncle Judea‘s gospel to anyone willing to listen: information theorists and their fellow travelers. The paper, entitled “Directed information and Pearl’s causal calculus,” is now up on arXiv, and here is the abstract:

Probabilistic graphical models are a fundamental tool in statistics, machine learning, signal processing, and control. When such a model is defined on a directed acyclic graph (DAG), one can assign a partial ordering to the events occurring in the corresponding stochastic system. Based on the work of Judea Pearl and others, these DAG-based “causal factorizations” of joint probability measures have been used for characterization and inference of functional dependencies (causal links). This mostly expository paper focuses on several connections between Pearl’s formalism (and in particular his notion of “intervention”) and information-theoretic notions of causality and feedback (such as causal conditioning, directed stochastic kernels, and directed information). As an application, we show how conditional directed information can be used to develop an information-theoretic version of Pearl’s “back-door” criterion for identifiability of causal effects from passive observations. This suggests that the back-door criterion can be thought of as a causal analog of statistical sufficiency.
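
For reference, the two main objects in the abstract are standard. Massey's directed information from \(X^n\) to \(Y^n\) is

\[
I(X^n \to Y^n) = \sum_{i=1}^n I(X^i; Y_i \mid Y^{i-1}),
\]

which, unlike the ordinary mutual information \(I(X^n; Y^n)\), conditions each output only on past and present inputs and thus respects the directionality of time. Pearl's back-door adjustment identifies an interventional distribution from purely observational ones,

\[
\mathbb{P}(y \mid \mathrm{do}(x)) = \sum_z \mathbb{P}(y \mid x, z)\, \mathbb{P}(z),
\]

valid whenever the conditioning set \(Z\) satisfies the back-door criterion relative to \((X, Y)\).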

If you have seen my posts on stochastic kernels, directed information, and causal interventions, you will, more or less, know what to expect.

Incidentally, due to my forthcoming move to UIUC, this will be my last Allerton paper!

COST: NIPS 2011 Workshop on Computational Trade-offs in Statistical Learning

Posted in Optimization, Public Service Announcements, Statistical Learning and Inference by mraginsky on September 1, 2011

Alekh Agarwal and Sasha Rakhlin are organizing a workshop at this year’s NIPS. I’m on the program committee, so it is my duty (and distinct pleasure) to invite you all to peruse the full call for papers here, or at least to check out this key snippet:

We would like to welcome high-quality submissions on topics including but not limited to:

  • Fundamental statistical limits with bounded computation
  • Trade-offs between statistical accuracy and computational costs
  • Computation-preserving reductions between statistical problems
  • Algorithms to learn under budget constraints
  • Budget constraints on other resources (e.g. bounded memory)
  • Computationally aware approaches such as coarse-to-fine learning

Interesting submissions on other relevant topics not listed above are welcome too. Due to time constraints, most accepted submissions will be presented as poster spotlights.

Oh, and did I mention that the workshop will take place in mid-December in Sierra Nevada, Spain?

ISIT 2011: favorite talks

Obligatory disclaimer: YMMV, “favorite” does not mean “best,” etc. etc.

  • Emmanuel Abbe and Andrew Barron, “Polar coding schemes for the AWGN channel” (pdf)
  • The problem of constructing polar codes for channels with continuous input and output alphabets can be reduced, in a certain sense, to the problem of constructing finitely supported approximations to capacity-achieving distributions. This work analyzes several such approximations for the AWGN channel. In particular, one approximation uses quantiles and approaches capacity with a gap that decays exponentially in the support size. The proof of this fact uses a neat trick of upper-bounding the Kullback-Leibler divergence by the chi-square distance (spelled out after this list) and then exploiting the law of large numbers.

  • Tom Cover, “On the St. Petersburg paradox”
  • A fitting topic, since this year’s ISIT took place in St. Petersburg! Tom presented a reformulation of the problem underlying this (in)famous paradox (recalled after this list) in terms of finding the best allocation of initial capital so as to optimize various notions of relative wealth. This reformulation obviates the need for various extra assumptions, such as diminishing marginal returns (i.e., concave utilities), and thus provides a means of resolving the paradox from first principles.

  • Paul Cuff, Tom Cover, Gowtham Kumar, Lei Zhao, “A lattice of gambles”
  • There is a well-known correspondence between martingales and “fair” gambling systems. Paul and co-authors explore another correspondence, between fair gambles and Lorenz curves used in econometric modeling, to study certain stochastic orderings and transformations of martingales. There are nice links to the theory of majorization and, through that, to Blackwell’s framework for comparing statistical experiments in terms of their expected risks.

  • Ioanna Ioannou, Charalambos Charalambous, Sergey Loyka, “Outage probability under channel distribution uncertainty” (pdf; longer version: arxiv:1102.1103)
  • The outage probability of a general channel with stochastic fading is the probability that the conditional input-output mutual information given the fading state falls below the given rate. In this paper, it is assumed that the state distribution is not known exactly, but there is an upper bound on its divergence from some fixed “nominal” distribution (this model of statistical uncertainty has been used previously in the context of robust control). The variational representation of the divergence, as a Legendre-Fenchel transform of the moment-generating function (stated after this list), then allows for a clean asymptotic analysis of the outage probability.

  • Mohammad Naghshvar, Tara Javidi, “Performance bounds for active sequential hypothesis testing”
  • Mohammad and Tara show how dynamic programming techniques can be used to develop tight converse bounds for sequential hypothesis testing problems with feedback, in which it is possible to adaptively control the quality of the observation channel. This viewpoint is a lot cleaner and more conceptually straightforward than “classical” proofs based on martingales (à la Burnashev). This new technique is used to analyze asymptotically optimal strategies for sequential M-ary hypothesis testing, variable-length coding with feedback, and noisy dynamic search.

  • Chris Quinn, Negar Kiyavash, Todd Coleman, “Equivalence between minimal generative model graphs and directed information graphs” (pdf)
  • For networks of interacting discrete-time stochastic processes possessing a certain conditional independence structure (motivating example: discrete-time approximations of smooth dynamical systems), Chris, Negar and Todd show the equivalence between two types of graphical models for these networks: (1) generative models that are minimal in a certain “combinatorial” sense and (2) information-theoretic graphs, in which the edges are drawn based on directed information.

  • Ofer Shayevitz, “On Rényi measures and hypothesis testing” (long version: arxiv:1012.4401)
  • Ofer obtained a new variational characterization of Rényi entropy and divergence that considerably simplifies their analysis, in many cases completely replacing delicate arguments based on Taylor expansions with purely information-theoretic proofs. He also developed a new operational characterization of these information measures in terms of distributed composite hypothesis testing.
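
A few formulas referenced above. The divergence comparison used by Abbe and Barron is the standard bound

\[
D(P \,\|\, Q) \le \log\big(1 + \chi^2(P \,\|\, Q)\big) \le \chi^2(P \,\|\, Q),
\]

where \(\chi^2(P \| Q) = \int (dP/dQ)^2 \, dQ - 1\); the first inequality is Jensen's inequality applied to the concave logarithm, and the second is \(\log(1+x) \le x\).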
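The St. Petersburg paradox, for those who have not seen it: a casino offers a game that pays \(2^k\) dollars if the first head appears on the \(k\)-th toss of a fair coin, so the expected payoff is

\[
\sum_{k=1}^{\infty} 2^k \cdot 2^{-k} = \infty,
\]

yet no reasonable person would pay more than a modest entry fee to play; the classical resolutions invoke concave utilities, which Tom's reformulation avoids.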
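Finally, the variational (Donsker–Varadhan) representation of the divergence used by Ioannou et al. reads

\[
D(Q \,\|\, P) = \sup_{g} \Big\{ \mathbb{E}_Q[g(X)] - \log \mathbb{E}_P\big[e^{g(X)}\big] \Big\},
\]

with the supremum over all bounded measurable functions \(g\); equivalently, the log moment-generating functional \(g \mapsto \log \mathbb{E}_P[e^g]\) is the Legendre-Fenchel transform of \(Q \mapsto D(Q \| P)\), which is what makes the robust analysis of the outage probability tractable.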