## Typical, just typical

In the spirit of shameless self-promotion, I would like to announce a new preprint (a preliminary version was presented in July at ISIT 2010 in Austin, TX):

Maxim Raginsky, “Empirical processes, typical sequences and coordinated actions in standard Borel spaces”, arXiv:1009.0282, submitted to *IEEE Transactions on Information Theory*

Abstract: This paper proposes a new notion of typical sequences on a wide class of abstract alphabets (so-called standard Borel spaces), which is based on approximations of memoryless sources by empirical distributions uniformly over a class of measurable “test functions.” In the finite-alphabet case, we can take all uniformly bounded functions and recover the usual notion of strong typicality (or typicality under the total variation distance). For a general alphabet, however, this function class turns out to be too large, and must be restricted. With this in mind, we define typicality with respect to any Glivenko–Cantelli function class (i.e., a function class that admits a Uniform Law of Large Numbers) and demonstrate its power by giving simple derivations of the fundamental limits on the achievable rates in several source coding scenarios, in which the relevant operational criteria pertain to reproducing empirical averages of a general-alphabet stationary memoryless source with respect to a suitable function class.

The notion of a *typical sequence* has been one of the pillars of information theory ever since the original paper of Shannon. There are many equivalent notions of typicality, but I am going to use one that’s often referred to as *strong typicality*. Namely, consider a finite set $\mathsf{X}$ and a probability distribution $P$ on it. Given some $\varepsilon > 0$, an $n$-tuple $x^n = (x_1,\dots,x_n) \in \mathsf{X}^n$ is said to be $\varepsilon$-*typical* with respect to $P$ if

$$ \frac{1}{2}\sum_{a \in \mathsf{X}} \left| \frac{1}{n}\sum_{i=1}^n \mathbf{1}\{x_i = a\} - P(a) \right| \le \varepsilon. $$
We can write this more succinctly if we define the *empirical distribution* $\mathsf{P}_{x^n}$ by

$$ \mathsf{P}_{x^n}(a) \triangleq \frac{1}{n}\sum_{i=1}^n \mathbf{1}\{x_i = a\}, \qquad a \in \mathsf{X}, $$
and recall the definition of the total variation distance between probability distributions $P$ and $Q$ on $\mathsf{X}$,

$$ \|P - Q\|_{\rm TV} \triangleq \frac{1}{2}\sum_{a \in \mathsf{X}} |P(a) - Q(a)|. \qquad (1) $$
Then we can say that $x^n$ is $\varepsilon$-typical w.r.t. $P$ if

$$ \|\mathsf{P}_{x^n} - P\|_{\rm TV} \le \varepsilon. $$
One of the salient facts about typicality is the following: if $X_1, X_2, \dots$ is an infinite sequence of i.i.d. draws from $P$, then it follows from the Law of Large Numbers that

$$ \|\mathsf{P}_{X^n} - P\|_{\rm TV} \xrightarrow{n \to \infty} 0 \quad \text{almost surely}. \qquad (2) $$
If you read any textbook on information theory (say, Cover and Thomas), you will find that the proof of almost every coding theorem makes use of this fact (and of its various extensions to joint and conditional distributions).
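To make these definitions concrete, here is a minimal Python sketch (the alphabet and distribution are my own toy choices, not from the paper) that computes the empirical distribution and the total variation distance, and checks $\varepsilon$-typicality; by the Law of Large Numbers, the TV distance shrinks as $n$ grows.

```python
import random
from collections import Counter

def empirical_dist(xs, alphabet):
    """Empirical distribution: fraction of positions i with x_i = a."""
    counts = Counter(xs)
    n = len(xs)
    return {a: counts[a] / n for a in alphabet}

def tv_distance(p, q, alphabet):
    """Total variation distance: half the sum of |P(a) - Q(a)| over the alphabet."""
    return 0.5 * sum(abs(p[a] - q[a]) for a in alphabet)

def is_typical(xs, p, alphabet, eps):
    """x^n is eps-typical w.r.t. P iff the TV distance of its empirical
    distribution to P is at most eps."""
    return tv_distance(empirical_dist(xs, alphabet), p, alphabet) <= eps

random.seed(0)
alphabet = ["a", "b", "c"]
p = {"a": 0.5, "b": 0.3, "c": 0.2}

# Long i.i.d. sequences are typical with high probability:
# the TV distance shrinks as n grows.
for n in (10, 1000, 100000):
    xs = random.choices(alphabet, weights=[p[a] for a in alphabet], k=n)
    print(n, tv_distance(empirical_dist(xs, alphabet), p, alphabet))
```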

Recent work by Paul Cuff, Haim Permuter and Tom Cover on coordination capacity uses strong typicality as a basic tool for addressing the following general problem: Consider a network consisting of $N$ agents, such that agents $i$ and $j$ can communicate at some specified rate $R_{ij}$, for each pair $(i,j)$. Suppose that the $i$th agent selects an *action* $X_i \in \mathsf{X}_i$ (where $\mathsf{X}_i$ is a finite alphabet) based on the information it receives from other agents, as well as on some common randomness shared by all the agents. What is the set of all achievable joint distributions of the actions $(X_1,\dots,X_N)$ over the network? This problem of *coordination via communication* arises not only in source coding proper, but also in such contexts as decision-making and control in multiagent systems, or network security.

There is one sense in which this problem is trivial. Suppose that there is no communication among the agents, but they all observe a common random element $\omega$ drawn from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Then they can generate absolutely *any* joint distribution $P$ over their respective actions simply by agreeing beforehand on the measurable functions $f_i : \Omega \to \mathsf{X}_i$, $i = 1,\dots,N$, such that

$$ \mathbb{P}\big( f_1(\omega) = x_1, \dots, f_N(\omega) = x_N \big) = P(x_1,\dots,x_N) $$

for every possible assignment $(x_1,\dots,x_N) \in \mathsf{X}_1 \times \dots \times \mathsf{X}_N$. Once they observe $\omega$, they do not need to communicate at all, since each agent simply applies her own function.

However, the situation changes dramatically as soon as some constraints are imposed. For instance, what happens when some of the agents are not free to select their actions, but receive them as random assignments? Then the question becomes: given these “boundary conditions”, what are all the possible *conditional* distributions of the actions for the remaining agents that can be generated given the rate constraints $\{R_{ij}\}$?

Let me illustrate this by means of a simple example. Consider a two-node network shown below:

We have two nodes, $\mathsf{A}$ and $\mathsf{B}$. Node $\mathsf{A}$ is assigned an $n$-tuple of actions $X^n = (X_1,\dots,X_n)$ drawn i.i.d. from some distribution $P_X$ on a finite set $\mathsf{X}$. Node $\mathsf{B}$ must generate an $n$-tuple of actions $Y^n = (Y_1,\dots,Y_n)$ in another finite set $\mathsf{Y}$ based on information it receives from Node $\mathsf{A}$. Now suppose that there is an external entity (the Observer) that can observe $X^n$ and $Y^n$. The Observer has some joint distribution $P_{XY}$ in mind (with the given $X$-marginal $P_X$) over the product space $\mathsf{X} \times \mathsf{Y}$, and the nodes know this. The problem then is this: find the smallest rate $R$ (in bits per action) at which Node $\mathsf{A}$ has to communicate with Node $\mathsf{B}$ in order for both of them to convince the Observer that the relative frequencies of their joint actions are consistent with $P_{XY}$. In symbols, we wish to determine the smallest rate $R$, such that for every large enough $n$ we can find two mappings $f_n : \mathsf{X}^n \to \{1,\dots,2^{nR}\}$ and $g_n : \{1,\dots,2^{nR}\} \to \mathsf{Y}^n$, such that with $Y^n = g_n(f_n(X^n))$, we can guarantee

$$ \big\| \mathsf{P}_{(X^n,Y^n)} - P_{XY} \big\|_{\rm TV} \le \varepsilon \quad \text{with high probability,} $$

where $\varepsilon > 0$ is some small error level that will satisfy the Observer.

The result is (roughly) as follows:

**Theorem 1 (Cuff, Permuter, Cover).** Any rate $R > I(X;Y)$ suffices, and no rate $R < I(X;Y)$ will work. Here, $I(X;Y)$ is the mutual information between $X$ and $Y$ when they have the joint distribution $P_{XY}$.
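The rate in Theorem 1 is easy to compute for any concrete joint distribution. Here is a small sanity check (my own toy example, not from the paper) for a doubly symmetric binary source: $X$ is a fair bit and $Y$ is $X$ flipped with probability $0.1$, so $I(X;Y) = 1 - h(0.1)$ bits.

```python
from math import log2

def mutual_information(p_xy):
    """I(X;Y) = sum_{x,y} P(x,y) * log2( P(x,y) / (P(x) P(y)) ), in bits."""
    p_x, p_y = {}, {}
    for (x, y), p in p_xy.items():
        p_x[x] = p_x.get(x, 0.0) + p
        p_y[y] = p_y.get(y, 0.0) + p
    return sum(p * log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items() if p > 0)

# X ~ Bernoulli(1/2), Y = X flipped with probability 0.1.
p_xy = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
print(mutual_information(p_xy))  # 1 - h(0.1), about 0.531 bits per action
```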

I will only sketch the proof of achievability. A more-or-less standard argument shows that, for any $\varepsilon > 0$ and any sufficiently large $n$, there exist a finite set $\mathsf{M}_n$ with $\frac{1}{n}\log|\mathsf{M}_n| \le I(X;Y) + \varepsilon$ and two mappings, $f_n : \mathsf{X}^n \to \mathsf{M}_n$ and $g_n : \mathsf{M}_n \to \mathsf{Y}^n$, such that, if $X^n$ is i.i.d. according to $P_X$ and $Y^n = g_n(f_n(X^n))$, then the $n$-tuple of pairs $\big((X_1,Y_1),\dots,(X_n,Y_n)\big)$ is $\varepsilon$-typical w.r.t. $P_{XY}$ with high probability. The rest is a matter of choosing $\varepsilon$ small enough and $n$ so large that this probability is at least $1-\delta$. Then it’s not terribly hard to show that

$$ \mathbb{P}\Big( \big\| \mathsf{P}_{(X^n,Y^n)} - P_{XY} \big\|_{\rm TV} > \varepsilon \Big) \le \delta. $$
Now, all of this is fine when the sets $\mathsf{X}$ and $\mathsf{Y}$ are finite. But what if we are interested in generating continuous-valued actions? It then turns out that we cannot even meaningfully define an $\varepsilon$-typical sequence! Here’s why. Let us recall an equivalent definition of the total variation distance (1) that works in an arbitrary measurable space $(\mathsf{X}, \mathcal{B})$, namely

$$ \|P - Q\|_{\rm TV} = \sup_{A \in \mathcal{B}} |P(A) - Q(A)|. \qquad (3) $$
Now suppose that $P = \mathsf{P}_{x^n}$ is an empirical distribution induced by some $n$-tuple $x^n \in \mathsf{X}^n$, i.e., for any $A \in \mathcal{B}$ we have

$$ \mathsf{P}_{x^n}(A) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}\{x_i \in A\}, $$
while $Q$ is a probability measure that puts zero mass on any singleton: $Q(\{x\}) = 0$ for every $x \in \mathsf{X}$ (this will be the case if $Q$ is the distribution of a multivariate Gaussian random variable). Now take $A$ to consist of all distinct elements of $x^n$. Then

$$ \mathsf{P}_{x^n}(A) = 1 \qquad \text{while} \qquad Q(A) = 0, $$

and as a result $\|\mathsf{P}_{x^n} - Q\|_{\rm TV} = 1$ for any choice of $x^n$. EPIC FAIL!
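The failure is worth seeing numerically. In this sketch (my own illustration), the adversarial set $A$ is simply the set of sample points themselves: the empirical measure gives it full mass, while any nonatomic measure, such as a Gaussian, gives it zero mass, so the supremum over sets is exactly 1 no matter how many samples we draw.

```python
import random

def tv_lower_bound_on_samples(xs):
    """
    Evaluate |P_{x^n}(A) - Q(A)| for the specific set A = {x_1, ..., x_n}.
    For any Q that puts zero mass on singletons (e.g. a Gaussian),
    Q(A) = 0 while P_{x^n}(A) = 1, so the sup over all measurable
    sets in the set-based definition of TV is exactly 1.
    """
    A = set(xs)
    p_emp_A = sum(x in A for x in xs) / len(xs)  # equals 1 by construction
    q_A = 0.0  # any nonatomic Q assigns zero mass to a finite set
    return abs(p_emp_A - q_A)

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(10000)]
print(tv_lower_bound_on_samples(xs))  # 1.0, no matter how large n is
```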

Of course, we don’t really *have to* use typicality arguments when working with continuous alphabets. There are plenty of other useful tools, such as ergodic theory or theory of large deviations. However, there is something to be said for the inherent simplicity and transparency of the typicality-based proofs — when done right, they rely on nothing more than a combination of the Law of Large Numbers and the triangle inequality. So why can’t we have something like that for abstract alphabets? The answer is — yes we can!

Let’s consider the Euclidean case $\mathsf{X} = \mathbb{R}^d$, for simplicity. The first thing to notice is that the definition (3) really asks for too much — we demand that the relative frequencies of *all* measurable sets, no matter how bizarre-looking, converge to their true probabilities *uniformly*. But, as we have just seen, that just cannot happen. However, if we scale our ambitions back a bit, we may yet have the convergence we want. All we have to do is restrict our attention to “nice” families of sets, such as balls, halfspaces, polyhedra, or the elements of your own favorite Vapnik–Chervonenkis class. So the idea is to pick a suitable infinite collection $\mathcal{A}$ of measurable subsets of $\mathbb{R}^d$, such that

$$ \sup_{A \in \mathcal{A}} \big| \mathsf{P}_{X^n}(A) - P(A) \big| \xrightarrow{n \to \infty} 0 \quad \text{almost surely} \qquad (4) $$

for any probability measure $P$ on $\mathbb{R}^d$. Then we can say that an $n$-tuple $x^n$ is $\varepsilon$-typical w.r.t. $P$ if

$$ \sup_{A \in \mathcal{A}} \big| \mathsf{P}_{x^n}(A) - P(A) \big| \le \varepsilon. $$

This approach actually works, provided $\mathcal{A}$ is a sufficiently “regular” collection of sets (and I gave several examples above).
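The simplest such collection in one dimension is the class of half-lines $(-\infty, t]$, for which the uniform convergence above is the classical Glivenko–Cantelli theorem (the supremum is then the Kolmogorov–Smirnov statistic). Here is a sketch (my own illustration, with the uniform distribution on $[0,1]$ as the source) that computes this supremum exactly by checking the jump points of the empirical CDF:

```python
import random

def sup_dev_halflines(xs, cdf):
    """
    Supremum over the class A = {(-inf, t]} of |P_{x^n}(A) - P(A)|.
    The classical Glivenko-Cantelli theorem says this tends to 0 a.s.
    The sup is attained at (or just before) a sample point, so it
    suffices to check the empirical CDF's jump points.
    """
    xs = sorted(xs)
    n = len(xs)
    dev = 0.0
    for i, x in enumerate(xs):
        F = cdf(x)
        # the empirical CDF jumps from i/n to (i+1)/n at x
        dev = max(dev, abs((i + 1) / n - F), abs(i / n - F))
    return dev

random.seed(2)
for n in (100, 10000):
    xs = [random.random() for _ in range(n)]  # uniform on [0,1]; CDF(t) = t
    print(n, sup_dev_halflines(xs, lambda t: min(max(t, 0.0), 1.0)))
```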

More generally, we can start with yet another (!) equivalent definition of the TV distance,

$$ \|P - Q\|_{\rm TV} = \frac{1}{2} \sup_{f \,:\, \|f\|_\infty \le 1} \left| \int f\,dP - \int f\,dQ \right|. $$

In other words, the total variation distance between $P$ and $Q$ is equal to one half of the largest absolute difference between the expectations, under $P$ and $Q$, of any measurable function bounded by unity. Once again, this class of functions is way too large. Thus, without further ado, we simply consider a smaller class $\mathcal{F}$ that *would* permit such approximations. Luckily, it so happens that function classes just of this type are extensively used in modern mathematical statistics, where they are known under the name of Glivenko–Cantelli classes. Leaving out the technical details, I can only say that I can now define typicality w.r.t. any Glivenko–Cantelli function class, so that several cool things happen:

- A convergence statement like (2) continues to hold.
- When the alphabet $\mathsf{X}$ is finite, I can just take all uniformly bounded functions and get back the usual notion of strong typicality.
- When the alphabet is a complete separable metric space, I can take as my reference class the set of all functions that are 1-Lipschitz and bounded by 1, in which case my notion of typicality is compatible with the topology of the weak convergence of probability measures.
- I can now have achievability proofs that really do consist of little more than the Law of Large Numbers (a uniform version thereof, as in (4)) and the triangle inequality. In my paper, I prove a general-alphabet variant of the basic two-node result that I have discussed earlier, as well as address a more difficult set-up that involves side information at the decoder, in the spirit of Wyner and Ziv.
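As a toy illustration of the Lipschitz bullet above, here is a sketch (the “tent” family and all names are my own choices) that measures typicality with respect to a finite subfamily of the class of 1-Lipschitz functions bounded by 1. Maximizing over a finite grid only lower-bounds the supremum over the whole class; I take $P = \mathrm{Uniform}[0,1]$ so the true expectations are available in closed form.

```python
import random

# Tent functions f_t(x) = 0.5 - |x - t| are 1-Lipschitz and bounded by 1
# on [0, 1], so they form a small subfamily of the bounded-Lipschitz class.
# For P = Uniform[0,1]:  E_P f_t = 0.5 - (t^2 + (1-t)^2) / 2.
def tent(t, x):
    return 0.5 - abs(x - t)

def true_mean(t):
    return 0.5 - (t * t + (1 - t) * (1 - t)) / 2

def dev_over_tents(xs, ts):
    """Lower bound on sup_f |(1/n) sum_i f(x_i) - E_P f|, maximizing
    over the finite grid of shifts ts only."""
    n = len(xs)
    return max(abs(sum(tent(t, x) for x in xs) / n - true_mean(t)) for t in ts)

random.seed(4)
ts = [i / 20 for i in range(21)]
for n in (100, 10000):
    xs = [random.random() for _ in range(n)]
    print(n, dev_over_tents(xs, ts))  # shrinks as n grows (uniform LLN)
```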

Why is this at all useful or interesting, you ask? Well, first of all, we can now have our cake and eat it too, where by “cake” I mean “typical sequences over abstract alphabets” and by “eat” I mean “use them to prove cool new coding theorems.” Secondly, there are numerous problems pertaining to distributed control, learning, and sensing that can benefit from this new viewpoint. For example, we may consider the problem of learning to predict the nodes’ future behavior by observing their actions. In this setting, each function $f$ in our reference class may correspond to the loss we incur when we use a particular candidate predictor, and we can now quantify the minimal amount of information that needs to be exchanged among the nodes so that we can accurately estimate the losses of all our candidate predictors simply by looking at their sample averages (as is done, for instance, in a 1996 paper by Kevin Buescher and P. R. Kumar on learning by canonical smooth estimation). Moreover, Glivenko–Cantelli classes of sets and functions play quite a prominent role in the theory of statistical learning, where one of the central questions is one of *rates of convergence* — i.e., how quickly can a learning agent learn as a function of the available amount of training data? Hopefully, with this new formalism we can now begin to answer questions like “What is the minimal amount of predictively relevant information contained in a sequence of training samples?”
