Sasha Rakhlin and I will be presenting our paper “Lower bounds for passive and active learning” at this year’s NIPS, which will be taking place in Granada, Spain from December 12 to December 15. The proofs of our main results rely heavily on information-theoretic techniques, specifically the data processing inequality for $f$-divergences and a certain type of constant-weight binary codes.
The paper deals with the well-known binary classification problem: We have two correlated random variables $X$ and $Y$, where traditionally $X$ is referred to as an instance or a feature and $Y \in \{0,1\}$ as the (binary) label. The joint distribution $P$ of $X$ and $Y$ is unknown, and the goal is to learn a classifier, i.e., a mapping $f : \mathcal{X} \to \{0,1\}$ from the feature space into the label set, whose probability of error is as small as possible. More formally, the accuracy of a classifier $f$ is measured by its excess risk w.r.t. $P$:

$$E_P(f) := P(f(X) \neq Y) - \inf_{g : \mathcal{X} \to \{0,1\}} P(g(X) \neq Y).$$
It is known that the infimum on the right-hand side is achieved by the Bayes classifier, which has the following form: Given $x \in \mathcal{X}$, let $\eta(x) := P(Y = 1 | X = x)$ be the so-called regression function. Then the Bayes classifier is

$$f^*(x) = \mathbf{1}\{\eta(x) \ge 1/2\}.$$
In other words, the optimum strategy is to classify a given $x$ as a $1$ if and only if the conditional probability (under $P$) that $Y = 1$ given $X = x$ is at least one half.
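To make these definitions concrete, here is a tiny numerical sketch (a toy example of my own, not anything from the paper) that computes the Bayes classifier and the excess risk of a competitor on a three-point feature space:

```python
import numpy as np

# Toy joint distribution of (X, Y) on a three-point feature space, Y in {0, 1}.
p_x = np.array([0.5, 0.3, 0.2])    # marginal P(X = x)
eta = np.array([0.9, 0.4, 0.55])   # regression function eta(x) = P(Y = 1 | X = x)

def risk(f):
    """Probability of error P(f(X) != Y) for a classifier given as a 0/1 array."""
    # If f(x) = 1, the error probability at x is 1 - eta(x); if f(x) = 0, it is eta(x).
    return np.sum(p_x * np.where(f == 1, 1.0 - eta, eta))

bayes = (eta >= 0.5).astype(int)   # f*(x) = 1{eta(x) >= 1/2}
print("Bayes risk:", risk(bayes))

f = np.array([1, 1, 0])            # some other classifier
print("excess risk of f:", risk(f) - risk(bayes))
```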
The classical set-up for this problem assumes that the learning agent has access to a large number of i.i.d. samples $(X_1,Y_1),\ldots,(X_n,Y_n)$ from $P$, and can use these samples to select a candidate classifier $\widehat{f}_n$ from some fixed class $\mathcal{F}$. The goal of learning is to ensure that, with high probability, the candidate classifier is as close to the Bayes classifier $f^*$ as possible. This is the passive learning model, since the learning agent has no control over the process of collecting the data. By contrast, under the active model, at each time step $t$ the learner selects the new feature $X_t$ based on all the past data $(X^{t-1},Y^{t-1}) = (X_1,Y_1,\ldots,X_{t-1},Y_{t-1})$, and then requests the corresponding label $Y_t$ from an “oracle.” When a suitably large number $n$ of feature-label pairs has been collected, the learner outputs a candidate classifier $\widehat{f}_n$, whose performance is measured, as before, by the excess risk.
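Schematically, the two data-collection protocols look like this (again a toy sketch of my own; the querying rule select_next_feature is a hypothetical placeholder, not an actual algorithm from the literature):

```python
import numpy as np

rng = np.random.default_rng(0)
p_x = np.array([0.5, 0.3, 0.2])    # marginal of X, as in the sketch above
eta = np.array([0.9, 0.4, 0.55])   # regression function eta(x) = P(Y = 1 | X = x)

def label_oracle(x):
    """Draws a label Y ~ P(. | X = x)."""
    return int(rng.random() < eta[x])

# Passive learning: features arrive i.i.d. from the marginal of X.
passive_data = [(int(x), label_oracle(int(x)))
                for x in rng.choice(len(p_x), size=10, p=p_x)]

# Active learning: the next query X_t may depend on the history (X^{t-1}, Y^{t-1}).
def select_next_feature(history):
    """Placeholder querying rule: just cycle through the feature space."""
    return len(history) % len(p_x)

history = []
for t in range(10):
    x_t = select_next_feature(history)
    history.append((x_t, label_oracle(x_t)))   # request Y_t from the oracle
```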
Remark 1 This is not the only active learning model out there. Another widely studied setting, which can be more accurately called “selective sampling,” involves $n$ i.i.d. training samples just like in the passive case, but the learner accesses them sequentially and has the freedom to decide, for each of the $n$ examples, whether or not he wants the corresponding label to be revealed. In this set-up, unlabeled features are essentially free; only the number of label requests matters.
Clearly, the active learning model is stronger than the passive model. But how much do we gain by allowing active learning? One way to quantify it is to look at the sample complexity of the underlying learning problem: given some class $\mathcal{P}$ of possible distributions of $(X,Y)$ and a class $\mathcal{F}$ of candidate classifiers, what is the minimum number $n$ of feature-label pairs that the learner needs to see in order to achieve a given level of excess risk with a given probability of success? Here the probability of success is computed w.r.t. the probability measure that governs the data-gathering process, so in the passive case it’s just the product measure $P^{\otimes n}$, while in the active case it is given by interconnecting the agent’s (possibly stochastic) rules for selecting $X_t$ on the basis of $(X^{t-1},Y^{t-1})$ for each $t$ with the conditional probability distribution $P_{Y|X}$, while respecting the causal ordering

$$X_1 \to Y_1 \to X_2 \to Y_2 \to \cdots \to X_n \to Y_n.$$
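In code, the interconnected measure is simply a product of the querying rule’s probabilities and the channel probabilities $P_{Y|X}$, taken in the causal order above. Here is a small sketch (with a hypothetical policy_prob rule) that evaluates the probability of a single trajectory:

```python
def trajectory_prob(traj, policy_prob, eta):
    """Probability of (x_1, y_1, ..., x_n, y_n) under the interconnected measure.

    policy_prob(x, history) is the learner's (possibly stochastic) querying rule;
    eta[x] = P(Y = 1 | X = x) plays the role of the channel P_{Y|X}.
    """
    prob, history = 1.0, []
    for x, y in traj:
        prob *= policy_prob(x, history) * (eta[x] if y == 1 else 1.0 - eta[x])
        history.append((x, y))
    return prob

# The passive case is recovered by a rule that ignores the history and
# samples from the marginal of X:
eta = [0.9, 0.4, 0.55]
p_x = [0.5, 0.3, 0.2]
print(trajectory_prob([(0, 1), (2, 0)], lambda x, hist: p_x[x], eta))
```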
The sample complexity is an “information-theoretic” limit in the sense that no strategy, no matter how clever or computationally powerful, can make do with fewer training samples.
To date, there has been a great deal of work on developing and analyzing algorithms for active learning — see, e.g., the papers by Steve Hanneke (see also the preprint) and Vladimir Koltchinskii and references therein. A theoretical analysis of any given algorithm gives us upper bounds on the sample complexity, so if we find an upper bound for active learning that is much lower than existing tight lower bounds for passive learning, then we can claim that active learning indeed does help. Both Hanneke and Koltchinskii show that the sample complexity of a number of very natural schemes for active learning can be upper-bounded in terms of what Hanneke has called the disagreement coefficient, which is defined as follows. Consider the underlying distribution $P$ of $(X,Y)$ and a class $\mathcal{F}$ of candidate classifiers. For each $\varepsilon > 0$ define the $\varepsilon$-minimal set

$$\mathcal{F}_\varepsilon := \left\{ f \in \mathcal{F} : P(f(X) \neq f^*(X)) \le \varepsilon \right\},$$
where $f^*$ is the Bayes classifier corresponding to $P$. In words, $\mathcal{F}_\varepsilon$ consists of all classifiers in $\mathcal{F}$ that disagree with the Bayes classifier with $P_X$-probability at most $\varepsilon$. Next, define the disagreement set

$$D_\varepsilon := \left\{ x \in \mathcal{X} : \exists f,g \in \mathcal{F}_\varepsilon \text{ s.t. } f(x) \neq g(x) \right\}$$

and the function

$$\tau(\varepsilon) := \frac{P_X(D_\varepsilon)}{\varepsilon}, \qquad\qquad (1)$$

which measures the “size” of the disagreement set (relative to the “radius” $\varepsilon$ of $\mathcal{F}_\varepsilon$).
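In small examples these objects can be computed by brute force. Here is a sketch of my own for the class of threshold classifiers on a 100-point grid with the uniform marginal, taking (for illustration) the Bayes classifier to be the threshold at the midpoint:

```python
import itertools
import numpy as np

# Brute-force computation of F_eps, D_eps and tau(eps) for threshold classifiers.
n_pts = 100
p_x = np.full(n_pts, 1.0 / n_pts)                                # P_X uniform
classifiers = [np.arange(n_pts) >= t for t in range(n_pts + 1)]  # f_t(x) = 1{x >= t}
f_star = classifiers[50]                                         # assumed Bayes classifier

def tau(eps):
    # eps-minimal set: classifiers within P_X-probability eps of the Bayes classifier
    F_eps = [f for f in classifiers if np.sum(p_x[f != f_star]) <= eps]
    # disagreement set: points where some pair of classifiers in F_eps disagrees
    D_eps = np.any([f != g for f, g in itertools.combinations(F_eps, 2)], axis=0)
    return np.sum(p_x[D_eps]) / eps

print(tau(0.1))   # for thresholds, D_eps has P_X-measure 2*eps, so tau(eps) = 2
```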
Remark 2 The function $\tau(\cdot)$ defined in (1) has been used by Alexander in his work on deviation inequalities for empirical processes, and more recently by Giné and Koltchinskii in their work on passive learning. The latter authors have termed $\tau(\cdot)$ the Alexander capacity.
The disagreement coefficient for the pair $(P,\mathcal{F})$ is then defined as

$$\theta := \sup_{\varepsilon > 0} \tau(\varepsilon).$$
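Continuing the threshold sketch above (and reusing its tau function), the disagreement coefficient is just the supremum of the capacity over a grid of values of $\varepsilon$:

```python
# Disagreement coefficient: supremum of the capacity over the relevant range of eps.
eps_grid = np.linspace(0.01, 1.0, 100)
theta = max(tau(eps) for eps in eps_grid)
print(theta)   # 2.0: thresholds are the classic example of a bounded disagreement coefficient
```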
Hanneke’s paper contains several examples of how the disagreement coefficient may be computed (or bounded) for specific learning problems. Moreover, if $\theta$ is finite, then a natural active learning scheme can be based on the observation that only the features in the disagreement set are “informative.” Of course, the disagreement set depends on the unknown distribution $P$, but, as Koltchinskii has shown, it is possible to estimate this set from data. In particular, if the regression function $\eta$ corresponding to the distribution $P$ has margin $h$, i.e., if $|\eta(x) - 1/2| \ge h$ for all $x \in \mathcal{X}$, then the algorithm proposed by Koltchinskii needs on the order of
$$\frac{\theta\, d}{h^2}\,\log\frac{1}{\varepsilon} \qquad\qquad (2)$$

examples in order to attain an excess risk of at most $\varepsilon$ with probability at least $1-\delta$, where $d$ is the VC dimension of the class $\mathcal{F}$.
What Sasha and I have proved are minimax lower bounds on the sample complexity of both passive and active learning under the margin assumption. Roughly speaking, we have shown that, for any admissible capacity function $\tau(\cdot)$ and any sufficiently small $\varepsilon$, $h$, and $\delta$, there exists a choice of the pair $(P,\mathcal{F})$ that has margin $h$, whose Alexander capacity at $\varepsilon$ is equal to $\tau(\varepsilon)$ (it may be different for other values of $\varepsilon$), and such that any passive learning algorithm that attains excess risk of at most $\varepsilon$ with probability at least $1-\delta$ needs at least on the order of

$$\frac{d\,\tau(\varepsilon)}{h^2} + \frac{1}{h^2}\log\frac{1}{\delta} \qquad\qquad (3)$$

examples, while any active learning algorithm with the same guarantee needs at least on the order of

$$\frac{d}{h^2}\,\log\tau(\varepsilon) + \frac{1}{h^2}\log\frac{1}{\delta} \qquad\qquad (4)$$

examples.
Note that, in contrast with Koltchinskii’s upper bound (2) that involves the supremum $\theta$ of Alexander’s capacity, our lower bounds actually depend on the value of the capacity at the given $\varepsilon$. Thus, we can quantify the relative advantage of active learning over passive learning in terms of the rate of growth of $\tau(\varepsilon)$ as $\varepsilon \to 0$. For instance, if $\tau(\varepsilon) = O(1)$, then active learning has very little advantage over passive learning, since both will require at least on the order of $d/h^2$ examples. On the other hand, if $\tau(\varepsilon)$ is close to its maximal value of $1/\varepsilon$, then the best passive learner will need roughly a factor of $1/\varepsilon$ more examples (up to a logarithmic factor) than the best active learner.
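Plugging numbers into (3) and (4) makes the gap tangible; the values below are for illustration only, since only the orders of growth matter:

```python
import numpy as np

d, h, eps, delta = 10, 0.1, 1e-3, 0.05

for name, tau_eps in [("bounded capacity", 2.0), ("maximal capacity", 1.0 / eps)]:
    passive = d * tau_eps / h**2 + np.log(1 / delta) / h**2          # order of (3)
    active = d * np.log(tau_eps) / h**2 + np.log(1 / delta) / h**2   # order of (4)
    print(f"{name}: passive >= {passive:.3g}, active >= {active:.3g}")
```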
As I’ve mentioned earlier, our proof of the bounds (3) and (4) makes essential use of the data processing inequality for $f$-divergences (in the paper, we use the term “$\phi$-divergence,” since $f$ is reserved for a generic classifier). This technique is much stronger than any method based on Fano’s inequality, since the latter generally cannot give the term proportional to $\log(1/\delta)$ and, more importantly, gives very loose bounds for the active case. This is where the freedom of choosing a suitable $\phi$-divergence comes to save the day, since it allows us to decouple the conditional information gain at each time step from the variables that describe the “global” behavior of the learning algorithm (e.g., the total number of times a given feature point has been queried by the learner). It should be pointed out that we were definitely not the first to use the data processing inequality for $\phi$-divergences in a statistical context. It was used implicitly by Adityanand Guntuboyina and explicitly by Alexander Gushchin (in a really obscure paper that you’ve probably never heard of) to improve upon Fano’s method for deriving minimax lower bounds on the risk of passive statistical estimation procedures.
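The data processing inequality itself is easy to illustrate numerically: passing two distributions through a common stochastic kernel can only decrease any $f$-divergence between them. Here is a quick sanity check with the KL divergence (just the inequality, not the actual construction from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(p, q):
    """KL divergence, the f-divergence with f(t) = t*log(t)."""
    return np.sum(p * np.log(p / q))

p = rng.dirichlet(np.ones(5))
q = rng.dirichlet(np.ones(5))
K = rng.dirichlet(np.ones(4), size=5)   # row-stochastic kernel from 5 to 4 outcomes

# Data processing inequality: D_f(PK || QK) <= D_f(P || Q).
assert kl(p @ K, q @ K) <= kl(p, q)
```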