The Information Structuralist

ECE 299: learning-theoretic bounds for vector quantizers; binary classification

Posted in Corrupting the Young, Statistical Learning and Inference by mraginsky on March 29, 2011

More learning-theoretic goodness:

  • Case study: empirical quantizer design, where I discuss beautiful work by Tamás Linder et al. that uses VC theory to bound the performance of empirically designed vector quantizers (which is engineering jargon for consistency of the method of k-means); a small sketch of the empirical design problem follows this list.
  • Binary classification: from the classic bounds for linear and generalized linear discriminant rules to modern techniques based on surrogate losses, such as voting methods, kernel machines, and convex risk minimization.
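
For concreteness, here is a minimal numpy sketch of what "empirical quantizer design" means in practice, assuming squared-error distortion; it is an illustration, not the construction analyzed by Linder et al., and the function names lloyd_kmeans and distortion are mine. A k-point codebook is fitted to n i.i.d. training samples by Lloyd's k-means iterations, and its distortion on fresh data is compared with the empirical (training) distortion; that gap is exactly what the VC-type bounds control.

```python
import numpy as np


def lloyd_kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd iterations for a k-point codebook under squared error."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with k distinct training points.
    codebook = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Nearest-codeword assignment.
        d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Centroid update; keep the old codeword if a cell is empty.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = X[labels == j].mean(axis=0)
    return codebook


def distortion(X, codebook):
    """Average squared distance to the nearest codeword."""
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = rng.normal(size=(500, 2))    # n training samples
    test = rng.normal(size=(20000, 2))   # fresh samples from the same source
    cb = lloyd_kmeans(train, k=8)
    print("empirical (training) distortion:", distortion(train, cb))
    print("test distortion:", distortion(test, cb))
```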

Divergence in everything: erasure divergence and concentration inequalities

Posted in Information Theory, Probability, Statistical Learning and Inference by mraginsky on March 18, 2011

It’s that time again, the time to savor the dreamy delights of divergence!

(image yoinked from Sergio Verdú's 2007 Shannon Lecture slides)

In this post, we will look at a powerful information-theoretic method for deriving concentration-of-measure inequalities (i.e., tail bounds) for general functions of independent random variables.
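
To fix ideas, the canonical example of such a tail bound (stated here only to illustrate the target, not as the derivation given in this post) is McDiarmid's bounded differences inequality: if $X_1, \dots, X_n$ are independent and $f$ changes by at most $c_i$ when its $i$-th argument is varied with all the others held fixed, then for every $t > 0$,

\[
\mathbb{P}\bigl\{ f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n) \ge t \bigr\}
\le \exp\!\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right).
\]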
