ECE 299: regression with quadratic loss; stochastic simulation via Rademacher bootstrap
I gave the last lecture earlier today, wrapping up the semester. Here are the notes from the last two weeks:
- Regression with quadratic loss, mostly in reproducing kernel Hilbert spaces, with and without regularization.
- Case study: stochastic simulation via Rademacher bootstrap, where I discuss the work of Vladimir Koltchinskii et al. on efficient stopping algorithms for Monte Carlo stochastic simulation. The idea is to keep drawing samples until the empirical Rademacher average falls below a given threshold; once that happens, you stop and output a minimizer of the empirical risk. The work of Koltchinskii et al. was in turn inspired by Mathukumalli Vidyasagar's ideas on using statistical learning theory in randomized algorithms for robust controller synthesis.
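Since the notes themselves aren't reproduced here, a quick illustration of what regularized regression with quadratic loss in an RKHS looks like computationally: by the representer theorem, kernel ridge regression reduces to a single linear solve. This is a minimal sketch, not code from the lecture notes; the Gaussian kernel, the bandwidth `sigma`, and the function names are my own choices for illustration.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=0.3):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X1 and X2
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def kernel_ridge_fit(X, y, lam=0.01, sigma=0.3):
    # Regularized empirical risk minimization with quadratic loss in the RKHS:
    # by the representer theorem, minimize over alpha and solve
    # (K + n*lam*I) alpha = y, so that f(x) = sum_i alpha_i k(x_i, x).
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, sigma=0.3):
    # Evaluate the fitted RKHS function at new points
    return gaussian_kernel(X_new, X_train, sigma) @ alpha
```

Setting `lam=0` recovers unregularized kernel least squares (interpolation when the kernel matrix is nonsingular), which is why the notes treat the problem both with and without regularization.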
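The stopping rule described above can be sketched in a few lines. This is my own toy rendering of the idea (keep sampling until the empirical Rademacher average of the loss class drops below a threshold, then return an empirical risk minimizer), not the algorithm from Koltchinskii et al.'s papers: the finite hypothesis class, the batch size, the Monte Carlo estimate of the Rademacher average, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(losses, n_rounds=200, rng=rng):
    # losses: (n_samples, n_hypotheses) matrix of loss values.
    # Monte Carlo estimate of the empirical Rademacher average
    #   E_sigma sup_h (1/n) sum_i sigma_i * loss(h, x_i)
    # over i.i.d. Rademacher signs sigma_i in {-1, +1}.
    n = losses.shape[0]
    sups = [np.max(rng.choice([-1.0, 1.0], size=n) @ losses / n)
            for _ in range(n_rounds)]
    return float(np.mean(sups))

def stopped_erm(sample_batch, losses_of, eps, batch=50, max_n=5000):
    # Draw samples in batches until the empirical Rademacher average of the
    # loss class falls below eps (or a sample budget is hit); then stop and
    # return the index of an empirical risk minimizer plus the sample count.
    X = sample_batch(batch)
    while True:
        L = losses_of(X)  # (n, |H|) losses over a finite hypothesis class
        if empirical_rademacher(L) < eps or len(X) >= max_n:
            return int(np.argmin(L.mean(axis=0))), len(X)
        X = np.concatenate([X, sample_batch(batch)])
```

As a usage example, with the hypothesis class {0, 0.5, 1}, data drawn from N(0.5, 0.01), and quadratic loss, `stopped_erm` stops once the Rademacher average is small and returns the hypothesis 0.5. The point of the stopping rule is that the empirical Rademacher average is itself computable from the sample, so the sample size adapts to the data rather than being fixed in advance.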
Monday’s lecture was on stochastic gradient descent as an alternative to batch empirical risk minimization. I will post the notes soon.