Exact Analysis of Generalization Error in Generalized Linear Models
Abstract
Generalized linear models (GLMs) are a powerful class of models used ubiquitously in machine learning and signal processing applications. Learning these models often involves iteratively solving non-convex optimization problems. I will present an exact statistical analysis of learning in these models in a high-dimensional setting. The framework is built on recent developments in random matrix theory and does not rely on convexity. Using this framework, we can analyze how several design choices affect the generalization error of the learned model, including the loss function, the regularization, the feature covariance, and train-test mismatch.
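The abstract itself contains no formulas or code; purely as an illustration of the kind of setup it describes, the following minimal simulation sketch fits a regularized GLM and measures generalization error under a train-test covariance mismatch. All concrete choices here (logistic loss, ridge penalty, diagonal covariances, step size, problem sizes) are hypothetical and not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the talk's analysis concerns the high-dimensional
# regime where the sample size and dimension grow proportionally.
n_train, n_test, d = 500, 2000, 200

# Design choice: feature covariance, with a train-test mismatch.
cov_train = np.diag(np.linspace(0.5, 1.5, d))
cov_test = np.diag(np.linspace(1.0, 2.0, d))

beta_star = rng.standard_normal(d) / np.sqrt(d)  # ground-truth parameter

def sample(n, cov):
    """Draw n Gaussian feature vectors with the given covariance and
    binary labels from a logistic-link GLM."""
    X = rng.standard_normal((n, d)) @ np.linalg.cholesky(cov).T
    p = 1.0 / (1.0 + np.exp(-X @ beta_star))
    y = rng.binomial(1, p).astype(float)
    return X, y

X_tr, y_tr = sample(n_train, cov_train)
X_te, y_te = sample(n_test, cov_test)

# Design choices: logistic loss with a ridge penalty, minimized here
# by plain gradient descent.
lam, lr = 1e-2, 0.5
beta = np.zeros(d)
for _ in range(2000):
    z = X_tr @ beta
    grad = X_tr.T @ (1.0 / (1.0 + np.exp(-z)) - y_tr) / n_train + lam * beta
    beta -= lr * grad

# Generalization error: misclassification rate under the (mismatched) test law.
err = np.mean((X_te @ beta > 0).astype(float) != y_te)
print(f"test misclassification error: {err:.3f}")
```

Varying the covariances, the penalty strength, or the loss in this sketch is the empirical counterpart of the design-choice comparisons the exact analysis makes precise.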
Parthe Pandit, UCSD
Parthe Pandit is a Simons postdoctoral fellow at the Halıcıoğlu Data Science Institute at UC San Diego. He obtained a PhD in ECE and an MS in Statistics, both from UCLA, and a dual degree in EE from IIT Bombay. He is a recipient of the Jack K. Wolf student paper award at ISIT 2019. He has also been a research intern at Amazon and Citadel LLC.