Deep learning remains an art, relying on heuristics that do not always translate across application domains. Kernel machines, a classical model in ML, have received renewed attention following the discovery of the Neural Tangent Kernel and its equivalence to wide neural networks. I will present two results which show the promise of kernel machines for modern large-scale applications:
1. Data-dependent supervised kernels: https://www.science.org/stoken/author-tokens/ST-1738/full
2. Fast scalable training algorithms for kernel machines: https://arxiv.org/abs/2411.166588
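For context, a kernel machine predicts with a weighted sum of kernel evaluations against the training points. The following is a minimal sketch of kernel ridge regression with a fixed RBF kernel; it illustrates only the classical model, not the data-dependent kernels or fast training algorithms the talk covers.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    sq = np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, y, gamma=1.0, reg=1e-3):
    # Solve (K + reg * I) alpha = y for the dual coefficients
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_test, gamma=1.0):
    # f(x) = sum_i alpha_i * k(x, x_i)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Toy 1-D regression: recover sin(x) from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
alpha = fit_kernel_ridge(X, y)

X_test = np.linspace(-2.5, 2.5, 50)[:, None]
err = np.max(np.abs(predict(X, alpha, X_test) - np.sin(X_test[:, 0])))
```

Solving the regularized linear system costs O(n^3) in the number of training points, which is exactly the scalability bottleneck that fast training algorithms for kernel machines aim to overcome.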
Parthe Pandit is the Thakur Family Chair Assistant Professor at the Center for Machine Intelligence and Data Science at IIT Bombay. He was a Simons Postdoctoral Fellow at UC San Diego. He obtained his PhD from UCLA and his undergraduate education from IIT Bombay. In 2024, he was awarded the AI2050 Early Career Fellowship by Schmidt Sciences. He is also a recipient of the 2019 Jack K. Wolf Student Paper Award from the IEEE Information Theory Society.