Robustness & Personalization in Federated Learning
Federated learning is a technique for training machine learning models across multiple edge devices (clients) without sharing private data. Federations tend to be diverse, leading to non-IID data across clients, and often include clients whose data can be considered outliers. Robust and personalized federated learning approaches aim to address this data heterogeneity. We present Fed+, a class of methods for robust, personalized federated learning that unifies many existing federated learning algorithms. We also provide convergence guarantees for Fed+ without any statistical assumptions on the degree of heterogeneity of local data across clients.
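To make the idea concrete, here is a minimal sketch of one robust-personalized round in the spirit described above; it is an illustration under my own simplifying assumptions (toy quadratic losses, a coordinate-wise median aggregator, and hypothetical function names), not the paper's exact formulation. Each client descends its own local loss while a proximal penalty pulls it toward a robust aggregate, so clients keep personalized models rather than overwriting them with a single global average:

```python
import numpy as np

def local_update(w, grad_fn, w_agg, lam=0.1, lr=0.05, steps=10):
    """One client's update: gradient descent on the local loss plus a
    proximal penalty (lam/2)*||w - w_agg||^2 pulling toward the aggregate."""
    for _ in range(steps):
        w = w - lr * (grad_fn(w) + lam * (w - w_agg))
    return w

def robust_aggregate(models):
    """Coordinate-wise median: one robust choice of aggregator.
    (Using the mean here instead would recover a FedAvg-style average.)"""
    return np.median(np.stack(models), axis=0)

# Toy quadratic losses f_k(w) = 0.5*||w - c_k||^2; the third client is an
# outlier whose data pulls toward a very different optimum.
centers = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([10.0, -10.0])]
grads = [lambda w, c=c: w - c for c in centers]

models = [np.zeros(2) for _ in centers]
for _ in range(50):  # federated rounds
    w_agg = robust_aggregate(models)
    models = [local_update(w, g, w_agg) for w, g in zip(models, grads)]
```

After training, each client's model settles near its own optimum (personalization), and the median aggregate is not dragged away by the outlier client (robustness).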
Currently, I am a research scientist at IBM Research, Singapore. I recently completed my Ph.D. in the Department of Computer Science & Automation at IISc, Bangalore, where I also received my M.E. degree (gold medalist). Broadly, my research focuses on the optimization aspects of machine learning. My current research interests include federated learning, neural architecture search, primal-dual splitting, and momentum-based accelerated algorithms for optimization.