Heterogeneity-Aware Algorithms for Federated Optimization

The future of machine learning lies in moving both data collection and model training to the edge. The emerging area of federated learning seeks to achieve this goal by orchestrating distributed model training across a large number of resource-constrained mobile devices that collect data from their environment. Due to limited communication capabilities and privacy concerns, the data collected by these devices cannot be sent to the cloud for centralized processing. Instead, the nodes perform local training updates and send only the resulting model to the cloud. A key aspect that sets federated learning apart from data-center-based distributed training is the inherent data, communication, and computation heterogeneity across the edge clients. Allowing heterogeneity is essential for the system to be scalable and flexible. However, heterogeneity can cause convergence slowdown and inconsistency problems for federated optimization algorithms. In this talk, I will present our recent work on algorithms for tackling various types of heterogeneity in federated optimization.
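The local-update-then-aggregate pattern described above can be illustrated with a minimal federated-averaging (FedAvg)-style sketch. This is not the algorithm presented in the talk; the function names, the least-squares local objectives, and the per-client distribution shifts (modeling data heterogeneity) are illustrative assumptions.

```python
import numpy as np

def local_update(w, data, lr=0.1, steps=5):
    """Run a few local SGD steps on one client's data (least-squares loss)."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * ||Xw - y||^2 / n
        w = w - lr * grad
    return w

def fedavg_round(w_global, client_data):
    """One communication round: clients train locally, server averages models."""
    local_models = [local_update(w_global.copy(), d) for d in client_data]
    return np.mean(local_models, axis=0)

# Heterogeneous clients: each draws inputs from a shifted local distribution,
# but all share the same underlying model w_true (illustrative setup).
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for shift in (-1.0, 0.0, 1.0):  # data heterogeneity across clients
    X = rng.normal(shift, 1.0, size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
```

Note that only models, never raw data, cross the client-server boundary, which is the privacy property the abstract highlights; with heterogeneous local objectives, running multiple local steps between averaging rounds is also what introduces the client-drift effects the talk addresses.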

Gauri Joshi, CMU

Gauri Joshi is an associate professor in the ECE department at Carnegie Mellon University. Gauri received her Ph.D. from MIT EECS and her undergraduate degree in Electrical Engineering from IIT Bombay. Her current research is on designing algorithms for federated learning, distributed optimization, and parallel computing. Her awards and honors include being named one of MIT Technology Review's 35 Innovators Under 35 (2022), the NSF CAREER Award (2021), the ACM SIGMETRICS Best Paper Award (2020), the Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).