Coded Gradient Aggregation

Abstract

The increasing amount of data generated at edge nodes, together with the need for privacy, necessitates learning at the edge, wherein computations are performed at edge devices and communicated to a central node for updating the model. Typically, the edge nodes have power constraints and may be available only intermittently. Often, helper nodes are present in the network which aid the edge nodes in communicating with the server. We consider this hierarchical network model, in which the edge nodes communicate their local gradients to the helper nodes, which relay these messages to the central node after possible aggregation. There are two phases of communication: one between the edge nodes and the helper nodes, and a second between the helper nodes and the central node. It is observed that codes such as repetition codes, maximum distance separable (MDS) codes, and pyramid codes can be used to control the communication costs at the edge nodes and the helper nodes. In this talk, we explore these schemes in detail and discuss some open challenges.
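As a toy illustration of the kind of coded aggregation the abstract describes (not the talk's exact construction), the following sketch uses a standard gradient-coding idea: each of three nodes transmits a fixed linear combination of local gradients, chosen so that the server can recover the full gradient sum from any two of the three messages, i.e. while tolerating one straggling or unavailable node. All numerical values and the specific encoding matrix `B` here are assumptions for illustration.

```python
import numpy as np

# Hypothetical setup: n = 3 nodes, tolerating s = 1 straggler.
# Each node sends a fixed linear combination of the local gradients it
# holds; the server recovers g1 + g2 + g3 from ANY 2 of the 3 messages.

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 4))          # three local gradients, dimension 4
full_sum = g.sum(axis=0)                 # what the server wants to compute

# Encoding matrix B: row i is the combination transmitted by node i.
B = np.array([[0.5, 1.0,  0.0],          # node 1 sends g1/2 + g2
              [0.0, 1.0, -1.0],          # node 2 sends g2 - g3
              [0.5, 0.0,  1.0]])         # node 3 sends g1/2 + g3
coded = B @ g                            # the messages actually transmitted

# Decoding: for each 2-node subset of survivors, find coefficients a
# with a @ B_sub = (1, 1, 1), so a @ coded_sub equals the gradient sum.
for survivors in [(0, 1), (0, 2), (1, 2)]:
    B_sub = B[list(survivors)]
    a, *_ = np.linalg.lstsq(B_sub.T, np.ones(3), rcond=None)
    recovered = a @ coded[list(survivors)]
    assert np.allclose(recovered, full_sum)
print("full gradient sum recovered from every 2-of-3 subset")
```

Note that each node here communicates a message of the same size as a single gradient, so straggler tolerance is bought with redundant computation rather than extra per-message bandwidth; the repetition, MDS, and pyramid code variants mentioned in the abstract trade off these costs differently across the two communication phases.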

Anoop Thomas, IIT Bhubaneswar

Anoop Thomas received the B.Tech. degree in electronics and telecommunication engineering from the College of Engineering, Kerala University, Trivandrum, in 2008, the M.E. degree from the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, in 2013, and the Ph.D. degree from the Department of Electrical Communication Engineering, Indian Institute of Science, in 2018. He is currently an Assistant Professor with the School of Electrical Sciences, IIT Bhubaneswar. His primary research interests include network coding, index coding, coded caching, and gradient coding.