We address the challenge of zeroth-order online convex optimization where the objective function's gradient is sparse, meaning that it has only a small number of nonzero entries. Our goal is to leverage this sparsity to obtain useful estimates of the objective function's gradient even when only a limited number of function samples are available. Our motivation stems from the optimization of large-scale queueing networks that process time-sensitive jobs. In such systems, a job may pass through many queues in sequence before producing an output, and the service time at each queue depends on the resources allocated to it. Since resources are costly, the end-to-end latency for jobs must be balanced against the total resource cost. Although the number of queues can be large, the latency function typically reacts to resource changes in only a few of them, rendering the gradient sparse. We tackle this problem by introducing the Compressive Online Gradient Optimization (CONGO) framework, which enables compressive sensing methods, previously applied to stochastic optimization, to achieve regret bounds with optimal dependence on the time horizon, without the full problem dimension appearing in the bound. For specific algorithms, we reduce the number of samples required per gradient estimate to scale with the gradient's sparsity rather than its full dimensionality. Numerical simulations and real-world microservices benchmarks demonstrate CONGO's superiority over gradient descent approaches that do not account for sparsity. Joint work with Jeremy Carleton, Prathik Vijaykumar, Divyanshu Saxena, Dheeraj Narasimha, and Aditya Akella.
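To illustrate the core idea behind the abstract, here is a minimal sketch (not the CONGO algorithm itself; the toy objective, dimensions, and recovery routine are illustrative assumptions) of how a sparse gradient can be estimated from far fewer zeroth-order samples than the ambient dimension: each finite-difference sample along a random direction yields one linear measurement of the gradient, and a standard sparse-recovery method such as orthogonal matching pursuit reconstructs it.

```python
# Illustrative sketch only: compressive estimation of a sparse gradient
# from zeroth-order (function-value) samples. Not the CONGO algorithm.
import numpy as np

rng = np.random.default_rng(0)

d, s, m = 100, 3, 40          # ambient dimension, gradient sparsity, samples

# Toy objective: f(x) = w @ x, so grad f = w is an s-sparse vector.
support = rng.choice(d, size=s, replace=False)
w = np.zeros(d)
w[support] = rng.normal(size=s)

def f(x):
    return w @ x

x0 = rng.normal(size=d)
delta = 1e-4

# Each sample gives one linear measurement of the gradient:
# (f(x0 + delta*a) - f(x0)) / delta ≈ a @ grad f(x0).
A = rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)
y = np.array([(f(x0 + delta * a) - f(x0)) / delta for a in A])

def omp(A, y, s):
    """Orthogonal matching pursuit: recover an s-sparse g from y = A @ g."""
    residual, idx = y.copy(), []
    for _ in range(s):
        # Pick the column most correlated with the current residual.
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit on the selected support and update the residual.
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    g = np.zeros(A.shape[1])
    g[idx] = coef
    return g

g_hat = omp(A, y, s)
# Relative recovery error; m = 40 samples suffice even though d = 100.
print(np.linalg.norm(g_hat - w) / np.linalg.norm(w))
```

The key point matches the abstract: the number of samples per gradient estimate scales with the sparsity s (here m is on the order of s log d) rather than with the full dimension d.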
Srinivas Shakkottai received his Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana–Champaign in 2007, followed by a postdoctoral position in Management Science and Engineering at Stanford University. He joined Texas A&M University in 2008, where he is currently a Professor in the Department of Electrical and Computer Engineering and holds a courtesy appointment in the Department of Computer Science and Engineering. His research interests include wireless networks, reinforcement learning, caching and content distribution, multi-agent learning and game theory, networked markets, and data analytics. He co-directs the Learning and Emerging Network Systems (LENS) Laboratory and the RELLIS Spectrum Innovation Laboratory (RSIL) at Texas A&M University. He has served as an Associate Editor for the IEEE/ACM Transactions on Networking and the IEEE Transactions on Wireless Communications. Dr. Shakkottai is the recipient of the Defense Threat Reduction Agency (DTRA) Young Investigator Award and the National Science Foundation (NSF) CAREER Award, as well as research awards from Cisco and Google. His papers have received honors at venues such as ACM MobiHoc, ACM eEnergy, and the International Conference on Learning Representations (ICLR). At Texas A&M, he has received several distinctions, including the Outstanding Professor Award, the Select Young Faculty Fellowship, the Engineering Genesis Award (twice), and the Dean of Engineering Excellence Award.