Research Highlights

Akshatha Jagdish, Ph.D Fellow

Micro-robotics is an emerging field of research whose focus areas include physical actuation, system control, materials, and sensor research. It is an interdisciplinary field where researchers from communities such as physics, chemistry, and engineering (biotechnology, mechanical, computer science) have a role to play. The goal is to contribute to the development of small-scale robotic systems that can see application in targeted medicine, environment monitoring, water-body treatment, small-scale manufacturing, and so on.

A micro-robot is a controllable machine of micron scale with application-specific capabilities in addition to generic functions such as motion, sensing, and control. Scaling robotic systems down to the micro-scale forces us to focus on physical parameters such as surface tension, adhesion, and drag instead of mass and inertia. Actuation mechanisms must now resemble the cilia, pseudopodia, and flagella observed in nature. There has been research on micron-scale actuation mechanisms such as magnetically actuated rigid helices, cilia- and sperm-mimetic synthetic tails, chemically powered spherical particles and cannons, synthetically engineered bacteria, muscle cells, etc.

In our work, we are studying the effect of external control on active-particle behaviour. In this direction, we have carried out simulations to model the efficiency of a swarm in reaching a defined target region. We have studied the swarm behaviour with and without control of the swarm's mean location by an external input such as a magnetic field. We also designed a hardware setup to study the effect of external control on active particles. The setup uses a speaker to emulate Brownian motion of small spherical particles on a flat plate. The effect of a magnetic field is emulated by tilting the plane of the base plate so as to move the swarm of vibrating particles towards a target region.
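
As a rough illustration of these simulations, here is a minimal sketch in Python, assuming a two-dimensional Brownian swarm steered by a drift computed from the swarm's mean location; the control law and all parameter values are illustrative stand-ins, not the exact model used in our study.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, dt = 200, 2000, 0.01      # particles, steps, time step
D = 0.5                         # diffusion coefficient (Brownian noise)
k = 2.0                         # gain of the external control (assumed)
target = np.array([5.0, 5.0])   # centre of the target region

pos = rng.normal(0.0, 1.0, size=(N, 2))   # initial swarm around origin

for _ in range(T):
    noise = rng.normal(0.0, np.sqrt(2 * D * dt), size=(N, 2))
    # External control: a common drift applied to every particle, computed
    # from the swarm's mean location (mimicking a global field / plate tilt).
    drift = k * (target - pos.mean(axis=0)) * dt
    pos += noise + drift

# Fraction of the swarm inside a disc of radius 1 around the target
inside = np.linalg.norm(pos - target, axis=1) < 1.0
print(f"fraction in target region: {inside.mean():.2f}")
```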

Krishna Chaythanya, Ph.D Fellow

Inference problems in fault detection of machines are of current importance in the context of Industry 4.0. We consider a networked factory where sensors used to monitor the parameters of rotating machines communicate their measurements to a central controller over a wireless network that is prone to congestion and packet drops. The controller then makes decisions on the health of the machines. Since manufacturing processes are usually highly coupled, the failure of one component may bring the entire assembly line to a halt and incur high losses. Early detection of trends towards failure, before the actual failure happens, allows the component to be replaced or repaired during regular maintenance schedules. In our work, we attempt to solve problems related to the detection of failures in rotating machines in this setting. We want to design algorithms that detect incipient failures with minimum delay and perform well in a decentralized decision-making setting. We model the transition of a rotating machine to a faulty state as a change-point problem and design optimal procedures to detect the change. Once the theory for a single process's change is in place, we plan to study a multiplicity of such change-detection problems in the networked factory setting described above, where we will address the networking/signal-processing trade-off.

In our work, we study the mechanical vibrations from rolling-element bearings, since faults in bearings are the most common cause of failures in rotating machines. We consider various parametric models that describe the vibration signals from the bearings. The parameters of the model undergo a change at a random time when the bearing transits from a normal state to a faulty state. We develop sequential parametric change-detection tests for each of these models which are optimal in the sense of minimizing the detection delay subject to a false-alarm constraint. We construct sufficient statistics and analyze the performance of the tests in a simulation study. We also compare the performance of these tests against existing methods in the literature that use other metrics, such as kurtosis and the Mahalanobis distance, using real-world datasets from bearings.
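
For intuition about sequential change detection, here is a minimal CUSUM-style sketch for a mean shift in a scalar Gaussian feature stream; the tests in our work are built on richer parametric vibration models, and all values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "vibration feature" stream: the mean shifts from mu0 to mu1
# at a change point (standing in for a bearing entering a faulty state).
mu0, mu1, sigma, change_at = 0.0, 0.8, 1.0, 500
x = np.concatenate([rng.normal(mu0, sigma, change_at),
                    rng.normal(mu1, sigma, 500)])

# CUSUM statistic: running sum of log-likelihood ratios, clipped at 0.
llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)
W, threshold = 0.0, 10.0   # the threshold trades false alarms vs delay
for n, l in enumerate(llr):
    W = max(0.0, W + l)
    if W > threshold:
        print(f"alarm at sample {n} (true change at {change_at})")
        break
```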

Nihesh Rathod, Ph.D Fellow

Swabs to Labs: A framework designed to help officials distribute swab samples to testing labs efficiently, so as to speed up report generation while meeting lab testing capacities, minimising the distribution/travel time of samples, etc.

City-Scale Agent-Based Simulators for the Study of Non-Pharmaceutical Interventions
in the Context of the COVID-19 Epidemic: An agent-based simulator to study the impact of various non-pharmaceutical interventions in the context of the ongoing COVID-19 pandemic. Some of the key features of the simulator include age-stratified interaction that captures heterogeneity in interaction among people in a given interaction space, and the ability to implement various interventions such as soft ward containment, phased opening of workplaces and community spaces, a broad class of contact-tracing-based testing and case-isolation protocols, etc.

COVID-19 Epidemic: A Workplace Readiness Indicator: The COVID-19 Readiness Indicator is an online assessment tool that enables organisations to understand their current level of preparedness and key risk areas. It also helps in planning and in establishing pandemic-specific policies, procedures, and necessary management practices.

Publications:

  1. Coverage Estimation in Outdoor Heterogeneous Propagation Environments. Nihesh Rathod, Renu Subramanian, Rajesh Sundaresan
  2. Relay Placement Algorithms for Communication in a Heterogeneous Propagation Environment. Nihesh Rathod, Rajesh Sundaresan
  3. Data-driven and GIS-based Coverage Estimation in a Heterogeneous Propagation Environment. Nihesh Rathod, Renu Subramanian, Rajesh Sundaresan

Raghuram Bharadwaj Diddigi, Ph.D Fellow

We propose a convergent online, off-policy Temporal Difference (TD) Reinforcement Learning (RL) algorithm under linear function approximation. The two important problems in RL are prediction and control. The prediction problem deals with computing the value function of a given policy. In a discounted-reward setting, the value function is the total expected discounted reward obtained by following the given policy. The control problem refers to computing the optimal policy, i.e., the policy that maximizes the total expected discounted reward. Off-policy refers to the setting where the data for evaluating a target policy comes from a different policy. For example, in games, a practitioner may wish to evaluate a (target) strategy, but the available data might come from a player following a different strategy. The question that arises in this scenario is whether the practitioner can make use of this data and still evaluate the target strategy. It has been well established in the literature that standard TD algorithms under linear function approximation may diverge in this off-policy setting. We make use of the idea of regularization to alleviate the problem of divergence and prove the convergence of our proposed algorithm. We also show the empirical performance of our algorithm on standard benchmark RL environments on which off-policy methods are known to diverge.
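
The following toy sketch shows the generic shape of an importance-weighted TD(0) update with linear function approximation and an added L2 regularization term that damps divergence; the dynamics, rewards, features, and importance ratios below are placeholders, and the precise regularized algorithm is the one in our paper.

```python
import numpy as np

rng = np.random.default_rng(2)

d = 4                                # feature dimension
theta = np.zeros(d)                  # linear value-function weights
alpha, gamma, eta = 0.05, 0.9, 0.1   # step size, discount, L2 strength

def features(s):
    """Illustrative fixed random features for each discrete state."""
    return np.random.default_rng(s).normal(size=d)

for step in range(5000):
    s = rng.integers(0, 10)           # state sampled under behaviour policy
    s_next = rng.integers(0, 10)      # next state (toy dynamics)
    r = 1.0 if s_next == 0 else 0.0   # toy reward
    rho = rng.uniform(0.5, 2.0)       # placeholder importance ratio pi/mu
    phi, phi_next = features(s), features(s_next)
    delta = r + gamma * phi_next @ theta - phi @ theta   # TD error
    # Regularized off-policy TD(0): the -eta*theta term damps the kind of
    # divergence that plain importance-weighted TD can exhibit.
    theta += alpha * (rho * delta * phi - eta * theta)

print("learned weights:", np.round(theta, 3))
```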

One of the important practical applications of RL is smart grids. A smart grid is a power grid that can intelligently manage the use of electricity. We consider the problem of energy management in microgrid networks. A microgrid is capable of generating power from a renewable resource and is responsible for handling the demands of its dedicated customers. Owing to the variable nature of renewable generation and of customer demand, it becomes imperative that each microgrid optimally manages its energy. This involves intelligently scheduling the demands on the customer side, and selling (when there is a surplus) or buying (when there is a deficit) power from neighboring microgrids depending on current and future needs. In this work, we formulate the problems of demand and battery scheduling, energy trading, and dynamic pricing (where we allow the microgrids to decide the price of the transaction depending on their current configuration of demand and renewable energy) in the framework of stochastic games. Subsequently, we propose a novel Deep Q-Network (DQN) approach to solve these problems by creating two separate neural networks (for handling the tasks of stochastic job scheduling and of energy trading), both working as ingredients of the same stochastic game.
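
As a much-simplified, single-agent stand-in for this formulation, the following tabular Q-learning sketch has one microgrid choosing to buy, hold, or sell based on its battery level; the actual work uses DQNs with two networks in a multi-agent stochastic game, and every quantity below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

levels, actions = 11, 3        # battery levels 0..10; 0=buy, 1=hold, 2=sell
Q = np.zeros((levels, actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(b, a):
    """Toy microgrid transition: random generation/demand, trading reward."""
    gen = rng.integers(0, 3)             # renewable generation (toy)
    dem = rng.integers(0, 3)             # customer demand (toy)
    price = rng.uniform(1.0, 2.0)        # trading price (toy)
    delta = gen - dem + (1 if a == 0 else -1 if a == 2 else 0)
    b_next = int(np.clip(b + delta, 0, levels - 1))
    reward = price if a == 2 else -price if a == 0 else 0.0
    reward -= 5.0 * max(0, dem - gen - b)   # penalty for unmet demand
    return b_next, reward

b = 5
for _ in range(50000):
    a = rng.integers(0, actions) if rng.random() < eps else int(Q[b].argmax())
    b_next, r = step(b, a)
    Q[b, a] += alpha * (r + gamma * Q[b_next].max() - Q[b, a])  # Q-learning
    b = b_next

print("greedy action per battery level:", Q.argmax(axis=1))
```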

Publications:

  1. A Convergent Off-Policy Temporal Difference Algorithm. R. B. Diddigi, C. Kamanchi, and S. Bhatnagar
  2. Stochastic Game Frameworks for Efficient Energy Management in Microgrid Networks. S. Nayak, C. A. Ekbote, A. P. S. Chauhan, R. B. Diddigi, P. Ray, A. Sikdar, S. K. R. Danda, and S. Bhatnagar

Rooji Jinan, Ph.D Fellow

In cyber-physical systems, we often come across sensor networks that record observations of a physical process, which are transmitted to a remote server for further processing and inference. Applications can be found in health-care monitoring, autonomous navigation, tactical surveillance, industrial automation, etc. Often, the remote server is required to make decisions in real time based on the inference drawn from the received signals. In such applications, the precision of the received signals is of importance along with the timeliness of their reception. We study such a setting, in which a physical process is tracked by a remotely located receiver based on observations sent by a sensor over a bandwidth-limited communication channel. The bandwidth constraint of the channel decides the number of bits that can be transmitted per channel use. Therefore, one has to resort to multiple channel uses for a high-precision representation of the observed signals. However, this affects the timeliness of the sensor updates, and this points to a natural trade-off between precision and timeliness.

In our work, we model the observed physical process as a first-order autoregressive process. This process is observed by a sender which samples the process periodically. We assume that the sampling is slow, that is, there are multiple transmission opportunities before a new sample is observed. These samples are to be encoded and communicated to the remote receiver across a channel that can transmit only finitely many bits per unit time. The objective of the receiver is to provide a real-time estimate of the observed process with minimum time-averaged mean squared error. Our objective is to design an optimal encoder-decoder scheme for this setting. We propose an encoder-decoder scheme in which we send information about the latest observed sample at each transmission opportunity. As we have multiple transmission opportunities per sample, we can either encode this information into a single codeword using the entire available communication budget, or encode the information into smaller chunks and send these updates sequentially. Our study tries to answer what the optimal update to be sent is, the optimal frequency at which these updates must be sent in each sampling interval, and the optimal decoding strategy.
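
A rough sketch of one point in this design space follows: each channel use carries a few bits that refine the receiver's estimate of the latest sample, while the receiver extrapolates between samples using the known AR dynamics. The quantizer, update rule, and parameters are illustrative, not the optimal scheme from our work.

```python
import numpy as np

rng = np.random.default_rng(4)

a, T, period, bits = 0.9, 6000, 3, 4   # AR coefficient, horizon, slots per
x, x_hat = 0.0, 0.0                    # sample, bits per channel use
err2 = []

def quantize(v, bits, vmax=5.0):
    """Uniform quantizer on [-vmax, vmax] with 2**bits levels."""
    levels = 2 ** bits
    q = np.clip(np.round((v + vmax) / (2 * vmax) * (levels - 1)), 0, levels - 1)
    return q / (levels - 1) * 2 * vmax - vmax

for t in range(T):
    x = a * x + rng.normal()          # AR(1) source evolves every slot
    if t % period == 0:
        sample = x                    # sender takes a fresh sample
    x_hat = a * x_hat                 # receiver predicts via AR dynamics
    # One channel use per slot: send a quantized refinement pulling the
    # receiver's estimate towards the latest observed sample.
    x_hat += quantize(sample - x_hat, bits)
    err2.append((x - x_hat) ** 2)

print(f"time-averaged MSE: {np.mean(err2):.3f}")
```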

Publications:

  1. Tracking an auto-regressive process with limited communication. Rooji Jinan, Parimal Parag, Himanshu Tyagi
  2. Tracking an Auto-Regressive Process with Limited Communication per Unit Time. Rooji Jinan, Parimal Parag, Himanshu Tyagi

Sarath A.Y, Ph.D Fellow

Our goal is to understand the large-time behaviour and metastability in networked systems such as WiFi networks, cloud computing systems, societal networks with rational agents, etc. We consider a mean-field interacting particle system with N particles. Each particle has a state associated with it which evolves in a Markovian fashion, where the rates of transition depend on the other particles only through the empirical measure of the states of all the particles. Such a particle system is useful in modelling many networked systems. It turns out that the performance of these systems can be understood from the stationary behaviour of the empirical measure process, and our research focuses on studying the same. We are particularly interested in studying metastable phenomena such as (i) the mean time for the system to come close to stationarity, and (ii) the mean exit time from an operating point. We can then use insights from this study to drive better design of such systems. We mainly use the theory of large deviations as a tool to study such questions.
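
The following minimal sketch simulates such a system with two states per particle, where the 0-to-1 transition rate depends on the empirical fraction of particles in state 1; the rates are illustrative (SIS-style) and not tied to any particular application.

```python
import numpy as np

rng = np.random.default_rng(5)

N, T, dt = 1000, 20000, 0.01
state = np.zeros(N, dtype=int)        # all particles start in state 0
traj = []

for _ in range(T):
    mu1 = state.mean()                # empirical measure of state 1
    # Mean-field rates: each particle's transition rate depends on the
    # others only through mu1 (illustrative SIS-style rates).
    rate_01 = 0.3 + 2.0 * mu1         # transition 0 -> 1
    rate_10 = 1.0                     # transition 1 -> 0
    u = rng.random(N)
    flip_up = (state == 0) & (u < rate_01 * dt)
    flip_dn = (state == 1) & (u < rate_10 * dt)
    state[flip_up] = 1
    state[flip_dn] = 0
    traj.append(mu1)

print(f"empirical measure after long run: {traj[-1]:.3f}")
```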

Publications:

  1. Large deviations of the invariant measure of mean-field interacting particle systems. Sarath A.Y, Rajesh Sundaresan
  2. Large deviations of mean-field interacting particle systems in a fast varying environment. Sarath A.Y, Rajesh Sundaresan
  3. Large time behaviour and the second eigenvalue problem for finite state mean-field interacting particle systems. Sarath A.Y, Rajesh Sundaresan
  4. City-Scale Agent-Based Simulators for the Study of Non-Pharmaceutical Interventions in the Context of the COVID-19 Epidemic (with IISc-TIFR Agent-Based Simulation Team)

Vinay Kumar. B.R, Ph.D Fellow

The Internet of Things (IoT) is an emerging technology in the networking market. It involves different types of physical devices (sensors, actuators, routers, mobiles, etc.) communicating with each other over a network. Broadcast mechanisms are crucial in such ad-hoc networks for disseminating key network-related information. A typical example is over-the-air (OTA) programming of IoT nodes. As part of my Ph.D, I am looking at a novel broadcast algorithm which is light-weight and energy efficient. It is described below.

Consider a connected graph with a particular node designated as the source. The source
node has k message packets which need to be broadcast in the network. The source encodes the k message packets into n (> k) coded packets using a Maximum Distance Separable (MDS) code. This ensures that any node that receives at least k out of the n packets can retrieve the original k message packets that the source intended to convey. The source then transmits all the n packets to its neighbours. Every other node in the network follows a probabilistic forwarding mechanism: a node on reception of a new packet forwards it to its neighbours with some probability p and does nothing with probability 1-p. We are interested in finding the minimum value of the forwarding probability p for which a large fraction of the nodes are able to obtain the information from the source. Call this p*. The performance metric of interest is the expected total number of transmissions when the forwarding is done using this minimum value of the forwarding probability p*.
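
A minimal simulation of this mechanism might look as follows, assuming an Erdős–Rényi graph and treating a node as successful once it has received at least k of the n coded packets; the graph model and all parameters are illustrative.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(6)

V, p_edge = 300, 0.03          # nodes and edge probability (toy ER graph)
n, k = 10, 8                   # MDS code: n coded packets, any k suffice

adj = [[] for _ in range(V)]
for i in range(V):
    for j in range(i + 1, V):
        if rng.random() < p_edge:
            adj[i].append(j)
            adj[j].append(i)

def received_counts(p):
    """Flood each of the n packets independently with forwarding prob p."""
    counts = np.zeros(V, dtype=int)
    for _ in range(n):
        got = np.zeros(V, dtype=bool)
        q = deque([0])          # node 0 is the source; it always forwards
        got[0] = True
        while q:
            u = q.popleft()
            for v in adj[u]:
                if not got[v]:
                    got[v] = True
                    counts[v] += 1
                    if rng.random() < p:   # probabilistic forwarding
                        q.append(v)
        counts[0] += 1
    return counts

for p in (0.2, 0.4, 0.6, 0.8):
    frac = (received_counts(p) >= k).mean()
    print(f"p={p:.1f}: fraction able to decode all k packets = {frac:.2f}")
```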

Publications:

  1. Probabilistic forwarding of coded packets on networks. B. R. Vinay Kumar and N. Kashyap

Gururaj K, M.Tech Fellow

Network-indexed data often contains not only connectivity information but also observable node and/or edge covariates. Capturing the unknown relation between node covariates and the presence of edges is therefore very useful and is the primary motivation of this work. We propose a simple but general modeling framework that combines the strengths of Stochastic Block Models (SBMs) and Restricted Boltzmann Machines (RBMs). This captures the relation between node covariates and community structure without making an explicit assumption on the causal direction between these two quantities, achieving our aim.

We propose two simple and flexible generative models for modeling attributed networks with non-overlapping and mixed-membership communities (RB-SBM and RB-MMSBM, respectively) by combining variants of RBMs with SBMs. We derive efficient inference methods for the proposed models. Each iteration of the inference algorithm for RB-SBM runs in linear time with respect to the number of nodes and edges, thus making it scalable. We empirically validate the proposed models on the tasks of community detection and link prediction, and demonstrate through a series of quantitative experiments and qualitative case studies that they can provide interpretable insights about the data. Our RB-SBM model outperforms existing approaches on the Cora and Citeseer networks in terms of NMI score with respect to known ground-truth community memberships. The RB-MMSBM model recovers more meaningful communities than the baseline MMSBM model on the Lazega Lawyers dataset. The RB-SBM model achieves comparable results on the link prediction task on the Cora and Citeseer networks when compared with deep learning approaches such as GAE and VGAE, despite having significantly fewer parameters.
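
For context, here is a minimal sketch of sampling a network from a vanilla SBM, the building block that RB-SBM couples with an RBM over node covariates; the RBM coupling itself is omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

K, N = 3, 90                               # communities, nodes
z = rng.integers(0, K, size=N)             # community assignment per node
B = np.full((K, K), 0.02) + np.eye(K) * 0.25   # block edge probabilities

# Sample an undirected adjacency matrix from the block model.
P = B[z][:, z]                             # per-pair edge probabilities
A = (rng.random((N, N)) < P).astype(int)
A = np.triu(A, 1)                          # keep upper triangle only
A = A + A.T                                # symmetrize (undirected graph)

print("edges:", A.sum() // 2, "| within-community fraction:",
      round(A[z[:, None] == z[None, :]].sum() / A.sum(), 2))
```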

Mahak Pancholi, M.Tech Fellow

Privacy-Preserving Machine Learning (PPML) allows interested parties to perform machine learning (ML) tasks collaboratively and, at the same time, ensures the privacy of individual private data. Privacy requirements might be inherent to the field of application, as in the case of the health-care and financial sectors, or they could be imposed legally through policies such as the European Union's General Data Protection Regulation (GDPR). These privacy concerns pose a major hindrance to the wide-scale use of ML as a tool. Consequently, there has been an increased interest in PPML as a potential solution. Widespread adoption of ML tools is also inhibited by the high computational demands of ML algorithms. PPML, furthermore, makes the already compute-intensive ML algorithms even more demanding. As many everyday users lack the needed computing resources, outsourcing ML tasks to better-equipped and more powerful servers is a preferred alternative. Towards this, Secure Outsourced Computation (SOC) promises to provide a feasible solution. It allows end-users to securely outsource computation to a set of specialized cloud servers, guaranteeing privacy of the end-user's data while tolerating reasonable collusion amongst the servers.

PPML in an SOC setting can be realised by relying on techniques from Secure Multiparty Computation (MPC), which allows n mutually distrusting parties to perform computations collaboratively on their private inputs so that an adversary controlling at most t parties cannot learn any information beyond what is already revealed by the output of the computation. MPC for a small number of parties in honest majority, specifically the settings of three (3PC) [5, 11, 2, 1, 10, 7, 3] and four parties (4PC) [4, 6, 8], has gained much attention, as it allows highly efficient constructions that use only light-weight primitives. The honest-majority setting allows us to achieve the strong security guarantee of Guaranteed Output Delivery (GOD): all parties obtain the output irrespective of the adversary's behaviour. Robustness, or GOD, becomes extremely crucial for real-world deployment and usage of PPML techniques. A protocol with weaker security than GOD can lead to denial of service (because of the actions of the adversary), and can result in heavy economic losses for the service provider and decreased participation from the data providers. Hence, for the seamless adoption of PPML solutions in the real world, robustness of the protocol is of utmost importance.
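
As a flavour of the light-weight primitives involved, here is a minimal sketch of 3-party additive secret sharing over a prime field, under which servers can add shared values without learning the inputs; real 3PC/4PC protocols (e.g., with replicated sharing and GOD guarantees) are considerably more involved.

```python
import secrets

P = 2**61 - 1   # prime field modulus (illustrative choice)

def share(x, n=3):
    """Split x into n additive shares that sum to x modulo P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

# Two data owners secret-share their inputs among three servers; the
# servers add shares locally, revealing only the sum on reconstruction.
a_sh, b_sh = share(123), share(456)
sum_sh = [(a + b) % P for a, b in zip(a_sh, b_sh)]
print(reconstruct(sum_sh))   # 579; no single server saw 123 or 456
```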

Nirabhra Mandal, M.Tech Fellow

In this project, we consider a population composed of a continuum of agents that seek to maximize a payoff function by moving on a network. The nodes in the network may represent physical locations, or they may be choices in a more abstract sense. The proposed framework finds applications in a variety of problems where one is interested in the evolution of a population's choices and in nudging the behavior at a population level, over a period of time, towards more pro-social choices. Examples include fleet distribution of ride-sharing services like Ola and Uber, transportation mode choices of a population, opinion dynamics, human and insect swarm migrations, etc.
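
A minimal sketch of such population dynamics follows, with mass on each node flowing to neighbouring nodes of higher payoff under a congestion-sensitive payoff; the network, payoff function, and revision rule are illustrative stand-ins for the framework above.

```python
import numpy as np

# Toy network: 4 nodes connected in a cycle
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
nbrs = {i: [] for i in range(4)}
for u, v in edges:
    nbrs[u].append(v)
    nbrs[v].append(u)

x = np.array([0.7, 0.1, 0.1, 0.1])     # population distribution over nodes
base = np.array([1.0, 2.0, 1.5, 1.0])  # intrinsic attractiveness of nodes

for _ in range(200):
    payoff = base - x                  # congestion: payoff falls with mass
    flow = np.zeros_like(x)
    for u in range(4):
        for v in nbrs[u]:
            # A small fraction of the mass at u moves to a neighbour v
            # whose payoff is higher (a smoothed better-response rule).
            gain = max(0.0, payoff[v] - payoff[u])
            moved = 0.1 * x[u] * gain
            flow[u] -= moved
            flow[v] += moved
    x = x + flow

print("long-run distribution:", np.round(x, 3))
```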

Publications:

  1. Evolution of a Population of Selfish Agents on a Network. N. Mandal and P. Tallapragada
  2. Dynamics of a Population of Optimum Seeking Agents on a Network. N. Mandal and P. Tallapragada

Prasanna Patil, M.Tech Fellow

Hypergraphs are higher-order graphs in which a relation (a hyperedge or hyperlink) exists between a set of entities, as opposed to a graph, where a relation (an edge) can exist only between a pair of entities. Hyperedge prediction is the task of predicting missing or future hyperedges given the knowledge of existing hyperedges.

We are the first to explore the impact of negative sampling techniques [1] used for the hyperedge prediction task; a simple variant is sketched below. In addition, we propose novel negative sampling techniques which improve upon those widely used in the literature. We further propose a clique-closure hypothesis (CCH) [2] for the formation of hyperedges in a hypergraph. Our proposed C3MM algorithm, which embeds the CCH, outperforms existing methods on the hyperedge prediction task. We also establish a sub-higher-order (SHO) paradigm for hyperedge evolution and propose a novel neural network architecture, named SHONeN, which shows that the SHO paradigm is better at predicting future hyperedges than the existing 2O and HO paradigms.
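
The sketch referenced above: a minimal uniform negative sampler that draws node sets matching the size distribution of the observed hyperedges and rejects true hyperedges; our proposed techniques are more refined, and this toy hypergraph is purely illustrative.

```python
import random

random.seed(8)

nodes = list(range(20))
positives = {frozenset(s) for s in [(0, 1, 2), (2, 3), (4, 5, 6, 7), (1, 3, 5)]}

def sample_negatives(m):
    """Uniform negative sampling: draw node sets whose sizes match a
    random positive hyperedge, rejecting sets that are true hyperedges."""
    negs = set()
    sizes = [len(e) for e in positives]
    while len(negs) < m:
        cand = frozenset(random.sample(nodes, random.choice(sizes)))
        if cand not in positives:
            negs.add(cand)
    return negs

for e in sample_negatives(3):
    print(sorted(e))
```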

Publications:

  1. Negative sampling for hyperlink prediction in networks. Prasanna Patil, Govind Sharma, and M. Narasimha Murty
  2. C3MM: Clique-closure based hyperlink prediction. Govind Sharma, Prasanna Patil, and M. Narasimha Murty

Saraswathy RM, M.Tech Fellow

A distributed storage system is a collection of stand-alone servers over which data is distributed. Due to the scaling and reliability they provide, distributed storage systems are used for data storage across various applications, including financial services, e-commerce, media, military applications, etc. System parameters for most distributed storage systems are chosen based on intuition rather than sound reasoning. This could lead to delays in serving a customer and hence loss of revenue. Hence, an analysis of the latency associated with read and write requests to the system would help in designing an optimal system. We study a distributed storage system with a master-slave architecture, where the data is first written to a master server before being copied onto the slave servers in the system. In such a system, read latency decreases with redundancy, since a read can be served by any server, whereas write latency increases with redundancy, due to the increasing number of servers onto which data needs to be written. This leads to a natural trade-off between latency and redundancy in the system.

In our work, we model the read and write request arrival processes as Poisson with different rates. The read and write requests arriving at any server in the system are served with exponentially distributed service times at rates that depend on the type of request. A server serves the read or write requests in its queue based on the priority setting of the system. In a read-priority system, we assume that an incoming read request preempts any write request in service. Similarly, a write request preempts a read request in service in a write-priority system. When a write request receives service from the master server, it joins all the slave servers at the same time, and exits the system on completion of service from them, thus forming a fork-join queue. Our objective is to characterise the optimal redundancy of the system by analysing the read and write queues in the system. In this regard, we approximate the fork-join queue as a system of uncoupled tandem queues with independent service rates. Since mean latency is proportional to the mean number of requests in the system, the optimal redundancy is obtained when the mean number of requests is minimized. Using the approximation for the fork-join queue, we express the optimal redundancy as a function of the read and write loads.
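
A back-of-the-envelope numerical sketch of this trade-off follows, using crude M/M/1-style estimates of the mean number of requests as a function of the number of slave servers r; the formulas and rates are illustrative and are not the tandem-queue analysis in our work.

```python
import numpy as np

lam_r, lam_w = 4.0, 1.0     # read / write arrival rates (illustrative)
mu_r, mu_w = 3.0, 4.0       # read / write service rates per server

def mean_requests(r):
    """Rough M/M/1-style estimate of the mean number of requests when
    data is replicated on r slave servers (plus one master)."""
    rho_read = lam_r / ((r + 1) * mu_r)    # reads spread over r+1 servers
    rho_write = lam_w / mu_w               # every write visits each server
    if rho_read + rho_write >= 1.0:
        return np.inf                      # unstable configuration
    n_read = (r + 1) * rho_read / (1 - rho_read - rho_write)
    n_write = (r + 1) * rho_write / (1 - rho_read - rho_write)
    return n_read + n_write               # reads fall, writes grow with r

for r in range(1, 8):
    print(f"r={r}: mean requests ~ {mean_requests(r):.2f}")
```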