Markov Chain Variance Estimation: A Stochastic Approximation Approach

Abstract

We consider the problem of estimating the asymptotic variance of a function defined on a Markov chain, an important step for statistical inference of the stationary mean. We design the first recursive estimator that requires O(1) computation at each step, does not require storing any historical samples or any prior knowledge of run-length, and has the optimal O(1/n) rate of convergence for the mean-squared error (MSE) with provable finite-sample guarantees. Here, n refers to the total number of samples generated. The previously best-known rate of convergence in MSE was O(log n/n), achieved by jackknifed estimators, which moreover lack these other desirable properties. Our estimator is based on linear stochastic approximation of an equivalent formulation of the asymptotic variance in terms of the solution of the Poisson equation. We generalize our estimator in several directions, including estimating the covariance matrix for vector-valued functions, estimating the stationary variance of a Markov chain, and approximately estimating the asymptotic variance in settings where the state space of the underlying Markov chain is large. We also show applications of our estimator in average reward reinforcement learning (RL), where we work with asymptotic variance as a risk measure to model safety-critical applications. We design a temporal-difference-type algorithm tailored for policy evaluation in this context. We consider both the tabular and linear function approximation settings. Our work paves the way for developing actor-critic style algorithms for variance-constrained RL.
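The Poisson-equation formulation mentioned in the abstract can be made concrete on a toy example. The paper's recursive estimator itself is not reproduced here; the sketch below (a Python illustration with NumPy, chain and variable names our own) only verifies, for a small two-state chain, the classical identity σ² = 2π(f̄V) − π(f̄²), where f̄ = f − π(f) is the centered function and V solves the Poisson equation V − PV = f̄.

```python
import numpy as np

# Toy two-state Markov chain (our own example, not from the paper).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition matrix
f = np.array([0.0, 1.0])     # function whose asymptotic variance we want

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
# For this chain, pi = (2/3, 1/3) in closed form.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

f_bar = f - pi @ f                        # centered function f - pi(f)
Pi = np.outer(np.ones(2), pi)             # rank-one matrix 1 pi^T
Z = np.linalg.inv(np.eye(2) - P + Pi)     # fundamental matrix
V = Z @ f_bar                             # solves V - P V = f_bar with pi(V) = 0

# Asymptotic variance via the Poisson-equation identity:
#   sigma^2 = 2 pi(f_bar * V) - pi(f_bar^2)
sigma2 = 2 * pi @ (f_bar * V) - pi @ (f_bar ** 2)

# Cross-check against the (truncated) lag-covariance series:
#   sigma^2 = gamma_0 + 2 * sum_{k>=1} gamma_k, gamma_k = pi(f_bar * P^k f_bar)
series = pi @ (f_bar * f_bar)
Pk = np.eye(2)
for _ in range(200):
    Pk = Pk @ P
    series += 2 * pi @ (f_bar * (Pk @ f_bar))

print(sigma2, series)  # both approximately 34/27 ≈ 1.2593
```

For this chain the lag-k covariances decay geometrically with rate 0.7, so both computations agree with the closed-form value 34/27; the paper's contribution is an O(1)-per-step recursion that estimates this quantity from a single sample path, without knowing P.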

Shubhada Agrawal is a Postdoctoral researcher in the Department of Statistics and Data Science at CMU. Before this, she was a postdoctoral fellow in ISyE at Georgia Tech. She completed her PhD in Computer and Systems Science from TIFR, Mumbai, and her undergraduate degree in Mathematics and Computing from IIT Delhi. Her research interests lie broadly in applied probability and sequential decision-making under uncertainty.