Institute of Information Theory and Automation

Publication details

Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

Journal Article

Cavazos-Cadena R., Montes-de-Oca R., Sladký Karel


serial: Journal of Applied Probability, vol. 52, no. 2 (2015), pp. 419-440

project(s): 171396, GA AV ČR

keywords: Dominated Convergence theorem for the expected average criterion, Discrepancy function, Kolmogorov inequality, Innovations, Strong sample-path optimality


abstract (eng):

This work concerns discrete-time Markov decision chains with a denumerable state space and compact action sets. Besides standard continuity requirements, the main assumption on the model is that it admits a Lyapunov function m. In this context the average reward criterion is analyzed from the sample-path point of view. The main conclusion is that, if the expected average reward associated with m^2 is finite under any policy, then a stationary policy obtained from the optimality equation in the standard way is sample-path average optimal in a strong sense.
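The abstract refers to a stationary policy "obtained from the optimality equation in the standard way". As an illustrative sketch only (the paper treats denumerable state spaces under a Lyapunov condition; the toy finite model and all numbers below are made up), relative value iteration can solve the average-reward optimality equation g + h(x) = max_a [ r(x,a) + Σ_y P(y|x,a) h(y) ] and yield a stationary policy by picking a maximizing action at each state:

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative numbers, not from the paper).
P = np.array([                    # P[a, x, y] = transition probability
    [[0.9, 0.1], [0.2, 0.8]],     # action 0
    [[0.5, 0.5], [0.6, 0.4]],     # action 1
])
r = np.array([                    # r[a, x] = one-step reward
    [1.0, 0.0],
    [2.0, 0.5],
])

def relative_value_iteration(P, r, iters=500):
    """Approximate the optimal average reward g and relative values h,
    then read off a stationary policy that maximizes each state's
    right-hand side of the optimality equation."""
    n_actions, n_states, _ = P.shape
    h = np.zeros(n_states)
    q = r.copy()
    for _ in range(iters):
        q = r + P @ h             # q[a, x] = r(x,a) + sum_y P(y|x,a) h(y)
        h_new = q.max(axis=0)     # Bellman backup
        g = h_new[0]              # normalize h(0) = 0; offset estimates g
        h = h_new - g
    policy = q.argmax(axis=0)     # stationary policy: one action per state
    return g, h, policy

g, h, policy = relative_value_iteration(P, r)
```

For this toy model the iteration converges quickly (the transition matrices mix fast), and the resulting policy is stationary: the same maximizing action is applied at every visit to a state.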

RIV: BC
