We consider discounted Markov decision processes (MDPs) with countably-infinite state spaces, finite action spaces, and unbounded rewards. Typical examples of such MDPs are inventory management and ...
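For reference, and using notation assumed here rather than taken from the excerpt above (state $s_t$, action $a_t$, one-stage reward $r(s,a)$, policy $\pi$, discount factor $\gamma \in (0,1)$), the discounted objective for such an MDP is the expected total discounted reward

    V^{\pi}(s) = \mathbb{E}^{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \;\middle|\; s_0 = s \right],

which, when the rewards are unbounded, is only guaranteed to be finite under suitable growth conditions on $r$ (e.g., a weighted-norm bound relative to the state).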
Abstract: In this paper, a modification of the bisection simplex method is made for more general-purpose use. Organized in an alternative, simpler form, ...