[MINI] Markov Decision Processes

Published: Jan. 26, 2018, 4 p.m.


Formally, an MDP is defined as a tuple containing the set of states, the set of actions, the transition function, and the reward function. This episode examines each of these components and presents them in the context of simple examples. Despite MDPs suffering from the curse of dimensionality, they're a useful formalism and a basic concept we will expand on in future episodes.
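As a rough illustration of that tuple, here is a minimal sketch of an MDP in Python. The two-state weather example, its state and action names, and all numeric values are invented for illustration; they are not taken from the episode.

```python
from typing import Dict, Tuple

# Hypothetical two-state, two-action MDP (all names and numbers illustrative).
states = ["sunny", "rainy"]
actions = ["walk", "drive"]

# Transition function T: (state, action) -> distribution over next states,
# i.e. T[(s, a)][s'] = P(s' | s, a).
T: Dict[Tuple[str, str], Dict[str, float]] = {
    ("sunny", "walk"):  {"sunny": 0.8, "rainy": 0.2},
    ("sunny", "drive"): {"sunny": 0.9, "rainy": 0.1},
    ("rainy", "walk"):  {"sunny": 0.3, "rainy": 0.7},
    ("rainy", "drive"): {"sunny": 0.4, "rainy": 0.6},
}

# Reward function R: (state, action) -> immediate reward.
R: Dict[Tuple[str, str], float] = {
    ("sunny", "walk"): 2.0, ("sunny", "drive"): 1.0,
    ("rainy", "walk"): -1.0, ("rainy", "drive"): 0.5,
}

# Sanity check: each transition distribution must sum to 1.
for dist in T.values():
    assert abs(sum(dist.values()) - 1.0) < 1e-9
```

Note how the size of `T` already grows with the product of the state and action counts, which hints at the curse of dimensionality mentioned above.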
