## GridWorld: Dynamic Programming Demo


### Setup

This is a toy environment called **GridWorld** that is often used as a toy model in the Reinforcement Learning literature. In this particular case:

- **State space**: GridWorld has 4x5 = 20 distinct states. The start state is the top left cell. The gray cells are walls and cannot be moved to.
- **Actions**: The agent can choose from up to 4 actions to move around. In this example these correspond to moving up, down, left, and right.
- **Environment Dynamics**: GridWorld is deterministic: each state and action always leads to the same next state.
- **Rewards**: The agent receives a +1 reward when it is in the center square (the one that shows R 1.0), and a -1 reward in a few other states (shown as R -1.0). The state with the +1.0 reward is the goal state and resets the agent back to the start.

In other words, this is a deterministic, finite Markov Decision Process (MDP), and as always the goal is to find an agent policy (shown here by arrows) that maximizes the future discounted reward.

**Interface**. The color of the cells (initially all white) shows the current estimate of the Value (discounted reward) of that state under the current policy. Note that you can select any cell and change its reward with the *Cell reward* slider.

### Dynamic Programming

An interested reader should refer to **Richard Sutton's Free Online Book on Reinforcement Learning**, in this particular case [Chapter 4](http://webdocs.cs.ualberta.ca/~sutton/book/ebook/node40.html). Briefly, an agent interacts with the environment based on its policy \$$\pi(a \mid s)\$$. This is a function from a state \$$s\$$ to an action \$$a\$$, or more generally to a distribution over the possible actions. After every action, the agent also receives a reward \$$r\$$ from the environment. The interaction between the agent and the environment hence takes the form \$$s\_t, a\_t, r\_t, s\_{t+1}, a\_{t+1}, r\_{t+1}, s\_{t+2}, \ldots\$$, indexed by \$$t\$$ (time), and our goal is to find the policy \$$\pi^\*\$$ that maximizes the amount of reward over time.
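To make the setup concrete, here is a minimal sketch of such a deterministic GridWorld in Python (the demo itself is written in JavaScript; the grid contents, names, and reward/wall positions below are illustrative assumptions, not the demo's actual configuration):

```python
# A minimal deterministic GridWorld sketch. States are (row, col) cells
# on a 4x5 grid; walls and rewards here are example values only.
ROWS, COLS = 4, 5
WALLS = {(1, 2)}          # hypothetical wall cell
REWARDS = {(2, 2): 1.0}   # e.g. +1 in the center square

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    """Deterministic dynamics: the same (state, action) always yields
    the same next state. Moves into walls or off the grid leave the
    agent in place."""
    dr, dc = ACTIONS[action]
    nxt = (state[0] + dr, state[1] + dc)
    if not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS) or nxt in WALLS:
        nxt = state
    return nxt, REWARDS.get(nxt, 0.0)
```

Because `step` is a pure function of `(state, action)`, this environment is exactly the kind of known, finite model that Dynamic Programming assumes.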
In the DP approach, it is helpful to define a **Value function** \$$V^\pi(s)\$$ that gives the expected discounted reward when starting from the state \$$s\$$ and then interacting with the environment according to \$$\pi\$$:

$$V^\pi(s) = E \\{ r\_t + \gamma r\_{t+1} + \gamma^2 r\_{t+2} + \ldots \mid s\_t = s \\}$$

Note that the expectation is over both 1. the agent's (in general) stochastic policy and 2. the environment dynamics, i.e. how the environment evolves when the agent takes an action \$$a\$$ in state \$$s\$$ and what reward is obtained in that case. The constant \$$\gamma\$$ is a discount factor that gives more weight to earlier rewards than to later ones.

We can start writing out the expectation: to get the value of state \$$s\$$ we sum over all the actions the agent would take, then over all ways the environment could respond, and so on back and forth. When you do this, you'll find that

$$V^\pi(s) = \sum\_{a} \pi(s,a) \sum\_{s'} \mathcal{P}\_{ss'}^{a} \left[ \mathcal{R}\_{ss'}^{a} + \gamma V^\pi(s') \right]$$

In the above equation \$$\mathcal{P}\_{ss'}^{a}, \mathcal{R}\_{ss'}^{a}\$$ are fixed constants specific to the environment: they give the probability of the next state \$$s'\$$ given that the agent took action \$$a\$$ in state \$$s\$$, and the expected reward for that transition, respectively. This equation is called the **Bellman Equation**, and interestingly it describes a recursive relationship that the value function for a given policy must satisfy.

With our goal of finding the optimal policy \$$\pi^\*(s,a)\$$ that gets the most Value from all states, our strategy will be to follow the **Policy Iteration** scheme: we start out with some diffuse initial policy and evaluate the Value function of every state under that policy by turning the Bellman equation into an update. The value function effectively diffuses the rewards backwards through the environment dynamics and the agent's expected actions, as given by its current policy.
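Turning the Bellman equation into an update takes only a few lines. Here is a sketch in Python (the demo itself is JavaScript, and the tabular `P`/`pi` encoding below is an assumption for illustration, not the demo's data structures):

```python
# One synchronous sweep of policy evaluation.
# P[s][a] is a list of (prob, next_state, reward) transition tuples,
# i.e. the environment model P^a_ss' and R^a_ss'.
# pi[s][a] is the policy's probability of taking action a in state s.
def policy_evaluation_sweep(V, pi, P, gamma=0.9):
    """Bellman backup: V(s) <- sum_a pi(s,a) sum_s' P [R + gamma V(s')]."""
    return {
        s: sum(
            pa * prob * (r + gamma * V[s2])
            for a, pa in pi[s].items()
            for prob, s2, r in P[s][a]
        )
        for s in P
    }
```

Repeating this sweep until the values stop changing computes \$$V^\pi\$$ for the current policy; a single call corresponds to one press of the **Policy Evaluation (one sweep)** button.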
Once we estimate the values of all states we will update the policy to be greedy with respect to the Value function. We then re-estimate the Value of each state, and continue iterating until we converge to the optimal policy (the process can be shown to converge). These two steps can be controlled by the buttons in the interface:

- The **Policy Evaluation (one sweep)** button turns the Bellman equation into a synchronous update and performs a single step of Value Function estimation. Intuitively, this update diffuses the raw Rewards at each state to other nearby states through 1. the dynamics of the environment and 2. the current policy.
- The **Policy Update** button iterates over all states and updates the policy at each state to take the action that leads to the state with the best Value (integrating over the next state distribution of the environment for each action).
- The **Run until convergence** button starts a timer that presses the two buttons in turns. In particular, note that Value Iteration doesn't wait for the Value function to be fully estimated: only a single synchronous sweep of the Bellman update is carried out before the policy changes. In full policy iteration there would be many sweeps (until convergence) of backups before the policy is updated.

### Pros and Cons

In practice you'll rarely see people use Dynamic Programming to solve Reinforcement Learning problems. There are numerous reasons for this, but the two biggest ones are probably that:

- It's not obvious how one can extend this approach to continuous actions and states.
- To calculate these updates one must have access to the environment model \$$\mathcal{P}\_{ss'}^{a}\$$, which is in practice almost never available. The best we can usually hope for is to get samples from this distribution by having an agent interact with the environment and collect experience, which we can use to approximately evaluate the expectations by sampling. More on this in TD methods!
However, the nice parts are that:

- Everything is mathematically exact, expressible and analyzable.
- If your problem is relatively small (maybe a few thousand states and up to a few tens/hundreds of actions), DP methods might be the best solution for you, because they are the most stable and safest and come with the simplest convergence guarantees.
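Putting the scheme above together, the alternation behind the **Run until convergence** button can be sketched as a short value-iteration-style loop: one greedy policy update, then one synchronous Bellman sweep, repeated until the values stop changing. This is a sketch in Python under an assumed tabular MDP encoding (the demo itself is implemented in JavaScript), not the demo's actual code:

```python
# P[s][a] is a list of (prob, next_state, reward) transition tuples.
def q_value(V, transitions, gamma):
    """Expected backup for one action: sum_s' P [R + gamma V(s')]."""
    return sum(p * (r + gamma * V[s2]) for p, s2, r in transitions)

def run_until_convergence(P, gamma=0.9, tol=1e-6):
    """Alternate a greedy policy update with one synchronous sweep of
    policy evaluation until the value function stops changing."""
    V = {s: 0.0 for s in P}
    while True:
        # Policy update: act greedily with respect to the current V.
        pi = {s: max(P[s], key=lambda a: q_value(V, P[s][a], gamma)) for s in P}
        # Policy evaluation (one sweep) under that greedy policy.
        V_new = {s: q_value(V, P[s][pi[s]], gamma) for s in P}
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new, pi
        V = V_new
```

Because the policy is made greedy after every single sweep, this loop is exactly Value Iteration; holding the policy fixed and sweeping to convergence before each update would give full Policy Iteration instead.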