[Size]
rows = 3
cols = 4

[Transition model]
"(*,*), FORWARD" = 0.8
"(*,*), LEFT" = 0.1
"(*,*), RIGHT" = 0.1
#"(1,1),UP,(2,1)" = 0.8
#"(1,1),UP,(1,1)" = 0.1
#"(1,1),UP,(1,2)" = 0.1

[Rewards]
(*,*) = -0.04
(3,4) = 1
(2,4) = -1

[Holes]
1 = (2,2)

[Discount factor]
gamma = 1

[Terminal states]
1 = (3,4)
2 = (2,4)

The text above describes the 3x4 world in the textbook (Fig 17.1, Page 614). The file is in Windows INI format, which is easier to read and edit than an XML file. Each section, whose name is enclosed in brackets, contains a list of key=value pairs. A line starting with "#" is a comment and is ignored. The section names are case sensitive. If all you have is an APPLE ][, on which you can only type upper-case letters, take a look at MDPFileParser.java and make the necessary changes. The coordinates of each cell are the row number and the column number, in that order. Both row and column numbers start from 1. This is different from the textbook, which uses an X-Y notation. Some details of each section are listed below.
Note how the clever "agent-centric" coordinates shorten the description: the text above takes whichever direction the agent is (implicitly) commanded to move as the "forward" direction. Say the agent is at (2,2) and tries to go to (3,2); we already have a direction, (3-2, 2-2) = (1,0), i.e. toward increasing row number. The FORWARD/LEFT/RIGHT triple is relative to this direction. So in this description the absolute direction comes from the command, and the relative (FORWARD, LEFT, RIGHT) directions are derived from the commanded and the current locations.
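To make the derivation concrete, here is a minimal sketch, not part of mdp.jar, of how the FORWARD delta and its two lateral rotations can be computed. The class name is made up, and it assumes row numbers grow toward the top of the grid as in Fig 17.1:

import java.util.Arrays;

public class AgentCentricDemo {
    public static void main(String[] args) {
        int[] from = {2, 2};   // current cell (row, col)
        int[] to   = {3, 2};   // commanded cell

        // FORWARD is simply the commanded displacement.
        int[] forward = {to[0] - from[0], to[1] - from[1]};

        // LEFT and RIGHT are 90-degree rotations of FORWARD,
        // assuming row numbers grow toward the top of the grid.
        int[] left  = { forward[1], -forward[0]};
        int[] right = {-forward[1],  forward[0]};

        System.out.println("FORWARD " + Arrays.toString(forward));  // prints [1, 0]
        System.out.println("LEFT    " + Arrays.toString(left));     // prints [0, -1]
        System.out.println("RIGHT   " + Arrays.toString(right));    // prints [0, 1]
    }
}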
In its most general form, the transition model needs an entry such as "(1,1),UP,(5,6)" = 0.0000008 for every s, a and s'. That's 11x4x11 entries to write even for our miniature 3x4 world. Quite some typing exercise. Fortunately, for all the examples in Chapter 17, each cell has the same transition function, so we can use an agent-centric representation, hence the 3 entries you saw in the above file.
There is nothing preventing you from having a more sophisticated transition model; in fact, you may need one in our Quake world. Then you need to fall back to the full model. Just to show you how to write entries of the full model, the above file contains three commented-out lines depicting the transition function at cell (1,1).
The two notations can't be used at the same time. You need to put quotes around a key if it contains whitespace.
The class MarkovDecisionProcess along with a few other smaller helper classes define a basic MDP. The code is written with a 2D world in mind, though it's not difficult to factor out an interface to describe a more general MDP, which can be inherited to form a 2D or 3D MDP.
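For instance, such a factored-out interface might look roughly like the following sketch (this is not in mdp.jar; the interface and method names are purely illustrative):

public interface GeneralMDP<S, A> {
    Iterable<S> getStates();
    Iterable<A> getActions(S s);                        // actions available in s
    double getTransitionProbability(S s, A a, S next);  // P(next | s, a)
    double getReward(S s);                              // R(s)
    double getDiscountFactor();                         // gamma
    boolean isTerminal(S s);
}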
Pay attention to the argument in the text that shows why value iteration is guaranteed to converge. As a good exercise to see if you are comfortable with our implementation, try to replicate Figure 17.5.
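To connect the convergence argument to code, here is a standalone sketch of the Bellman update at the heart of value iteration. It works on plain arrays rather than the cs.decision classes; the class name, method name and array layout are assumptions of this sketch, not the library's API:

public class ValueIterationSketch {
    // P[s][a][s2] = transition probability, R[s] = reward, gamma = discount factor.
    // Terminal states are modeled as states with no actions (P[s].length == 0).
    static double[] valueIteration(double[][][] P, double[] R, double gamma, double eps) {
        int n = R.length;
        double[] U = new double[n];
        double delta;
        do {
            delta = 0.0;
            double[] next = new double[n];
            for (int s = 0; s < n; s++) {
                double best = 0.0;
                if (P[s].length > 0) {
                    best = Double.NEGATIVE_INFINITY;
                    for (int a = 0; a < P[s].length; a++) {
                        double q = 0.0;
                        for (int s2 = 0; s2 < n; s2++)
                            q += P[s][a][s2] * U[s2];
                        best = Math.max(best, q);
                    }
                }
                next[s] = R[s] + gamma * best;                   // Bellman update
                delta = Math.max(delta, Math.abs(next[s] - U[s]));
            }
            U = next;
        } while (delta > eps);  // the textbook bound uses eps*(1-gamma)/gamma when gamma < 1
        return U;
    }
}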
Policy iteration is essential for some reinforcement learning methods in Chapter 21. The two components of policy iteration are policy evaluation and policy improvement. The key to the policy evaluation algorithm is assembling the NxN linear system, where N is the number of cells of the environment (a sketch of the assembly appears after the example below). For an nxn world the matrix has n^2 x n^2 = O(n^4) entries, not a small number at all. However, the system is very sparse: each cell can only transition to a handful of neighbors, so the non-zero elements cluster around the main diagonal. There are efficient methods to store and solve sparse linear systems, but they are outside the scope of this class. The algorithm in Figure 17.7 is slightly wrong: the initial policy cannot be just any random policy; it has to be a random proper policy. Otherwise, if the discount factor is 1, the matrix of the linear system could be singular. Try the following simple example:
LEFT | TERMINATE |
LEFT | DOWN |

With this (improper) policy the agent in the left column never reaches the terminal state, so with gamma = 1 the policy-evaluation system is singular.
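Here is the promised sketch of the assembly. It is written against plain arrays rather than the cs.decision classes (the class name, method name and array layout are assumptions of the sketch), but it shows both the structure of the system and exactly where an improper policy with gamma = 1 gets you into trouble:

import Jama.Matrix;

public class PolicyEvaluationSketch {
    // Ppi[s][s2] = transition probability under the fixed policy pi, R[s] = reward.
    // Solves (I - gamma * T_pi) U = R for the utility vector U.
    static double[] evaluatePolicy(double[][] Ppi, double[] R, double gamma) {
        int n = R.length;
        double[][] A = new double[n][n];
        double[][] b = new double[n][1];
        for (int s = 0; s < n; s++) {
            b[s][0] = R[s];
            for (int s2 = 0; s2 < n; s2++)
                A[s][s2] = (s == s2 ? 1.0 : 0.0) - gamma * Ppi[s][s2];
        }
        // For an improper policy with gamma == 1 the matrix A can be singular,
        // in which case JAMA's solve() throws a RuntimeException.
        Matrix U = new Matrix(A).solve(new Matrix(b));
        return U.getRowPackedCopy();
    }
}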
import cs.decision.*;
import java.io.FileNotFoundException;

public class ValueIterationTest {
    public static void main(String args[]) {
        try {
            MDPFileParser parser = new MDPFileParser("textbook.txt");
            MarkovDecisionProcess mdp = parser.parse();
            ValueIteration vi = new ValueIteration(mdp);
            vi.setError(1e-4);
            vi.solve();
            mdp.dumpHTML(false);
        } catch (FileNotFoundException e) {
            System.out.println(e);
        }
    }
}
import cs.decision.*;
import java.io.FileNotFoundException;

public class PolicyIterationTest {
    public static void main(String[] args) {
        try {
            MDPFileParser parser = new MDPFileParser("textbook.txt");
            MarkovDecisionProcess mdp = parser.parse();
            PolicyIteration pi = new PolicyIteration(mdp);
            pi.solve();
            mdp.dumpHTML(false);
        } catch (FileNotFoundException e) {
            System.out.println(e);
        }
    }
}

Both programs produce the following result.
0.8115582191780821:RIGHT | 0.8678082191780823:RIGHT | 0.9178082191780822:RIGHT | 1.0 |
0.7615582191780821:UP | N | 0.6602739726027398:UP | -1.0 |
0.7053082191780822:UP | 0.6553082191780824:LEFT | 0.6114155251141554:LEFT | 0.38792491121258266:LEFT |
Don't forget to put mdp.jar in your CLASSPATH when compiling and running the programs. We use JAMA to solve linear systems. The JAMA jar file is in the source directory too.
The following example shows how to set up a simulator to test your learning algorithm. ADPSimulator.java employs two MDPs: one that we know (the model), and one that the agent is going to learn.
import java.io.FileNotFoundException;
import cs.decision.*;
import cs.learning.*;

/**
 * Simulator for testing an ADP agent.
 */
public class ADPSimulator {
    MarkovDecisionProcess modelMDP;
    MarkovDecisionProcess learnedMDP;
    ADPAgent agent;

    public static void main(String[] args) throws FileNotFoundException {
        ADPSimulator simulator;
        if (args.length >= 1)
            simulator = new ADPSimulator(args[0]);
        else
            simulator = new ADPSimulator("textbook.txt");
        simulator.demo();
    }

    public ADPSimulator(String filename) throws FileNotFoundException {
        MDPFileParser parser = new MDPFileParser(filename);
        modelMDP = parser.parse();

        // Create the MDP to be learned
        learnedMDP = modelMDP.copyLayout();
        agent = new ADPAgent(learnedMDP);
    }

    public void demo() {
        // Generate a policy for the learned MDP
        //learnedMDP.generateProperPolicy();

        // This is the optimal policy.
        learnedMDP.setAction(1,1, 0);
        learnedMDP.setAction(2,1, 0);
        learnedMDP.setAction(3,1, 1);
        learnedMDP.setAction(1,2, 3);
        learnedMDP.setAction(3,2, 1);
        learnedMDP.setAction(1,3, 3);
        learnedMDP.setAction(2,3, 0);
        learnedMDP.setAction(3,3, 1);
        learnedMDP.setAction(1,4, 3);

        // Mark every state new
        for (State s = learnedMDP.getStartState(); s != null; s = learnedMDP.getNextState())
            s.setVisited(false);

        learnedMDP.dumpHTML(true);
        run(100);
        learnedMDP.dumpTransitionModel();
    }

    public void run(int numTrials) {
        Percept percept = new Percept(null, 0.0);
        State s = learnedMDP.getStartState();
        State modelState;

        for (int trials = 0; trials < numTrials; trials++) {
            learnedMDP.dumpHTML(false);

            percept.state = s;
            modelState = modelMDP.getCoincideState(s);
            percept.reward = modelMDP.getReward(modelState);

            // The agent decides what to do next
            Action a = agent.go(percept);

            if (a == null) {
                s = learnedMDP.getRandomReachableState();
            } else {
                // Given the state and the action, the simulator
                // determines the next state using the transition model
                modelState = modelMDP.transit(modelState, a);
                s = learnedMDP.getCoincideState(modelState);
            }
        }
    }
}
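The simulator only feeds percepts to the agent; the learning itself happens inside ADPAgent. As a rough idea of the bookkeeping such an agent performs (this is a sketch of the general ADP technique from Chapter 21, not the actual cs.learning.ADPAgent code), the agent keeps counts of what it has observed and turns them into a maximum-likelihood transition model:

import java.util.HashMap;
import java.util.Map;

// Counts N(s,a) and N(s,a,s') and converts them into transition estimates.
class TransitionCounter {
    private final Map<String, Integer> nSA  = new HashMap<>();
    private final Map<String, Integer> nSAS = new HashMap<>();

    // Record one observed transition s --a--> sPrime.
    void record(String s, String a, String sPrime) {
        nSA.merge(s + "|" + a, 1, Integer::sum);
        nSAS.merge(s + "|" + a + "|" + sPrime, 1, Integer::sum);
    }

    // Maximum-likelihood estimate P(sPrime | s, a) = N(s,a,sPrime) / N(s,a).
    double estimate(String s, String a, String sPrime) {
        int tried = nSA.getOrDefault(s + "|" + a, 0);
        if (tried == 0) return 0.0;
        return nSAS.getOrDefault(s + "|" + a + "|" + sPrime, 0) / (double) tried;
    }
}

Each time these estimates change, the agent can re-solve its learned MDP (for example with policy evaluation, as sketched earlier) to update the utilities it acts on.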