State-Space Problem Solving

Overview

  1. Read R&N Chapters 3 and 4.
  2. Find a teammate and work in teams of two. The idea is to help each other with the Quagent environment and concepts, and likewise with the state-space searching; split up the easy work, save yourselves time, have more fun, and learn more.
  3. Write a state-space search program to find paths through mazes represented by maps. Implement A* search and at least two more strategies. You can choose, for instance, from depth-first, breadth-first, iterative deepening, and bidirectional search.
  4. Use the path generated by your problem solver to instruct a Quagent bot to navigate a "real" version of the maze.
  5. Extra Credit (and Fun!): use on-line problem-solving to have your Quagent perform the search itself without a map (generating a partial map and a correct path through the maze). OR invent a strategy for team search.
  6. The usual scientific writeup and code submission.

Mazes and Search

Problem-solving search involves a problem representation and a "search engine", which can be as simple as a three-line DFS or BFS tree traversal (plus some bookkeeping code) or as complex as A*. Ideally, the search and the representation are decoupled so you can use the same searcher on different domains with no changes. You should thus be able to use your searcher on domains like the jugs problems, missionaries and cannibals, Loyd's puzzles, etc. referred to in last year's General Problem Solver assignment. This year we have only one domain, but good software practice dictates that you keep your state-space searchers independent of the state-space representations.

We are going to find our way through maze maps and mazes. You get to make up the mazes. You may recall from Nancy Drew mysteries that you can find your way through a maze by keeping your right hand on the wall and walking forward. Take 30 seconds and prove to yourself that you can easily construct a maze that breaks that strategy.

Just a reminder that you don't generate an explicit search tree -- the tree is implicit in the trace of your program exploring the search-space graph. At each step of the search you generate all the successors of the current state, check the resulting state(s) to see if you're at the goal state, and if not try another operation. Which operations you try in which order is your search strategy. You are to try at least three strategies (A* and two others).

Important Hint! Ben Van Durme, stalwart TA, has worked through this assignment, and he observes that the pseudocode in the book for a general state-space search algorithm (p. 83) is nice and general in an ideal world but not practical. He recommends that you maintain the separation of the search domain from the search strategy (so the same search algorithm can be used on different problems) but NOT try to separate the search strategy from some general search algorithm: just write separate DFS, BFS, A*, etc. search algorithms.
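
For concreteness, here is a minimal Python sketch of that advice; the names (successors, is_goal) are ours, not part of the assignment. The maze domain hides behind a successor function, and BFS is its own self-contained routine:

    from collections import deque

    def bfs(start, is_goal, successors):
        # successors(state) yields (operator, next_state, step_cost) triples,
        # so this routine knows nothing about mazes in particular.
        parent = {start: (None, None)}   # state -> (previous state, operator)
        frontier = deque([start])
        while frontier:
            state = frontier.popleft()
            if is_goal(state):
                return parent, state     # see "extract the sequence" below
            for op, nxt, _cost in successors(state):
                if nxt not in parent:    # doubles as cycle detection
                    parent[nxt] = (state, op)
                    frontier.append(nxt)
        return None                      # the maze has no solution

DFS and A* get their own routines in the same style, each with its own frontier discipline.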

Map, Start State, Goal State Input Format

Maze maps and the start and goal states have the following format. For the map, the first two numbers give the row and column sizes; the bot can't wander outside the maze walls. In the array, 2 marks the start of the maze, 3 the location of the goal, 1 a wall, and 0 a corridor. Map coordinates are (R,C), where R is the row number (0 through row-size - 1) and C is the analogous column number. The map is followed by the start and goal states.

The start state follows the map on its own line. It is (Rs, Cs, D) (no parens), where (Rs, Cs) is the starting position and D is the direction that points into the maze. The goal state is next, on its own line. It is (Rg, Cg, 0), again no parens, where (Rg, Cg) are the goal point's coordinates and the third number (D) must be 0, so the bot must be pointing North at the goal. Here's a sample:

6 7
1211111
1001011
1100103
1010101
1010001
1111111
0 1 2
2 6 0
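
A minimal Python reader for this format might look like the following (the function name and return convention are ours):

    def read_problem(fname):
        # Returns (grid, start, goal): grid[r][c] is an int (0 corridor,
        # 1 wall, 2 start, 3 goal); start and goal are (row, col, direction).
        with open(fname) as f:
            rows, cols = map(int, f.readline().split())
            grid = [[int(ch) for ch in f.readline().strip()]
                    for _ in range(rows)]
            start = tuple(map(int, f.readline().split()))
            goal = tuple(map(int, f.readline().split()))
        return grid, start, goal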

States, Operators, and Memory

The state of the bot can be described by (R,C,D), where R and C are grid coordinates (the same R and C as on the map) and D is the direction it's facing: one of {0,1,2,3}, corresponding to the directions {North, East, South, West}.

The operators are "turn right (in place)", "turn left (in place)", and "move forward one distance unit". Each turn costs one cost unit, and the move operation costs two.

Successor generation involves some "sensing", namely of the contents of the cells to the bot's left, right, and straight ahead. Each empty cell represents a possible successor (a place to explore next). You probably want to "sense" whether a cell ahead has already been explored and, if so, not go there. You can do this with a map you update; see below.
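
One way to write that down in Python, assuming the direction coding 0..3 = North, East, South, West with North toward row 0 (a convention this sketch adopts; check it against your maze):

    # Row/column change for a forward move in each facing direction.
    DELTA = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}   # N, E, S, W

    def make_successors(grid):
        rows, cols = len(grid), len(grid[0])
        def successors(state):
            r, c, d = state
            yield ("Left",  (r, c, (d - 1) % 4), 1)   # turn in place, cost 1
            yield ("Right", (r, c, (d + 1) % 4), 1)   # turn in place, cost 1
            nr, nc = r + DELTA[d][0], c + DELTA[d][1]
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != 1:
                yield ("Move", (nr, nc, d), 2)        # forward one cell, cost 2
        return successors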

For the purposes of cycle detection you will want to remember whether you have seen a state before while exploring. If so, you've found a cycle, so don't expand the search further at that point. Hash tables are good in general, but in this case you can use a (rows by columns by 4) array storing a 1 if that state has been explored. Matching is then easy.
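
In Python the array version is a one-liner plus a mark-and-test helper (names ours):

    visited = [[[False] * 4 for _ in range(cols)] for _ in range(rows)]

    def seen(state):
        r, c, d = state
        if visited[r][c][d]:      # already explored: we've found a cycle
            return True
        visited[r][c][d] = True   # otherwise mark it explored now
        return False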

The start state should put the bot at the start of the maze facing in the proper direction to enter. The goal state is a particular (R,C,0) location, with the facing direction pegged to 0 (North). The quagent controller must use the goal state to implement some of the "informed" searches, like A*, as well as bidirectional search.
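
For A*, one natural admissible heuristic under this cost model (a suggestion, not a requirement) is twice the Manhattan distance to the goal: every grid step costs 2, and turns only add to the true cost, so this never overestimates.

    def h(state, goal):
        r, c, _ = state
        rg, cg, _ = goal
        return 2 * (abs(r - rg) + abs(c - cg))   # admissible; ignores turn costs

You can sharpen it with a minimum-turns term, which is one of the heuristic variations worth comparing in your experiments.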

Having found the goal, you must extract and output the sequence of operations that achieved it.
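
With the parent map from the BFS sketch above, extraction is a short backward walk followed by a reversal:

    def extract_ops(parent, goal_state):
        ops, state = [], goal_state
        while parent[state][0] is not None:   # stop at the start state
            state, op = parent[state]         # step back one link
            ops.append(op)
        return ops[::-1]                      # reverse into execution order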

Quagent Controller

The extracted sequence of operations must now be translated into Quagent protocol commands and sent to the bot so it can "physically" walk through the maze. In an ideal world, an "open loop" string of commands of the form (face N, go D, go D, go D, turn left, go D, go D, turn right, go D, ...) would work (D being the grid distance). It may not be so simple, however: things can go wrong, like D not being executed exactly, walls being bumped into, and so on. So this simple translation could involve some sensing and error recovery.
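
A sketch of the translation, coalescing consecutive Moves into one longer walk. The command strings and the cell size below are placeholders; get the real protocol syntax and unit conversion from TR 853 and calibrate against your maze.

    CELL = 64.0   # Quake units per grid cell -- a guess; calibrate it

    def to_protocol(ops):
        cmds, run = [], 0
        for op in ops + [None]:           # None is a sentinel flushing the last run
            if op == "Move":
                run += 1
                continue
            if run:
                cmds.append("do walkby %g" % (run * CELL))
                run = 0
            if op == "Left":
                cmds.append("do turnby 90")    # sign convention may differ
            elif op == "Right":
                cmds.append("do turnby -90")
        return cmds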

Mazes and Maze Maps

You will need to convert your maze map into a real environment (level) in the Quagent world. We'll be giving you at least one "real" maze with an associated map. Maybe we'll test your program on another maze-map pair; maybe we'll have a competition. Not sure yet.

You should be able to create real mazes with a level editor, as described in Quagent TR 853 (in DSpace, or the copy in Local Space).

BUT thanks to Mike Rotondo and David Sloan, supervised by Prof. Pawlicki, there may be a Better Way for Windows users. The better way will generate mazes of a given size automatically for you! In 2006 several students developed ways to improve the interface between the controller and the maze generator: in fact, to take the maze-generator output and convert it into input for the maze solver. Here is a short Maze Description Converter that produces the ASCII input in the format demanded by this assignment.

Instrumentation, Experiments and Experiences

You can run comparisons between the strategies. For searching, the obvious ones are: for the same maze, how many nodes are expanded (searched) and the cost of the resulting path. You should therefore "instrument" your code to compute these values and keep statistics, and to output them in a form that is useful when you write your report (a file readable by Matlab for graphing is a good example). For grading and examination by the TA, your program should have a "grade mode", in which its only output is in the form described below in the "What to turn in" section. Describe your experiences in translating your path to the real quagent world -- with luck this will be short and sweet, but I have a feeling... Another natural thing to vary is the maze types or characteristics. You can also vary the heuristic functions for your A* search.
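
A small hook like the following (entirely our invention) is usually enough: every strategy calls note_expansion when it expands a node, and report emits one comma-separated line per run for later graphing.

    import sys

    class Stats:
        def __init__(self):
            self.expanded = 0        # nodes expanded by the search
            self.path_cost = None    # cost of the solution path, if found
        def note_expansion(self):
            self.expanded += 1
        def report(self, strategy, out=sys.stdout):
            out.write("%s,%d,%s\n" % (strategy, self.expanded, self.path_cost))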

Extra Credit and Fun

More Sophisticated Search

If you didn't do bidirectional search, try it!

Team search? It might be fun to imagine a team of quagents and simulate strategies offline. One idea is to spawn quagents at decision points, one to go in each direction: this gives you something like nondeterministic DFS. One can't spawn forever, since there is a limit to the number of bots, so at some point each agent must stop duplicating itself and go it alone. What should they do when they meet? And how do they sew the final path together from the experiences of the individuals?

Or you could imagine parachuting agents into the maze at either known or unknown locations and turning them loose to search. If one finds the goal, how do you take what you've learned from them all (say you're back at HQ and can get detailed reports of the paths covered by all agents) and either assemble an entire path to the goal or re-task agents to close the gaps in your knowledge?

All this teamwork is interesting, challenging, and topical. Not for the faint of heart. There must be references on distributed mapping and searching algorithms. It would make a terrific term project that uses what you've built for this one.

Going Online

State-space problem solving is not terribly well adapted to controlling physical agents; R&N Section 4.5 is the relevant reading here. If you're an actual physical agent, you can't teleport between states as you do with A* or BFS, say: you have to backtrack physically. There are online search algorithms: implement one or more and describe how they work. This could be fun, as you watch your quagent purposefully exploring (or bashing around at random). The Quagent should output the sequence of operations it finds that solves the maze. Building a map of the maze as it goes seems like a good idea too.

Some online strategies you can try (1 and 3 are from Section 4.5):

  1. Online DFS.
  2. Online Iterative Deepening (p. 78). These first two are straightforward and the output might be a partial map of explored space and the path to the goal that was found.
  3. LRTA*. Looks interesting. May need another reference, or maybe the treatment on p. 128 is enough.
  4. Random walk. (Kinda boring).
There could well be a pretty interesting problem with cycles. Usually a repeated state in a search means you're in a cycle. Here, though, the online bot has to backtrack literally over old paths to re-do choices. It's probably best to keep either a map, which you can also use to report your results, or a list of completely explored states. Then, if you know you're backtracking, ignore the issue; but if you are actively searching forward and find an explored state, you're in a cycle.
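
Here is a recursive sketch of online DFS with physical backtracking, assuming a controller interface execute(op) that actually drives the bot (all names here are stand-ins for your own controller):

    def online_dfs(state, execute, successors, is_goal, explored, plan):
        if is_goal(state):
            return True
        explored.add(state)
        for op, nxt, _cost in successors(state):
            if nxt in explored:
                continue               # repeated state going forward: cycle
            execute(op)                # really perform the action
            plan.append(op)
            if online_dfs(nxt, execute, successors, is_goal, explored, plan):
                return True
            undo(op, execute)          # physically retrace the step
            plan.pop()
        return False

    def undo(op, execute):
        if op == "Left":
            execute("Right")
        elif op == "Right":
            execute("Left")
        else:                          # undo a Move: about-face, step back,
            for step in ("Left", "Left", "Move", "Left", "Left"):
                execute(step)          # then face the original direction again

The plan list that survives to the goal is the direct operator sequence, even though the bot physically wandered; the explored set is the list of completely explored states mentioned above.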

What To Turn In

Send CB a (strictly private and confidential) review of your teammate's performance if you have anything dramatic to report. That's brown@cs.

One Team Member: Submit on BB your code, a README that explains it, and a nice writeup in good technical prose, explaining what you did and how, and detailing the results of any experiments or comparisons you did using your code. The writeup must be in PDF. All Other Team Members: Submit to WebCT a simple text file giving the name of the submitting partner (above). You need to submit something to be graded!

The writeup could have maps showing shortest paths, tables or graphs showing the comparisons between your methods, etc. Strive for a professional look. Remember the helpers and the writing center. Upload to BB as usual.

Here are the Computing Resource and Project Grading Guidelines we all will be following.

Your code should take as input a map, a start state (here 0 1 2, for "facing South in the square at row 0, col 1"), and a goal state (here 2 6 0, for the goal grid square, facing North). The program should be set to "grade mode", in which its only output is a copy of the input (the map, start state, goal state) and the sequence of operators that solves the maze, in the form of a CRLF-delimited sequence of "Move", "Left", and "Right" commands. Thus the output for the map above would start like:

6 7
1211111
1001011
1100103
1010101
1010001
1111111
0 1 2
2 6 0
Move
Left
Move
Right
Move
...

It is important to observe this output format strictly since your program will be tested by another program.

The code you use to initialize your Quake world and command your Quagent, including interfacing the planner output to it, is naturally of interest, so turn that in too. There are some attendant technical issues that should appear in your writeup, including distance-unit conversion, inexact WALKBY distances performed by bots, etc.

There are examples of project writeups for this course at 242's main assignment page, and there are explicit writing helper documents on the writing helper page.
