Turn in your code, README, annotated output transcripts, and PDF writeup to Blackboard. This time your emphasis should be on the writeup, which is the product of exploiting your code to answer some scientific question. It can and should, however, also explain your code and your demonstration input-output examples, and recount your adventures, misadventures, novel ideas, achievements, etc. Graphics are always good.
You should find Racket on the UR UG network somewhere under /usr/staff. On the faculty side the executables are in /usr/staff/drracket/plt/bin/, so that's a good bet.
Let us know if there are any problems or surprises.
First, read and understand the Constraints and N-Queens tutorial.
At least implement backtracking and min-conflicts; consider the MRV heuristic extra credit (but it is easy given that you've already written the search program). You'll need to instrument them so that they not only report success but also report how hard they worked: either 'consistency checks', comparisons, or, if you want to ignore how hard it is to figure out what to do, the number of 'operations' (queen-plunking or queen-moving).
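For instance, here is a minimal instrumentation sketch in Racket (the names and representation are my own, not part of the assignment): a global counter bumped inside the pairwise conflict test, so a run's total work is just the counter's final value.

    #lang racket
    ;; Minimal instrumentation sketch (my own names, not required):
    ;; every consistency check bumps a global counter.
    (define check-count 0)

    ;; attacks? : do queens at (col1, row1) and (col2, row2) conflict?
    (define (attacks? col1 row1 col2 row2)
      (set! check-count (add1 check-count))
      (or (= row1 row2)                                  ; same row
          (= (abs (- col1 col2)) (abs (- row1 row2)))))  ; same diagonal

    ;; Reset before each run, read afterwards:
    ;;   (set! check-count 0) ... run the search ... check-count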
What we always do with an implementation is exercise it (vary the problem size, any algorithm parameters, design choices or algorithmic variations, even implementation details like vectors vs. lists) and report the results. The main question here is: how does the work needed grow with problem size? Since there is some randomness (not much for backtracking, though you could randomize the fixed column order; for min-conflicts there's the starting state and maybe how you break ties), you should do what the text does, which is to take a statistic (like the median) over a number of runs (they did five). All you have to do is keep the results in a log, and then you can also produce other simple statistics besides the median: minimum, maximum, average, standard deviation. Other resources like clock time or CPU time are also easy to get in Scheme, and could be interesting.
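By way of example, a few self-contained Racket helpers for summarizing a log of per-run work counts (a sketch with my own names; Racket's built-in time-apply gives CPU, real, and GC time):

    #lang racket
    ;; Sketch of simple statistics over a log of per-run work counts.
    (define (median xs)
      (let ([v (sort xs <)])
        (list-ref v (quotient (length v) 2))))  ; upper median if even length

    (define (mean xs) (exact->inexact (/ (apply + xs) (length xs))))

    (define (stddev xs)
      (define m (mean xs))
      (sqrt (/ (for/sum ([x xs]) (let ([d (- x m)]) (* d d)))
               (length xs))))

    ;; CPU, real, and GC time of one run, in milliseconds:
    ;;   (define-values (results cpu real gc) (time-apply solve (list n)))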
AIMA claims (middle of page 150) that if you leave out the time of initially placing queens, the runtime of min-conflicts on n-Queens is roughly independent of n, that is, of the problem's size (!), and that the 1,000,000-queens problem can be solved in 50 steps (!!). How much of this can you verify?
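One way to check this is a little harness like the following sketch, which takes your solver as an argument (it assumes the solver, given n, returns its step count) and logs the median of five runs, as the text does, for growing n:

    #lang racket
    ;; Sketch: median step count of five runs, for each problem size.
    ;; `solver` is assumed to map n to the number of steps it took.
    (define (experiment solver sizes)
      (for ([n sizes])
        (define steps (sort (for/list ([i 5]) (solver n)) <))
        (printf "n=~a  median steps=~a\n" n (list-ref steps 2))))

    ;; e.g. (experiment min-conflicts '(10 100 1000 10000 100000))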
For backtracking, there are also design choices and algorithmic variations. E.g., am I right that middle-out column order (e.g. 34251607 for 8 queens) is better than left-to-right (e.g. column order 01234567), and presumably outside-in (e.g. 07162534) should be worse? What about the initialization and variable-choosing alternatives in the min-conflicts algorithm, covered below: do they make a difference?
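For what it's worth, here is one way (my own construction, shown for even n) to generate those three column orders in Racket; for n = 8 they reproduce the orders above:

    #lang racket
    ;; Column orderings for even n (my own construction).
    (define (left-to-right n) (range n))            ; 0 1 2 ... n-1

    (define (middle-out n)                          ; 3 4 2 5 1 6 0 7 for n=8
      (append* (for/list ([i (in-range (quotient n 2))])
                 (list (- (quotient n 2) 1 i) (+ (quotient n 2) i)))))

    (define (outside-in n)                          ; 0 7 1 6 2 5 3 4 for n=8
      (append* (for/list ([i (in-range (quotient n 2))])
                 (list i (- n 1 i)))))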
For min-conflicts, some design choices you can easily tweak to see if they make a difference (a sketch of both follows the list):

1. Either initialize the board with a random placement of queens, or use a 'greedy' process that chooses a minimal-conflict value for each variable in turn.

2. Instead of setting col to a random column with conflicts, maybe choose the column with the queen causing the most conflicts? You could compute this each iteration with no change in the state representation. I've no clue if this is a good idea!
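Here is a sketch of both tweaks under an assumed representation (a vector rows where (vector-ref rows col) is the row of that column's queen); none of this is required, it just shows the idea:

    #lang racket
    ;; Assumed representation: rows is a vector; (vector-ref rows col)
    ;; gives the row of the queen in that column.

    ;; Conflicts a queen at (col, row) would have with the other queens.
    (define (conflicts rows col row)
      (for/sum ([c (in-range (vector-length rows))]
                #:unless (= c col))
        (define r (vector-ref rows c))
        (if (or (= r row)
                (= (abs (- c col)) (abs (- r row))))
            1 0)))

    ;; Tweak 1 (greedy initialization): place each column's queen on a
    ;; minimal-conflict row, counting only the columns placed so far;
    ;; shuffling the candidate rows breaks ties randomly.
    (define (greedy-init n)
      (define rows (make-vector n 0))
      (for ([col (in-range n)])
        (define (cost row)
          (for/sum ([c (in-range col)])
            (define r (vector-ref rows c))
            (if (or (= r row) (= (abs (- c col)) (abs (- r row)))) 1 0)))
        (vector-set! rows col (argmin cost (shuffle (range n)))))
      rows)

    ;; Tweak 2: pick the column whose queen causes the most conflicts,
    ;; instead of a random conflicted column.
    (define (most-conflicted-col rows)
      (argmax (lambda (col) (conflicts rows col (vector-ref rows col)))
              (range (vector-length rows))))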
Of course we want the usual PDF scientific-style report (see the writing helpers) describing methods, results, discussion and analysis, and references, with appendices for code snippets, transcripts, and other bulky stuff. Graphics leading to your conclusions on the complexity of the methods are expected too... If you are showing some solutions (why not? you worked hard for them), print them in a nice human-readable square display, maybe with . for empty and Q for queen.
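For instance, a small printer for that display, assuming the same rows-vector representation sketched above:

    #lang racket
    ;; Print a board as . / Q, given a rows vector as sketched above.
    (define (print-board rows)
      (define n (vector-length rows))
      (for ([row (in-range n)])
        (for ([col (in-range n)])
          (display (if (= (vector-ref rows col) row) "Q " ". ")))
        (newline)))

    ;; (print-board (vector 1 3 0 2)) prints:
    ;; . . Q .
    ;; Q . . .
    ;; . . . Q
    ;; . Q . .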
Last update: 11/1/11