Currently, for development data, I have taken imagery for two 100-piece puzzles. They are in ~nelson/pics/puzzles/wizard_of_oz2 and ~nelson/pics/puzzles/duck_bunny2. Each directory contains 50 images, each showing 2 puzzle pieces on black velvet.
The directories without the "2" (~nelson/pics/puzzles/wizard_of_oz and ~nelson/pics/puzzles/duck_bunny) are an earlier data set. They contain the same pieces, but 6 to an image. These pictures have some interlace jitter, the white balance was not locked, and the pieces are smaller. It is likely more difficult to solve the problem from this set, but you are welcome to try. As a bonus, this set contains two versions of each image, numbered consecutively and taken at different exposures. The odd-numbered images are brighter than the even-numbered ones.
As a culmination, we will, on a class day near the end of the semester, hold a demonstration session where the systems will have the opportunity to strut their stuff, both on the development data and on a previously unseen puzzle (of the same general difficulty as the development data). We might even have a race to see which system is the fastest. I will reserve a number of cluster nodes (hopefully representing at least 16 processors) on the designated day on which to run the demonstrations, in case anyone parallelizes their system. Each team will also need to give a presentation on their effort (on a class day after the competition).
As usual, each team is to create a single, coherent, written report on their approach to the project, describing in detail what approaches were used, what did not work, any comparative studies done, what considerations went into the design, where the processing time was spent, how the system would be expected to scale, etc. etc.
Because of the black background, simple thresholding will (almost) allow the puzzle pixels to be separated from the background. Most of the minor problems here can be fixed up by some simple local filtering (e.g. a 3x3 median filter). A connected components routine can be used to associate pixels belonging to individual pieces.
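To make this concrete, here is a minimal sketch of the threshold / median-filter / connected-components pipeline using NumPy, SciPy, and imageio rather than my libraries; the filename and the threshold value are made-up placeholders you would tune on the real images.

import numpy as np
from scipy import ndimage
import imageio.v2 as imageio

img = imageio.imread("piece_image.tiff")             # placeholder filename
gray = img.mean(axis=2) if img.ndim == 3 else img.astype(float)

# Anything well above the black velvet is treated as "piece"; the
# threshold of 40 is a guess to be tuned on the real data.
mask = gray > 40

# 3x3 median filter to knock out isolated noisy pixels.
mask = ndimage.median_filter(mask.astype(np.uint8), size=3).astype(bool)

# Connected components: each label corresponds to one candidate piece.
labels, n_pieces = ndimage.label(mask)
print(f"found {n_pieces} candidate pieces")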
There are a few complexities you will need to deal with. First, the dynamic range of the camera is not all that might be hoped. If the exposure is set so that white areas are not saturated, some dark areas come out nearly black, so the color information is not as good as what is available to a person doing the same puzzle.
Second, thresholded images will not be completely clean. There may be small artifacts in the image (I know there are a few white lines near the lower edge of a few images, where the edge of a piece of paper crept in). These are easily removed by a threshold on the piece size. Boundaries are likely to be ragged. Median filtering (or other smoothing) helps to some extent. Following this with a dilation and an erosion of the thresholded binary image would probably produce pretty clean regions.
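Continuing the sketch above (it assumes the binary mask from the previous snippet), the size threshold and the dilation-plus-erosion cleanup might look something like this; the minimum region size and the iteration counts are guesses, not values I have tested.

import numpy as np
from scipy import ndimage

# Label the binary mask produced earlier.
labels, n_pieces = ndimage.label(mask)

# Drop components smaller than a minimum pixel count (stray lines, specks);
# 500 pixels is just a guess at a sensible cutoff.
sizes = ndimage.sum(mask, labels, index=np.arange(1, n_pieces + 1))
keep = np.isin(labels, np.nonzero(sizes >= 500)[0] + 1)

# Dilation followed by erosion (a morphological closing) fills small gaps
# and smooths ragged piece boundaries.
clean = ndimage.binary_erosion(ndimage.binary_dilation(keep, iterations=2),
                               iterations=2)
labels, n_pieces = ndimage.label(clean)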
Third, you will need to rotate the pieces as well as translate them in order to fit them together. There is code in my libraries for rotating images, but there are pixel-level accuracy issues at the boundaries. The aspect ratio of the images is not 1 to 1, and this must be corrected for, or rotated pieces will not match. The directories contain an image of a square object (calibrate_square.tiff) to allow you to determine what the true aspect ratio is.
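As a rough illustration of the aspect-ratio correction and rotation (again with SciPy rather than my rotation code), something like the following would do; the calibration measurements and the dummy piece are invented for the example, and the real scale factor should come from calibrate_square.tiff.

import numpy as np
from scipy import ndimage

# 'piece' stands in for one extracted piece (e.g. one labeled region cut
# out of the cleaned mask); a tiny dummy array is used here.
piece = np.zeros((60, 60))
piece[10:50, 15:45] = 1.0

# Suppose the square calibration object measured 200 pixels wide by 220
# pixels tall: to make pixels square, stretch the x axis by 220/200.
aspect = 220.0 / 200.0
piece_sq = ndimage.zoom(piece, (1.0, aspect), order=1)

# Rotate by an arbitrary angle; reshape=True grows the output so the
# rotated piece is not clipped, and cval=0 pads with background.
rotated = ndimage.rotate(piece_sq, angle=37.0, reshape=True, order=1, cval=0.0)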
I have written library code for all the operations mentioned above, including connected components, and many more, so you probably do not need to write or find much low level image processing code. You just need to figure out how to link and use the ipp libraries. There is a program "image_calc" in ~nelson/bin/PCLinux that will let you experiment with all sorts of image processing operations on the images, to see what gives good extraction. You can look at the source in ~nelson/programs/src/integrated/bin to see how to link and use the routines once you have figured out what needs to be done.
For the most part, each piece interlocks with 4 others (except for the border pieces). In the wizard-of-oz puzzle, the pieces are laid out as a modified rectangular grid with 4 pieces coming together at corner junctions. The duck_bunny puzzle is not quite so simple, but there is still a regular underlying grid. The unseen test puzzle has a few additional distortions.
It might be advantageous to identify border pieces, just as human puzzlers often do, and start by assembling the border. Once this is in place, if the puzzle is built from the corners, each piece added is constrained on two sides, which greatly reduces the chance for error and the need for backtracking. To be efficient, the test for a fit probably needs to be constrained by some sort of normalization and/or indexing process so that you don't have to explicitly test each possible translation and rotation. Measurements of the "knobs" and "holes" might be useful for this, as might color classification (both strategies employed by humans).
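As a toy illustration of the indexing idea (with a deliberately oversimplified descriptor, not something from my libraries), one could bucket piece edges by a coarse key, here a quantized edge length plus a knob/hole flag, and look up candidate mates instead of testing every pairing.

from collections import defaultdict

# Hypothetical per-edge data: (piece_id, side, edge length in pixels, has_knob).
edges = [(0, "N", 103, True), (0, "E", 98, False), (1, "S", 101, False)]

def edge_key(edge_length_px, has_knob, bin_size=10):
    # Coarse key: quantized edge length plus whether the edge has a knob.
    return (round(edge_length_px / bin_size), has_knob)

# Bucket every edge by its key so candidate mates can be looked up directly.
index = defaultdict(list)
for piece_id, side, length, has_knob in edges:
    index[edge_key(length, has_knob)].append((piece_id, side))

def candidate_mates(length, has_knob):
    # A knob must meet a hole, so look up edges with the opposite sense
    # and a similar quantized length.
    return index.get(edge_key(length, not has_knob), [])

print(candidate_mates(104, True))   # -> [(0, 'E'), (1, 'S')]

A real descriptor would of course need to be richer (curve shape along the edge, color near the boundary, etc.), but the same lookup structure applies.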
A strategy that is NOT allowed is to somehow obtain an image of the completed puzzle (e.g. by assembling the puzzle by hand) and then match pieces against this image. The point of the exercise is puzzle solving without the box.