CSC242: Kinematics and Texture

Overview

In this individual project you (singular) will use Matlab to simulate a robot arm and think about the inverse kinematics problem. You will also use some image processing and pattern recognition techniques to classify texture in images.

Matlab

You should use Matlab for this exercise. There are books in the bookstore; I think the one to get is Introduction to Matlab for Scientists and Engineers by Dolores Etter (Amazon.com has a page on it).

Also there are reference books in the UG laboratory. One set is obviously a tutorial "getting started" set, the other is obviously a "reference" set, and soon to arrive is a reference for the image processing toolkit. Don't lose them. Of course, feel free to help each other learn Matlab, share neat features you discover, etc.

ALSO there is on-line documentation: go to www.mathworks.com and follow the Matlab link (to www.mathworks.com/products/matlab/), then go down and follow the Documentation link, and there you are!

Multi-Link Arm

I'd like you to model a 3- or more-link arm. We'll talk strictly about revolute (rotational) joints below, but if you want to model prismatic (sliding) joints that's easy too. You can do what you like here, but the easiest sort of linkage to visualize is planar. If you want to model a 3-D arm, that would be fun too. The forward kinematics problem is: given the joint angles (three of them for a three-link arm), where is the endpoint of the last link (where a gripper would be)? In Fig. 25.5, if you number the joints from 1 at the base to 6 at the tip, a 3-link planar arm would just use 2, 3, and 5. A basic 3-link 3-D arm would use 1, 2, and 3. You can assume that each link is a 1-dimensional line, and that the next link rotates about the previous link's endpoint.

It's pretty easy, given the rotation angles, to compute where the endpoint is. It is simply the cascaded coordinate transformation induced by the length of the links and the angles of rotation. So e.g. say the link lengths are L1, L2, L3, where L1 is fixed to the world, L2 is in the middle, connected to L1 and L3, and L3 is the "final" link, only connected to L2. Say there are three rotary joints that allow these links to rotate through 360 degrees. Call the rotations R1, R2, and R3. Say R1, R2, and R3 are 0 when L1, L2, and L3 are laid out along the X axis. Positive rotations move the joints counterclockwise. Then to compute where the endpoint of L3 is: start at (0,0), and push the endpoint out along the X-axis by L3. Then rotate that endpoint around the origin by R3. Then push the result out along the X-axis by L2 and rotate by R2. Repeat with L1 and R1, and you're done.

Represent points by 2- (or 3-) dimensional column vectors (matrices, in Matlab), with the X coordinate in the first row and the Y coordinate in the second. To rotate a point represented this way through T degrees, multiply its column vector on the left by the matrix

  cos(T) -sin(T)
  sin(T) cos(T)
  
If you have a little .m-type function Rot(T) that constructs a rotation matrix for T degrees (actually you should use radians, but whatever...), then (if I'm thinking correctly) the forward kinematics for our 3-link arm is
  [X; Y] = Rot(R1) * ([L1; 0] + Rot(R2) * ([L2; 0] + Rot(R3) * [L3; 0])),
  
where * is matrix multiplication, + is vector (matrix) addition, and each [Li; 0] is a column vector written in Matlab semicolon notation. For 3-D you need to be able to rotate around a different axis, which means you go to 3x3 matrices. See any relevant book, or me.

The output of this expression is the column vector giving the (x, y) location of the end of the robot arm. You will probably also want to keep track of the intermediate joints of the arm so that you can use the Matlab graphing facilities to draw the links instead of just the tip position. You should see how to use subsets of the above equation to compute the positions of the ends of the first and second links.
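To make this concrete, here is a minimal sketch of the forward kinematics as a pair of Matlab functions. The name fwdkin and its argument conventions are just my choices, not required ones; it returns the base, both intermediate joints, and the tip, so you can plot the whole arm at once.

  % Rot.m -- 2-D rotation matrix for angle T (in radians).
  function M = Rot(T)
  M = [cos(T) -sin(T); sin(T) cos(T)];

  % fwdkin.m -- forward kinematics sketch for a 3-link planar arm.
  % L = [L1 L2 L3] link lengths, R = [R1 R2 R3] joint angles.
  % Returns the base, the two intermediate joints, and the tip as
  % the four columns of P.
  function P = fwdkin(L, R)
  p1 = Rot(R(1)) * [L(1); 0];
  p2 = Rot(R(1)) * ([L(1); 0] + Rot(R(2)) * [L(2); 0]);
  p3 = Rot(R(1)) * ([L(1); 0] + Rot(R(2)) * ([L(2); 0] + Rot(R(3)) * [L(3); 0]));
  P  = [[0; 0] p1 p2 p3];

For example, after P = fwdkin([1 1 1], [pi/4 pi/6 -pi/3]), the call plot(P(1,:), P(2,:), 'o-') draws the whole arm.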

What is NOT so easy is the inverse problem: given (x, y), what are R1, R2, and R3? This is clearly a very useful problem, since it says: I want to reach this spot in space, so how do I set my joint angles to get there? You and I do this sort of thing all the time! In general there can be a single answer, several answers, or no answer.

For your assignment, you should implement a kinematic robot arm simulation, demonstrate that it works to your satisfaction, and then explore such things as its working volume (or area). The input to your simulation might be a "moveto(t1, t2, t3)" sort of command, which just sets the three joint angles to the indicated values. Your kinematics-only simulation will instantaneously go to the required configuration. Any assumptions about joint angle limitations are extra, if you want to do that. A "moveby(dt1, dt2, dt3)" command is sometimes fun (good for commanding trajectories): it increments joint angles rather than resetting them. Not much intellectual difference.

Your Matlab output can be just the position of the end effector (if you are interested in graphically illustrating the working volume, say); note that you can plot lots of points at once or sequentially plot things to make a movie. It is also fun to draw the arm, which amounts to plotting the positions of the joints and joining them with lines. This way you can make a movie of your arm reacting to commands; a crude animation sketch appears below.

You are to think about how you might solve the inverse kinematic problem. You might first experiment with different combinations of link lengths to see how you can easily create arms that can't reach every point in their working volume (a long first link and very short 2nd and 3rd, for instance). You might use Matlab to draw some pictures showing the arm moving around... maybe the 1st joint moves slowly, the 2nd moves a little faster, the 3rd faster? You can get crude animation just by plotting the results as you generate them. Save some plots for your writeup.
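Here is the sort of crude animation loop I have in mind, assuming the fwdkin sketch above (the sweep rates and axis limits are arbitrary):

  % Sweep the joints at different rates and redraw the arm each step.
  L = [1 1 1];
  for k = 0:200
    P = fwdkin(L, [0.01 0.02 0.04] * k);   % joint 1 slow, joint 3 fast
    plot(P(1,:), P(2,:), 'o-');
    axis([-3 3 -3 3]); axis square;
    drawnow;                               % force the figure to update
  end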

Back to inverse kinematics: can you solve the equations? If so, how; if not, why not? (See any robotics book.) Could you learn the answer? If so, how, and why not implement it? What occurs to me is just a look-up table, possibly with interpolation. You can systematically set known values of the three joint angles and remember the resulting position. Then you have to "invert" this mapping so you can find the angles given the position you want. You would probably want to interpolate between the closest positions if you don't want one you previously memorized exactly. The problem is interesting because there may be multiple solutions, and the interpolation problem is not linear.
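A minimal sketch of the look-up-table idea, again assuming fwdkin from above. The grid step is an arbitrary choice, and no interpolation is done; the query just returns the nearest memorized entry:

  % Memorize (angles -> tip position) over a coarse grid of angles.
  L = [1 1 1]; step = pi/12;
  angles = []; tips = [];
  for r1 = 0:step:2*pi
    for r2 = 0:step:2*pi
      for r3 = 0:step:2*pi
        P = fwdkin(L, [r1 r2 r3]);
        angles = [angles; r1 r2 r3];       % one row per memorized pose
        tips   = [tips;   P(:,4)'];        % the corresponding tip (x,y)
      end
    end
  end
  % "Invert" by nearest neighbor: angles whose tip is closest to (x,y).
  xy = [1.5 0.5];
  d = sum((tips - repmat(xy, size(tips,1), 1)).^2, 2);
  [dmin, i] = min(d);
  Rguess = angles(i,:);                    % one of possibly several answers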

If you want to go a little farther, you might explore "trajectory generation", in which you want a sequence of joint angle settings that take you through a desired trajectory (just a straight line is plenty hard enough). In this problem you can run across "singularities", where you can't continuously get from one point to the next, even if they are very close, without lots of joint motion.

In any event, please give me your best idea about how to solve the inverse kinematic problem (feel free to consult any robotics book you can find). The more detailed and the more implemented your ideas, the better. Trajectories are a natural extension that add lots of interest.

Neural Nets

You might decide to construct a "neural" net to learn this inverse kinematic mapping. I recommend implementing the algorithm of Fig. 19.14 with sigmoidal activation functions. NNs are very naturally expressed in arrays, so this is a natural application. Your code should be general as regards the number of input, output, and hidden units, their connectivity, and the learning rate (alpha in R&N). You may even want more than one layer of hidden units. For the inverse kinematics problem, I figure two inputs (x, y) and 3 outputs (R1, R2, R3). In training, you supply an (x, y) and the net comes out with an R1, R2, R3 (presumably random at first). You use the R's to construct the actual (x, y), and you thus have an error with which to do back propagation. This is risky; I'm not sure there's enough structure to have the net converge.
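If propagating the error back through the arm seems too hairy, here is a sketch of a simpler supervised variant: generate training pairs with forward kinematics and train the net directly on (x, y) -> scaled angles. The layer size, learning rate, and all names are my own arbitrary choices, and the multiple-solutions problem remains (the net tends to average conflicting answers):

  % One-hidden-layer sigmoid net trained by back propagation.
  H = 20; alpha = 0.5;                     % hidden units, learning rate
  W1 = 0.1*randn(H, 2); b1 = zeros(H, 1);  % input  -> hidden weights
  W2 = 0.1*randn(3, H); b2 = zeros(3, 1);  % hidden -> output weights
  L = [1 1 1];
  for trial = 1:20000
    R  = 2*pi*rand(1,3);                   % random "true" joint angles
    P  = fwdkin(L, R);                     % fwdkin from the earlier sketch
    in = P(:,4) / sum(L);                  % tip position, roughly scaled
    t  = R' / (2*pi);                      % target angles scaled to [0,1]
    h  = 1 ./ (1 + exp(-(W1*in + b1)));    % forward pass (sigmoids)
    o  = 1 ./ (1 + exp(-(W2*h  + b2)));
    do = (o - t) .* o .* (1 - o);          % output deltas (squared error)
    dh = (W2' * do) .* h .* (1 - h);       % back-propagated hidden deltas
    W2 = W2 - alpha * do * h';  b2 = b2 - alpha * do;
    W1 = W1 - alpha * dh * in'; b1 = b1 - alpha * dh;
  end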

Texture Classification

The idea here is that you will obtain images, perhaps off the net, perhaps locally, and you will apply feature detectors to them that might be indicative of different sorts of texture. We will supply you with several ideas and you can of course make up your own. The output feature values will be collected into a "feature vector", and you will perform "supervised learning", during which your features ideally will cluster together in clumps in feature space, one cluster for each kind of texture. Then, given an unknown texture, you extract its features and see what cluster the resulting feature vector falls in, and the hope is that you can thus identify the unknown input.

Texture Images

There are lots of sources of texture inputs. One is on the csug network at /u/brown/images/*.pgm. Use xv to convert formats if need be. There are several classes there, but view the images first, since the images for the same class often do not "look alike".

Thanks to Amit Singhal for pointers to other useful image archives:

MIT's VisionTexture database, courtesy of Roz Picard. It has 100+ homogeneous texture images and a large number of real-world images with multiple textures. Very comprehensive instructions on downloading and using.

CMU's library of image databases page. They have links to a ton of databases for all sorts of things. I'm not sure what still works and what doesn't, but you should find what you are looking for there...

ECVNet's image database page. Similar to CMU's above, but it has some different image sets.

All of the above are non-copyrighted public domain image databases and should be easily downloadable.

Again, do some experiments and see how well you can classify textures.

Texture Features

Texture features are what distinguish one texture from another. Thus it is a good idea to look at the data to see what might make sense (in greyscale images, color features do not help, for instance). A good start at generating texture features is to apply a filter to the image (see p. 732 in R&N, pp. 41-43 in the Matlab Image Processing Toolbox manual, or the class notes). You don't need to use the IP toolbox for this assignment except to load images, but it might help.

I recommend initially just finding edges at several resolutions. Use filters like the following, which take differences in the horizontal direction (and so respond to vertical edges):

  -1 1 , 
  
  -1 1
  -1 1 ,
  
  -1  0 1
  -2  0 2
  -1  0 1 ,
  
  
Also use their transposes, which respond to horizontal edges; then combine the dx and dy outputs to get edge strength sqrt(dx^2 + dy^2) and orientation atan2(dy, dx).
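As a sketch, assuming im is a greyscale image already loaded as a double-valued matrix (conv2 is built into Matlab, so the IP toolbox is not needed here):

  sx = [-1 0 1; -2 0 2; -1 0 1];           % horizontal differences
  sy = sx';                                % vertical differences
  dx = conv2(im, sx, 'same');
  dy = conv2(im, sy, 'same');
  strength    = sqrt(dx.^2 + dy.^2);       % edge strength
  orientation = atan2(dy, dx);             % edge orientation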

Given edge data, you can compute features like "edge density", or how much "edge-ness" (total strength) there is in a patch (16x16 or 32x32, say) at different resolutions.

Also you can pay attention to the orientation information, perhaps by having features that encode the average orientation of edges within a patch ... maybe four features for "N-S", "NE-SW", "E-W" and "NW-SE" directionality, each at different resolutions.

Another simple but useful feature may be the dynamic range (max - min) of grey level within a patch, or the variance of grey level within a patch (that is, the mean squared difference between the pixel values and their mean value).
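A sketch of such patch features, assuming im and strength from the sketch above; the 16x16 patch size is just one choice:

  ps = 16;                                 % patch size
  [rows, cols] = size(im);
  features = [];
  for r = 1:ps:rows-ps+1
    for c = 1:ps:cols-ps+1
      patch  = im(r:r+ps-1, c:c+ps-1);
      spatch = strength(r:r+ps-1, c:c+ps-1);
      f = [sum(spatch(:)), ...                % edge density
           max(patch(:)) - min(patch(:)), ... % dynamic range
           var(patch(:))];                    % grey-level variance
      features = [features; f];               % one row per patch
    end
  end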

Use other filters as they occur to you. You can make a "spot" detector, for instance:

  
    -1   -4   -1
    -4   20   -4
    -1   -4   -1
  

Think of filters as little patterns and you can design your own useful ones...

There are many texture features in the literature. You can make them up yourself, find them yourself in computer vision books, or ask CB.

For starters, I would pick a well-behaved pair of images, one with lots of edges, one without, use two features only, and plot and display them so you can see whether you are getting the clustering you want. Generally, you will represent the output of your feature detectors as an N-long vector (matrix). You may want to keep the dynamic ranges of all your features about the same: if one runs from 0 to 1 and another from -1000 to 1000, the latter will dominate distance calculations.
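One way to equalize the ranges (a sketch; F is a patches-by-features matrix like the one built above):

  % Rescale each feature (column) into [0,1] so no one feature
  % dominates Euclidean distances.
  Fmin = repmat(min(F), size(F,1), 1);
  Fmax = repmat(max(F), size(F,1), 1);
  Fn = (F - Fmin) ./ (Fmax - Fmin);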

Clustering

The two simplest things to do are: A) represent the cluster by a single vector (its average vector, or center of mass), and B) represent the cluster by all the vectors in it.

In the first case, for each supervised trial, update the mean value of the relevant average vector; there's a cute little formula for incrementally updating a mean that you can work out. Then when you want to classify an unknown vector, compute the "closest" average feature vector and return its class as your answer. There are more or less sophisticated ways to measure closeness, but for now just use the Euclidean distance (the sqrt of the sum of squares of the vector element differences, or, easier, the sqrt of the dot product of the difference vector with itself).
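That formula, and the nearest-mean classification, might look like this (a sketch; means is d-by-nclasses with one column of running class means per class, counts holds the number of training vectors seen so far for each class, and x and f are column vectors):

  % Incrementally update class k's mean with a new training vector x:
  % after n samples, m + (x - m)/n is still the mean.
  counts(k) = counts(k) + 1;
  means(:,k) = means(:,k) + (x - means(:,k)) / counts(k);

  % Classify an unknown vector f by the closest class mean.
  d = zeros(1, size(means, 2));
  for k = 1:size(means, 2)
    dv   = f - means(:,k);
    d(k) = sqrt(dv' * dv);                 % Euclidean distance
  end
  [dmin, class] = min(d);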

In "nearest neighbor" classification, you compute this distance from your unknown feature vector to EVERY feature vector you have seen so far. So this means lots of memory and flops. For N classes you keep N matrices, each filled with feature vectors for the class, and you choose the class with the closest feature vector to your incoming unknown.

There is also k-nearest neighbor classification and several other methods (including, again, "neural" nets). The real key to success is getting the right FEATURES that will separate your clusters nicely.

You might want to create your clusters in a separate phase that writes the relevant information (one vector per class or many) to disk files. Then you can use them to do classification of unknown texture patches. At all stages you should make sure you understand your output and believe it is correct.

What to Hand In

I think you should have the idea by now. For the kinematics part, explain what you did and show off your command of Matlab and its graphics. Demonstrate your simulation doing forward kinematics (Matlab plots are good here). Describe your inverse kinematics algorithm or (better) demonstrate something that works and explore what problems it runs into.

For the textures, start with 2 classes and a few features (maybe just one?) and see how it goes. Plot things out to get a feel for the shape of the clusters, the usefulness of your features, etc. Move up to more, and more difficult, texture discriminations; try (and invent) features and show how they work. A good final sort of statistic is a "confusion matrix" that has the texture classes as row and column labels, where each element contains the number of times the row-label texture was classified as the column-label texture. Ideally you get a matrix with N down the main diagonal and zeros elsewhere, given N trials of each texture.
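Building the confusion matrix is a short loop (a sketch; actual(i) and predicted(i) are the true and assigned class labels, 1..N, for trial i):

  C = zeros(N);                            % N = number of texture classes
  for i = 1:length(actual)
    C(actual(i), predicted(i)) = C(actual(i), predicted(i)) + 1;
  end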

Due Date and Grading

Due Date: As per WebCT assignment.

Total of 140 points: Content: 100 points, Presentation: 40 points.
