Write a simple function gauss_reduce() with prototype

    function solution_vec = gauss_reduce(param_mat, const_vec)

to solve a system of linear equations using direct Gaussian reduction. This function takes a square matrix of parameters and a vector of constants, and returns a solution vector. (There are other possibilities, e.g. a function that takes a single matrix representing both the coefficients and the constant vector, but let's not worry about that for now.) Your function should be able to take systems of any size (up to practical limits). It should check that the matrix is square and that the constant vector has the appropriate number of elements, printing an error message and returning the zero vector if these solvability conditions are violated. Your program should also print an error message and return the zero vector if it runs into a zero pivot, which might indicate a singular system, though it does not necessarily imply one.
In general, zero pivots may not be obvious given the initial coefficient matrix. If the top-left element, the first pivot, is zero, that IS obvious, but the process of reducing the first column may set the (2,2) element (the second pivot) to zero when it was non-zero before, and so on. The potential pivot element must therefore be checked as the first step in the column-reduction operation (the reduce_column() function described below).
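For instance, here is a small system (chosen just for illustration) that is perfectly solvable but develops a zero pivot one step into the reduction:

    A = [1 1 0; 1 1 1; 0 1 1];    % det(A) = -1, so the system is nonsingular
    % Reducing column 1 subtracts row 1 from row 2, and then...
    A(2, :) = A(2, :) - A(1, :);  % ...A(2,2) is now zero: a surprise zero pivot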
What to turn in for this part:
Use rand and ones to make a parameter matrix A, a 4x4 random matrix with elements between -1 and 1. It turns out that it's really unlikely these matrices will cause any zero-pivot problems. Also make a constant vector B, a 4x1 similarly-random column vector. Run your gaussian-elimination solver on the (A, B) system to get your solution vector X. Check X two ways: one check is direct substitution (sketched below); the second uses Matlab's built-in solver, described shortly.
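A minimal sketch of the setup and the substitution check (rand returns values in (0, 1), so scale and shift):

    A = 2 * rand(4, 4) - ones(4, 4);   % uniform in (-1, 1)
    B = 2 * rand(4, 1) - ones(4, 1);
    X = gauss_reduce(A, B);
    norm(A * X - B)                    % should be nearly zero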
Why write your own solver if one exists already? First, it's a well-known coding exercise at about the right level (it's assigned in the text, for instance). More importantly, you have actually written a routine that's not built-in, and that's a non-pivoting Gaussian elimination routine. "Normally" we always want to pivot, but the extra credit part of this assignment (below) tries to establish exactly why, since there seem still to be open questions about it. It's really not unusual to have to build your own version of commonly-available functions just so you can vary, instrument (collect statistics), and experiment with them.
Now introduce a zero pivot deliberately, say by setting A(1,1) to 0 (your solver should then report the zero-pivot error), and swap the first two rows of A and B to get Aswap and Bswap. Try your gaussian-elimination program on the Aswap and Bswap system. It should run OK. Recall that row-swapping is an Elementary Row Operation (ERO) that simply re-arranges the order of the same equations, so the solution of the swapped system is the same as that of the unswapped system. Congratulations, you've gotten over a zero-pivot problem. Remember the answer (X) vector.
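One hedged sketch of this step (the A(1,1) = 0 choice is just one way to create the zero pivot):

    A(1, 1) = 0;                 % force an obvious zero pivot
    Aswap = A([2 1 3 4], :);     % swap rows 1 and 2 of A
    Bswap = B([2 1 3 4]);        % and the matching rows of B
    Xswap = gauss_reduce(Aswap, Bswap);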
One bad thing about built-in Matlab operations is that we have no idea what's going on inside. One good thing is that they usually are reliable, "have seen it all", and are prepared for nasty inputs. Since X = A^(-1) B, in some metaphorical sense the solution is B "divided" by A, which is just what Matlab's backslash (left-division) operator computes. So, try Xswap = Aswap\Bswap;, which should work since you fixed the zero-pivot problem. Then similarly try X = A\B. What can you conclude about the built-in \ operator?
In fact, your row swap manually performed one partial-pivot operation on the first column of A. Partial pivoting is the use of row swaps to avoid zero pivots. Before elimination of the below-diagonal elements of a column begins, partial pivoting swaps the row holding the largest-magnitude element at or below the main diagonal of that column up to the diagonal position, so that element becomes the pivot. The next (extra credit) part of the assignment has you implement partial pivoting and investigate whether it improves, as is often claimed but not proved, the numerical accuracy of solutions.
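The pivot selection itself is only a couple of lines; a sketch, assuming the swap_rows() helper described under Coding Hints below:

    % Find the largest-magnitude element at or below the diagonal in this
    % column, and swap its row up to the diagonal so it becomes the pivot.
    [~, offset] = max(abs(param_mat(col:end, col)));
    pivot_row = offset + col - 1;
    if pivot_row ~= col
        [param_mat, const_vec] = swap_rows(param_mat, const_vec, col, pivot_row);
    end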
Coding Hints
Your gaussian-elimination program should use secondary functions,
called by your main function, as a structuring mechanism.
A good place to start is to note that after the initial parameter
checking, the process consists of two main steps.
First, reduce the coefficient matrix to upper triangular form
(modifying the constant vector in parallel).
Second, perform the back-substitution to obtain the solution vector
from the upper triangular matrix and the (modified) constant vector.
We can write a secondary function to perform each step.
Our main function thus starts out looking like this:
    function solution_vec = gauss_reduce(param_mat, const_vec)
        % check for consistent size of param_mat and const_vec
        ...
        % reduce coefficient matrix to upper triangular form, modifying the
        % constant vector appropriately
        [ut_mat, new_const_vec] = ut_reduce(param_mat, const_vec);
        % Compute the solution vector using back substitution
        solution_vec = back_subst(ut_mat, new_const_vec);
        % we are done
    end
The ut_reduce() function uses its own subsidiary functions.
Specifically, you should write a function called
reduce_column() with prototype

    function [new_mat, new_const_vec] = reduce_column(param_mat, const_vec, column)

that returns a modified matrix (and constant vector) in which the input matrix has been reduced so that all the elements below the (col, col) diagonal element are zero. Initially it checks to see that the (col, col) element is NOT zero, of course! This can be used iteratively to modify the original matrix and constant vector to produce an upper triangular form. Note that using a function to modify the matrix and vector passed as parameters in the following way,
    [cur_mat, cur_vec] = reduce_column(cur_mat, cur_vec, cur_col);

is perfectly legitimate (as long as you don't need the old partial solutions) and a good way of structuring the process.
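Put together, the whole upper-triangular reduction might look something like this sketch (the loop structure is an assumption about one reasonable design, not the only one):

    function [ut_mat, new_const_vec] = ut_reduce(param_mat, const_vec)
        % Reduce each column in turn; each call zeroes the elements
        % below the (cur_col, cur_col) diagonal element.
        [cur_mat, cur_vec] = deal(param_mat, const_vec);
        for cur_col = 1:size(cur_mat, 1) - 1
            [cur_mat, cur_vec] = reduce_column(cur_mat, cur_vec, cur_col);
        end
        ut_mat = cur_mat;
        new_const_vec = cur_vec;
    end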
The reduce_column() function might itself call yet another function
to reduce a specified column in a row to 0 by adding a multiple of
another row, returning, again, both a modified coefficient matrix and a
modified constant vector. The prototype would look like

    function [new_mat, new_const_vec] = reduce_row_at_col(param_mat, const_vec, ...
                                                          col, row_added, row_reduced);

This function adds a multiple of row_added to row_reduced so that the specified col in row_reduced is 0. The same operation is carried out on the corresponding position of the constant vector. (Basically like the "ERO" Attaway describes at the bottom of page 343.)
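A minimal sketch of such a function, assuming the pivot entry param_mat(row_added, col) is nonzero (reduce_column() checks this first):

    function [new_mat, new_const_vec] = reduce_row_at_col(param_mat, const_vec, ...
                                                          col, row_added, row_reduced)
        % Multiplier chosen so the (row_reduced, col) entry becomes zero.
        factor = -param_mat(row_reduced, col) / param_mat(row_added, col);
        new_mat = param_mat;
        new_const_vec = const_vec;
        new_mat(row_reduced, :) = new_mat(row_reduced, :) + factor * new_mat(row_added, :);
        new_const_vec(row_reduced) = new_const_vec(row_reduced) + factor * new_const_vec(row_added);
    end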
The back-substitution function can similarly be constructed using a
subsidiary function that takes the upper triangular matrix, the
(modified) constant vector, and a partially-filled-in-from-the-end
solution vector (entries from col + 1 to the end already known),
and produces a modified partial solution vector with entry col now filled in.
The prototype would look like

    function new_part_solution_vec = back_subst_for_col(ut_mat, new_const_vec, ...
                                                        column, part_solution_vec)

Note that if you call for a column to be filled without previously filling in all the higher columns, the function will probably not work as desired, so you need to be a little careful how you use it. Since it is your function, and not one you are publishing to the world, this is OK. Even so, you should leave yourself a note in comments.
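One hedged sketch, under the assumption that part_solution_vec is a column vector whose entries column+1 through n are already correct:

    function new_part_solution_vec = back_subst_for_col(ut_mat, new_const_vec, ...
                                                        column, part_solution_vec)
        % CAUTION: only valid if entries column+1..n are already filled in.
        n = size(ut_mat, 1);
        known = ut_mat(column, column+1:n) * part_solution_vec(column+1:n);
        new_part_solution_vec = part_solution_vec;
        new_part_solution_vec(column) = ...
            (new_const_vec(column) - known) / ut_mat(column, column);
    end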
Despite all the above verbiage, the amount of programming needed is small. CB did down through Dessert (all of Main Course plus partial pivoting, not the experiments) in 40 lines (32 for elimination with pivoting, 8 for back-substitution). He didn't use exactly these functions, but his are similar in spirit and small. Most are 4-liners; there's a 7 and a couple of 6's, and the longest is 8, for back-substitution. This doesn't count his dense and helpful comment lines, of course. There are some terse but not-well-structured solutions on the web too.
For the partial-pivoting version described above, write a function with prototype

    function [new_mat, new_const_vec] = pivot(param_mat, const_vec, col);

that swaps the row containing the largest-magnitude element at or below the diagonal of column col up to the diagonal position. A function to swap a specified pair of rows (modifying both the coefficient matrix and constant vector) with prototype

    function [new_mat, new_vec] = swap_rows(mat, vec, row1, row2);

will be useful.
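swap_rows() itself is just a few lines of index juggling; a minimal sketch:

    function [new_mat, new_vec] = swap_rows(mat, vec, row1, row2)
        new_mat = mat;
        new_vec = vec;
        new_mat([row1 row2], :) = mat([row2 row1], :);
        new_vec([row1 row2]) = vec([row2 row1]);
    end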
Also, write a function with prototype

    function [param_mat, const_vec] = random_test_case(n)

to generate random test cases of a specified size n, specifically coefficient matrices and constant vectors with values that are floating point values (not integers) between -100.0 and 100.0.
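A minimal sketch (as before, rand gives uniform floats in (0, 1), so scale and shift):

    function [param_mat, const_vec] = random_test_case(n)
        param_mat = 200 * rand(n, n) - 100;   % uniform in (-100, 100)
        const_vec = 200 * rand(n, 1) - 100;
    end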
Now that you've got your basic tools, the research begins.
Of course you only care about statistics over those 100-long trials. For each size of perturbation, you're presumably interested in the difference pivoting makes to the accuracy of the answers. So we'd expect a total of maybe 20 numbers as your final output: something like the mean and standard deviation of the absolute values of the 100 errors, for each of the 5 perturbation sizes and the two pivoting conditions.
Twenty numbers still sounds like a lot, so a good idea is to make one plot with two data series. One series shows means with std error-bars (Matlab can plot those; use help or the documentation) for the pivoting case; the other gives the values for the no-pivoting case. So we'll see a single visual with all the information, readily comparable, where each "data point" represents 100 or so experiments.
Now we think it's pretty likely that you won't get a significant result (interesting differences in the means and stdevs) with perturbations in the range [-1.0, 1.0], so if not, try smaller perturbations -- range [-0.01, 0.01]? [-0.00001, 0.00001]? etc. -- to see if you can uncover anything with them. If you're getting into the 160 spirit, you'll recognize a superb opportunity to exploit this idea and just go ahead and run a bunch of perturbations, each smaller by a factor of 10, and make a semilog plot of your results. This is not only interesting, but would add substantial visual 'reader appeal' (or 'grader appeal', if you care about that) to your writeup.
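A sketch of what that plot might look like in code; the variable names (pert_sizes, piv_mean, and so on) are assumptions standing in for whatever your experiment produces:

    pert_sizes = 10 .^ (-8:0);                          % assumed factor-of-10 perturbation sizes
    errorbar(pert_sizes, piv_mean, piv_std, '-o');      % means +/- std, with pivoting
    hold on;
    errorbar(pert_sizes, nopiv_mean, nopiv_std, '-s');  % and without pivoting
    set(gca, 'XScale', 'log');                          % semilog axis for the factor-of-10 steps
    xlabel('perturbation size');
    ylabel('mean absolute error');
    legend('pivoting', 'no pivoting');
    hold off;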
A small aside. Make sure that an error exit from your simple program because it encountered a zero pivot (or any other reason) does not contaminate the data. The probability of a zero pivot occurring is quite low, but if it did occur, the zero return vector would bollix the mean. Re-run the experiment (with new random values) if this ever happens. Actually, if you find this occurring on random inputs, check your programs (and random data generation) for errors, as the probability is really extremely low.
Test both the simple program and the version using partial pivoting. How do the mean and standard deviation of the error compare to those of the (mostly) well-conditioned random systems? How does the effect of partial pivoting compare? Again, if a random 'noise' addition in the 1% range doesn't yield any discernible differences in behavior, try a smaller range... 0.01%? 0.00001%? etc. No guarantees, but it IS research and it DOES give you more to write about and it's easy to do. Once again, using a bigger set of random ranges, probably multiples of 10 as above, is a chance to show off your plotting prowess and make a scientific point visually, always better than a table or a bunch of numbers stuck in a paragraph of text.
For more, repeat the perturbation experiment (the first one, not the one above using ill-conditioned matrices) using 100 randomly generated systems of sizes 5, 10, 20, 50, 100, [200, 500, 1000] (the last 3 might take too much time, depending on your implementation). Is the mean error correlated with the size of the system? Attempt to explain.
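A hypothetical driver skeleton for the size-scaling runs; the residual norm here is just a stand-in for whatever error measure you used in the earlier experiments:

    sizes = [5 10 20 50 100];            % add 200, 500, 1000 if runtime permits
    mean_err = zeros(size(sizes));
    for k = 1:numel(sizes)
        errs = zeros(100, 1);
        for t = 1:100
            [A, b] = random_test_case(sizes(k));
            x = gauss_reduce(A, b);
            errs(t) = norm(A * x - b);   % stand-in error measure (an assumption)
        end
        mean_err(k) = mean(errs);
    end
    semilogx(sizes, mean_err, '-o');     % is error correlated with size?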
See the Universal Hand-In Guide.
Briefly, for the Code component of this assignment, you'll need a .zip (not .rar) archive with code files and README.
Submit the writeup as a single, non-zipped .PDF file.
Submit before the drop-dead date for any credit and before the due date for partial-to-full (or extra!) credit.
Check immediately to see that BB got what you sent!