Matrices are usually introduced as rectangular arrays of numbers, along with what looks like a sensible notion of addition, and a somewhat peculiar notion of multiplication.
Individual matrices are often denoted by upper case letters, and have an associated size. A matrix A with m rows and n columns is said to be m-by-n, where m and n are positive integers. A 1-by-1 matrix is defined, and it acts in some ways like a single number. 1-by-n and n-by-1 matrices are often used to represent row and column vectors, respectively. Vectors, even though they can be thought of as matrices, are often denoted by lower case letters, since there is often an important semantic distinction between objects represented by vectors and objects represented by full matrices.
The individual entries of a matrix are generally numbers, or scalars in mathematical parlance. They are most often referred to by subscripted lower case letters; for example, a_ij refers to the element of matrix A at row i, column j. Other notations such as A_ij and A[i,j] are also sometimes used. The "numbers" may be integer, rational, real, or complex values (and are sometimes more exotic objects). In this course, we generally consider matrices to contain either real or complex values.
[ 7  3  2 ]      [ -1.23  2.71  6.43  8.34 ]      [  2 + 3j ]
[ 1 17 16 ]      [  2.22  3.14 -2.71  1.41 ]      [  8 - 7j ]
[ 4 23 13 ]      [  7.66 -1.77 -1.49  3.27 ]      [ -4 + 5j ]
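For readers who would like to experiment with these objects on a computer, here is a minimal sketch using Python's NumPy library (a convenient choice; the notes themselves do not assume any particular software). It builds the three example matrices above and shows how sizes and individual elements are accessed.

import numpy as np

# A 3-by-3 integer matrix, a 3-by-4 real matrix, and a 3-by-1 complex column vector,
# mirroring the examples above.
A = np.array([[7,  3,  2],
              [1, 17, 16],
              [4, 23, 13]])
B = np.array([[-1.23,  2.71,  6.43, 8.34],
              [ 2.22,  3.14, -2.71, 1.41],
              [ 7.66, -1.77, -1.49, 3.27]])
v = np.array([[ 2 + 3j],
              [ 8 - 7j],
              [-4 + 5j]])

print(A.shape)   # (3, 3): m rows by n columns
print(A[0, 2])   # the element a_13 = 2 (NumPy counts from 0, so [0, 2] is row 1, column 3)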
Addition is defined between matrices of the same size. Specifically, the addition of two m-by-n matrices produces a third m-by-n matrix whose elements are the sums of the numbers at corresponding locations in the addend matrices. That is, C = A + B is defined by c_ij = a_ij + b_ij.
This is sometimes referred to as "pointwise" addition or "addition by components". Because it is defined by addition of components, matrix addition is commutative, A + B = B + A, and associative, (A + B) + C = A + (B + C), just like ordinary addition of numbers.
[ 3  4  5 ]   [ 2 -1 -1 ]   [ 5  3  4 ]
[-1  7 -2 ] + [ 7  0 -4 ] = [ 6  7 -6 ]
[ 5  0 -3 ]   [ 4  8  5 ]   [ 9  8  2 ]
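A quick check of the addition example in NumPy (again just a sketch, not part of the required material):

import numpy as np

A = np.array([[ 3, 4,  5],
              [-1, 7, -2],
              [ 5, 0, -3]])
B = np.array([[ 2, -1, -1],
              [ 7,  0, -4],
              [ 4,  8,  5]])

print(A + B)                          # pointwise sums, as in the example above
print(np.array_equal(A + B, B + A))   # True: matrix addition is commutative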
Pointwise multiplication can be defined similarly, but this turns out not to be a very useful concept. Multiplication of a matrix by a scalar is a more frequently used concept, and is achieved by multiplying every matrix element by the scalar. That is, C = kA, where C and A are matrices and k is a scalar, is defined by c_ij = k a_ij.
       [ 1.0  2.0 ]   [ 3.14  6.28 ]
3.14 * [ 3.0 -1.0 ] = [ 9.42 -3.14 ]
       [-3.0 -2.0 ]   [-9.42 -6.28 ]
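The same example as a sketch in NumPy, where multiplying an array by an ordinary Python number performs exactly this scalar multiplication:

import numpy as np

A = np.array([[ 1.0,  2.0],
              [ 3.0, -1.0],
              [-3.0, -2.0]])

print(3.14 * A)   # every element is multiplied by the scalar 3.14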
Multiplication of matrices by matrices is carefully defined so that matrix multiplication can be used to represent systems of linear equations. It so happens that once this is done, matrix multiplication can be used to represent other relationships as well. To provide some intuition behind the definition, recall that linear equations take the general form

a_1 x_1 + a_2 x_2 + ... + a_n x_n = c
Note that the left side can be viewed as the dot product between two n-component vectors a and x. Matrix multiplication is defined so that element (i,j) of the product AB is the dot product of the i-th row of A with the j-th column of B. Note that this implies that the number of columns of A must equal the number of rows of B. Thus if A is an m-by-k matrix, and B is a k-by-n matrix, then the product C = AB is an m-by-n matrix defined by

c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_ik b_kj
[ 1 2 3 ]   [ 1 1 1 ]   [-2 3 8 ]
[ 3 2 1 ] * [ 0 1 2 ] = [ 2 5 8 ]
[ 1 0 1 ]   [-1 0 1 ]   [ 0 1 2 ]
            [ 1 ]
[ 1 2 3 ] * [ 2 ] = [ 14 ]
            [ 3 ]
[ 1 ]                   [ 1  2  3  4  5 ]
[ 2 ]                   [ 2  4  6  8 10 ]
[ 3 ] * [ 1 2 3 4 5 ] = [ 3  6  9 12 15 ]
[ 4 ]                   [ 4  8 12 16 20 ]
[ 5 ]                   [ 5 10 15 20 25 ]
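The three examples above can be reproduced with NumPy's matrix-multiplication operator @ (a sketch; any linear algebra package behaves the same way):

import numpy as np

A = np.array([[1, 2, 3],
              [3, 2, 1],
              [1, 0, 1]])
B = np.array([[ 1, 1, 1],
              [ 0, 1, 2],
              [-1, 0, 1]])
print(A @ B)                     # element (i,j) is the dot product of row i of A with column j of B

row = np.array([[1, 2, 3]])      # 1-by-3 row vector
col = np.array([[1], [2], [3]])  # 3-by-1 column vector
print(row @ col)                 # the "inner" product: a 1-by-1 matrix, [[14]]

u = np.array([[1], [2], [3], [4], [5]])
print(u @ u.T)                   # the outer product: a 5-by-5 matrix whose rows are multiples of one another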
The second and third examples represent what are sometimes called respectively, inner and outer products of vectors (in this case, with themselves). Calling the second example an inner product is a slight misnomer, as the result is a 1-by-1 matrix, not a scalar, which is subtly different (see the discussion below). In the third example, the rows (and columns) are multiples of each other. This reflects the fact that we really did not start with much information, and even though we produced a big matrix, it is, in a sense that can be made precise, redundant.
With this definition of matrix multiplication, the form AX = C (often written Ax = c), where A is an m-by-n matrix of coefficients, X is an n-by-1 matrix representing a column vector of unknowns, and C is an m-by-1 matrix representing a column vector of constant terms, is defined and generates a system of m equations in n unknowns when the multiplication is carried out symbolically.
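Carrying out the multiplication symbolically is easy to see with a computer algebra system; here is a sketch using SymPy (an illustrative choice, not something the course relies on), with an arbitrary 2-by-3 coefficient matrix:

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
A = sp.Matrix([[2, 1, -1],
               [1, 3,  2]])       # a 2-by-3 coefficient matrix (chosen arbitrarily)
x = sp.Matrix([x1, x2, x3])       # a 3-by-1 column of unknowns

print(A * x)   # Matrix([[2*x1 + x2 - x3], [x1 + 3*x2 + 2*x3]]): the left sides of 2 equations in 3 unknowns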
Matrix multiplication is associative, (AB)C = A(BC) (try proving this as an interesting exercise), but it is NOT commutative, i.e., AB is not, in general, equal to BA, or even defined, except in special circumstances.
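Both claims are easy to check numerically; a sketch with small, arbitrarily chosen matrices:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[2, 0],
              [0, 3]])

print(np.array_equal((A @ B) @ C, A @ (B @ C)))   # True: multiplication is associative
print(np.array_equal(A @ B, B @ A))               # False: AB and BA differ for these matrices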
One such circumstance is the case of 1-by-1 matrices, for which addition and multiplication act just like addition and multiplication of the contained element. A formal way of stating this is to say that the algebraic systems of addition and multiplication of scalars, and addition and multiplication of 1-by-1 matrices of those scalars, are isomorphic under the natural mapping.
Note that this is not the same as saying that 1-by-1 matrices are the same as scalars. They are not. Specifically, multiplication of an arbitrary matrix by a scalar is defined, but multiplication of an arbitrary matrix by a 1-by-1 matrix is not (the sizes will not match in general). However, note that a column vector C can be multiplied on the right by a 1-by-1 matrix [k], C[k], and a row vector R can be multiplied on the left, [k]R. The result in this case corresponds to scalar multiplication. This sometimes results in shorthand notation where it looks as if a 1-by-1 matrix has been treated as a scalar. In fact, the whole issue is sometimes swept under the rug, and expressions such as x^t y, where x and y are column vectors of the same size, are used to represent the dot product x ⋅ y, which has a scalar value. The "t" superscript means matrix transpose, which is what you get when you exchange the rows and columns of a matrix, so an m-by-n matrix becomes an n-by-m matrix.
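The distinction is visible in NumPy, where the transpose-based product of two column vectors yields a 1-by-1 array while the dot product yields a plain scalar (a sketch):

import numpy as np

x = np.array([[1], [2], [3]])   # 3-by-1 column vectors
y = np.array([[4], [5], [6]])

print(x.T @ y)                         # [[32]]: a 1-by-1 matrix
print(np.dot(x.ravel(), y.ravel()))    # 32: the scalar dot product x . y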
The definitions of matrix addition and multiplication allow square matrices of the same size to be added and multiplied to produce a square matrix also of the same size. This suggests the idea that matrices could be considered as a generalization of the concept of a number. We can follow this up by seeing how many analogous properties we can find.
The 3-by-3 identity matrix:

[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

The identity matrix also preserves vectors, Ix = x, as well as any other matrix for which multiplication with it is defined.
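A quick NumPy illustration of the identity matrix's behavior (a sketch):

import numpy as np

I = np.eye(3)                    # the 3-by-3 identity matrix
x = np.array([[2], [-1], [5]])
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(np.array_equal(I @ x, x))  # True: Ix = x
print(np.array_equal(I @ A, A))  # True: IA = A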
Taking all the above properties together, we note that, with the exception of commutativity of multiplication and the existence of some additional elements without multiplicative inverses, the set of square matrices of a given size acts just like numbers with respect to the basic algebraic operations of addition and multiplication. Mathematicians have taken note of this (and of many other examples of sets of objects plus operations with similarly analogous structure that they have discovered), and have developed an entire area of mathematics devoted to them. This area is called abstract algebra, and it is at the foundation of modern physics and a big chunk of modern mathematics. Below, we mention a few of the concepts by way of general interest (don't worry, you won't be asked to define them on the exam).
For example, any set and associated binary operation that satisfies closure, associativity, identity element, and unique inverse is referred to as a group. If the operation is commutative, the group is an abelian group. Familiar groups are the integers, rationals, reals, and complex numbers (and integers mod n) under addition. Square matrices of a given size are thus a group under addition.
A system with a set and two binary operations (addition and multiplication) that satisfies all of our conditions plus commutativity of multiplication and a unique multiplicative inverse for every element except the additive identity (0) is referred to as a field. Familiar fields are the rationals, the reals, and the complex numbers with multiplication and addition, but not the integers (why?). Fields turn out to be relatively rare compared to groups among the mathematical systems that come up in engineering, physics, and Euclidean geometry. The square matrices are almost, but not quite, a field. They are, in fact, an example of an algebraic structure called a ring, which requires all the field properties except commutative multiplication and multiplicative inverses. A ring that lacks only commutative multiplication, that is, one in which every non-zero element does have a multiplicative inverse, is referred to as a division ring; the square matrices fall short of this because non-zero singular matrices have no inverses.
Mathematicians are interested in proving theorems about groups, rings, fields, etc. in general. The payoff is that any property that can be proved for, say, rings in general, automatically applies to anything that is a ring. Since matrices have a lot of the abstract algebraic properties of numbers, we might look at other operations and theorems that are defined for numbers and ask if there are useful analogs in the matrix domain. For example, what about the exponential function of a matrix (e to a matrix power)? If we think about the exponential as a generalization of repeated multiplication, the idea doesn't even seem to make sense (e multiplied by itself a matrix number of times??). However, it turns out that a matrix exponential can not only be defined (in a rather direct manner that will make sense to you if you have had a semester of calculus), but represents the solution of some important equations in multi-dimensional spaces that are exactly analogous to equations with exponential solutions in one dimension.
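To make this concrete, here is a sketch of the matrix exponential computed directly from its defining power series, e^A = I + A + A^2/2! + A^3/3! + ... (a truncated sum is adequate for small, well-scaled matrices; SciPy's scipy.linalg.expm provides a robust implementation):

import numpy as np

def expm_series(A, terms=30):
    # Sum the power series e^A = I + A + A^2/2! + ..., truncated after a fixed number of terms.
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # term is now A^k / k!
        result = result + term
    return result

A = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])
print(expm_series(A))   # approximately [[cos 1, sin 1], [-sin 1, cos 1]]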
Or what about the square root of a matrix? Is it uniquely defined? (probably not, given that it is not unique even for real numbers). Is it usefully defined at all? If so, how many could there be? Can matrix calculus be defined? Are three (or more) dimensional things like matrices a useful concept? (yes, they go under the name of tensors, and you will encounter them eventually). What are their properties? ...
OK, relax. We are not intending to go off where the mathematicians go, at least not very far, but the point is that matrices (and complex numbers, and quaternions, and many other mathematical constructs that may seem exotic and strange when you first encounter them) are not just arbitrary heuristics for solving a specific problem (e.g., systems of linear equations) but often turn out to have a lot of structure, much of which may be somewhat familiar due to analogs with numbers (or vectors, or matrices, once you have developed a feel for them).
To return to the concrete, the convention we have developed for the representation of linear systems, Ax = c, has an algebraic form that is identical to the simple equation ax = c with x a simple variable. A solution to the latter can be written in closed form, x = c/a = a^-1 c. If A is nonsingular we can multiply both sides of the matrix equation on the left by A^-1: A^-1 A x = x = A^-1 c. So if we can find the inverse matrix, we can solve the system by direct matrix multiplication.
It turns out that finding the inverse is as much work as solving the system by Gaussian reduction (in fact, a direct modification of Gaussian reduction is a standard way of finding the inverse), so we don't save any computational effort. However, algebraic manipulations of equations involving matrices and vectors can simplify the form before any computation is done, just as with ordinary equations. This can save considerable computational effort, and may also generate a representation that is more easily understandable, or displays structure not immediately evident in the original form. A good example of this is the matrix notation itself, which is marvelously compact, hiding a lot of detail that is irrelevant until it is time to make a final computation, but which makes structures and operations in multi-dimensional space more understandable by expressing them in a form that is analogous to the familiar, and more intuitively understandable one-dimensional form.
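As a concrete illustration (a sketch; the specific system is arbitrary), both routes give the same answer, and the library solver avoids forming the inverse explicitly:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c = np.array([[9.0],
              [8.0]])

x_via_inverse = np.linalg.inv(A) @ c   # x = A^-1 c, using the explicit inverse
x_via_solve   = np.linalg.solve(A, c)  # the same x, via an elimination-style solver

print(x_via_inverse.ravel(), x_via_solve.ravel())   # both print [2. 3.]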
There are a number of concepts and characteristics that come up repeatedly when dealing with matrices and linear systems. Some of these are listed below; several have already been mentioned above. Pointers into Wikipedia are given for those interested in learning more.