Matrix Algebra Review
The Prerequisites Checklist page on the Department of Statistics website lists a number of courses that require a working knowledge of matrix algebra as a prerequisite. Students who do not have this foundation, or who have not reviewed this material within the past couple of years, will struggle with the concepts and methods that build on it. The courses that require this foundation include:
 STAT 414: Introduction to Probability Theory
 STAT 501: Regression Methods
 STAT 504: Analysis of Discrete Data
 STAT 505: Applied Multivariate Statistical Analysis
Review Materials
Many of our returning, working professional students report that they have taken courses that included matrix algebra topics, but often these courses were taken a number of years ago. To help students assess whether what they currently know and can do meets the expectations of the instructors in the courses above, the online program has put together a brief review of these concepts and methods. This is followed by a short self-assessment exam that will help give you an idea of whether you still have the necessary background.
Self-Assessment Procedure
 Review the concepts and methods on the pages in this section of this website. Note the courses that certain sections are aligned with as prerequisites:
Section                  STAT 414      STAT 501      STAT 504      STAT 505
M.1 Matrix Definitions   Required      Required      Required      Required
M.2 Matrix Arithmetic    Required      Required      Required      Required
M.3 Matrix Properties    Required      Required      Required      Required
M.4 Matrix Inverse       Required      Required      Required      Required
M.5 Advanced Topics      Recommended   Recommended   Recommended   5.1 and 5.4 Required; 5.2 and 5.3 Recommended
 Download and complete the Self-Assessment Exam.
 Review the Self-Assessment Exam Solutions and determine your score.
Students who score below 70% (fewer than 21 questions correct) should consider further review of these materials and are strongly encouraged to take a course like MATH 220 or an equivalent course at a local college or community college.
If you have struggled with the concepts and methods that are presented here, you will indeed struggle in the courses above that expect this foundation.
Please Note: These materials are NOT intended to be a complete treatment of the ideas and methods used in matrix algebra. These materials and the self-assessment are intended simply as an 'early warning signal' for students. Also, please note that completing the self-assessment successfully does not automatically ensure success in any of the courses that use these foundation materials.
M.1 Matrix Definitions
Matrix

A matrix is a rectangular collection of numbers. Generally, matrices are denoted as bold capital letters. For example:
\[A = \begin{pmatrix} 1 & 5 & 4\\
2 & 5 & 3 \end{pmatrix}\]
A is a matrix with two rows and three columns; for that reason, it is called a 2 by 3 matrix. The number of rows and columns together is called the dimension of the matrix.
 Dimension

The dimension of a matrix is expressed as number of rows × number of columns. So,
\[B = \begin{pmatrix} 1 & 5 & 4 \\ 5 & 3 & 8 \\ 1 & 5 & 4 \\ 2 & 5 & 3 \end{pmatrix}\]
B is a 4 × 3 matrix. It is common to refer to elements in a matrix by subscripts, like so.
\[B = \begin{pmatrix} b_{1,1} & b_{1,2} & b_{1,3}\\ b_{2,1} & b_{2,2} & b_{2,3}\\ b_{3,1} & b_{3,2} & b_{3,3}\\ b_{4,1} & b_{4,2} & b_{4,3} \end{pmatrix}\]
with the row index first and the column index second. So in this case, \(b_{2,1} = 5\) and \(b_{1,3} = 4\).
 Vector

A vector is a matrix with only one row (called a row vector) or only one column (called a column vector). For example:
\[C = \begin{pmatrix} 2 & 7 & 3 & 5 \end{pmatrix}\]
C is a 4-dimensional row vector.
\[D = \begin{pmatrix} 2 \\ 9 \\ 3 \\ 3 \\ 6 \end{pmatrix}\]
D is a 5-dimensional column vector. An "ordinary" number can be thought of as a 1 × 1 matrix, also known as a scalar. Some examples of scalars are shown below:
\[ E = \pi \]
\[ F = 6 \]
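These definitions are easy to mirror in code. Below is a minimal plain-Python sketch, where a matrix is stored as a list of rows; `dimension` is an illustrative helper, not a standard library function:

```python
# A matrix stored as a list of rows; each row is a list of numbers.
B = [[1, 5, 4],
     [5, 3, 8],
     [1, 5, 4],
     [2, 5, 3]]

def dimension(M):
    """Return (number of rows, number of columns)."""
    return (len(M), len(M[0]))

# Note: Python indexing is 0-based, so the entry written b_{2,1} in the
# text is M[1][0] in code.
```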
M.2 Matrix Arithmetic
Transpose of a Matrix
To take the transpose of a matrix, simply switch its rows and columns. The transpose of \(A\) can be denoted as \(A'\) or \(A^T\).
For example
\[A = \begin{pmatrix} 1 & 5 & 4 \\ 2 & 5 & 3 \end{pmatrix}\]
\[A' = A^T = \begin{pmatrix} 1 & 2\\ 5 & 5\\ 4 & 3 \end{pmatrix}\]
If a matrix is its own transpose, then that matrix is said to be symmetric. Symmetric matrices must be square matrices, with the same number of rows and columns.
One example of a symmetric matrix is shown below:
\[ A = \begin{pmatrix} 1 & 5 & 4 \\ 5 & 7 & 3\\ 4 & 3 & 3 \end{pmatrix} = A' = A^T \]
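Transposition can be sketched in a few lines of plain Python; `transpose` and `is_symmetric` here are illustrative helper names:

```python
def transpose(M):
    """Swap rows and columns: entry (i, j) of the result is entry (j, i) of M."""
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

def is_symmetric(M):
    """A matrix is symmetric when it equals its own transpose."""
    return M == transpose(M)

A = [[1, 5, 4],
     [2, 5, 3]]
```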
Matrix Addition
To perform matrix addition, the two matrices must have the same dimensions, i.e. the same number of rows and columns. In that case, simply add the individual components, as below.
For example
\[A + B = \begin{pmatrix} 1 & 5 & 4 \\ 2 & 5 & 3 \end{pmatrix} + \begin{pmatrix} 8 & 3 & -4 \\ 4 & -2 & 9 \end{pmatrix} = \begin{pmatrix} 1 + 8 & 5 + 3 & 4 + (-4) \\ 2 + 4 & 5 + (-2) & 3 + 9 \end{pmatrix} = \begin{pmatrix} 9 & 8 & 0\\ 6 & 3 & 12 \end{pmatrix}\]
Matrix addition does have many of the same properties as "normal" addition.
\[A + B = B + A\]
\[A + (B + C) = (A + B) + C\]
In addition, if one wishes to take the transpose of the sum of two matrices, then
\[A^T + B^T = (A+B)^T \]
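Entrywise addition translates directly into code. A plain-Python sketch, using small example matrices chosen here for illustration:

```python
def mat_add(A, B):
    """Entrywise sum of two matrices with identical dimensions."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "dimension mismatch"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 5, 4], [2, 5, 3]]
B = [[8, 3, -4], [4, -2, 9]]
```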
Matrix Scalar Multiplication
To multiply a matrix by a scalar, also known as scalar multiplication, multiply every element in the matrix by the scalar.
For example...
\[ 6*A = 6 * \begin{pmatrix} 1 & 5 & 4\\ 2 & 5 & 3 \end{pmatrix} = \begin{pmatrix} 6 * 1 & 6 *5 & 6 * 4\\ 6 * 2 & 6 *5 & 6 * 3 \end{pmatrix} = \begin{pmatrix} 6 & 30 & 24 \\ 12 & 30 & 18 \end{pmatrix}\]
To multiply two vectors of the same length together, take the dot product, also called the inner product: multiply every pair of corresponding entries and then add all the products up.
For example, for vectors x and y, the dot product is calculated below
\[ x \cdot y = \begin{pmatrix} 1 & -5 & 4 \end{pmatrix} \cdot \begin{pmatrix} 4 & -2 & 5 \end{pmatrix} = 1*4 + (-5)*(-2) + 4*5 = 4 + 10 + 20 = 34\]
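The dot product reduces to one line of plain Python (`dot` is an illustrative helper name):

```python
def dot(x, y):
    """Dot (inner) product: sum of entrywise products of equal-length vectors."""
    assert len(x) == len(y), "vectors must have the same length"
    return sum(a * b for a, b in zip(x, y))
```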
Matrix Multiplication
To perform matrix multiplication, the first matrix must have the same number of columns as the second matrix has rows. The number of rows of the resulting matrix equals the number of rows of the first matrix, and the number of columns of the resulting matrix equals the number of columns of the second matrix. So a 3 × 5 matrix could be multiplied by a 5 × 7 matrix, forming a 3 × 7 matrix, but one cannot multiply a 2 × 8 matrix with a 4 × 2 matrix. To find the entries in the resulting matrix, simply take the dot product of the corresponding row of the first matrix and the corresponding column of the second matrix.
For example
\[ C*D = \begin{pmatrix} 3 & -9 & -8\\ 2 & 4 & 3 \end{pmatrix} * \begin{pmatrix} 7 & -3\\ -2 & 3\\ 6 & 2 \end{pmatrix} \]
\[ C*D = \begin{pmatrix} 3*7 + (-9)*(-2) + (-8)*6 & 3*(-3) + (-9)*3 + (-8)*2 \\ 2*7 + 4*(-2) + 3*6 & 2*(-3) + 4*3 + 3*2 \end{pmatrix}\]
\[ C*D = \begin{pmatrix} 21 + 18 - 48 & -9 - 27 - 16 \\ 14 - 8 + 18 & -6 + 12 + 6 \end{pmatrix} = \begin{pmatrix} -9 & -52\\ 24 & 12 \end{pmatrix} \]
Matrix multiplication has some of the same properties as "normal" multiplication, such as
\[ A(BC) = (AB)C\]
\[A(B + C) = AB + AC\]
\[(A + B)C = AC + BC\]
However, matrix multiplication is not commutative; that is to say, \(A*B\) does not necessarily equal \(B*A\). In fact, \(B*A\) often has no meaning, since the dimensions rarely match up. The transpose of a matrix product reverses the order of the factors: \((AB)^T = B^T A^T\).
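The row-by-column rule translates directly into code. A plain-Python sketch (illustrative, unoptimized):

```python
def mat_mul(A, B):
    """Matrix product: entry (i, j) is the dot product of row i of A
    with column j of B. Requires columns of A == rows of B."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```

Multiplying a 2 × 3 matrix by a 3 × 2 matrix yields a 2 × 2 matrix, matching the dimension rule above.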
M.3 Matrix Properties
Identity Matrices
An identity matrix is a square matrix where every diagonal entry is 1 and all the other entries are 0. The following two matrices are both identity matrices and diagonal matrices.
\[ I_3 = \begin{pmatrix} 1 & 0 & 0 \\0 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix} \]
\[ I_4 = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} \]
They are called identity matrices because any matrix multiplied by an identity matrix equals itself. The diagonal entries of a matrix are the entries whose row and column numbers are the same: \(a_{2,2}\) is a diagonal entry, but \(a_{3,5}\) is not. The trace of an n × n matrix is the sum of all the diagonal entries. In other words, for an n × n matrix A, \(trace(A) = tr(A) = \sum_{i=1}^{n} a_{i,i}\). For example:
\[ trace(F) = tr(F) = tr \begin{pmatrix} 1 & 3 & 3\\ 0 & 6 & 7\\ 5 & 0 & 1 \end{pmatrix} = 1 + 6 + 1 = 8 \]
The trace has some useful properties, namely that for same size square matrices A and B and scalar c,
\[ tr(A) = tr(A^T)\] \[ tr(A + B) = tr(B + A) = tr(A) + tr(B)\] \[ tr(AB) = tr(BA)\] \[ tr(cA) = c*tr(A)\]
Determinants
The determinant of a square 2 × 2 matrix A is
\[ det(A) = |A| = \begin{vmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{vmatrix} = a_{1,1} a_{2,2} - a_{1,2} a_{2,1}\]
For example
\[ det(A) = |A| = \begin{vmatrix} 5 & 2 \\ 7 & 2 \end{vmatrix} = 5*2 - 2*7 = -4\]
For a 3 × 3 matrix B, the determinant is
\[det(B) = |B| = \begin{vmatrix} b_{1,1} & b_{1,2} & b_{1,3}\\ b_{2,1} & b_{2,2} & b_{2,3}\\ b_{3,1} & b_{3,2} & b_{3,3} \end{vmatrix} = b_{1,3} \begin{vmatrix} b_{2,1} & b_{2,2}\\ b_{3,1} & b_{3,2} \end{vmatrix} - b_{2,3} \begin{vmatrix} b_{1,1} & b_{1,2}\\ b_{3,1} & b_{3,2} \end{vmatrix} + b_{3,3} \begin{vmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{vmatrix}\]
\[ det(B) = b_{1,3}(b_{2,1} b_{3,2} - b_{2,2} b_{3,1}) - b_{2,3}(b_{1,1} b_{3,2} - b_{1,2} b_{3,1}) + b_{3,3}(b_{1,1} b_{2,2} - b_{1,2} b_{2,1}) \]
For example:
\[det(B) = |B| = \begin{vmatrix} 4 & 0 & -1\\ 2 & -2 & 3 \\ 7 & 5 & 0 \end{vmatrix} = -1 \begin{vmatrix} 2 & -2\\ 7 & 5 \end{vmatrix} - 3 \begin{vmatrix} 4 & 0 \\ 7 & 5 \end{vmatrix} + 0 \begin{vmatrix} 4 & 0 \\ 2 & -2 \end{vmatrix}\]
\[ det(B) = -1(2*5 - (-2)*7) - 3(4*5 - 0*7) + 0(4*(-2) - 0*2) = -1*24 - 3*20 + 0*(-8) = -24 - 60 = -84 \]
In general, to find the determinant of an n \(\times\) n matrix, choose a row or column, such as column 1, and take the determinants of the "minor" matrices inside the original matrix, like so:
\[det(C) = |C| = \begin{vmatrix} c_{1,1} & c_{1,2} & \ldots & c_{1,n}\\ c_{2,1} & c_{2,2} & \ldots & c_{2,n}\\ \vdots & \vdots & \ddots & \vdots \\ c_{n,1} & c_{n,2} & \ldots & c_{n,n} \end{vmatrix}\]
\[det(C) = (-1)^{1+1} c_{1,1} \begin{vmatrix} c_{2,2} & \ldots & c_{2,n}\\ \vdots & \ddots & \vdots \\ c_{n,2} & \ldots & c_{n,n} \end{vmatrix} + (-1)^{2+1} c_{2,1} \begin{vmatrix} c_{1,2} & \ldots & c_{1,n}\\ c_{3,2} & \ldots & c_{3,n}\\ \vdots & \ddots & \vdots\\ c_{n,2} & \ldots & c_{n,n} \end{vmatrix} + \ldots\]
\[ \ldots + (-1)^{n+1} c_{n,1} \begin{vmatrix} c_{1,2} & \ldots & c_{1,n}\\ \vdots & \ddots & \vdots \\ c_{n-1,2} & \ldots & c_{n-1,n} \end{vmatrix} \]
This is known as Laplace's formula,
\[ det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{i,j} det(A_{i,j}) = \sum_{i=1}^{n} (-1)^{i+j} a_{i,j} det(A_{i,j}) \]
Here \(A_{i,j}\) is matrix A with row i and column j removed. The formula works whether one goes by rows (the first formulation, for any fixed i) or by columns (the second formulation, for any fixed j). It is easiest to use Laplace's formula when one chooses the row or column with the most zeroes.
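Laplace's formula is naturally recursive: an n × n determinant reduces to n determinants of (n − 1) × (n − 1) minors. A plain-Python sketch (illustrative only; practical software uses LU factorization instead, since this recursion grows factorially):

```python
def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor A_{1,j}: remove row 1 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total
```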
Matrix Determinant Properties
The matrix determinant has some interesting properties.
\[det(I) = 1\]
where I is the identity matrix.
\[det(A) = det(A^T)\]
If A and B are square matrices with the same dimensions, then
\[ det(AB) = det(A)*det(B)\]
and if A is an n × n square matrix and c is a scalar, then
\[ det(cA) = c^n det(A)\]
M.4 Matrix Inverse
Inverse of a Matrix

The matrix B is the inverse of matrix A if \(AB = BA = I\). This is often denoted as \(B = A^{-1}\) or \(A = B^{-1}\). When taking the inverse of the product of two matrices A and B,
\[(AB)^{-1} = B^{-1} A^{-1}\]
When taking the determinant of the inverse of the matrix A,
\[ det(A^{-1}) = \frac{1}{det(A)} = det(A)^{-1}\]
Note that not all matrices have inverses. For a matrix A to have an inverse, that is to say for A to be invertible, A must be a square matrix and \(det(A) \neq 0\). For that reason, invertible matrices are also called nonsingular matrices.
Two examples are shown below
\[ det(A) = \begin{vmatrix} 4 & 5 \\ -2 & 1 \end{vmatrix} = 4*1 - 5*(-2) = 14 \neq 0 \]
\[ det(C) = \begin{vmatrix} 1 & 2 & -1\\ 5 & 3 & 2 \\ 6 & 0 & 6 \end{vmatrix} = -2 \begin{vmatrix} 5 & 2 \\ 6 & 6 \end{vmatrix} + 3 \begin{vmatrix} 1 & -1\\ 6 & 6 \end{vmatrix} - 0 \begin{vmatrix} 1 & -1\\ 5 & 2 \end{vmatrix}\]
\[ det(C) = -2(5*6 - 2*6) + 3(1*6 - (-1)*6) - 0(1*2 - (-1)*5) = -36 + 36 - 0 = 0 \]
So C is not invertible, because its determinant is zero. However, A is an invertible matrix, because its determinant is nonzero. To calculate the inverse of a 2 × 2 matrix, use the formula below.
\[ A^{-1} = \begin{pmatrix} a_{1,1} & a_{1,2}\\ a_{2,1} & a_{2,2} \end{pmatrix}^{-1} = \frac{1}{det(A)} \begin{pmatrix} a_{2,2} & -a_{1,2} \\ -a_{2,1} & a_{1,1} \end{pmatrix} = \frac{1}{a_{1,1} a_{2,2} - a_{1,2} a_{2,1}} \begin{pmatrix} a_{2,2} & -a_{1,2} \\ -a_{2,1} & a_{1,1} \end{pmatrix}\]
For example
\[ A^{-1} = \begin{pmatrix} 4 & 5 \\ -2 & 1 \end{pmatrix}^{-1} = \frac{1}{det(A)} \begin{pmatrix} 1 & -5 \\ 2 & 4 \end{pmatrix} = \frac{1}{4*1 - 5*(-2)} \begin{pmatrix} 1 & -5 \\ 2 & 4 \end{pmatrix} = \begin{pmatrix} \frac{1}{14} & -\frac{5}{14} \\ \frac{2}{14} & \frac{4}{14} \end{pmatrix}\]
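The 2 × 2 formula can be checked mechanically. A plain-Python sketch using exact rational arithmetic (`inverse_2x2` is an illustrative name):

```python
from fractions import Fraction

def inverse_2x2(M):
    """Inverse of a 2 x 2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    assert det != 0, "matrix is singular"
    f = Fraction(1, det)
    return [[ d * f, -b * f],
            [-c * f,  a * f]]
```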
For finding the matrix inverse in general, you can use the Gauss-Jordan algorithm. However, this is a rather involved procedure, so one usually relies on a computer or calculator to find a matrix inverse.
M.5 Advanced Matrix Properties
Orthogonal Vectors

Two vectors, x and y, are orthogonal if their dot product is zero.
For example
\[ e \cdot f = \begin{pmatrix} 2 & -5 & 4 \end{pmatrix} * \begin{pmatrix} 4 \\ 2 \\ 5 \end{pmatrix} = 2*4 + (-5)*2 + 4*5 = 8 - 10 + 20 = 18\]
Vectors e and f are not orthogonal.
\[ g \cdot h = \begin{pmatrix} 2 & -3 & -2 \end{pmatrix} * \begin{pmatrix} 4 \\ 2 \\ 1 \end{pmatrix} = 2*4 + (-3)*2 + (-2)*1 = 8 - 6 - 2 = 0\]
However, vectors g and h are orthogonal. Orthogonality can be thought of as an extension of perpendicularity to higher dimensions. Let \(x_1, x_2, \ldots , x_n\) be m-dimensional vectors. Then a linear combination of \(x_1, x_2, \ldots , x_n\) is any m-dimensional vector that can be expressed as
\[ c_1 x_1 + c_2 x_2 + \ldots + c_n x_n \]
where \(c_1, \ldots, c_n\) are all scalars. For example:
\[x_1 =\begin{pmatrix} 3 \\ 8 \\ 2 \end{pmatrix}, \quad x_2 =\begin{pmatrix} 4 \\ -2 \\ -3 \end{pmatrix}\]
\[y =\begin{pmatrix} -5 \\ 12 \\ 8 \end{pmatrix} = 1*\begin{pmatrix} 3 \\ 8 \\ 2 \end{pmatrix} + (-2)* \begin{pmatrix} 4 \\ -2 \\ -3 \end{pmatrix} = 1*x_1 + (-2)*x_2\]
So y is a linear combination of \(x_1\) and \(x_2\). The set of all linear combinations of \(x_1, x_2, \ldots , x_n\) is called the span of \(x_1, x_2, \ldots , x_n\). In other words,
\[ span(\{x_1, x_2, \ldots , x_n \}) = \left\{ v \mid v = \sum_{i = 1}^{n} c_i x_i ,\; c_i \in \mathbb{R} \right\} \]
A set of vectors \(x_1, x_2, \ldots , x_n\) is linearly independent if none of the vectors in the set can be expressed as a linear combination of the other vectors. Another way to think of this: a set of vectors \(x_1, x_2, \ldots , x_n\) is linearly independent if the only solution to the equation below is \(c_1 = c_2 = \ldots = c_n = 0\), where \(c_1 , c_2 , \ldots , c_n\) are scalars and \(0\) is the zero vector (the vector where every entry is 0).
\[ c_1 x_1 + c_2 x_2 + \ldots + c_n x_n = 0 \]
If a set of vectors is not linearly independent, then they are called linearly dependent.
Example M.5.1
\[ x_1 =\begin{pmatrix} 3 \\ 4 \\ 2 \end{pmatrix}, x_2 =\begin{pmatrix} 4 \\ -2 \\ 2 \end{pmatrix}, x_3 =\begin{pmatrix} 6 \\ 8 \\ -2 \end{pmatrix} \]
Does there exist a nonzero vector \(c = (c_1, c_2, c_3)\) such that
\[ c_1 x_1 + c_2 x_2 + c_3 x_3 = 0 \]
To answer the question above, let:
\begin{align} 3c_1 + 4c_2 + 6c_3 &= 0,\\ 4c_1 - 2c_2 + 8c_3 &= 0,\\ 2c_1 + 2c_2 - 2c_3 &= 0 \end{align}
Solving the above system of equations shows that the only possible solution is \(c_1 = c_2 = c_3 = 0\). Thus \(\{ x_1 , x_2 , x_3 \}\) is linearly independent. One way to solve the system of equations is shown below. First, subtract (4/3) times the 1st equation from the 2nd equation.
\[-\frac{4}{3}(3c_1 + 4c_2 + 6c_3) + (4c_1 - 2c_2 + 8c_3) = -\frac{22}{3}c_2 = -\frac{4}{3}*0 + 0 = 0 \Rightarrow c_2 = 0 \]
Then add the 1st equation and 3 times the 3rd equation together, and substitute in \(c_2 = 0\).
\[ (3c_1 + 4c_2 + 6c_3) + 3*(2c_1 + 2c_2 - 2c_3) = 9c_1 + 10c_2 = 9c_1 + 10*0 = 0 + 3*0 = 0 \Rightarrow c_1 = 0 \]
Now, substituting both \(c_1 = 0\) and \(c_2 = 0\) into equation 2 gives
\[ 4c_1 - 2c_2 + 8c_3 = 4*0 - 2*0 + 8c_3 = 0 \Rightarrow c_3 = 0 \]
So \(c_1 = c_2 = c_3 = 0\) is the only solution, and \(\{ x_1 , x_2 , x_3 \}\) is linearly independent.
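For three 3-dimensional vectors there is a convenient shortcut: they are linearly independent exactly when the matrix having them as columns has a nonzero determinant. A plain-Python sketch (illustrative helper names):

```python
def det3(M):
    """Determinant of a 3 x 3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def independent_3(x1, x2, x3):
    """True iff three 3-dimensional vectors are linearly independent."""
    cols = [x1, x2, x3]
    # Build the matrix whose columns are x1, x2, x3.
    M = [[cols[j][i] for j in range(3)] for i in range(3)]
    return det3(M) != 0
```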
Example M.5.2
\[ x_1 =\begin{pmatrix} -2 \\ 8 \\ 8 \end{pmatrix}, x_2 =\begin{pmatrix} 4 \\ -2 \\ -2 \end{pmatrix}, x_3 =\begin{pmatrix} 1 \\ 3 \\ 3 \end{pmatrix} \]
In this case \(\{ x_1 , x_2 , x_3 \}\) is linearly dependent, because for \(c = (1, 1, -2)\),
\[c_1 x_1 + c_2 x_2 + c_3 x_3 = 1\begin{pmatrix} -2 \\ 8 \\ 8 \end{pmatrix} + 1\begin{pmatrix} 4 \\ -2 \\ -2 \end{pmatrix} - 2\begin{pmatrix} 1 \\ 3 \\ 3 \end{pmatrix} = \begin{pmatrix} 1*(-2) + 1*4 - 2*1 \\ 1*8 + 1*(-2) - 2*3 \\ 1*8 + 1*(-2) - 2*3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}\]
 Norm of a Vector or Matrix

The norm of a vector or matrix is a measure of the "length" of said vector or matrix. For a vector x, the most common norm is the \(\mathbf{L_2}\) norm, or Euclidean norm. It is defined as
\[ \|x\| = \|x\|_2 = \sqrt{ \sum_{i=1}^{n} x_i^2 } \]
Other common vector norms include the \(\mathbf{L_1}\) norm, also called the Manhattan norm or taxicab norm:
\[ \|x\|_1 = \sum_{i=1}^{n} |x_i| \]
Another common vector norm is the maximum norm, also called the infinity norm:
\[ \|x\|_\infty = \max(|x_1|, |x_2|, \ldots, |x_n|) \]
The most commonly used matrix norm is the Frobenius norm. For an m × n matrix A, the Frobenius norm is defined as:
\[ \|A\| = \|A\|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} a_{i,j}^2 } \]
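Each of these norms is a one-liner in plain Python (illustrative helper names, using the standard `math` module):

```python
import math

def l2_norm(x):
    """Euclidean (L2) norm: square root of the sum of squared entries."""
    return math.sqrt(sum(v * v for v in x))

def l1_norm(x):
    """Manhattan (L1) norm: sum of absolute values of the entries."""
    return sum(abs(v) for v in x)

def max_norm(x):
    """Infinity (maximum) norm: largest absolute entry."""
    return max(abs(v) for v in x)

def frobenius_norm(M):
    """Frobenius norm of a matrix: L2 norm of all entries taken together."""
    return math.sqrt(sum(v * v for row in M for v in row))
```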
 Quadratic Form of a Vector

The quadratic form of the vector x associated with a square n × n matrix A is
\[ x^T A x = \sum_{i = 1}^{n} \sum_{j=1}^{n} a_{i,j} x_i x_j \]
A matrix A is Positive Definite if for any nonzero vector x, the quadratic form of x and A is strictly positive; in other words, \(x^T A x > 0\) for all nonzero x.
A matrix A is Positive Semi-Definite or Nonnegative Definite if the quadratic form is nonnegative, i.e. \(x^T A x \geq 0\) for all x. Similarly,
a matrix A is Negative Definite if \(x^T A x < 0\) for all nonzero x, and Negative Semi-Definite or Nonpositive Definite if \(x^T A x \leq 0\) for all x.
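The double sum translates directly into code. A plain-Python sketch (illustrative; checking definiteness in general requires eigenvalues, but evaluating the form itself is elementary):

```python
def quadratic_form(x, A):
    """Compute x^T A x = sum over i, j of a_{i,j} * x_i * x_j for square A."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
```

For instance, the identity matrix is positive definite, since \(x^T I x = \sum_i x_i^2 > 0\) for every nonzero x.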
M.6 Range, Nullspace and Projections
Range of a Matrix

The range of an m × n matrix A is the span of the n columns of A. In other words, for
\[ A = [ a_1 a_2 a_3 \ldots a_n ] \]
where \(a_1 , a_2 , a_3 , \ldots , a_n\) are m-dimensional vectors,
\[ range(A) = R(A) = span(\{a_1, a_2, \ldots , a_n \}) = \left\{ v \mid v = \sum_{i = 1}^{n} c_i a_i ,\; c_i \in \mathbb{R} \right\} \]
The dimension (number of linearly independent columns) of the range of A is called the rank of A. So if a 6 × 3 matrix B has a 2-dimensional range, then \(rank(B) = 2\).
For example
\[C =\begin{pmatrix} 1 & 4 & 1\\ 8 & 2 & 3\\ 8 & 2 & 2 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}\]
C has a rank of 3, because its columns \(x_1\), \(x_2\) and \(x_3\) are linearly independent.
 Nullspace
The nullspace of an m \(\times\) n matrix A is the set of all n-dimensional vectors that equal the m-dimensional zero vector (the vector where every entry is 0) when multiplied by A. This is often denoted as
\[N(A) = \{ v \mid Av = 0 \}\]
The dimension of the nullspace of A is called the nullity of A. So if a 6 \(\times\) 3 matrix B has a 1-dimensional nullspace, then \(nullity(B) = 1\).
The range and nullspace of a matrix are closely related. In particular, for m \(\times\) n matrix A,
\[\{w \mid w = u + v,\; u \in R(A^T),\; v \in N(A) \} = \mathbb{R}^{n}\]
\[R(A^T) \cap N(A) = \{0\}\]
This leads to the ranknullity theorem, which says that the rank and the nullity of a matrix sum together to the number of columns of the matrix. To put it into symbols:
\[A \in \mathbb{R}^{m \times n} \Rightarrow rank(A) + nullity(A) = n\]
For example, if B is a 4 \(\times\) 3 matrix and \(rank(B) = 2\), then from the rank-nullity theorem one can deduce that
\[rank(B) + nullity(B) = 2 + nullity(B) = 3 \Rightarrow nullity(B) = 1\]
 Projection
The projection of a vector x onto the vector space J, denoted Proj(x, J), is the vector \(v \in J\) that minimizes \(\|x - v\|\). Often, the vector space J of interest is the range of a matrix A, and the norm used is the Euclidean norm. In that case
\[Proj(x, R(A)) = \{ v \in R(A) \mid \|x - v\|_2 \leq \|x - w\|_2 \; \forall \; w \in R(A) \}\]
In other words,
\[Proj(x, R(A)) = \underset{v \in R(A)}{argmin} \|x - v\|_2\]
M.7 Gauss-Jordan Elimination
Gauss-Jordan elimination is an algorithm that can be used to solve systems of linear equations and to find the inverse of any invertible matrix. It relies on three elementary row operations one can use on a matrix:
 Swap the positions of two of the rows.
 Multiply one of the rows by a nonzero scalar.
 Add or subtract a scalar multiple of one row to another row.
For an example of the first elementary row operation, swap the positions of the 1st and 3rd row.
\[ \begin{pmatrix} 4 & 0 & -1 \\ 2 & -2 & 3 \\ 7 & 5 & 0 \end{pmatrix}\Rightarrow \begin{pmatrix} 7 & 5 & 0 \\ 2 & -2 & 3 \\ 4 & 0 & -1 \end{pmatrix} \]
For an example of the second elementary row operation, multiply the second row by 3.
\[ \begin{pmatrix} 4 & 0 & -1 \\ 2 & -2 & 3 \\ 7 & 5 & 0 \end{pmatrix} \Rightarrow \begin{pmatrix} 4 & 0 & -1 \\ 6 & -6 & 9 \\ 7 & 5 & 0 \end{pmatrix} \]
For an example of the third elementary row operation, add twice the 1st row to the 2nd row.
\[ \begin{pmatrix} 4 & 0 & -1 \\ 2 & -2 & 3 \\ 7 & 5 & 0 \end{pmatrix}\Rightarrow \begin{pmatrix} 4 & 0 & -1 \\ 10 & -2 & 1 \\ 7 & 5 & 0 \end{pmatrix} \]
Reduced row echelon form
The purpose of Gauss-Jordan elimination is to use the three elementary row operations to convert a matrix into reduced row echelon form. A matrix is in reduced row echelon form, also known as row canonical form, if the following conditions are satisfied:
 All rows with only zero entries are at the bottom of the matrix.
 The first nonzero entry in a row, called the leading entry or the pivot, of each nonzero row is to the right of the leading entry of the row above it.
 The leading entry in any nonzero row is 1.
 All other entries in a column containing a leading 1 are zero.
For example
\[A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{pmatrix}, B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, C = \begin{pmatrix} 0 & 7 & 3 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, D = \begin{pmatrix} 1 & 7 & 3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \]
Matrices A and B are in reduced row echelon form, but matrices C and D are not. C is not in reduced row echelon form because it violates conditions two and three. D is not in reduced row echelon form because it violates condition four. In addition, the elementary row operations can be used to reduce matrix D to matrix B.
Steps for Gauss-Jordan Elimination
To perform Gauss-Jordan elimination:
 Swap the rows so that all rows with all zero entries are on the bottom.
 Swap the rows so that the row with the largest, leftmost nonzero entry is on top.
 Multiply the top row by a scalar so that the top row's leading entry becomes 1.
 Add/subtract multiples of the top row to the other rows so that all other entries in the column containing the top row's leading entry are all zero.
 Repeat steps 2-4 for the next leftmost nonzero entry until all the leading entries are 1.
 Swap the rows so that the leading entry of each nonzero row is to the right of the leading entry of the row above it.
Selected video examples are shown below:
 Gauss-Jordan Elimination - Jonathan Mitchell (YouTube)
 Using Gauss-Jordan to Solve a System of Three Linear Equations - Example 1 - patrickJMT (YouTube)
 Algebra - Matrices - Gauss Jordan Method Part 1 Augmented Matrix - IntuitiveMath (YouTube)
 Gaussian Elimination - patrickJMT (YouTube)
To obtain the inverse of an n × n matrix A:
 Create the partitioned matrix \(( A \mid I )\), where I is the identity matrix.
 Perform Gauss-Jordan elimination on the partitioned matrix, with the objective of converting the first part of the matrix to reduced row echelon form.
 If done correctly, the resulting partitioned matrix will take the form \(( I \mid A^{-1} )\).
 Double-check your work by making sure that \(AA^{-1} = I\).
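The whole procedure can be sketched in plain Python using exact `Fraction` arithmetic. This is an illustrative implementation of the steps above (row-reduce the partitioned matrix (A | I) and read the inverse off the right half), not production code:

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Invert a square matrix by row-reducing the partitioned matrix (A | I)."""
    n = len(A)
    # Build the partitioned matrix (A | I).
    M = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row at or below `col` with a nonzero entry in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]      # elementary operation 1: swap rows
        p = M[col][col]
        M[col] = [v / p for v in M[col]]         # operation 2: scale pivot row to 1
        for r in range(n):                       # operation 3: clear the column
            if r != col:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # The right half of (I | A^{-1}) is the inverse.
    return [row[n:] for row in M]
```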
M.8 Eigendecomposition
Eigenvector of a Matrix
An eigenvector of a matrix A is a vector whose product with the matrix is a scalar multiple of itself. The corresponding multiplier is often denoted by \(\lambda\) and referred to as an eigenvalue. In other words, if A is a matrix, v is an eigenvector of A, and \(\lambda\) is the corresponding eigenvalue, then \(Av = \lambda v\).
For Example
\[ A= \begin{pmatrix} 4 & 0 & 1 \\ 2 & -2 & 3 \\ 7 & 5 & 0 \end{pmatrix} \]
\[ v = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} \]
\[ Av = \begin{pmatrix} 4 & 0 & 1 \\ 2 & -2 & 3 \\ 7 & 5 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 4*1 + 0*1 + 1*2 \\ 2*1 + (-2)*1 + 3*2 \\ 7*1 + 5*1 + 0*2 \end{pmatrix} = \begin{pmatrix} 6 \\ 6 \\ 12 \end{pmatrix} = 6v \]
In the above example, v is an eigenvector of A, and the corresponding eigenvalue is 6. To find the eigenvalues and eigenvectors of an n × n square matrix, solve the characteristic equation of the matrix for the eigenvalues. This equation is
\[ det(A  \lambda I ) = 0\]
where A is the matrix, \(\lambda\) is an eigenvalue, and I is the n × n identity matrix. For example, take
\[ A= \begin{pmatrix} 4 & 3 \\ 2 & -1 \end{pmatrix}\]
The characteristic equation of A is listed below.
\[ det(A - \lambda I) = det\left( \begin{pmatrix} 4 & 3 \\ 2 & -1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right) = det \begin{pmatrix} 4 - \lambda & 3 \\ 2 & -1 - \lambda \end{pmatrix} = 0 \]
\[ det(A - \lambda I) = (4 - \lambda)(-1 - \lambda) - 3*2 = \lambda^2 - 3\lambda - 10 = (\lambda + 2)(\lambda - 5) = 0 \]
Therefore, one finds that the eigenvalues of A must be -2 and 5. Once the eigenvalues are found, one can then find the corresponding eigenvectors from the definition of an eigenvector. For \(\lambda = 5\), simply set up the equation as below, where the unknown eigenvector is \(v = (v_1, v_2)'\).
\[\begin{pmatrix} 4 & 3 \\ 2 & -1 \end{pmatrix} * \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 5 \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} \]
\[\begin{pmatrix} 4 v_1 + 3 v_2 \\ 2 v_1 - 1 v_2 \end{pmatrix} = \begin{pmatrix} 5 v_1 \\ 5 v_2 \end{pmatrix} \]
And then solve the resulting system of linear equations to get
\[ v = \begin{pmatrix} 3 \\ 1 \end{pmatrix} \]
For \(\lambda = -2\), simply set up the equation as below, where the unknown eigenvector is \(w = (w_1, w_2)'\).
\[\begin{pmatrix} 4 & 3 \\ 2 & -1 \end{pmatrix} * \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = -2 \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} \]
\[\begin{pmatrix} 4 w_1 + 3 w_2 \\ 2 w_1 - 1 w_2 \end{pmatrix} = \begin{pmatrix} -2 w_1 \\ -2 w_2 \end{pmatrix} \]
And then solve the resulting system of linear equations to get
\[ w = \begin{pmatrix} 1 \\ -2 \end{pmatrix} \]
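The defining property \(Av = \lambda v\) is easy to verify numerically. A plain-Python sketch with illustrative helper names, using small example matrices chosen here:

```python
def mat_vec(A, v):
    """Matrix-vector product Av."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def is_eigenpair(A, v, lam):
    """Check the defining property A v = lambda * v."""
    return mat_vec(A, v) == [lam * x for x in v]
```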
M.9 Self-Assess
Here's your chance to assess what you remember from the matrix review.
 Review the concepts and methods on the pages in this section. Note the courses that certain sections are aligned with as prerequisites:
Section                  STAT 414      STAT 501      STAT 504      STAT 505
M.1 Matrix Definitions   Required      Required      Required      Required
M.2 Matrix Arithmetic    Required      Required      Required      Required
M.3 Matrix Properties    Required      Required      Required      Required
M.4 Matrix Inverse       Required      Required      Required      Required
M.5 Advanced Topics      Recommended   Recommended   Recommended   5.1 and 5.4 Required; 5.2 and 5.3 Recommended
 Download and complete the Self-Assessment Exam.
 Determine your score by reviewing the Self-Assessment Exam Solutions.
Students who score below 70% (fewer than 21 questions correct) should consider further review of these materials and are strongly encouraged to take MATH 220 (2 credits) or an equivalent course.
If you have struggled with the concepts and methods that are presented here, you will indeed struggle in the courses above that expect this foundation.
Please Note: These materials are NOT intended to be a complete treatment of the ideas and methods used in matrix algebra. These materials and the accompanying self-assessment are intended simply as an 'early warning signal' for students. Also, please note that completing the self-assessment successfully does not automatically ensure success in any of the courses that use this foundation.