M.5 Advanced Matrix Properties

Orthogonal Vectors

Two vectors, x and y, are orthogonal if their dot product is zero.

For example

\[ e \cdot f = \begin{pmatrix} 2 & 5 & 4 \end{pmatrix} * \begin{pmatrix} 4 \\ -2 \\ 5 \end{pmatrix} = 2*4 + (5)*(-2) + 4*5 = 8-10+20 = 18\]

Vectors e and f are not orthogonal.

\[ g \cdot h = \begin{pmatrix} 2 & 3 & -2 \end{pmatrix} * \begin{pmatrix} 4 \\ -2 \\ 1 \end{pmatrix} = 2*4 + (3)*(-2) + (-2)*1 = 8-6-2 = 0\]

However, vectors g and h are orthogonal. Orthogonality can be thought of as the extension of perpendicularity to higher dimensions.
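
The check is easy to carry out numerically. Below is a minimal NumPy sketch that reproduces the two dot products above; the helper name `is_orthogonal` and the tolerance are illustrative choices, not part of the text.

```python
import numpy as np

def is_orthogonal(x, y, tol=1e-12):
    # Two vectors are orthogonal when their dot product is (numerically) zero.
    return abs(np.dot(x, y)) < tol

e = np.array([2, 5, 4])
f = np.array([4, -2, 5])
g = np.array([2, 3, -2])
h = np.array([4, -2, 1])

print(np.dot(e, f), is_orthogonal(e, f))  # 18 False -> e and f are not orthogonal
print(np.dot(g, h), is_orthogonal(g, h))  # 0  True  -> g and h are orthogonal
```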

Linear Combinations and Linear Independence

Let \(x_1, x_2, \ldots , x_n\) be m-dimensional vectors. Then a linear combination of \(x_1, x_2, \ldots , x_n\) is any m-dimensional vector that can be expressed as

\[ c_1 x_1 + c_2 x_2 + \ldots + c_n x_n \]

where \(c_1, \ldots, c_n\) are all scalars. For example:

\[x_1 =\begin{pmatrix}
  3  \\
  8 \\
  -2
\end{pmatrix},
x_2 =\begin{pmatrix}
  4  \\
  -2 \\
  3
\end{pmatrix}\]

\[y =\begin{pmatrix}
  -5 \\
  12 \\
  -8
\end{pmatrix} = 1*\begin{pmatrix}
  3  \\
  8 \\
  -2
\end{pmatrix} + (-2)* \begin{pmatrix}
  4  \\
  -2 \\
  3
\end{pmatrix} = 1*x_1 + (-2)*x_2\]

So y is a linear combination of \(x_1\) and \(x_2\). The set of all linear combinations of \(x_1, x_2, \ldots , x_n\) is called the span of \(x_1, x_2, \ldots , x_n\). In other words

\[ \text{span}(\{x_1, x_2, \ldots , x_n \} ) = \{ v \mid v = \sum_{i = 1}^{n} c_i x_i , \; c_i \in \mathbb{R} \} \]
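
As a numerical illustration, the NumPy sketch below rebuilds the earlier example: it forms the linear combination \(1 x_1 + (-2) x_2\) and then recovers the coefficients by least squares, which is one reasonable way (an assumption of this sketch, not the only approach) to test whether a vector lies in a span.

```python
import numpy as np

x1 = np.array([3, 8, -2])
x2 = np.array([4, -2, 3])
y  = np.array([-5, 12, -8])

# Forming the linear combination 1*x1 + (-2)*x2 reproduces y.
print(1 * x1 + (-2) * x2)              # [-5 12 -8]

# To test whether y lies in span{x1, x2}, solve X c = y by least squares and
# check that the residual is (numerically) zero.
X = np.column_stack([x1, x2])          # 3 x 2 matrix with columns x1 and x2
c, residual, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(c)                               # approximately [ 1. -2.]
print(residual)                        # (numerically) zero
```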

A set of vectors \(x_1, x_2, \ldots , x_n\) is linearly independent if none of the vectors in the set can be expressed as a linear combination of the others. Equivalently, the set is linearly independent if the only solution to the equation below is \(c_1 = c_2 = \ldots = c_n = 0\), where \(c_1 , c_2 , \ldots , c_n \) are scalars and \(0\) is the zero vector (the vector whose every entry is 0).

\[ c_1 x_1 + c_2 x_2 + \ldots + c_n x_n = 0 \]

If a set of vectors is not linearly independent, then it is called linearly dependent.
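
One practical numerical test, shown as a sketch below: stack the vectors as the columns of a matrix and compare its rank to the number of vectors. Full column rank means the only solution of \(c_1 x_1 + \ldots + c_n x_n = 0\) is the zero vector. The helper name is an illustrative choice.

```python
import numpy as np

def linearly_independent(vectors):
    # Columns are independent exactly when the matrix has full column rank.
    X = np.column_stack(vectors)
    return np.linalg.matrix_rank(X) == X.shape[1]

x1 = np.array([3, 8, -2])
x2 = np.array([4, -2, 3])

print(linearly_independent([x1, x2]))               # True
print(linearly_independent([x1, x2, x1 - 2 * x2]))  # False: the third vector is a
                                                    # linear combination of the others
```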

Example M.5.1

\[ x_1 =\begin{pmatrix} 3 \\ 4 \\ -2 \end{pmatrix}, x_2 =\begin{pmatrix} 4 \\ -2 \\ 2 \end{pmatrix}, x_3 =\begin{pmatrix} 6 \\ 8 \\ -2 \end{pmatrix} \]

Does there exist a non-zero vector \(c = (c_1 , c_2 , c_3)\) such that

\[ c_1 x_1 + c_2 x_2 + c_3 x_3 = 0 \, ? \]

To answer this question, write the vector equation componentwise as a system of equations:

\begin{align} 3c_1 + 4c_2 +6c_3 &= 0,\\ 4c_1 -2c_2 + 8c_3 &= 0,\\ -2c_1 + 2c_2 -2c_3 &= 0 \end{align}

Solving the above system of equations shows that the only possible solution is \(c_1 = c_2 = c_3 = 0\). Thus \(\{ x_1 , x_2 , x_3 \}\) is linearly independent. One way to solve the system of equations is shown below. First, subtract (4/3) times the 1st equation from the 2nd equation.

\[-\frac{4}{3}(3c_1 + 4c_2 +6c_3) + (4c_1 -2c_2 + 8c_3) = -\frac{22}{3}c_2 = -\frac{4}{3}*0 + 0 = 0 \Rightarrow c_2 = 0 \]

Then add the 1st equation to 3 times the 3rd equation, and substitute in \(c_2 = 0\).

\[ (3c_1 + 4c_2 +6c_3) + 3*(-2c_1 + 2c_2 -2c_3) = -3c_1 + 10 c_2 = -3c_1 + 10*0 = -3c_1 = 0 + 3*0 = 0 \Rightarrow c_1 = 0 \]

Now, substituting both \(c_1 = 0\) and \(c_2 = 0\) into the 2nd equation gives

\[ 4c_1 -2c_2 + 8c_3 = 4*0 -2*0 + 8c_3 = 8c_3 = 0 \Rightarrow c_3 = 0 \]

So \(c_1 = c_2 = c_3 = 0\), and \(\{ x_1 , x_2 , x_3 \}\) is linearly independent.
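
The same conclusion can be reached numerically; the sketch below (columns copied from Example M.5.1) checks that the coefficient matrix has full rank, so \(c = 0\) is the only solution of the homogeneous system.

```python
import numpy as np

# Columns are x1, x2, x3 from Example M.5.1.
X = np.array([[ 3,  4,  6],
              [ 4, -2,  8],
              [-2,  2, -2]])

print(np.linalg.matrix_rank(X))         # 3: full rank, so the columns are independent
print(np.linalg.solve(X, np.zeros(3)))  # [0. 0. 0.]: the only solution of X c = 0
```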

Example M.5.2

\[ x_1 =\begin{pmatrix} 2 \\ -8 \\ 6 \end{pmatrix}, x_2 =\begin{pmatrix} 4 \\ -2 \\ 2 \end{pmatrix}, x_3 =\begin{pmatrix} 1 \\ 3 \\ -2 \end{pmatrix} \]

In this case \(\{ x_1 , x_2 , x_3 \}\) is linearly dependent, because if \(c = (-1, 1, -2)^T\) and \(X = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}\), then

\[Xc = \begin{pmatrix}
x_1 & x_2 & x_3
\end{pmatrix} \begin{pmatrix}
  -1  \\
  1 \\
  -2
\end{pmatrix} = -1 \begin{pmatrix}
  2  \\
  -8 \\
  6
\end{pmatrix} + 1
\begin{pmatrix}
  4  \\
  -2 \\
  2
\end{pmatrix} - 2 \begin{pmatrix}
  1  \\
  3 \\
  -2
\end{pmatrix} =
 \begin{pmatrix}
  -1*2 + 1*4 - 2*1  \\
  -1*(-8) + 1*(-2) - 2*3 \\
  -1*6 + 1*2 - 2*(-2)
\end{pmatrix} =
 \begin{pmatrix}
  0  \\
  0 \\
  0
\end{pmatrix}
\]
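
As a quick numerical check (values as in Example M.5.2), multiplying the matrix whose columns are \(x_1, x_2, x_3\) by \(c = (-1, 1, -2)\) gives the zero vector, and the matrix has rank 2, confirming the dependence.

```python
import numpy as np

# Columns are x1, x2, x3 from Example M.5.2.
X = np.array([[ 2,  4,  1],
              [-8, -2,  3],
              [ 6,  2, -2]])
c = np.array([-1, 1, -2])

print(X @ c)                     # [0 0 0] -> x1, x2, x3 are linearly dependent
print(np.linalg.matrix_rank(X))  # 2 (less than the number of columns)
```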

Norm of a Vector or Matrix

The norm of a vector or matrix is a measure of its "length". For a vector x with n entries, the most common norm is the \(\mathbf{L_2}\) norm, or Euclidean norm. It is defined as

\[ \| x \| = \| x \|_2 = \sqrt{ \sum_{i=1}^{n} x_i^2 } \]

Another common vector norm is the \(\mathbf{L_1}\) norm, also called the Manhattan or taxicab norm.

\[ \| x \|_1 = \sum_{i=1}^{n} |x_i| \]

A third common vector norm is the maximum norm, also called the infinity norm.

\[ \| x \|_\infty = max( |x_1| ,|x_2|, \ldots ,|x_n|) \]

The most commonly used matrix norm is the Frobenius norm. For an m × n matrix A, the Frobenius norm is defined as:

\[ \| A \| = \| A \|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} a_{i,j}^2 } \]
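
All of these norms can be computed with NumPy's np.linalg.norm through its ord argument; the sketch below uses an arbitrary vector and matrix for illustration.

```python
import numpy as np

x = np.array([3, -4, 12])
print(np.linalg.norm(x))              # L2 (Euclidean) norm: 13.0
print(np.linalg.norm(x, ord=1))       # L1 (Manhattan/taxicab) norm: 19.0
print(np.linalg.norm(x, ord=np.inf))  # maximum (infinity) norm: 12.0

A = np.array([[1, 2],
              [3, 4]])
print(np.linalg.norm(A, ord='fro'))   # Frobenius norm: sqrt(30), about 5.477
```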


Quadratic Form of a Vector

The quadratic form of an n × 1 vector x associated with an n × n matrix A is

\[ x^T A x = \sum_{i = 1}^{n} \sum_{j=1}^{n} a_{i,j} x_i x_j \]
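
For example, the quadratic form can be evaluated directly as a matrix product; the sketch below uses a small arbitrary symmetric matrix and checks the result against the double sum.

```python
import numpy as np

A = np.array([[2, 1],
              [1, 3]])
x = np.array([1, -2])

# Quadratic form x^T A x computed as a matrix product ...
print(x @ A @ x)                                   # 10
# ... and as the double sum over a_{i,j} * x_i * x_j.
print(sum(A[i, j] * x[i] * x[j]
          for i in range(2) for j in range(2)))    # 10
```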

A matrix A is Positive Definite if for any non-zero vector x, the quadratic form of x and A is strictly positive. In other words, \(x^T A x > 0\) for all nonzero x.

A matrix A is Positive Semi-Definite or Non-negative Definite if for any non-zero vector x, the quadratic form of x and A is non-negative. In other words, \(x^T A x \geq 0\) for all non-zero x.

Similarly, a matrix A is Negative Definite if \(x^T A x < 0\) for any non-zero vector x, and Negative Semi-Definite or Non-positive Definite if \(x^T A x \leq 0\) for any non-zero vector x.
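
For a symmetric matrix, definiteness can be checked from the signs of its eigenvalues (all positive means positive definite, all non-negative means positive semi-definite, and so on). The sketch below uses this standard criterion; the helper name and example matrices are illustrative choices, not part of the text.

```python
import numpy as np

def definiteness(A, tol=1e-12):
    # Classify a symmetric matrix by the signs of its eigenvalues.
    eig = np.linalg.eigvalsh(A)
    if np.all(eig > tol):
        return "positive definite"
    if np.all(eig >= -tol):
        return "positive semi-definite"
    if np.all(eig < -tol):
        return "negative definite"
    if np.all(eig <= tol):
        return "negative semi-definite"
    return "indefinite"

print(definiteness(np.array([[2, 1], [1, 3]])))  # positive definite
print(definiteness(np.array([[1, 1], [1, 1]])))  # positive semi-definite
print(definiteness(np.array([[0, 1], [1, 0]])))  # indefinite
```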

