# Matrix Fun

$\begin{pmatrix}2 & 1\\ 1 & 2 \end{pmatrix}$

$\left[ \begin{array}{cccc} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{array} \right]$

$\left[ \begin{array}{cccc} 1 & 0 & -3 & 5 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right]$

# 1/14/08

Identity Matrix (pg 22)

Rotation Matrices (pg 23)

# 1/23/08

Elementary row operations yield equivalent augmented matrices of a system of linear equations.

## Goal

Transform an augmented matrix into reduced row echelon form so as to easily see the solutions.

## Row Echelon Form

- Each non-zero row is above every zero row
- Leading entry of a non-zero row lies in a column to the right of that of the leading entry of any preceding row
- If a column contains the leading entry of a row, then all entries below it in that column are 0. (follows from #2)

## Reduced Row Echelon Form

- If a column contains the leading entry of a row, then all other entries of that column (below AND above) are 0
- Leading entry of each non-zero row is 1

Express leading variables x_{1},x_{3},x_{5} (basic variables) in terms of the others (free variables).

Basic variables correspond to the columns that contain a leading entry (pivot columns)
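The conditions above can be encoded directly as a self-check. A minimal sketch in Python (representing a matrix as a list of rows is my own convention, not from the notes):

```python
def is_rref(M):
    """Check the reduced row echelon form conditions listed above for a
    matrix M given as a list of rows."""
    leads = []  # column index of each row's leading (first non-zero) entry
    for row in M:
        leads.append(next((j for j, x in enumerate(row) if x != 0), None))
    # every non-zero row must come before every zero row
    seen_zero = False
    for lead in leads:
        if lead is None:
            seen_zero = True
        elif seen_zero:
            return False
    nz = [l for l in leads if l is not None]
    # leading entries must move strictly to the right, row by row
    if nz != sorted(set(nz)):
        return False
    for i, lead in enumerate(leads):
        if lead is None:
            continue
        if M[i][lead] != 1:  # leading entry of each non-zero row is 1
            return False
        # all other entries of a pivot column (below AND above) are 0
        if any(M[r][lead] != 0 for r in range(len(M)) if r != i):
            return False
    return True
```

For example, `is_rref([[1, 0, 2], [0, 1, 3]])` holds, while a matrix whose leading entries do not move rightward, or whose leading entry is not 1, fails.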

# 1/25/08

Every matrix can be transformed into 1 and only 1 matrix in reduced row echelon form by a sequence of elementary row operations

If we have Ax=b

1) Write the augmented matrix [A b]

2) Find the reduced row echelon form [Rc] of [Ab]

3) If [Rc] contains a row in which the only non-zero entry lies in the last column, then there's no solution (inconsistent).

Otherwise, there's at least 1 solution.

Then we write the equations corresponding to [Rc] and solve to obtain a general solution of Ax=b.
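The three steps can be sketched in Python. The `rref` helper below is a hypothetical implementation (not from the notes), using exact `Fraction` arithmetic; inconsistency is detected by a pivot in the last column of the augmented matrix:

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows) and the
    list of its pivot columns."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivots = []
    r = 0
    for c in range(cols):
        # find a row at or below r with a non-zero entry in column c
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]             # interchange rows
        M[r] = [x / M[r][c] for x in M[r]]  # scale so the leading entry is 1
        for i in range(rows):               # zero out the rest of the column
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    return M, pivots

# Augmented matrix [A b] for the system: x + y = 3, x - y = 1
R, pivots = rref([[1, 1, 3], [1, -1, 1]])
# a pivot in the last column => a row (0 ... 0 | c), c != 0 => inconsistent
inconsistent = (len(R[0]) - 1) in pivots
```

Here `R` comes out as [[1, 0, 2], [0, 1, 1]], i.e. x = 2, y = 1, and the system is consistent.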

## Gaussian Elimination

Procedure for obtaining the reduced row echelon form of a matrix

$\begin{pmatrix}0 & 0 & 2 & 4\\ 0 & 1 & -1 & 1 \\ 0 & 6 & 0 & -6 \end{pmatrix}$

1) Determine the leftmost non-zero column -> pivot column

Top position in a pivot column is a pivot position

2) In a pivot column interchange rows (if necessary) to have a non-zero entry in the pivot position.

r_{1} <-> r_{2}

$\begin{pmatrix}0 & 1 & -1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 6 & 0 & -6 \end{pmatrix}$

3) Zero out all entries below pivot position.

-6r_{1} + r_{3} -> r_{3}

$\begin{pmatrix}0 & 1 & -1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 0 & 6 & -12\end{pmatrix}$

4) Repeat procedure 1-3 with next pivot column and pivot position.

2 (3rd col, 2nd row) becomes the new pivot position.

-3r_{2} + r_{3} -> r_{3}

$\begin{pmatrix}0 & 1 & -1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 0 & 0 & -24\end{pmatrix}$
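The row operations in steps 2-4 can be replayed numerically. A sketch with numpy (note the arithmetic gives (0, 0, 0, -24) in the last row: -3·(0,0,2,4) + (0,0,6,-12)):

```python
import numpy as np

A = np.array([[0., 0., 2., 4.],
              [0., 1., -1., 1.],
              [0., 6., 0., -6.]])

A[[0, 1]] = A[[1, 0]]   # step 2: r1 <-> r2
A[2] += -6 * A[0]       # step 3: -6 r1 + r3 -> r3
A[2] += -3 * A[1]       # step 4: -3 r2 + r3 -> r3
```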

# 1/28/08

Nothing interesting. Sick. =(

# 1/30/08

- Leading variables are basic variables. (x_{1},x_{3},x_{4})

Others are free variables (x_{2},x_{5}); they can take any value, so a system can have 0, 1, or infinitely many solutions.

- The rank of an m*n matrix A is the number of non-zero rows in the reduced row echelon form of A.

- Nullity of A is n - rank A. For the previous example rank = 3, so nullity = 6 - 3 = 3, where 6 = n is the number of columns.

- Equivalently: rank equals the number of pivot columns of the matrix; nullity equals the number of non-pivot columns.

- If an m*n matrix has rank n, then its reduced row echelon form must be [e_{1} … e_{n}]_{m*n}

If it is square (n*n), then [e_{1} … e_{n}]_{n*n} = I_{n}, the identity matrix

Each basic variable corresponds to a leading entry of the augmented matrix of Ax=b.

The # of basic variables = # of non-zero rows = rank A.

The # of free variables of Ax=b is n - # of basic variables = n - rank A = nullity of A

- Consistency for Ax=b

- # of basic variables = rank A.
- # of free variables = nullity of A.

=> Unique solution **iff (if and only if)** the nullity of A is 0.

Infinitely many solutions **iff** the nullity of A (# of free variables) is positive.
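A rank/nullity computation like the one above can be checked numerically. A sketch with numpy, using a hypothetical 3 x 6 matrix of rank 3 (chosen to match the rank = 3, nullity = 6 - 3 = 3 example):

```python
import numpy as np

# hypothetical matrix, already in reduced row echelon form
A = np.array([[1., 2., 0., 1., 0., 3.],
              [0., 0., 1., 2., 0., 1.],
              [0., 0., 0., 0., 1., 2.]])

rank = np.linalg.matrix_rank(A)  # = number of non-zero rows of the RREF
nullity = A.shape[1] - rank      # nullity = n - rank A
```

Here rank = 3 and nullity = 3, so Ax=b (when consistent) has infinitely many solutions.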

# 02/01/08

## Consistency for Ax=b

- # of basic variables = rank A
- # of free variables = nullity of A (not [Ab])

## Span of a set of vectors

For the non-empty set S={u_{1} …. u_{k}} of vectors in R^{n}, the **span** of S (Span S) is the set of all linear combinations of u_{1} … u_{k} in R^{n}.

Span S = Span {u_{1} … u_{k}}

Span{0}={0}

# 02/04/08

Test on Wed up to and including 1.4

Span S = set of all linear combinations of S={u_{1} … u_{k}}

Span{0}={0}

S_{1} = {$\begin{pmatrix}1 \\ -1 \end{pmatrix}$}

Span S_{1} = $\{ a\begin{pmatrix}1 \\ -1 \end{pmatrix} \}$ where a is a scalar

Span S_{3} = $\{ a\begin{pmatrix}1 \\ -1 \end{pmatrix} + b\begin{pmatrix} -2 \\ 2 \end{pmatrix} + c\begin{pmatrix} 2 \\ 1 \end{pmatrix} \}$

S_{3}={u_{1},u_{2},u_{3}} with u_{2} = -2u_{1}, and Span S_{3} = R^{2}

A vector v belongs to the span of S (S={u_{1} … u_{k}}) if v can be expressed as a linear combination of the vectors u_{1} … u_{k} in S.

Suppose A=[u_{1} … u_{k}] in R^{n}

Then, V is in the span S iff (if and only if) Ax=v is consistent.

x_{1}u_{1} + … + x_{k}u_{k} = v
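This consistency test for span membership can be sketched with numpy (the vectors u_{1}, u_{2}, v below are hypothetical, not from the notes):

```python
import numpy as np

# hypothetical S = {u1, u2} in R^3; is v in Span S?
u1 = np.array([1., 0., 1.])
u2 = np.array([0., 1., 1.])
A = np.column_stack([u1, u2])    # A = [u1 u2]
v = np.array([2., 3., 5.])

# Ax = v is consistent (v in Span S) iff a least-squares
# solution actually satisfies Ax = v exactly
x, residual, rank, _ = np.linalg.lstsq(A, v, rcond=None)
in_span = np.allclose(A @ x, v)
```

Here v = 2u_{1} + 3u_{2}, so `in_span` is true with coefficients x = (2, 3).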

# 02/08/08

V "belongs to" Span S = {u_{1} … u_{k}}

V is a linear combination of the vectors u_{1} … u_{k}.

- If A = [u_{1} … u_{k}], then v is in Span S if and only if Ax=v is consistent.

x_{1}u_{1} + … + x_{k}u_{k} = v

Is v = $\begin{pmatrix}3 \\ 0 \\ -5 \\ 1 \end{pmatrix}$ in Span S?

# 02/13/08

If one of the u's is 0, then the set is always linearly dependent.

The set {u_{1} .. u_{k}} is linearly dependent iff there exists a non-zero solution of Ax=0 (where A = [u_{1} … u_{k}])

A = $\begin{pmatrix}1 & 1 & 1 & 1 \\ 2 & 0 & 4 & 2 \\ 1 & 1 & 1 & 3\end{pmatrix}$

For an m x n matrix A the following are equivalent:

- Columns of A are linearly independent
- Ax=b has at most 1 solution for each b in R^{m}
- Nullity of A is 0 (no free variables)
- Rank of A = number of columns of A
- Columns of the RRE form of A are distinct standard vectors in R^{m}
- Only solution of Ax=0 is x=0
- There is a pivot position in each column of A.

For linear independence/dependence we want to determine whether or not x=0 is the only solution to Ax=0 (the homogeneous equation)

If there exist free variables => infinitely many solutions => linearly dependent

Vectors u_{1} … u_{k} in R^{n} are linearly dependent iff u_{1}=0 or there exists i>=2 such that u_{i} is a linear combination of the preceding vectors u_{1} .. u_{i-1}
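A sketch of the dependence test with numpy (hypothetical vectors; Ax=0 has a non-zero solution exactly when rank A is less than the number of columns):

```python
import numpy as np

# hypothetical u1, u2, u3 in R^3; dependent, since u3 = u1 + u2
u1 = np.array([1., 0., 0.])
u2 = np.array([0., 1., 0.])
u3 = np.array([1., 1., 0.])
A = np.column_stack([u1, u2, u3])

# Ax = 0 has a non-zero solution iff rank A < number of columns
dependent = np.linalg.matrix_rank(A) < A.shape[1]
```

Here rank A = 2 < 3 columns, so the set is linearly dependent, consistent with u_{3} being a linear combination of the preceding vectors.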

# 3/10/08

For a linear transformation T, w is in the range of T iff w (output) = T(v) for some input v

The range of a linear transformation equals the span of the columns of its standard matrix A = [T(e_{1}) … T(e_{n})]

A function f: R^{n} -> R^{m} is onto if its range is all of R^{m} (range = co-domain), i.e. if every vector in R^{m} is an image under f
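Since the range is the span of the columns of the standard matrix, a linear transformation is onto iff rank A = m. A sketch with a hypothetical matrix:

```python
import numpy as np

# standard matrix of a hypothetical linear map T: R^3 -> R^2
A = np.array([[1., 0., 2.],
              [0., 1., 3.]])

# T is onto iff the columns of A span R^m, i.e. rank A = m
onto = np.linalg.matrix_rank(A) == A.shape[0]
```

Here rank A = 2 = m, so this T maps onto all of R^2.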