Example of Gauss Elimination, and Matrix Rank/Inverse#
Chemical Engineering example solved with Gauss elimination#
Three tanks of water are attached in series. All tanks have the same cross-sectional area \(A\). The flow rate through the valves is a function of the height of the water in the tanks (equations below)
Mass balances#
Balance on tank 1 at steady state (accum=0)
To prepare for the augmented matrix, we’ll keep unknowns on the left hand side, and constants on the right hand side!
Balance on tank 2 at steady state (accum=0)
Balance on tank 3 at steady state (accum=0)
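With valve flows of the form \(F_1=(h_1-h_2)/R_1\), \(F_2=(h_2-h_3)/R_2\), and \(F_3=h_3/R_3\) (an assumed form, consistent with the matrix formed below), the three steady-state balances are:

\[\begin{aligned}
\text{Tank 1:}\quad & F_0 - \frac{h_1-h_2}{R_1} = 0 \quad\Rightarrow\quad h_1-h_2 = F_0R_1\\
\text{Tank 2:}\quad & \frac{h_1-h_2}{R_1} - \frac{h_2-h_3}{R_2} = 0\\
\text{Tank 3:}\quad & \frac{h_2-h_3}{R_2} - \frac{h_3}{R_3} = 0
\end{aligned}\]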
Convert to math problem equation#
Solve system of equations#
For the situation where \(A=5\) m\(^2\), \(F_0=5\) m\(^3\)/hr, \(R_1=2\) hr/m\(^2\), \(R_2=1\) hr/m\(^2\), \(R_3=1\) hr/m\(^2\), plug into matrix:
Form \([\arr{A}|\vec{b}]\)
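With the values above (and clearing fractions), this gives:

\[[\arr{A}|\vec{b}] = \left[\begin{array}{ccc|c} 1 & -1 & 0 & 10\\ 1 & -3 & 2 & 0\\ 0 & 1 & -2 & 0 \end{array}\right]\]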
\(R_2=R_2-R_1\)
\(R_2=-R_2/2\)
\(R_3=R_2-R_3\)
\(R_2=R_2+R_3\)
\(R_1=R_1+R_2\)
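Carrying out these row operations gives the reduced form:

\[\left[\begin{array}{ccc|c} 1 & 0 & 0 & 20\\ 0 & 1 & 0 & 10\\ 0 & 0 & 1 & 5 \end{array}\right]\]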
Math solved!
Solve engineering problem#
\(h_1=20\). What units? Remember \(h\sim FR=(\text{m}^3/\text{hr})(\text{hr}/\text{m}^2)=\text{m}\)
\(h_1=20\) m, \(h_2=10\) m, \(h_3=5\) m.
Can this solution make physical sense? Yes! All of the heights are positive, and they decrease from tank 1 to tank 3, consistent with water flowing downstream.
Rank of a matrix#
The rank of a matrix is the number of non-zero rows after Gauss elimination
This tells us about the linear independence of the equations in our system
Example#
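Consider the matrix (the same one used in the numpy rank example later in this lecture):

\[\arr{A} = \begin{bmatrix} -3 & 1 & -1\\ 1 & 0 & 1\\ -2 & 2 & 2 \end{bmatrix}\]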
After Gauss elimination
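\[\begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 2\\ 0 & 0 & 0 \end{bmatrix}\]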
There are two nonzero rows, so rank\((\arr{A})=2\).
This also means that only two of these equations are linearly independent!
The reduced echelon matrix tells us that the rows \([1,0,1]\) (i.e. \(x_1+x_3\)) and \([0,1,2]\) (i.e. \(x_2+2x_3\)) form a basis of the system!
Definition of a basis: I can form any row in the original matrix as a linear combination of the basis vectors.
\([-3,1,-1]\) can be formed from \(c_1[1,0,1]+c_2[0,1,2]\) with \(c_1=-3\) and \(c_2=1\)
Example 2#
Let’s say we get an augmented matrix after Gauss elimination of:
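For example, any augmented matrix whose last row reads \(0=c\) with \(c\neq0\) has this property, such as:

\[\arr{\tilde{A}} = \left[\begin{array}{ccc|c} 1 & 0 & 1 & 2\\ 0 & 1 & 2 & 3\\ 0 & 0 & 0 & 5 \end{array}\right]\]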
rank\((\arr{\tilde{A}})=3\)
rank\((\arr{A})=2\)
No solution!!
For an m=n system like the 3x3 system here:
If rank\((\arr{A})=\)rank\((\arr{\tilde{A}})=n\), one unique solution!
If rank\((\arr{A})<\)rank\((\arr{\tilde{A}})=n\), no solution!
If rank\((\arr{A})=\)rank\((\arr{\tilde{A}})<n\), infinitely many solutions!
Inner (dot) products of vectors#
You’ve probably done dot products in your calculus class, but let’s go ahead and review them.
This is a special case of matrix multiplication that occurs very often
If \(\vec{a}\) and \(\vec{b}\) are vectors, both with the same number of elements, then matrix multiplication of the row vector \(\vec{a}^T\) with the column vector \(\vec{b}\) yields a scalar (a 1x1 matrix) called the inner or dot product
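In symbols:

\[\vec{a}\cdot\vec{b} = \vec{a}^T\vec{b} = \sum_{i=1}^{n} a_ib_i = a_1b_1 + a_2b_2 + \dots + a_nb_n\]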
Example#
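For instance (an illustrative pair of vectors):

\[[1,2,3]\cdot[4,5,6] = (1)(4)+(2)(5)+(3)(6) = 32\]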
Matrix multiplication is just a bunch of dot products. For example, in \(\arr{C}=\arr{A}\arr{B}\), every entry of \(\arr{C}\) is just a dot product of a row and column vector
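To see this concretely, here is a short numpy check (with small illustrative matrices):

import numpy as np

#Two small example matrices
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = A @ B

#Entry (0,1) of C is the dot product of row 0 of A with column 1 of B
print(C[0, 1])            # 22
print(A[0, :] @ B[:, 1])  # also 22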
Diagonal Matrices#
Definition: Square matrices that have non-zero entries only on the diagonal. Any entry above or below the diagonal is zero
Example:
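\[\begin{bmatrix} 2 & 0 & 0\\ 0 & -3 & 0\\ 0 & 0 & 5 \end{bmatrix}\]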
Identity (unit) matrix#
A square matrix \(\arr{I}\) where every element in the diagonal is 1, and all other elements are 0.
We can denote the size with \(\arr{I}_n\)
Special property! Multiplying a matrix \(\arr{A}\) by the same size identity matrix \(\arr{I}\) yields back \(\arr{A}\)!
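For example, in numpy (np.eye builds an identity matrix; here we reuse the matrix from the inverse example below):

import numpy as np

A = np.array([[2, -6], [4, -2]])
I = np.eye(2)  #2x2 identity matrix

#Both products give back A
print(I @ A)
print(A @ I)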
Scalar matrix#
All non-zero entries of a diagonal matrix \(\arr{S}\) are equal to some number \(c\). In other words, \(\arr{S}=c\arr{I}\)
From the identity matrix property above, \(\arr{S}\arr{A}=c\arr{I}\arr{A}=c\arr{A}\) and likewise \(\arr{A}\arr{S}=\arr{A}c\arr{I}=c\arr{A}\)
Matrix inversion#
A real number \(a\) has a multiplicative inverse if there exists a \(b\) such that \(ab=1\)
Any non-zero \(a\) has a multiplicative inverse \(b=1/a\).
We can generalize this idea to matrices!
A square matrix \(\arr{A}\) is invertible or non-singular if there is a matrix \(\arr{B}\) such that \(\arr{A}\arr{B}=\arr{B}\arr{A}=\arr{I}\)
Ex: \(\arr{A}=\begin{bmatrix}2&4\\3&1\end{bmatrix}\) and \(\arr{B}=\begin{bmatrix}-1/10&2/5\\3/10&-1/5\end{bmatrix}\) are inverses of each other because \(\arr{A}\arr{B}=\arr{B}\arr{A}=\arr{I}\)
We say \(\arr{B}\) is the multiplicative inverse of \(\arr{A}\)
If \(\arr{B}\) and \(\arr{C}\) are both inverses of \(\arr{A}\), then \(\arr{B}=\arr{B}\arr{I}=\arr{B}(\arr{A}\arr{C})=(\arr{B}\arr{A})\arr{C}=\arr{I}\arr{C}=\arr{C}\), so the inverse is unique
The inverse of \(\arr{A}\) is denoted \(\arr{A}^{-1}\)
A square matrix is called singular if it doesn’t have an inverse
Computing an inverse#
We already have the tools to compute the inverse!!
For a square matrix \(\arr{A}\), we will augment it with the identity matrix
We will use Gauss elimination to convert \(\arr{A}\) to the identity matrix
The right half of the augmented matrix \([\arr{A}|\arr{I}]\rightarrow[\arr{I}|\arr{A}^{-1}]\) is then the inverse!
Magic! (not really)
Example#
Find \(\arr{A}^{-1}\) for \(\arr{A}=\begin{bmatrix}2&-6\\4&-2\end{bmatrix}\)
Form augmented matrix
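\[[\arr{A}|\arr{I}] = \left[\begin{array}{cc|cc} 2 & -6 & 1 & 0\\ 4 & -2 & 0 & 1 \end{array}\right]\]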
\(R_1=R_1/2\)
\(R_2=R_2-4R_1\)
\(R_2=R_2/10\)
\(R_1=R_1+3R_2\)
\(\arr{A}^{-1}=\begin{bmatrix}-1/10&3/10\\ -1/5&1/10 \end{bmatrix}\)
Quick check!
Let’s do this one with python!
import numpy as np
#Define A and the inverse of A
A = np.array([[2,-6],[4,-2]])
Ainv = np.array([[-1/10,3/10],[-1/5,1/10]])
#Note that this looks very good, but residual rounding errors in the
# computation leave one element VERY close to zero rather than exactly zero.
# For practical purposes, anything less than about 10^-8 is probably the
# same thing as zero in numerical methods
print(A@Ainv)
#Check Ainv*A
print(Ainv@A)
[[ 1.00000000e+00 -5.55111512e-17]
[ 0.00000000e+00 1.00000000e+00]]
[[1.00000000e+00 1.11022302e-16]
[0.00000000e+00 1.00000000e+00]]
Rank, inverses, row echelon form, and solving systems of linear equations in numpy#
Rank of a matrix#
The np.linalg.matrix_rank function calculates the matrix rank of A. For the example above:
A = np.array([[-3,1,-1],[1,0,1],[-2,2,2]])
print(A)
# We can use np.linalg.matrix_rank(A)
print('The rank of A is %d'%np.linalg.matrix_rank(A))
[[-3 1 -1]
[ 1 0 1]
[-2 2 2]]
The rank of A is 2
Let’s adjust the matrix and see if the rank changes (\(a_{33}=2\) to \(a_{33}=3\), i.e. A[2,2] in python indexing)
A = np.array([[-3,1,-1],[1,0,1],[-2,2,3]])
# Calculate the matrix rank of A
print('The rank of A is %d'%np.linalg.matrix_rank(A))
The rank of A is 3
Confusingly, np.rank(A) is not the matrix rank of A; it returns the number of dimensions of the array.
Inverse of a matrix#
Inverses are also easy with np.linalg.inv
A = np.array([[2,-6],[4,-2]])
#Same as the answer we got above!
print(np.linalg.inv(A))
[[-0.1 0.3]
[-0.2 0.1]]
We can ask numpy what the inverse of a singular matrix is, and it will give us an answer, but that answer is garbage.
#Try to get the inverse of a singular matrix
A = np.array([[-3,1,-1],[1,0,1],[-2,2,2]])
#Same as the answer we got above!
Ainv = np.linalg.inv(A)
print(Ainv)
#Check; notice that Ainv*A is not the identity! This is a problem with numerical
# methods; we have to be on top of our game all the time!
Ainv@A
[[-2.25179981e+15 -4.50359963e+15 1.12589991e+15]
[-4.50359963e+15 -9.00719925e+15 2.25179981e+15]
[ 2.25179981e+15 4.50359963e+15 -1.12589991e+15]]
array([[ 1. , 0. , 1. ],
[-2. , 1. , 1. ],
[ 0.5 , 0.25, 0.5 ]])
Gauss elimination#
There’s not a good method in numpy/scipy to do Gauss elimination with numerical methods. You can do it if you use symbolic math packages (sort of like Mathematica, but in python), but these largely defeat the point of using numerical methods!
The closest thing you can do is get a matrix into upper triangular form, which is echelon form, but this only works for square matrices and isn’t really helpful (see https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.lu_factor.html)
We use Gauss elimination to solve linear systems and calculate inverses, and there are better ways to do these.
You can use other packages like sympy (symbolic python) which is a little closer to how a software package like mathematica works. It’s helpful if you really want RREF, but not how I would actually solve a large system of equations in real life. sympy RREF docs
import sympy as sp
# Example from earlier in lecture
A = [[-3, 1, -1],
[1,0,1],
[-2,2,2]]
# Make a sympy matrix
m = sp.Matrix(A)
m_rref, pivots = m.rref() # Compute reduced row echelon form
#notice the pretty latex printing!
m_rref
\(\displaystyle \left[\begin{matrix}1 & 0 & 1\\0 & 1 & 2\\0 & 0 & 0\end{matrix}\right]\)
Solving a system of linear equations#
There are excellent methods to solve a system of linear equations that is determined (square) and full rank (https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html)
solve takes in the matrix \(\arr{A}\) and the vector \(\vec{b}\) in \(\arr{A}\vec{x}=\vec{b}\) and returns \(\vec{x}\)
Example for determined system#
Let’s try the example from above for the three-tank system
A = np.array([[1,-1,0],
[1,-3,2],
[0,1,-2]])
b = np.array([10,0,0])
#Same answer as above!
print(np.linalg.solve(A,b))
[20. 10. 5.]
This works really well for a determined system.
Example for an underdetermined system#
Let’s do one of the examples from lecture 3!
A = np.array([[1,2,1],
[2,-1,1],
[4,3,3],
[3,1,2]])
b = np.array([1,2,4,3])
#Same answer!
print(np.linalg.solve(A,b))
---------------------------------------------------------------------------
LinAlgError Traceback (most recent call last)
<ipython-input-9-fb72b6805d72> in <module>()
7
8 #Same answer!
----> 9 print(np.linalg.solve(A,b))
<__array_function__ internals> in solve(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/numpy/linalg/linalg.py in solve(a, b)
379 a, _ = _makearray(a)
380 _assert_stacked_2d(a)
--> 381 _assert_stacked_square(a)
382 b, wrap = _makearray(b)
383 t, result_t = _commonType(a, b)
/usr/local/lib/python3.7/dist-packages/numpy/linalg/linalg.py in _assert_stacked_square(*arrays)
202 m, n = a.shape[-2:]
203 if m != n:
--> 204 raise LinAlgError('Last 2 dimensions of the array must be square')
205
206 def _assert_finite(*arrays):
LinAlgError: Last 2 dimensions of the array must be square
Notice that np.linalg.solve can’t handle this case. We can still get a solution with np.linalg.lstsq, which finds a least-squares solution: the vector \(\vec{x}\) that minimizes \(||\arr{A}\vec{x}-\vec{b}||\).
#Try least squares solution to this problem
x, res, rank, s = np.linalg.lstsq(A,b)
#First, print the rank of the matrix:
print(rank)
#print the solution
print(x)
# print the residual
print(res)
#Check that this is a solution
print(A@x)
3
[ 1.00000000e+00 -3.61767652e-16 5.06142841e-17]
[4.46770451e-31]
[1. 2. 4. 3.]
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:2: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
So, np.linalg.lstsq does give us a solution, but not all of the solutions; as we saw, there are other solutions that also would have worked. The simplest is [1,0,0]:
A@np.array([1,0,0])
array([1, 2, 4, 3])
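For example, \([3,1,-5]\) is in the null space of \(\arr{A}\) (you can check that \(\arr{A}\,[3,1,-5]^T=\vec{0}\)), so adding any multiple of it to [1,0,0] gives another valid solution:

#[3,1,-5] is a null-space vector of A, so [1,0,0] + t*[3,1,-5] is a
# solution for any scalar t
x2 = np.array([1, 0, 0]) + 2 * np.array([3, 1, -5])
print(A @ x2)  #still [1, 2, 4, 3]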
Once again, numerical methods can work well, but we have to be extra sharp to make sure we understand what they are doing. It’s on you if you use them and they don’t do quite what you were hoping for!