A.2. Inverse Matrices - JulTob/Mathematics GitHub Wiki
The Tale of Inverse Matrices: Unlocking the Mystery of Reversibility 🎬
In the vast universe of mathematics, not everything moves forward without looking back. Sometimes, you need to reverse the course, to undo what’s been done. And in the world of matrices, that’s when Inverse Matrices step onto the stage.
The Conflict: A System in Distress 🌌
Our story begins with a system—a set of interrelated equations that seem to hold the fabric of a mathematical universe together. Each equation ties unknowns—$x$, $y$, $z$—into a web of connections, but something is off. The system is stuck, trapped by complexity, its variables tangled in knots that no one can untie.
\left\{
\begin{matrix}
a_{11}x + a_{12}y + a_{13}z = C_1 \\
a_{21}x + a_{22}y + a_{23}z = C_2 \\
a_{31}x + a_{32}y + a_{33}z = C_3
\end{matrix}
\right.
This system of equations holds the key to unlocking the values of $x$, $y$, and $z$. But how do we solve it? Direct approaches have failed. Enter the inverse matrix, the unsung hero with a unique gift: the power of reversal.
The Quest: In Search of Reversibility 🎒🗺️
The challenge now is to find the Inverse Matrix. This is no ordinary calculation. Only square matrices—those with equal numbers of rows and columns—have a shot at being reversed. Our hero must be chosen carefully:
A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}
But there's a catch: Not all matrices can be reversed. To be worthy of an inverse, the matrix must possess something crucial: a non-zero determinant. It’s the matrix’s passport to inversion, and without it, there is no turning back.
Det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})
If the determinant is zero, the matrix is trapped in a state of singularity, doomed to remain unsolved. But if the determinant is non-zero, the journey towards the inverse begins.
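The $3\times 3$ determinant formula above can be written directly as code. A minimal sketch; `det3` is an illustrative name, not from this page:

```python
# Determinant of a 3x3 matrix by cofactor expansion along the first row,
# term by term as in the Det(A) formula above.
def det3(a):
    return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
          - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
          + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(det3(A))  # → 25, nonzero, so this A has an inverse
```

A nonzero result is the "passport to inversion" described above; a zero result means the matrix is singular.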
The Hero’s Power: Reversing the Matrix 🛡️⚔️
When the matrix’s determinant checks out, the real magic happens. The Inverse Matrix swoops in, with its one-of-a-kind ability to reverse the system. The equations that once seemed impossible to untangle can now be systematically unraveled.
The Inverse Matrix $A^{-1}$ is the key to solving the system, found using a carefully orchestrated dance of cofactors and determinants. The calculations are intricate, but the reward is immense: the power to recover the unknowns.
For a 2×2 matrix, the inverse is found with a straightforward formula:
A^{-1} = \frac{1}{Det(A)} \cdot
\begin{pmatrix}
a_{22} & -a_{12} \\
-a_{21} & a_{11}
\end{pmatrix}
This formula allows us to flip the matrix on its head, turning a hopeless system into one that yields answers.
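The 2×2 formula above translates into a few lines of code. A hedged sketch; `inv2` is an illustrative name, and it raises an error when the determinant is zero (no inverse exists):

```python
# Inverse of a 2x2 matrix: swap the diagonal, negate the off-diagonal,
# divide everything by the determinant.
def inv2(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if det == 0:
        raise ValueError("matrix is singular: no inverse")
    return [[ a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det,  a[0][0] / det]]

print(inv2([[3, 1], [1, 2]]))  # → [[0.4, -0.2], [-0.2, 0.6]]
```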
The Battle: Gaussian Elimination ⚔️🔥
For larger systems, the path to finding the inverse is more treacherous. This is where Gaussian elimination comes into play, a step-by-step process that reduces the matrix into its simplest form—like stripping away the unnecessary layers of a puzzle until only the core remains.
Step 1: Transform the matrix into its row echelon form:
By systematically eliminating entries, we reduce the matrix to a form in which each row begins with a leading 1 on the diagonal, with zeros below each pivot.
Step 2: Apply the same transformations to the identity matrix:
The identity matrix has 1's along its diagonal and 0's everywhere else, standing ready to help us find the inverse. Each row operation applied to the original matrix is mirrored on the identity matrix.
Step 3: Reach the identity matrix:
Once the original matrix is reduced to the identity matrix, the transformed identity matrix now reveals the inverse matrix we’ve been seeking.
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
\quad \rightarrow \quad
\text{Inverse Matrix of A}
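The three steps above can be sketched in code: augment $A$ with the identity, row-reduce until the left half becomes the identity, and read the inverse off the right half. A minimal sketch with partial pivoting; `gauss_jordan_inverse` is an illustrative name, not from this page:

```python
# Gauss-Jordan inversion: reduce [A | I] until the left block is I;
# the right block is then A^{-1}.
def gauss_jordan_inverse(a):
    n = len(a)
    # Step 2 setup: build the augmented matrix [A | I].
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Step 1: scale the pivot row so it has a leading 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Step 3: eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

print(gauss_jordan_inverse([[3.0, 1.0], [1.0, 2.0]]))
```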
With the inverse matrix in hand, we can now solve the system with ease:
\vec{x} = A^{-1} \cdot \vec{C}
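With the inverse in hand, solving is a single matrix-vector product, $\vec{x} = A^{-1}\vec{C}$. A sketch using NumPy (an assumption, not mentioned on this page):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
C = np.array([1.0, 0.0])

# x = A^{-1} · C recovers the unknowns in one step.
x = np.linalg.inv(A) @ C
print(x)  # x1 = 2/5, x2 = -1/5
```

In numerical practice `np.linalg.solve(A, C)` is preferred over forming the inverse explicitly, but the inverse makes the reversal idea visible.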
Resolution: Solving the System 🎉
The inverse matrix has done its job. What once seemed like an impossible knot of equations is now a neatly resolved solution, with each unknown falling into place.
Through the power of inversion, we’ve turned a problematic system into a solvable one. The inverse matrix, though often overlooked, is the true hero of this story.
The Legacy: Why Inverses Matter 🏆
In the grand scheme of things, inverse matrices are the key to unlocking a vast array of problems, from solving systems of equations to transforming entire spaces in fields like computer graphics and physics.
They offer a way to undo, to reverse, and to restore order in a chaotic system. And their importance goes beyond mere calculation—they represent the deep interconnectedness of mathematical structures, showing that for every action, there can be a reversal.
Inverse matrices aren’t just tools; they’re the guardians of balance, preserving the harmony in the mathematical universe.
Thus, the journey of the inverse matrix concludes, but its legacy lives on, ensuring that wherever a system gets entangled, there will always be a way to reverse the course and find a solution. 🛡️
Inverse Matrices
If two square matrices, $A_n$ and $B_n$, are such that
A_n·B_n = B_n·A_n = I_n
we say that they are *inverse* to each other.
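The definition is easy to check numerically: multiply the two matrices and see whether the product is the identity. A small sketch with plain lists; the names are illustrative:

```python
# Plain-list matrix product: entry (i, j) is the dot product of
# row i of a with column j of b.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[2, 0], [0, 4]]
B = [[0.5, 0], [0, 0.25]]  # candidate inverse of A

print(matmul(A, B))  # → [[1.0, 0.0], [0.0, 1.0]]
```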
CALCULATING INVERSE MATRICES
Formula for 2×2 Matrices
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix} ^{-1}
=
\frac{
\begin{pmatrix}
a_{22} & -a_{12} \\
-a_{21} & a_{11}
\end{pmatrix}
}{a_{11}·a_{22}-a_{12}·a_{21}}
Gaussian Elimination
In addition to finding inverse matrices, Gaussian elimination can also be used to solve linear systems.
Example:
\left\{\begin{matrix}
3x_1 + 1x_2 = 1 \\
1x_1 + 2x_2 = 0
\end{matrix}\right.
\begin{pmatrix}
3 & 1 & | & 1 \\
1 & 2 & | & 0
\end{pmatrix}
\left\{\begin{matrix}
6x_1 + 2x_2 = 2 \\
1x_1 + 2x_2 = 0
\end{matrix}\right.
\begin{pmatrix}
6 & 2 & | & 2 \\
1 & 2 & | & 0
\end{pmatrix}
\left\{\begin{matrix}
5x_1 + 0x_2 = 2 \\
1x_1 + 2x_2 = 0
\end{matrix}\right.
\begin{pmatrix}
5 & {\color{#00e0f7}0} & | & 2 \\
1 & 2 & | & 0
\end{pmatrix}
\left\{\begin{matrix}
5x_1 + 0x_2 = 2 \\
5x_1 + 10x_2 = 0
\end{matrix}\right.
\begin{pmatrix}
{\color{#00cfff} 5} & 0 & | & 2 \\
{\color{#00cfff} 5} & 10 & | & 0
\end{pmatrix}
\color{#00cfff}
\left\{\begin{matrix}
5x_1 + 0x_2 = 2 \\
0x_1 + 10x_2 = -2
\end{matrix}\right.
\: \: \: | \: \: \:
\begin{pmatrix}
5 & 0 & | & 2 \\
{\color{#00bbff} 0} & 10 & | & -2
\end{pmatrix}
\color{#00bbff}
\left\{\begin{matrix}
1x_1 + 0x_2 = 2/5 \\
0x_1 + 1x_2 = -1/5
\end{matrix}\right.
\: \: \: | \: \: \:
\begin{pmatrix}
1 & 0 & | & 2/5 \\
0 & 1 & | & -1/5
\end{pmatrix}
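Substituting the solution $x_1 = 2/5$, $x_2 = -1/5$ back into the original system confirms the elimination steps above. A quick check; `Fraction` keeps the arithmetic exact:

```python
from fractions import Fraction

x1, x2 = Fraction(2, 5), Fraction(-1, 5)
print(3 * x1 + 1 * x2)  # → 1
print(1 * x1 + 2 * x2)  # → 0
```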
Gaussian elimination works by driving the matrix, step by step, toward the Identity Matrix.
If we apply the same set of transformations to the Identity Matrix we will find the Inverse of the original matrix.
Example:
\color{#73ccff}
\begin{pmatrix}
3 & 1 & | & 1 & 0 \\
1 & 2 & | & 0 & 1
\end{pmatrix}
\color{#86c1ff}
\begin{pmatrix}
6 & 2 & | & 2 & 0 \\
1 & 2 & | & 0 & 1
\end{pmatrix}
\color{#a4b3ff}
\begin{pmatrix}
5 & 0 & | & 2 & -1 \\
1 & 2 & | & 0 & 1
\end{pmatrix}
\color{#c5a2f6}
\begin{pmatrix}
5 & 0 & | & 2 & -1 \\
5 & 10 & | & 0 & 5
\end{pmatrix}
\color{#e290de}
\begin{pmatrix}
5 & 0 & | & 2 & -1 \\
0 & 10 & | & -2 & 6
\end{pmatrix}
\begin{pmatrix}
1 & 0 & | & 2/5 & -1/5 \\
0 & 1 & | & -1/5 & 3/5
\end{pmatrix}
So we have that for
\color{#ff7297}
A = \begin{pmatrix}
3 & 1 \\
1 & 2
\end{pmatrix}
\quad \text{then} \quad
A^{-1} = \begin{pmatrix}
2/5 & -1/5 \\
-1/5 & 3/5
\end{pmatrix}
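Multiplying $A$ by the computed $A^{-1}$ should give the identity, confirming the result above. A quick check using exact fractions:

```python
from fractions import Fraction as F

A    = [[F(3), F(1)], [F(1), F(2)]]
Ainv = [[F(2, 5), F(-1, 5)], [F(-1, 5), F(3, 5)]]

# Entry (i, j) of the product is row i of A dotted with column j of Ainv.
product = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)  # → [[Fraction(1, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(1, 1)]]
```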
Cofactor Method
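The cofactor (adjugate) method builds the inverse entrywise: $A^{-1} = \operatorname{adj}(A)/\det(A)$, where entry $(i,j)$ of the adjugate is the $(j,i)$ cofactor of $A$. A minimal 3×3 sketch; `cofactor_inverse_3x3` is an illustrative name, and fractions keep the result exact:

```python
from fractions import Fraction

def cofactor_inverse_3x3(a):
    # 2x2 minor obtained by deleting row i and column j of a.
    def minor(i, j):
        rows = [r for r in range(3) if r != i]
        cols = [c for c in range(3) if c != j]
        m = [[a[r][c] for c in cols] for r in rows]
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]

    # Determinant by cofactor expansion along the first row.
    det = sum((-1) ** j * a[0][j] * minor(0, j) for j in range(3))
    if det == 0:
        raise ValueError("matrix is singular")
    # Entry (i, j) of the inverse is the (j, i) cofactor over det
    # (the transpose is what turns the cofactor matrix into the adjugate).
    return [[Fraction((-1) ** (i + j) * minor(j, i), det) for j in range(3)]
            for i in range(3)]

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]
print(cofactor_inverse_3x3(A))
```

The 2×2 formula in the previous section is the smallest case of this method: the "minors" there are single entries.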
Invertibility
A matrix is invertible if and only if its determinant is nonzero.
\color{#ff6e6e}
Det(
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
) =
a_{11}·a_{22}-a_{12}·a_{21}
\color{#ff8164}
Det(
A
) = 0 ⟺ ∄A^{-1}
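The determinant test above in code: a 2×2 matrix has an inverse exactly when $a_{11}a_{22} - a_{12}a_{21}$ is nonzero. `is_invertible_2x2` is an illustrative name:

```python
# Invertibility check: nonzero determinant means an inverse exists.
def is_invertible_2x2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0] != 0

print(is_invertible_2x2([[3, 1], [1, 2]]))  # → True  (det = 5)
print(is_invertible_2x2([[2, 4], [1, 2]]))  # → False (det = 0: rows are proportional)
```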