A.0 Basic Matrix Operations - JulTob/Mathematics GitHub Wiki

Introduction

A matrix is a rectangular array of numbers arranged in rows and columns.

Matrices are used to:

  • organize data, and
  • represent linear transformations in mathematics.

This page introduces the basic operations on matrices: addition, scalar multiplication, matrix multiplication, and the transpose, as well as the special identity and zero matrices. Each operation is defined with an example, and we outline fundamental properties (like associativity and distributivity) that will be useful in more advanced topics.

Matrix Addition

Two matrices can be added only if they have the same dimensions (same number of rows and columns). The sum $C = A + B$ is obtained by adding corresponding entries of $A$ and $B$. In symbols, if $A$ and $B$ are both $m\times n$ matrices, then each entry of $C$ is

$$ C_{ij} = A_{ij} + B_{ij} $$

if size(A) = size(B) then
  C := A + B is
    for all i,j
      C(i,j) = A(i,j) + B(i,j)
flowchart TD
    M1(A)
    M2(B)
     e1(("$$a_{ij}$$"))
     e2(("$$b_{ij}$$"))
     ef(("$$a_{ij} + b_{ij}$$"))
    Mf(A+B)
   M1 o-->|ij| e1
   M2 o-->|ij| e2 
   e1 --> ef
   e2 --> ef
   ef --o|ij| Mf
    

Let

$$ A = \begin{pmatrix} 2 & 0 & 5\\ 1 & 6 & 8 \end{pmatrix} $$

and

$$ B = \begin{pmatrix} 3 & 5 & 7\\ 1 & 0 & 2 \end{pmatrix} $$

Both are $2\times 3$ matrices. Their sum is obtained by adding entries at the same position:

$$ A+B = \begin{pmatrix} 2+3 & 0+5 & 5+7\\ 1+1 & 6+0 & 8+2 \end{pmatrix} = \begin{pmatrix} 5 & 5 & 12\\ 2 & 6 & 10 \end{pmatrix} $$

Another example, with $3\times 2$ matrices:

$$
\begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}
+
\begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}
=
\begin{pmatrix} 1+1 & 2+4 \\ 3+2 & 4+5 \\ 5+3 & 6+6 \end{pmatrix}
=
\begin{pmatrix} 2 & 6 \\ 5 & 9 \\ 8 & 12 \end{pmatrix}
$$

Each entry of $A+B$ is the sum of the corresponding entries from $A$ and $B$, as shown above.
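The entrywise rule can be sketched in plain Python, representing a matrix as a list of rows. (This is only an illustrative sketch; the helper name `mat_add` is ours, and real code would typically use a library such as NumPy.)

```python
def mat_add(A, B):
    """Entrywise sum of two matrices given as lists of rows."""
    # Addition is only defined when the dimensions agree.
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "size(A) must equal size(B)"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[2, 0, 5], [1, 6, 8]]
B = [[3, 5, 7], [1, 0, 2]]
print(mat_add(A, B))  # [[5, 5, 12], [2, 6, 10]]
```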

Properties of Addition: Matrix addition shares similar properties with ordinary addition of numbers:

  • Commutative:

    • $A + B = B + A$.
  • Associative:

    • $(A + B) + C = A + (B + C)$.
  • Additive Identity:

    • There is a zero matrix $0$ (all entries 0) such that $A + 0 = A$.
    • For example, the $2\times3$ zero matrix:

$$ 0_{2\times3} = \begin{pmatrix} 0&0&0 \\ 0&0&0 \end{pmatrix} $$

  • Additive Inverse (Negation):

    • Every matrix $A$ has an inverse under addition, namely $-A$, such that $A + (-A) = 0$. (The matrix $-A$ is obtained by negating each entry of $A$.)
    • Entrywise, the negative $C = -A$ is given by

$$ C_{ij} = -A_{ij} $$

Subtraction

In general, subtraction is defined by $A - B = A + (-B) = C$, so that

$$ c_{ij} = a_{ij} - b_{ij} = a_{ij} + (-b_{ij}) $$
$$
\begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}
-
\begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}
=
\begin{pmatrix} 1-1 & 2-4 \\ 3-2 & 4-5 \\ 5-3 & 6-6 \end{pmatrix}
=
\begin{pmatrix} 0 & -2 \\ 1 & -1 \\ 2 & 0 \end{pmatrix}
$$
 

These properties mean that matrices (of the same size) form an abelian group under addition: you can reorder or regroup sums without changing the result, 0 acts like “zero,” and each matrix has an additive inverse.


Scalar Multiplication

Definition: A matrix can be multiplied by a scalar (a single number). If $k$ is a scalar and $A$ is a matrix, the scalar multiple $kA$ is computed by multiplying each entry of $A$ by $k$:

$$ (kA)_{ij} = k \cdot A_{ij} $$

The result $kA$ has the same dimensions as $A$.

C := α·A is
   for all i,j
     C(i,j) = α·A(i,j)

Example (Scalar multiplication): Let

$$ A = \begin{pmatrix}2 & 0 & 5\\ 1 & 6 & 8\end{pmatrix} $$

and let $k = 3.5$. Then

$$ kA = 3.5 \begin{pmatrix} 2 & 0 & 5\\ 1 & 6 & 8 \end{pmatrix} =
\begin{pmatrix} 3.5 \cdot 2 & 3.5 \cdot 0 & 3.5 \cdot 5\\ 3.5 \cdot 1 & 3.5 \cdot 6 & 3.5 \cdot 8 \end{pmatrix} = \begin{pmatrix} 7 & 0 & 17.5\\ 3.5 & 21 & 28 \end{pmatrix} $$

as each entry of $A$ is multiplied by 3.5.

Another example:

$$
10 \cdot
\begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}
=
\begin{pmatrix} 10 \cdot 1 & 10 \cdot 2 \\ 10 \cdot 3 & 10 \cdot 4 \\ 10 \cdot 5 & 10 \cdot 6 \end{pmatrix}
=
\begin{pmatrix} 10 & 20 \\ 30 & 40 \\ 50 & 60 \end{pmatrix}
$$

Properties of Scalar Multiplication: Multiplying matrices by scalars has properties analogous to real-number arithmetic:

  • Distributive (over matrix addition): $k(A + B) = kA + kB$.
  • Distributive (over scalar addition): $(k + \ell)A = kA + \ell A$, where $k,\ell$ are scalars.
  • Associative (with scalars): $k(\ell A) = (k\ell)A$. (First multiplying $A$ by $\ell$, then by $k$, is the same as multiplying by the product $k\ell$ in one step.)
  • Scalar Identity: $1 \cdot A = A$. Multiplying by 1 leaves any matrix unchanged.

These ensure that scaling a matrix behaves consistently. For instance, $2(A+B)$ gives the same result as $2A + 2B$, and so on.
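These two operations can be sketched together in plain Python to check the distributive property directly. (An illustrative sketch only; the helper names `mat_add` and `scalar_mul` are ours, not part of the wiki.)

```python
def mat_add(A, B):
    """Entrywise sum of two same-sized matrices (lists of rows)."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[2, 0, 5], [1, 6, 8]]
B = [[3, 5, 7], [1, 0, 2]]

# Distributivity over matrix addition: k(A + B) = kA + kB
assert scalar_mul(2, mat_add(A, B)) == mat_add(scalar_mul(2, A), scalar_mul(2, B))
print(scalar_mul(3.5, A))  # [[7.0, 0.0, 17.5], [3.5, 21.0, 28.0]]
```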

Properties summary:

Associative $A+B+C = (A+B)+C = A+(B+C)$
Commutative $A+B = B+A$
Neutral $A+𝟘 = A$
Negative $A+(-A) = 𝟘$, where $-A = -1·A$
Scalar-Distributive $α·(A+B) = α·A + α·B$
Matrix-Distributive $(α+β)·A = α·A + β·A$
Scalar-Associative $α·β·A = (α·β)A = α·(β·A)$
Scalar Unit $1·A = A$

Matrix Multiplication

Matrix multiplication could have been defined in many ways (the entrywise rule $c_{rc} = a_{rc}·b_{rc}$, the Hadamard product, is one alternative), but the standard definition is chosen because it carries a very useful meaning: it corresponds to composing linear transformations. Applying one transformation after another, each moving and stretching space in its own way, is the same as applying the single transformation given by the matrix product.

The product of two matrices $A$ and $B$ is defined only when the number of columns of $A$ equals the number of rows of $B$. If $A$ is an $m \times r$ matrix and $B$ is an $r \times n$ matrix, then their product $AB$ is an $m \times n$ matrix. (If the inner dimensions do not match, the product $AB$ is not defined.) Each entry of $AB$ is computed as a dot product of a row of $A$ with a column of $B$: for the entry in the $i$-th row and $j$-th column,

C := A·B is
   for all i,j
     c(i,j) = ∑ₙ A(i,n)·B(n,j)

$$ AB_{ij} = \sum_{k=1}^{r} A_{ik} B_{kj} $$

summing over the $r$ terms where the indices overlap. In other words, to get $(AB)_{ij}$, take the $i$-th row of $A$ and the $j$-th column of $B$, multiply them element-wise and add the results.

The number of columns of the first matrix equals the number of rows of the second. In this example, a $4\times 3$ matrix multiplies a $3\times 4$ matrix, producing a $4\times 4$ result. Each entry of the product is computed by taking a row from the first matrix and a column from the second and calculating their dot product.

Example (Matrix multiplication): Let

$$ A = \begin{pmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 2\\ 3 & 4\\ 5 & 6 \end{pmatrix}. $$

As a smaller warm-up, take a $1\times 2$ row matrix times a $2\times 2$ matrix:

$$ 𝐀_{1\times 2} = \begin{bmatrix} 1 & 2 \end{bmatrix}, \qquad 𝐁_{2\times 2} = \begin{bmatrix} 3 & 5 \\ 7 & 11 \end{bmatrix} $$

$$ 𝐂_{1\times 2} = 𝐀_{1\times 2}·𝐁_{2\times 2} = \begin{bmatrix} 1·3 + 2·7 & 1·5 + 2·11 \end{bmatrix} = \begin{bmatrix} 17 & 27 \end{bmatrix} $$

Matrix $A$ is $2\times 3$ and $B$ is $3\times 2$, so the product $AB$ is defined (the inner dimensions 3 match) and will be a $2\times 2$ matrix. We compute each entry of $C = AB$:

  • $C_{11}$ is the dot product of the 1st row of $A$ and 1st column of $B$: $(1,2,3)\cdot(1,3,5) = 1\cdot 1 + 2\cdot 3 + 3\cdot 5 = 22$.
  • $C_{12}$ is the 1st row of $A$ with 2nd column of $B$: $(1,2,3)\cdot(2,4,6) = 1\cdot 2 + 2\cdot 4 + 3\cdot 6 = 28$.
  • $C_{21}$ is the 2nd row of $A$ with 1st column of $B$: $(4,5,6)\cdot(1,3,5) = 4 + 15 + 30 = 49$.
  • $C_{22}$ is the 2nd row of $A$ with 2nd column of $B$: $(4,5,6)\cdot(2,4,6) = 8 + 20 + 36 = 64$.

Thus,

$$ AB = C = \begin{pmatrix} 22 & 28\\ 49 & 64 \end{pmatrix}. $$

Matrix multiplication is easier to work out with a systematic procedure, but this example shows how each entry is formed by multiplying and adding along a row and a column.
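The row-times-column rule can be written out directly in plain Python. (A minimal sketch with a helper name of our choosing; a library routine would be used in practice.)

```python
def mat_mul(A, B):
    """Row-by-column product: (AB)[i][j] = sum over k of A[i][k] * B[k][j]."""
    # Defined only when the columns of A match the rows of B.
    assert len(A[0]) == len(B), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]    # 2x3
B = [[1, 2], [3, 4], [5, 6]]  # 3x2
print(mat_mul(A, B))  # [[22, 28], [49, 64]]
```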

Properties of Matrix Multiplication: Matrix multiplication behaves differently from addition. Some key properties:

  • Associative:
    • $(AB)C = A(BC)$, whenever the products are defined. (You can group multiplications without ambiguity.)
  • Distributive:
    • $A(B + C) = AB + AC$ and $(A + B)C = AC + BC$, whenever those sums and products are defined. Matrix multiplication distributes over matrix addition from both the left and right sides.
  • Not Commutative:
    • In general, $AB \neq BA$. Even when both $AB$ and $BA$ are defined, their results can be different or one may not even be defined if dimensions don’t allow the swap. (Only in special cases do matrices commute under multiplication.)
  • Identity Element:
    • For square matrices, the identity matrix $I$ (see below) acts as a multiplicative identity: $AI = IA = A$ for any matrix $A$ of the same size. There is an identity matrix of size $n$ for each $n$, usually denoted $I_n$.
  1. Associative
    • $ABC = (AB)C = A(BC)$
  2. Right-Side Neutral Element
    • $A·I = A$
  3. Left-Side Neutral Element
    • $I·A = A$
  4. Scalar associative
    • $𝛼(AB) = (𝛼A)B = A(𝛼B)$
  5. Right Side Distribution
    • $A(B+C) = AB + AC$
  6. Left Side Distribution
    • $(B+C)A = BA + CA$

Matrix multiplication is thus a binary operation that is associative and distributive (like ordinary multiplication), but not commutative in general. The lack of commutativity is an important difference from normal arithmetic.

Transpose of a Matrix

Definition: The transpose of a matrix $A$ is obtained by flipping $A$ over its diagonal, effectively swapping its rows and columns. The transpose is denoted $A^T$. Formally, the $(i,j)$ entry of $A^T$ equals the $(j,i)$ entry of $A$. If $A$ is an $m\times n$ matrix, then $A^T$ is an $n\times m$ matrix.

Example (Transpose): If

$$ A = \begin{pmatrix} 1 & 2 & 3\\ 4 & 5 & 6 \end{pmatrix} $$

which is $2$ rows by $3$ columns, then

$$ A^T = \begin{pmatrix} 1 & 4\\ 2 & 5\\ 3 & 6 \end{pmatrix} $$

which is $3$ rows by $2$ columns. We obtained $A^T$ by taking each row of $A$ and writing it as a column. Equivalently, the first column of $A^T$ is the first row of $A$, the second column of $A^T$ is the second row of $A$, and so on.
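In plain Python, transposition is a one-liner over the rows. (Illustrative sketch; the helper name `transpose` is ours.)

```python
def transpose(A):
    """Swap rows and columns: the (i, j) entry of the result is A[j][i]."""
    # zip(*A) pairs up the i-th entries of every row, i.e. the columns of A.
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]  # 2x3
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]  -- a 3x2 matrix
```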

Properties of the Transpose:

Taking transposes has useful properties:

  1. $(Aᵀ)ᵀ = A$. (Transposing twice returns the original matrix.)
  2. $(A+B)ᵀ = Aᵀ + Bᵀ$. (The transpose of a sum is the sum of transposes.)
  3. $(αA)ᵀ = αAᵀ$ for any scalar $α$. (Scalar multiplication commutes with transposition.)
  4. $(A·B)ᵀ = Bᵀ·Aᵀ$. (The transpose of a product reverses the order of multiplication.)
  5. $(∑ Aₙ)ᵀ = ∑ Aₙᵀ$
  6. $(A_1 A_2 \cdots A_k)ᵀ = A_kᵀ \cdots A_2ᵀ A_1ᵀ$. (A product of several factors transposes in reverse order.)

These identities are routinely used in matrix algebra. For instance, to transpose a product, we can transpose each factor and reverse their order.

For square matrices:

  1. $A + Aᵀ$ is symmetric.
  2. $A - Aᵀ$ is anti-symmetric.
  3. $A$ can be expressed as the sum of a symmetric matrix and an anti-symmetric matrix: $A = \tfrac{1}{2}(A + Aᵀ) + \tfrac{1}{2}(A - Aᵀ)$.
  4. $A·Aᵀ$ is symmetric.
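The symmetric/anti-symmetric split can be checked numerically. (A sketch in plain Python; the function name `sym_antisym_parts` is ours.)

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def sym_antisym_parts(A):
    """Split a square matrix into (A + A^T)/2 (symmetric) and (A - A^T)/2 (anti-symmetric)."""
    At = transpose(A)
    n = len(A)
    S = [[(A[i][j] + At[i][j]) / 2 for j in range(n)] for i in range(n)]
    K = [[(A[i][j] - At[i][j]) / 2 for j in range(n)] for i in range(n)]
    return S, K

A = [[1, 2], [3, 4]]
S, K = sym_antisym_parts(A)
print(S)  # [[1.0, 2.5], [2.5, 4.0]]  -- equals its own transpose
print(K)  # [[0.0, -0.5], [0.5, 0.0]] -- equals minus its transpose
# Their sum recovers A:
assert all(S[i][j] + K[i][j] == A[i][j] for i in range(2) for j in range(2))
```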

Trace

The trace of a square matrix, $\mathrm{tr}(A)$, is the sum of its main-diagonal entries. It satisfies:

  1. tr(A+B) = tr(A) + tr(B)
  2. tr(αA) = α·tr(A)
  3. tr(A) = tr(Aᵀ)
  4. tr(AB) = tr(BA)
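The last identity, $\mathrm{tr}(AB) = \mathrm{tr}(BA)$, is worth checking on an example, since $AB \neq BA$ in general. (Plain-Python sketch; helper names are ours.)

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def trace(A):
    """Sum of the main-diagonal entries of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# AB and BA differ as matrices, yet share the same trace.
assert mat_mul(A, B) != mat_mul(B, A)
assert trace(mat_mul(A, B)) == trace(mat_mul(B, A))
print(trace(mat_mul(A, B)))  # 69
```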

Identity and Zero Matrices

Identity Matrix: The identity matrix $I_n$ is a special $n\times n$ (square) matrix with 1's on the main diagonal and 0's elsewhere. For example,

$$ I_1 = \begin{pmatrix} 1 \end{pmatrix} $$ $$ I_2 = \begin{pmatrix}1 & 0\\ 0 & 1 \end{pmatrix} $$ $$ I_3 = \begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix} $$ $$ \dots $$

The identity matrix plays the role of multiplicative unity: multiplying by $I$ leaves a matrix unchanged (assuming dimensions fit). In fact, $I_n A = A$ and $A I_n = A$ for any $n\times n$ matrix $A$. The identity matrix is analogous to the number 1 in regular multiplication.

Zero Matrix: A zero matrix (or null matrix) is a matrix of any size whose entries are all 0. We often denote a zero matrix as $0$ (with a subscript for size if needed, e.g. $0_{m,n}$ for an $m\times n$ zero matrix). For example, $0_{2,2} = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}$ and $0_{1,3} = \begin{pmatrix}0 & 0 & 0\end{pmatrix}$. The zero matrix is the additive identity for matrix addition: $A + 0 = A$ for any $A$ (of the same dimensions). There is a zero matrix for each dimension (it’s analogous to the number 0 for real numbers).

Identity and zero matrices are basic building blocks in matrix algebra. In summary, $I_n$ leaves matrices unchanged under multiplication, and $0_{m,n}$ leaves matrices unchanged under addition.
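Both facts are easy to verify in plain Python. (A sketch; `identity` and `mat_mul` are our own illustrative helpers.)

```python
def identity(n):
    """n x n identity matrix: 1s on the main diagonal, 0s elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[2, 0], [1, 6]]
I = identity(2)
# I is neutral for multiplication, from either side.
assert mat_mul(A, I) == A and mat_mul(I, A) == A
print(I)  # [[1, 0], [0, 1]]
```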

Summary: Key Properties

  • Addition: Only defined for same-sized matrices (entrywise addition). It is commutative and associative. The additive identity is the zero matrix $0$ (all zeros), and every matrix $A$ has an additive inverse $-A$ such that $A + (-A) = 0$.
  • Scalar Multiplication: Multiply every entry by the scalar. Distributes over matrix addition and over scalar (real) addition. Satisfies $1\cdot A = A$ and $k(\ell A) = (k\ell)A$.
  • Matrix Multiplication: $AB$ is defined if columns of $A$ = rows of $B$. The product is associative and distributive over addition. Not commutative in general ($AB \neq BA$ in most cases). The identity matrix $I_n$ acts as a neutral element for $n\times n$ matrices (${I_n}A = A$ and $A{I_n} = A$).
  • Transpose: $(A^T)^T = A$. Transposition swaps dimensions ($m\times n$ becomes $n\times m$). It respects addition and scalar multiplication: $(A+B)^T = A^T + B^T$, $(kA)^T = kA^T$. For multiplication, note $(AB)^T = B^T A^T$.

These foundational operations and properties provide the groundwork for all matrix algebra. Mastering them will make it easier to learn advanced topics like determinants, inverses, and linear transformations. Enjoy your journey into matrix theory!

Others:

Hadamard Product

The product of the elements at the same position. $c_{rc} = a_{rc}·b_{rc}$

$$ 𝐀_{2\times 2} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad 𝐁_{2\times 2} = \begin{bmatrix} 3 & 5 \\ 7 & 9 \end{bmatrix} $$

$$ 𝐂_{2\times 2} = 𝐀_{2\times 2}∙𝐁_{2\times 2} = \begin{bmatrix} 1·3 & 2·5 \\ 3·7 & 4·9 \end{bmatrix} = \begin{bmatrix} 3 & 10 \\ 21 & 36 \end{bmatrix} $$
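The entrywise product is a one-liner in plain Python. (Illustrative sketch; the name `hadamard` is ours.)

```python
def hadamard(A, B):
    """Entrywise (Hadamard) product of two same-sized matrices."""
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[3, 5], [7, 9]]
print(hadamard(A, B))  # [[3, 10], [21, 36]]
```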

From Matrix Algebra to Abstract Algebra 🌐

Matrix operations reveal deeper algebraic structure:

  • Abelian Group under Addition: Matrices add associatively and commutatively, with identities and inverses.
  • Vector Space: With scalar multiplication, matrices form a space over $\mathbb{R}$.
  • Ring: With both addition and multiplication, matrices form a non-commutative ring with identity (for square matrices).

This bridges into abstract algebra:

  • Groups: Invertible $n \times n$ matrices form the general linear group $GL(n)$.
  • Modules: Vector spaces generalized to arbitrary rings.

Why This Matters 🌱

Matrix algebra is not just computational—it reflects the logic of structure. Each property (associativity, distributivity, identity) connects to broader abstract frameworks. Seeing matrices as transformations, functions, and ring elements brings you naturally into the language of algebraic abstraction.

Summary 💡

Matrix algebra introduces the core algebraic ideas of identity, inverse, composition, and structure. These ideas prepare you for abstract algebra, where familiar operations on matrices become a model for much more general mathematical systems. Learning matrix operations is your first encounter with the machinery of abstraction.