43. Vector Spaces - JulTob/Mathematics GitHub Wiki

Vector Spaces and Their Models in Type Theory

ℝ: The Simplest Vector Space

The real numbers ℝ form a vector space over themselves. Addition and scalar multiplication are standard operations.

The sum of two real numbers is an operation that associates to each pair of real numbers $a$ and $b$ another real number, denoted by $a + b$.

\color{gold}
+ : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}, \quad (a, b) \mapsto a + b

[ℝ] + [ℝ] = [ℝ]

The sum of real numbers is:

  • commutative:
    • $a+b=b+a$ for each $a,b ∈ ℝ$;
  • associative:
    • $(a+b)+c=a+(b+c)$ for each $a,b,c ∈ ℝ$;
  • admits a neutral element:
    • there exists a number, $0$, such that $0+a = a+0 = a$ for every $a ∈ ℝ$;
  • every real number $a$ admits an opposite:
    • there exists a number, which we denote by $−a$ or $¬a$, such that:
      • $a + (¬a) = 0 = (¬a) + a$.
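These axioms can be spot-checked numerically. A minimal Python sketch (all names are illustrative; floats only approximate ℝ, so comparisons use a small tolerance):

```python
import random

# Spot-check the axioms of (ℝ, +) on random samples.
# Floats only approximate ℝ, so we compare with a relative tolerance.
random.seed(0)
samples = [random.uniform(-100, 100) for _ in range(20)]

def close(x, y, tol=1e-9):
    return abs(x - y) <= tol * max(1.0, abs(x), abs(y))

for a in samples:
    assert close(a + 0.0, a) and close(0.0 + a, a)  # neutral element
    assert a + (-a) == 0.0                          # opposite
    for b in samples:
        assert close(a + b, b + a)                  # commutativity
        for c in samples:
            assert close((a + b) + c, a + (b + c))  # associativity
```

A finite check like this is evidence, not a proof; the axioms themselves hold exactly in ℝ.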

The product of two real numbers is an operation that associates to each pair of real numbers $a$ and $b$ another real number, denoted by $ab$. Therefore, the product is a function whose domain is $ℝ × ℝ$ and codomain is $ℝ$:

\color{gold}
\times : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}, \quad (a, b) \mapsto ab

[ℝ] × [ℝ] = [ℝ]

The product of real numbers is:

  • commutative:
    • $ab=ba$ for each $a,b ∈ ℝ$;
  • associative:
    • $(ab)c=a(bc)$ for each $a,b,c ∈ ℝ$;
  • admits a neutral element:
    • there exists a number, $1$, such that $1a = a1 = a$ for every $a ∈ ℝ$;
  • distributive with respect to the sum:
    • $a(b+c) = ab + ac$ for every $a,b,c ∈ ℝ$.
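Using exact rationals (Python's `Fraction`), the product axioms can be checked without floating-point error; a small illustrative sketch:

```python
from fractions import Fraction

# Exact rational arithmetic: equality checks are exact, unlike floats.
a, b, c = Fraction(3, 7), Fraction(-2, 5), Fraction(11, 4)

assert a * b == b * a                    # commutativity
assert (a * b) * c == a * (b * c)        # associativity
assert Fraction(1) * a == a              # neutral element 1
assert a * (b + c) == a * b + a * c      # distributivity over the sum
```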

ℝ²: Ordered Pairs

%%{init: {"quadrantChart": {"pointRadius": 1, "pointTextPadding": 13, "titlePadding": 20}}}%%

quadrantChart
    title Vector space in 2D
    x-axis x
    y-axis y

    A: [0.3, 0.6]
    B: [0.45, 0.23]
    C: [0.57, 0.99]
    D: [0.78, 0.34]
    E: [0.40, 0.34]
    F: [0.35, 0.78] 

We denote by the symbol ℝ² the set of ordered pairs of real numbers:

ℝ² = {(x, y) : x, y ∈ ℝ}

The fact that the pairs are ordered means, for example, that the element $(1, 2)$ is different from the element $(2, 1)$.

Once we fix a Cartesian coordinate system in a plane, there is a correspondence between ℝ² and the set of points in the plane.

Attaching a Cartesian reference to the plane means fixing two oriented perpendicular lines r and s and a unit of measure. The point of intersection between the two straight lines is called the origin of the reference system.

Each point of the plane is then uniquely identified by a pair of real numbers, called coordinates of the point, which indicate the distance of the point from the line s and its distance from the line r, respectively.

The student who is not familiar with the Cartesian plane can think of the board game Battleship, where an ordered pair singles out one cell of the grid.

  • type $ℝ² := ℝ × ℝ$
  • zero : $ℝ²$ := $(0, 0)$
  • Sum:

    • $+ : ℝ² × ℝ² → ℝ²$

      • $[ℝ²]+[ℝ²]=[ℝ²]$
      • $λ((x,y),(x′,y′)) ↦ (x,y)+(x′,y′)=(x+x′,y+y′)$
    • Commutative
    • Associative
    • Neutral $(0,0)$
    • Opposite $(−x,−y)$
  • Multiplication by a real number (scalar):

    • $· : ℝ × ℝ² → ℝ²$

      • $[ℝ]·[ℝ²] = [ℝ²]$
      • $λ(r, (x, y)) ↦ r(x, y) = (rx, ry)$
    • Distributive
      • $(a+b)(x,y) = a(x,y) + b(x,y)$
    • $(ab)(x,y) = a(b(x,y))$
    • $1(x,y) = (x,y)$

ℝⁿ: General Tuples

We can generalize to tuples of real numbers of any length: n-tuples.

(x_1,...,x_n) + (x'_1,...,x'_n) = (x_1 + x'_1,..., x_n + x'_n)
a(x_1,...,x_n) = (a·x_1,...,a·x_n)
\begin{matrix}
type     & ℝⁿ     & :   & ⨉^n ℝ         & :=  & ⟨x_i⟩ \\
const    & zero   & :   & \vec{𝟎}       & :   & ℝⁿ     & := & ⟨0_i⟩ \\
function & add    & is  & (a:ℝⁿ, b:ℝⁿ)  & :   & ℝⁿ     & := & ⟨a_i+b_i⟩ \\
function & scale  & is  & (r:ℝ, v:ℝⁿ)   & :   & ℝⁿ     & := & r⟨v_i⟩ := ⟨rΒ·v_i⟩ \\
\end{matrix}
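The `add` and `scale` definitions above translate directly to Python tuples; a minimal sketch with illustrative names:

```python
# ℝⁿ modeled as tuples; add and scale mirror the definitions above.
def add(a: tuple, b: tuple) -> tuple:
    assert len(a) == len(b)              # both vectors must live in the same ℝⁿ
    return tuple(x + y for x, y in zip(a, b))

def scale(r: float, v: tuple) -> tuple:
    return tuple(r * x for x in v)

zero = (0.0,) * 4                        # the zero vector of ℝ⁴
v = (1.0, 2.0, 3.0, 4.0)
assert add(v, zero) == v
assert scale(2.0, v) == (2.0, 4.0, 6.0, 8.0)
```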

Square Matrices

𝗠_2(ℝ) = \begin{Bmatrix}
\begin{pmatrix}
a & b \\
c & d 
\end{pmatrix} \ | \ a,b,c,d ∈ ℝ
\end{Bmatrix}

Matrix operations are defined elementwise.

We define the sum as

  • $+ : 𝗠_2(ℝ) × 𝗠_2(ℝ) → 𝗠_2(ℝ)$

    • [𝗠_2(ℝ)] + [𝗠_2(ℝ)] = [𝗠_2(ℝ)]
\begin{pmatrix}
a & b \\
c & d 
\end{pmatrix} + \begin{pmatrix}
a' & b' \\
c' & d' 
\end{pmatrix} = \begin{pmatrix}
a+a' & b+b' \\
c+c' & d+d' 
\end{pmatrix}
  • $· : ℝ × 𝗠_2(ℝ) → 𝗠_2(ℝ)$

    • [ℝ] · [𝗠_2(ℝ)] = [𝗠_2(ℝ)]
α · \begin{pmatrix}
a & b \\
c & d 
\end{pmatrix} = \begin{pmatrix}
α·a & α·b \\
α·c & α·d 
\end{pmatrix}
\begin{matrix}
type     & 𝕄_{2⨯2} & :   & (ℝ × ℝ) × (ℝ × ℝ)  & :=  & \begin{pmatrix}
a & b \\
c & d 
\end{pmatrix}  \\
const    & zero  & :   & 𝕄_{2⨯2}                 & := & [0_{ij}] \\
function & add   & is  & (a:𝕄_{2⨯2}, b:𝕄_{2⨯2}) & : & 𝕄_{2⨯2} & := & [a_{ij}+b_{ij}] \\
function & scale & is  & (r:ℝ, m:𝕄_{2⨯2})       & : & 𝕄_{2⨯2} & := & [r·m_{ij}]  \\
\end{matrix}
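The elementwise definitions can be sketched in Python with nested tuples (the function names `madd` and `mscale` are illustrative):

```python
# 2x2 matrices as nested tuples: ((a, b), (c, d)).
# Sum and scalar multiplication are elementwise, as defined above.
def madd(A, B):
    return tuple(tuple(a + b for a, b in zip(ra, rb))
                 for ra, rb in zip(A, B))

def mscale(r, A):
    return tuple(tuple(r * a for a in row) for row in A)

A = ((1.0, 2.0), (3.0, 4.0))
B = ((5.0, 6.0), (7.0, 8.0))
assert madd(A, B) == ((6.0, 8.0), (10.0, 12.0))
assert mscale(2.0, A) == ((2.0, 4.0), (6.0, 8.0))
```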

Vector Spaces

A real vector space is a set (or type) $V$ equipped with two operations, called sum and scalar multiplication, where the scalars come from a field $𝕂$ (for real vector spaces, $𝕂 = ℝ$).

\begin{matrix}
+: V⨯V ⟶ V     & & &  ·: 𝕂⨯V ⟶ V \\
+(𝒖,𝒗) ⟼ 𝒖+𝒗 & & &  ·(α,𝒖) ⟼ α𝒖 \\
\end{matrix}

satisfying the following properties:

  • For all $𝒖,𝒗,𝒘 ∈ V$ and $α,β ∈ 𝕂$:
  1. Commutative
    • $𝒖+𝒗 = 𝒗+𝒖$
  2. Associative
    • $(𝒖+𝒗)+π’˜=𝒖+(𝒗+π’˜)$
  3. Neutral Element of the sum
    • $𝟎+𝒖 = 𝒖$
    • $𝟎∈V$
  4. Opposite
    • $𝒖+(−𝒖) = 𝟎$
  5. Scalar unity
    • $1𝒖=𝒖$
  6. $(𝛼𝛽)𝒖 = 𝛼(𝛽𝒖)$
  7. $𝛼(𝒖+𝒗) = 𝛼𝒖+𝛼𝒗$
  8. $(𝛼+𝛽)𝒖 = 𝛼𝒖+𝛽𝒖$

The elements of a vector space are called vectors, while the elements of $𝕂$ are called scalars. The neutral element of the sum in $V$ is called the zero vector. To distinguish vectors from numbers we write vectors in bold.
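As an illustration, the axioms can be checked mechanically on finite samples. A hypothetical helper (all names are assumptions; a finite check is evidence, not a proof):

```python
# Spot-check the vector space axioms for a candidate (add, scale, zero, neg)
# on finite lists of sample vectors and scalars. Equality must be exact,
# so integer samples are used below.
def check_vector_space(add, scale, zero, neg, vectors, scalars) -> bool:
    for u in vectors:
        if add(zero, u) != u:                    # 3. neutral element
            return False
        if add(u, neg(u)) != zero:               # 4. opposite
            return False
        if scale(1, u) != u:                     # 5. scalar unity
            return False
        for v in vectors:
            if add(u, v) != add(v, u):           # 1. commutativity
                return False
            for w in vectors:
                if add(add(u, v), w) != add(u, add(v, w)):  # 2. associativity
                    return False
    for a in scalars:
        for b in scalars:
            for u in vectors:
                if scale(a * b, u) != scale(a, scale(b, u)):          # 6.
                    return False
                if scale(a + b, u) != add(scale(a, u), scale(b, u)):  # 8.
                    return False
                for v in vectors:
                    if scale(a, add(u, v)) != add(scale(a, u), scale(a, v)):  # 7.
                        return False
    return True

# ℝ² modeled as pairs, with integer samples so equality is exact.
pair_add = lambda u, v: (u[0] + v[0], u[1] + v[1])
pair_scale = lambda r, u: (r * u[0], r * u[1])
pair_neg = lambda u: (-u[0], -u[1])
assert check_vector_space(pair_add, pair_scale, (0, 0), pair_neg,
                          [(0, 0), (1, 2), (-3, 1)], [0, 1, 2, -1])
```

A broken candidate fails: replacing `pair_scale` with an operation that ignores the scalar violates axiom 8, and the helper detects it.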

Things that can be vector spaces

  • Functions, with the pointwise operations
    • $(f+g)(x) = f(x) + g(x)$
    • $(αf)(x) = αf(x)$
  • Polynomials
    • Consider the set $ℝ[x]$ of polynomials with real coefficients in the variable $x$.
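The polynomial example can be sketched with coefficient lists, where index $i$ holds the coefficient of $x^i$ (a minimal illustration; `padd` and `pscale` are invented names):

```python
from itertools import zip_longest

# ℝ[x] modeled as coefficient lists: [1, 0, 2] means 1 + 2x².
# Trailing zeros are trimmed so equal polynomials compare equal;
# the zero polynomial is the empty list.
def padd(p, q):
    s = [a + b for a, b in zip_longest(p, q, fillvalue=0)]
    while s and s[-1] == 0:
        s.pop()
    return s

def pscale(r, p):
    s = [r * a for a in p]
    while s and s[-1] == 0:
        s.pop()
    return s

p = [1, 0, 2]                            # 1 + 2x²
q = [0, 3]                               # 3x
assert padd(p, q) == [1, 3, 2]           # 1 + 3x + 2x²
assert pscale(2, p) == [2, 0, 4]
assert padd(p, pscale(-1, p)) == []      # p + (-p) is the zero polynomial
```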

Non-essential properties

We can also derive these extra properties:

  • The zero vector is unique; we denote it $𝟎_V$.
  • Every vector $𝒖$ admits exactly one opposite $−𝒖$, satisfying $𝒖+(−𝒖) = 𝟎_V$.
  • $α·𝟎_V = 𝟎_V$ for all $α$
  • $0·𝒖 = 𝟎_V$ for any $𝒖$
  • $(−α)·𝒖 = α·(−𝒖) = −(α·𝒖)$
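As an illustration of how these follow from the axioms, here is a short derivation of $0·𝒖 = 𝟎_V$, using only the distributivity of scalar sums, the neutral element, and the opposite:

$$0·𝒖 = (0+0)·𝒖 = 0·𝒖 + 0·𝒖$$

Adding the opposite of $0·𝒖$ to both sides leaves $𝟎_V = 0·𝒖$.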

The trivial vector space, denoted $\{𝟎_V\}$, is the vector space consisting only of the zero vector.

  • $𝟎_V + 𝟎_V = 𝟎_V$
  • $π›ΌΒ·πŸŽ_V = 𝟎_V$ for all 𝛼

A real vector space can never be empty: it must contain the zero vector. It may contain nothing else; we call such spaces trivial.

A vector space is non-trivial if it has at least one vector $𝒗$ other than the zero vector. Since we can multiply by any real scalar, it must then contain infinitely many elements: all the multiples of that vector $𝒗$. These are the vectors of the form

λ𝒗 \quad ∀λ ∈ ℝ

Therefore, a line that goes through the origin is a vector space.

---
title: "Vector Space"
config:
  radar:
    axisScaleFactor: 1
    curveTension: 0

  theme: dark
  themeVariables:
    cScale0: "#FF0000"
    cScale1: "#00FF00"
    cScale2: "#FFFF00"
    
    radar:
      curveOpacity: 0.075
      graticuleColor: cyan 
      graticuleOpacity: 0.05
---
radar-beta
  axis a["𝔸"], b["𝔹"], c["ℂ"]
  axis d["𝔻"], e["𝔼"], f["𝔽"]
  curve v1["V1"]{3, 3, 0, 2, 5, 4}
  curve v2["V2"]{7, 5, 4, 1.5, 2, 1}
  curve v3["V1+V2"]{10, 8, 4, 3.5, 7, 5}

  max 10
  min 0
  ticks 10
  graticule polygon

Examples

  • ℝⁿ: vectors of dimension n
  • M_n(ℝ): nΓ—n matrices
  • ℝ[x]: polynomials with real coefficients
  • (X β†’ ℝ): real-valued functions

A non-trivial space contains some $v ≠ 𝟎$, and by scalar multiplication all multiples $a·v$ exist.

A one-dimensional vector space corresponds to a line through the origin.