Linear Algebra

$ % macros % MathJax \newcommand{\bigtimes}{\mathop{\vcenter{\hbox{$\Huge\times\normalsize$}}}} % prefix with \! and postfix with \!\! % sized grouping symbols \renewcommand {\aa} [1] {\left\langle {#1} \right\rangle} % <> angle brackets \newcommand {\bb} [1] {\left[ {#1} \right]} % [] brackets \newcommand {\cc} [1] {\left\{ {#1} \right\}} % {} curly braces \newcommand {\mm} [1] {\lVert {#1} \rVert} % || || double norm bars \newcommand {\nn} [1] {\lvert {#1} \rvert} % || norm bars \newcommand {\pp} [1] {\left( {#1} \right)} % () parentheses % unit \newcommand {\unit} [1] {\bb{\mathrm{#1}}} % measurement unit % math \newcommand {\fn} [1] {\mathrm{#1}} % function name % sets \newcommand {\setZ} {\mathbb{Z}} \newcommand {\setQ} {\mathbb{Q}} \newcommand {\setR} {\mathbb{R}} \newcommand {\setC} {\mathbb{C}} % arithmetic \newcommand {\q} [2] {\frac{#1}{#2}} % quotient, because fuck \frac % trig \newcommand {\acos} {\mathrm{acos}} % \mathrm{???}^{-1} is misleading \newcommand {\asin} {\mathrm{asin}} \newcommand {\atan} {\mathrm{atan}} \newcommand {\atantwo} {\mathrm{atan2}} % at angle = atan2(y, x) \newcommand {\asec} {\mathrm{asec}} \newcommand {\acsc} {\mathrm{acsc}} \newcommand {\acot} {\mathrm{acot}} % complex numbers \newcommand {\z} [1] {\tilde{#1}} \newcommand {\conj} [1] {{#1}^\ast} \renewcommand {\Re} {\mathfrak{Re}} % real part \renewcommand {\Im} {\mathrm{I}\mathfrak{m}} % imaginary part % quaternions \newcommand {\quat} [1] {\tilde{\mathbf{#1}}} % quaternion symbol \newcommand {\uquat} [1] {\check{\mathbf{#1}}} % versor symbol \newcommand {\gquat} [1] {\tilde{\boldsymbol{#1}}} % greek quaternion symbol \newcommand {\guquat}[1] {\check{\boldsymbol{#1}}} % greek versor symbol % vectors \renewcommand {\vec} [1] {\mathbf{#1}} % vector symbol \newcommand {\uvec} [1] {\hat{\mathbf{#1}}} % unit vector symbol \newcommand {\gvec} [1] {\boldsymbol{#1}} % greek vector symbol \newcommand {\guvec} [1] {\hat{\boldsymbol{#1}}} % greek unit vector symbol % special math vectors \renewcommand {\r} {\vec{r}} % r vector [m] \newcommand {\R} {\vec{R}} % R = r - r' difference vector [m] \newcommand {\ur} {\uvec{r}} % r unit vector [#] \newcommand {\uR} {\uvec{R}} % R unit vector [#] \newcommand {\ux} {\uvec{x}} % x unit vector [#] \newcommand {\uy} {\uvec{y}} % y unit vector [#] \newcommand {\uz} {\uvec{z}} % z unit vector [#] \newcommand {\urho} {\guvec{\rho}} % rho unit vector [#] \newcommand {\utheta} {\guvec{\theta}} % theta unit vector [#] \newcommand {\uphi} {\guvec{\phi}} % phi unit vector [#] \newcommand {\un} {\uvec{n}} % unit normal vector [#] % vector operations \newcommand {\inner} [2] {\left\langle {#1} , {#2} \right\rangle} % <,> \newcommand {\outer} [2] {{#1} \otimes {#2}} \newcommand {\norm} [1] {\mm{#1}} \renewcommand {\dot} {\cdot} % dot product \newcommand {\cross} {\times} % cross product % matrices \newcommand {\mat} [1] {\mathbf{#1}} % matrix symbol \newcommand {\gmat} [1] {\boldsymbol{#1}} % greek matrix symbol % ordinary derivatives \newcommand {\od} [2] {\q{d #1}{d #2}} % ordinary derivative \newcommand {\odn} [3] {\q{d^{#3}{#1}}{d{#2}^{#3}}} % nth order od \newcommand {\odt} [1] {\q{d{#1}}{dt}} % time od % partial derivatives \newcommand {\de} {\partial} % partial symbol \newcommand {\pd} [2] {\q{\de{#1}}{\de{#2}}} % partial derivative \newcommand {\pdn} [3] {\q{\de^{#3}{#1}}{\de{#2}^{#3}}} % nth order pd \newcommand {\pdd} [3] {\q{\de^2{#1}}{\de{#2}\de{#3}}} % 2nd order mixed pd \newcommand {\pdt} [1] {\q{\de{#1}}{\de{t}}} % time pd % vector derivatives \newcommand {\del} 
{\nabla} % del \newcommand {\grad} {\del} % gradient \renewcommand {\div} {\del\dot} % divergence \newcommand {\curl} {\del\cross} % curl % differential vectors \newcommand {\dL} {d\vec{L}} % differential vector length [m] \newcommand {\dS} {d\vec{S}} % differential vector surface area [m^2] % special functions \newcommand {\Hn} [2] {H^{(#1)}_{#2}} % nth order Hankel function \newcommand {\hn} [2] {H^{(#1)}_{#2}} % nth order spherical Hankel function % transforms \newcommand {\FT} {\mathcal{F}} % fourier transform \newcommand {\IFT} {\FT^{-1}} % inverse fourier transform % signal processing \newcommand {\conv} [2] {{#1}\ast{#2}} % convolution \newcommand {\corr} [2] {{#1}\star{#2}} % correlation % abstract algebra \newcommand {\lie} [1] {\mathfrak{#1}} % lie algebra % other \renewcommand {\d} {\delta} % optimization %\DeclareMathOperator* {\argmin} {arg\,min} %\DeclareMathOperator* {\argmax} {arg\,max} \newcommand {\argmin} {\fn{arg\,min}} \newcommand {\argmax} {\fn{arg\,max}} % waves \renewcommand {\l} {\lambda} % wavelength [m] \renewcommand {\k} {\vec{k}} % wavevector [rad/m] \newcommand {\uk} {\uvec{k}} % unit wavevector [#] \newcommand {\w} {\omega} % angular frequency [rad/s] \renewcommand {\TH} {e^{j \w t}} % engineering time-harmonic function [#] % classical mechanics \newcommand {\F} {\vec{F}} % force [N] \newcommand {\p} {\vec{p}} % momentum [kg m/s] % \r % position [m], aliased \renewcommand {\v} {\vec{v}} % velocity vector [m/s] \renewcommand {\a} {\vec{a}} % acceleration [m/s^2] \newcommand {\vGamma} {\gvec{\Gamma}} % torque [N m] \renewcommand {\L} {\vec{L}} % angular momentum [kg m^2 / s] \newcommand {\mI} {\mat{I}} % moment of inertia tensor [kg m^2/rad] \newcommand {\vw} {\gvec{\omega}} % angular velocity vector [rad/s] \newcommand {\valpha} {\gvec{\alpha}} % angular acceleration vector [rad/s^2] % electromagnetics % fields \newcommand {\E} {\vec{E}} % electric field intensity [V/m] \renewcommand {\H} {\vec{H}} % magnetic field intensity [A/m] \newcommand {\D} {\vec{D}} % electric flux density [C/m^2] \newcommand {\B} {\vec{B}} % magnetic flux density [Wb/m^2] % potentials \newcommand {\A} {\vec{A}} % vector potential [Wb/m], [C/m] % \F % electric vector potential [C/m], aliased % sources \newcommand {\I} {\vec{I}} % line current density [A] , [V] \newcommand {\J} {\vec{J}} % surface current density [A/m] , [V/m] \newcommand {\K} {\vec{K}} % volume current density [A/m^2], [V/m^2] % \M % magnetic current [V/m^2], aliased, obsolete % materials \newcommand {\ep} {\epsilon} % permittivity [F/m] % \mu % permeability [H/m], aliased \renewcommand {\P} {\vec{P}} % polarization [C/m^2], [Wb/m^2] % \p % electric dipole moment [C m], aliased \newcommand {\M} {\vec{M}} % magnetization [A/m], [V/m] \newcommand {\m} {\vec{m}} % magnetic dipole moment [A m^2] % power \renewcommand {\S} {\vec{S}} % poynting vector [W/m^2] \newcommand {\Sa} {\aa{\vec{S}}_t} % time averaged poynting vector [W/m^2] % quantum mechanics \newcommand {\bra} [1] {\left\langle {#1} \right|} % <| \newcommand {\ket} [1] {\left| {#1} \right\rangle} % |> \newcommand {\braket} [2] {\left\langle {#1} \middle| {#2} \right\rangle} $

This is a brief review of linear algebra. Descriptions are terse. The reader is encouraged to refer to the references for a more in-depth treatment.

  • Vectorspaces
  • Linear Maps
    • Definition
    • Composition
    • Matrix Notation
    • Linear Functional
    • Change of Basis
    • Linear Map Transformation Law
    • Covariance and Contravariance
  • Multilinear Maps
    • Bilinear Map
    • Tensors (Multilinear Maps)
    • Tensor Product
    • Outer Product
    • Tensor Contraction
  • Normed Vectorspace
    • Euclidean Norm
    • Norm Axioms
    • P Norm
    • Unit Vectors
    • Distance
  • Inner Product Vectorspace
    • Dot Product
    • Inner Product Axioms
    • Complex Inner Product
    • Induced Norm
    • Projection
    • Cauchy-Schwarz Inequality
    • Orthogonality
    • Types of Bases
    • Standard Basis
    • Orthonormalization
    • Orthonormal Bases
    • Orthogonal Bases
    • Skew Bases
    • Dual Vectorspace
    • Metric
  • Exterior Algebra
    • Cross Product
    • Determinant
    • Wedge Product
    • Hodge Star
  • scalar triple product
  • vector triple product
  • ================
  • linear system
  • matrix
  • matrix multiplication
  • linear transformation
  • geometric transformations
    • scale
    • shear
    • rotation
    • reflection
    • projection
  • pseudoscalars and pseudovectors
  • ================
  • elementary matrices
  • gaussian elimination
  • ================
  • transpose
  • trace
  • matrix inverse
  • eigenproblem
  • ================
  • fundamental matrix spaces
  • range and rank
  • nullspace and nullity
  • rank nullity theorem
  • fundamental theorem of linear algebra
  • ================
  • special matrices
    • square matrix
    • block matrix
    • diagonal matrix
    • block diagonal matrix
    • orthogonal matrix
    • symmetric matrix
    • skew-symmetric matrix
    • unitary matrix
    • hermitian matrix
    • skew-hermitian matrix
    • triangular matrix
    • toeplitz matrix
    • sparse matrix
    • normal matrices
  • ================
  • similarity
  • diagonalization
  • ================
  • matrix factorization
    • SVD
    • eigendecomposition
    • cholesky
    • rank factorization
    • LU
    • QR
  • ================
  • affine spaces
  • ================
  • completeness
  • banach spaces
  • hilbert spaces
  • ================
  • overdetermined system
  • underdetermined system
  • pseudoinverse
  • adjoint operator
  • ==============
  • References

Vectorspaces

Vectorspaces are motivated by classical models of physical space. Informally, they describe things that can be added together and scaled.

Axioms

A vectorspace is defined as a 4-tuple $(V, F, \oplus, \odot)$ consisting of set $V$, field $(F,+,\cdot)$, and functions vector addition $\oplus$ and scalar multiplication $\odot$ such that for $\vec{u}, \vec{v}, \vec{w} \in V$ and $a, b \in F$ $$ \begin{align} \label{eqn:vectorspace} & \oplus: V \times V \rightarrow V &\text{closure} \\ & \vec{u} \oplus \vec{v} = \vec{v} \oplus \vec{u} &\text{commutative} \\ & [\vec{u} \oplus \vec{v}] \oplus \vec{w} = \vec{u} \oplus [\vec{v} \oplus \vec{w}] &\text{associative} \\ & \exists \vec{0} \in V,\, \vec{u} \oplus \vec{0} = \vec{u} &\text{existence of additive identity} \\ & \exists [-\vec{u}] \in V,\, \vec{u} \oplus [-\vec{u}] = \vec{0} &\text{existence of additive inverse} \\ & \nonumber \\ & \odot: V \times F \rightarrow V &\text{closure} \\ & \vec{u} \odot [a b] = [\vec{u} \odot a] \odot b &\text{compatibility of multiplication} \\ & \vec{u} \odot [a + b] = [\vec{u} \odot a] \oplus [\vec{u} \odot b] &\text{distributive over scalar addition} \\ & [\vec{u} \oplus \vec{v}] \odot a = [\vec{u} \odot a] \oplus [\vec{v} \odot a] &\text{distributive over vector addition} \\ & \exists 1 \in F,\, \vec{u} \odot 1 = \vec{u} &\text{existence of multiplicative identity} . \end{align} $$ A vectorspace is usually denoted $V(F)$, which is read vectorspace $V$ over field $F$, or just $V$. Elements of $V$ are called vectors, while elements of $F$ are called scalars.

Vector subtraction is shorthand for vector addition of an additive inverse $$ \vec{u} - \vec{v} \equiv \vec{u} \oplus [-\vec{v}] . $$

Examples of vectorspaces include $\setR^n$, $\setC^n$, matrices, and square integrable functions.

Subspaces

A vector subspace is a subset $U$ of a vectorspace $V$ that is itself a vectorspace under the same operations. Any nonempty subset of a vectorspace automatically satisfies all vectorspace axioms except closure. Thus, to verify that a nonempty subset is a subspace it is sufficient to verify only the two closure axioms. All vectorspaces contain the trivial subspace $\cc{\vec{0}}$.

Linear Combinations

Consider vectorspace $V(F)$. Let set $A$ be a subset of $V$. A linear combination is the most general way of combining finitely many vectors in $A$ using vector addition and scalar multiplication $$ \sum_i \vec{u}_i a_i , \quad \vec{u}_i \in A ,\; a_i \in F . $$ The scalars in a linear combination are called coefficients.

Linear Independence

Consider vectorspace $V(F)$. Let set $A$ be a subset of $V$. Set $A$ is linearly dependent if there exists a vector in $A$ that can be written as a linear combination of other vectors in $A$. Otherwise set $A$ is linearly independent. Linear independence is equivalent to the condition that the zero vector $\vec{0}$ can be written as a linear combination of vectors in $A$ only if all coefficients equal zero $$ \vec{0} = \sum_i \vec{u}_i a_i \quad\Rightarrow\quad a_i = 0 . $$

Span

Consider vectorspace $V(F)$. Let set $A$ be a subset of $V$. The span of $A$ is the set of all linear combinations of vectors in $A$ $$ \mathrm{span}(A) \equiv \cc{\sum_i \vec{u}_i a_i \,\bigg|\, \vec{u}_i \in A, a_i \in F} . $$ Any span is automatically a subspace of $V$.

Bases

Consider vectorspace $V(F)$. A basis $B$ is a subset of $V$ that is linearly independent and spans $V$. By definition any vector can be written as a linear combination of vectors in $B$ $$ \vec{u} = \sum_i \vec{b}_i u^i $$ The coefficients $u^i$ are unique as a consequence of the linear independence of $B$. This is proved by examining the difference between different expansions of a vector $\vec{u}$ $$ \vec{0} = \vec{u} - \vec{u} = \sum_i \vec{b}_i u^i - \sum_i \vec{b}_i {u'}^i = \sum_i \vec{b}_i [u^i - {u'}^i] \quad \Rightarrow \quad u^i = {u'}^i $$ The coefficients $u^i$ can be added like vectors $$ \vec{u} + \vec{v} = \sum_i \vec{b}_i u^i + \sum_i \vec{b}_i v^i = \sum_i\vec{b}_i [u^i + v^i] $$ and scaled like vectors $$ \vec{u} a = \bb{\sum_i \vec{b}_i u^i} a = \sum_i \vec{b}_i [u^i a] $$ and are thus themselves vectors called coordinate vectors. A vector is isomorphic to its coordinate vector with respect to basis $B$ $$ \vec{u} \Leftrightarrow u^i . $$ This is significant because it allows irreducible vectors to be represented by arrays of numbers, which facilitates computation.
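
As a concrete illustration of coordinate vectors (not part of the notes themselves), the following NumPy sketch finds the coordinates of a vector in $\setR^3$ with respect to a non-standard basis by solving the linear system whose columns are the basis vectors; the basis chosen here is arbitrary.

    import numpy as np

    # basis vectors b_1 = (1,1,0), b_2 = (0,1,1), b_3 = (0,0,1) stored as columns
    B = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]]).T
    u = np.array([2.0, 3.0, 5.0])        # an arbitrary vector

    # the coordinates c satisfy  sum_i b_i c^i = u,  i.e.  B @ c = u
    c = np.linalg.solve(B, u)

    # reconstructing u from its coordinate vector recovers the original vector
    assert np.allclose(B @ c, u)
    print(c)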

Dimension

If $\fn{span}(B) \neq V$, a vector outside the span can be added to $B$ while preserving linear independence, enlarging the span. If $B$ is not linearly independent, a linearly dependent vector can be removed from $B$ without changing the span. The number of elements in any basis is the same; it depends only on the vectorspace. The dimension of a vectorspace $V$ is defined as the cardinality (number of elements) of a basis $B$ $$ \mathrm{dim}(V) \equiv |B| . $$

Direct Sum

The direct sum of vectorspaces $U(F)$ and $V(F)$ is the vectorspace $U \oplus V$ defined as the quotient set of the free vectorspace of $U \times V$ $$ U \oplus V \equiv \fn{free}(U \times V) / \sim $$ where the equivalence relation $\sim$ satisfies the following addition-like properties $$ \begin{align} (\vec{u}, \vec{v}) + (\vec{u}', \vec{v}') &\sim (\vec{u} + \vec{u}', \vec{v} + \vec{v}') \\ (\vec{u}, \vec{v}) a &\sim (\vec{u} a, \vec{v} a) . \end{align} $$ Elements of the equivalence classes are denoted $(\vec{u}, \vec{v}) = \vec{u} \oplus \vec{v} \in U \oplus V$.

The dimension of the direct sum vectorspace is $$ \fn{dim}(U \oplus V) = \fn{dim}(U) + \fn{dim}(V) . $$

Linear Maps

A linear map $L : U \rightarrow V$ is a function from vectorspace $U$ to vectorspace $V$ that obeys linearity $$ \boxed{ L(\vec{u} a + \vec{u}' a') = L(\vec{u}) a + L(\vec{u}') a' } . $$

Linear maps can be uniquely represented with respect to a choice of bases by a set of scalar coefficients. Let $L : U \rightarrow V$ be a linear map taking $\vec{u} \in U$ to $\vec{v} \in V$ $$ \vec{v} = L(\vec{u}) . $$ Expand vectors $\vec{u}$ and $\vec{v}$ with respect to basis vectors $\vec{u}_i \in B_u \subset U$ and $\vec{v}_j \in B_v \subset V$ using expansion coefficients $u^i \in F$ and $v^j \in F$ respectively $$ \sum_j \vec{v}_j v^j = L\pp{\sum_i \vec{u}_i u^i} . $$ The right hand side is simplified with linearity and expanded with respect to $B_v$ $$ L\pp{\sum_i \vec{u}_i u^i} = \sum_i L(\vec{u}_i) u^i = \sum_i \bb{\sum_j \vec{v}_j L^j_i} u^i = \sum_j \vec{v}_j \bb{\sum_i L^j_i u^i} $$ where $L^j_i$ are the coordinate vectors for $L(\vec{u_i})$. Uniqueness of basis expansion implies $$ \label{eqn:linear-transform} \boxed{ v^j = \sum_i L^j_i u^i } $$ This equation is important for a few reasons. It shows that once bases are specified, a linear map is fully specified by an array of scalars. Practically this eliminates irreducible vectors in favor of arrays of scalars, which are easily handled by computers.
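
To make this concrete, here is a small NumPy sketch (an illustration, not from the notes) that builds the scalar array $L^j_i$ column by column from the images of the basis vectors, using as an example the linear map on $\setR^2$ that rotates vectors by 90 degrees.

    import numpy as np

    def rotate90(u):
        # a linear map R^2 -> R^2: rotate a vector by 90 degrees
        x, y = u
        return np.array([-y, x])

    # standard basis of the domain
    basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

    # column i of L holds the coordinates of L(e_i), i.e. the scalars L^j_i
    L = np.column_stack([rotate90(e) for e in basis])

    u = np.array([3.0, 4.0])
    # v^j = sum_i L^j_i u^i is exactly the matrix-vector product
    assert np.allclose(L @ u, rotate90(u))
    print(L)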

Composition

Let $A : U \rightarrow V$ and $B : V \rightarrow W$ be linear maps. These maps can be combined by functional composition resulting in the map $B \circ A : U \rightarrow W$ such that $$ \vec{w} = B(\vec{v}) = B(A(\vec{u})) . $$ Given a choice of bases, this equation in index notation is seen to be linear $$ w^k = \sum_j {B}^k_j v^j = \sum_j {B}^k_j \bb{\sum_i A^j_i u^i} = \sum_i \bb{\sum_j {B}^k_j A^j_i} u^i = \sum_i C^k_i u^i $$ where $C : U \rightarrow W$ is a linear map with coefficients equal to $$ \label{eqn:linear-composition} \boxed{ C^k_i = \sum_j {B}^k_j A^j_i } $$

Matrix Notation

The structure and verbosity of equations \eqref{eqn:linear-transform} and \eqref{eqn:linear-composition} motivate matrix notation and matrix multiplication $$ \boxed{ C^k_i = \sum_j {B}^k_j A^j_i \quad\Leftrightarrow\quad \mat{C} = \mat{B} \mat{A} } $$ $$ \vec{v} = \mat{L} \vec{u} $$ $$ \vec{u} \Leftrightarrow \begin{bmatrix} u^1 \\ u^2 \\ \vdots \\ u^n \end{bmatrix} $$ $$ \vec{u} = \bb{\vec{b}_1 | \vec{b}_2 | \ldots | \vec{b}_n} \begin{bmatrix} u^1 \\ u^2 \\ \vdots \\ u^n \end{bmatrix} $$ Matrix notation is advantageous because it suppresses indices, which makes equations easier to read and faster to write. However, matrix notation is less flexible than index notation. It is important to be fluent in both notations, and to know how to translate between them. When in doubt, fall back to index notation.

Index notation is commonly simplified by Einstein notation which suppresses summation signs that are implied by pairs of like subscripts and superscripts symbolizing co- and contravariant indices respectively (discussed in more detail below) $$ C_i^k = \sum_j B_j^k A_i^j \quad\Leftrightarrow\quad C_i^k = B_j^k A_i^j $$ Einstein notation hides a lot of information. For pedagogical reasons, it is not used in these notes.
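
As a sanity check on the composition rule (an illustrative sketch, not from the notes), the following compares the explicit double sum $C^k_i = \sum_j B^k_j A^j_i$ against NumPy's matrix product, and also writes the same contraction with einsum; the dimensions and random entries are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 3))   # A : U -> V   (3-dim U, 4-dim V)
    B = rng.standard_normal((2, 4))   # B : V -> W   (2-dim W)

    # explicit index form: C^k_i = sum_j B^k_j A^j_i
    C_index = np.zeros((2, 3))
    for k in range(2):
        for i in range(3):
            for j in range(4):
                C_index[k, i] += B[k, j] * A[j, i]

    C_matrix = B @ A                          # matrix notation
    C_einsum = np.einsum('kj,ji->ki', B, A)   # index notation with implied sums

    assert np.allclose(C_index, C_matrix)
    assert np.allclose(C_index, C_einsum)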

Change of Basis

A change of basis is a function from a vectorspace $V$ to itself that maps one basis $B$ to a second basis $B'$. Such a function can be shown to be a linear map that possesses an inverse linear map. These maps are needed to describe how coordinate vectors and other linear concepts transform between bases.

Expand vector $\vec{v} \in V$ in bases $B \subset V$ and $B' \subset V$ with scalar coefficients $v^i \in F$ and $v'^j \in F$ respectively $$ \vec{v} = \sum_i \vec{b}_i v^i = \sum_j \vec{b}'_j v'^j . $$ Let $C : V \rightarrow V$ be the linear transformation that produces the new basis from the old $$ \boxed{ \vec{b}'_j = C(\vec{b}_j) } . $$ Expanding the primed basis vectors in $B$ $$ \sum_j \vec{b}'_j v'^j = \sum_j C(\vec{b}_j) v'^j = \sum_j \bb{\sum_i \vec{b}_i C^i_j} v'^j = \sum_i \vec{b}_i \bb{\sum_j C^i_j v'^j} $$ and invoking uniqueness of the expansion in $B$ gives $$ \boxed{ v^i = \sum_j C^i_j v'^j } . $$ Similarly, let $C^{-1} : V \rightarrow V$ be the linear transformation that recovers the old basis from the new $$ \boxed{ \vec{b}_i = C^{-1}(\vec{b}'_i) } . $$ Then $$ \sum_i \vec{b}_i v^i = \sum_i C^{-1}(\vec{b}'_i) v^i = \sum_i \bb{\sum_j \vec{b}'_j {C^{-1}}^j_i} v^i = \sum_j \vec{b}'_j \bb{\sum_i {C^{-1}}^j_i v^i} $$ $$ \boxed{ v'^j = \sum_i {C^{-1}}^j_i v^i } . $$ Composing the two coordinate transformations shows that their coefficient arrays are inverses of each other $$ v^i = \sum_j C^i_j v'^j = \sum_j C^i_j \bb{\sum_k {C^{-1}}^j_k v^k} = \sum_k \bb{\sum_j C^i_j {C^{-1}}^j_k} v^k = \sum_k \delta^i_k v^k $$ $$ \boxed{ \sum_j C^i_j {C^{-1}}^j_k = \delta^i_k } $$ $$ v'^j = \sum_i {C^{-1}}^j_i v^i = \sum_i {C^{-1}}^j_i \bb{\sum_k C^i_k v'^k} = \sum_k \bb{\sum_i {C^{-1}}^j_i C^i_k} v'^k = \sum_k \delta^j_k v'^k $$ $$ \boxed{ \sum_i {C^{-1}}^j_i C^i_k = \delta^j_k } $$ $$ \boxed{ C \circ C^{-1} = C^{-1} \circ C = \mathrm{id} } $$
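
The boxed coordinate transformations can be checked numerically. In this illustrative sketch (the matrix below is an arbitrary invertible example) the columns of C hold the new basis vectors expanded in the old basis, so old coordinates are C times new coordinates.

    import numpy as np

    # columns of C are the new basis vectors b'_j expanded in the old basis
    C = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
    C_inv = np.linalg.inv(C)

    v_old = np.array([3.0, 4.0])   # coordinates v^i in the old basis
    v_new = C_inv @ v_old          # v'^j = sum_i (C^-1)^j_i v^i

    # transforming back recovers the old coordinates: v^i = sum_j C^i_j v'^j
    assert np.allclose(C @ v_new, v_old)
    # C and C^-1 compose to the identity
    assert np.allclose(C @ C_inv, np.eye(2))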

Linear Functional

Linear maps are themselves a type of vector: they can be added and scaled pointwise. An important special case is a linear functional, a linear map from a vectorspace to its field $f : V \rightarrow F$. Linear functionals are also called dual vectors or covectors.

Transformation Law

Let $L : U \rightarrow V$ be a linear map, and let $C : U \rightarrow U$ and $D : V \rightarrow V$ be changes of basis in the domain and codomain respectively. Starting from $$ v^j = \sum_i L^j_i u^i $$ substitute the coordinate transformations $u^i = \sum_k C^i_k u'^k$ and $v^j = \sum_l D^j_l v'^l$ $$ \sum_l D^j_l v'^l = \sum_i L^j_i \bb{\sum_k C^i_k u'^k} $$ then apply $D^{-1}$ to both sides $$ \sum_j {D^{-1}}^m_j \sum_l D^j_l v'^l = \sum_j {D^{-1}}^m_j \sum_i L^j_i \sum_k C^i_k u'^k $$ $$ \sum_l \bb{\sum_j {D^{-1}}^m_j D^j_l} v'^l = \sum_k \bb{\sum_i \sum_j {D^{-1}}^m_j L^j_i C^i_k} u'^k $$ $$ \sum_l \delta^m_l v'^l = \sum_k {L'}^m_k u'^k $$ $$ v'^m = \sum_k {L'}^m_k u'^k $$ $$ \boxed{ {L'}^m_k = \sum_i \sum_j {D^{-1}}^m_j L^j_i C^i_k } $$
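
A quick numerical check of the transformation law (an illustrative sketch; the random matrices stand in for arbitrary, almost surely invertible changes of basis): transforming the input, applying L, and transforming the output agrees with applying the transformed map L' directly.

    import numpy as np

    rng = np.random.default_rng(1)
    L = rng.standard_normal((3, 2))    # L : U -> V in the old bases
    C = rng.standard_normal((2, 2))    # change of basis in U
    D = rng.standard_normal((3, 3))    # change of basis in V
    D_inv = np.linalg.inv(D)

    # transformation law: L' = D^-1 L C
    L_prime = D_inv @ L @ C

    u_new = rng.standard_normal(2)     # coordinates u'^k in the new basis of U
    u_old = C @ u_new                  # u^i = sum_k C^i_k u'^k
    v_old = L @ u_old
    v_new = D_inv @ v_old              # v'^m = sum_j (D^-1)^m_j v^j

    assert np.allclose(L_prime @ u_new, v_new)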

Co- and Contravariance

Under a change of basis $C$, the basis vectors transform with $C$ while the coordinates transform with $C^{-1}$. Quantities that transform like the basis vectors are called covariant and are written with lower indices; quantities that transform with the inverse are called contravariant and are written with upper indices. The dual space $V^\ast$ is the set of all linear functionals on $V$ $$ V^\ast = \cc{f : V \rightarrow F \,|\, f \text{ is linear}} . $$ The components of a dual vector transform covariantly, while the components of a vector transform contravariantly.

Constructed Vectorspaces

New vectorspaces can be constructed out of existing vectorspaces. Two important constructions are the direct sum and the tensor product, which combine vectorspaces in ways analogous to addition and multiplication. Both are defined in great generality using equivalence classes on a free vectorspace.

The free vectorspace of a set $B$ is the vectorspace of all linear combinations of elements of $B$ treated as basis vectors over field $F$ $$ \fn{free}(B) \equiv \cc{\sum_i b_i a_i \bigg| b_i \in B, a_i \in F} . $$ This definition appears similar to the span of a set of vectors, but applies to a set of arbitrary elements taken to be linearly independent.

Tensors

Tensors are multilinear maps. They are a step up in complexity from linear maps, but they retain linearity in each argument, which makes them easy to work with. They are built from bilinear maps, which extend readily to multilinear maps.

Bilinear Maps

Let $B : U \times V \rightarrow W$ be a bilinear map (linear in each argument) $$ \begin{align} B(\vec{u} \alpha + \vec{u}' \alpha', \vec{v}) = B(\vec{u}, \vec{v}) \alpha + B(\vec{u}', \vec{v}) \alpha' \\ B(\vec{u}, \vec{v} \beta + \vec{v}' \beta') = B(\vec{u}, \vec{v}) \beta + B(\vec{u}, \vec{v}') \beta' \end{align} $$ This map takes two vectors from different vectorspaces and maps them to some vector in a third vectorspace $$ \vec{w} = B(\vec{u}, \vec{v}) . $$ A basis expansion representation exists for $B$. Expand each vector in a basis of its respective space $$ \sum_k \vec{w}_k w^k = B\pp{\sum_i \vec{u}_i u^i , \sum_j \vec{v}_j v^j} $$ then use bilinearity to rearrange the right hand side into a linear combination of the values $B(\vec{u}_i, \vec{v}_j)$, and expand each value in the $B_w$ basis $$ \sum_i \sum_j B(\vec{u}_i, \vec{v}_j) u^i v^j = \sum_i \sum_j \bb{\sum_k \vec{w}_k B^k_{ij} } u^i v^j = \sum_k \vec{w}_k \bb{\sum_i \sum_j B^k_{ij} u^i v^j} . $$ Comparing the previous two equations and invoking uniqueness of basis representation, the expansion coefficients can be equated and the irreducible vectors eliminated $$ w^k = \sum_i \sum_j B^k_{ij} u^i v^j $$ This equation indicates that a bilinear map can be represented, with respect to a choice of bases, by a multidimensional array of scalars, and shows how to evaluate the map, facilitating both characterization and computation $$ B \Leftrightarrow B^k_{ij} $$

It is interesting to note that the basis vectors used to characterize the bilinear map can themselves be expanded in some other basis, which expresses the same values as a different linear combination $$ B(\vec{u}_i, \vec{v}_j) = B\pp{\sum_m \vec{u}'_m {u'}^m_i, \sum_n \vec{v}'_n {v'}^n_j} = \sum_m \sum_n B(\vec{u}'_m, \vec{v}'_n) {u'}^m_i {v'}^n_j . $$ This shows that the bases used to characterize a map can be different from the bases used to describe the domain and range of the map. Evidently there is some redundancy in the vectors used to characterize a bilinear map. This redundancy is eliminated by defining an equivalence class.
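
Concretely, a bilinear map can be stored as a three-index array $B^k_{ij}$ and evaluated by contracting against the two coordinate vectors. The sketch below (illustrative, with an arbitrary random array) checks linearity in the first argument of the contraction.

    import numpy as np

    rng = np.random.default_rng(2)
    Bkij = rng.standard_normal((4, 3, 2))   # B^k_{ij}: U is 3-dim, V is 2-dim, W is 4-dim

    def bilinear(u, v):
        # w^k = sum_i sum_j B^k_{ij} u^i v^j
        return np.einsum('kij,i,j->k', Bkij, u, v)

    u, u2 = rng.standard_normal(3), rng.standard_normal(3)
    v = rng.standard_normal(2)
    a, b = 1.5, -0.5

    # linear in the first argument (the second argument works the same way)
    assert np.allclose(bilinear(a * u + b * u2, v),
                       a * bilinear(u, v) + b * bilinear(u2, v))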

Tensor Product

The tensor product of two vectorspaces $U$ and $V$ is the vectorspace $U \otimes V$ defined as the quotient set of the free vectorspace of $U \times V$ $$ U \otimes V \equiv \fn{free}(U \times V) / \sim $$ where the equivalence relation $\sim$ satisfies the property of bilinearity. $$ \begin{align} (\vec{u} \alpha + \vec{u}' \alpha', \vec{v}) \sim (\vec{u}, \vec{v}) \alpha + (\vec{u}', \vec{v}) \alpha' \\ (\vec{u}, \vec{v} \beta + \vec{v}' \beta') \sim (\vec{u}, \vec{v}) \beta + (\vec{u}, \vec{v}') \beta' \end{align} $$ The dimension of the tensor product vectorspace is $$ \fn{dim}(U \otimes V) = \fn{dim}(U) \fn{dim}(V) $$

Every bilinear map $B : U \times V \rightarrow W$ factors through the tensor product as a linear transformation $B_T : U \otimes V \rightarrow W$ $$ B(\vec{u}, \vec{v}) = B_T(\vec{u} \otimes \vec{v}) . $$

Outer Product

The outer product maps a pair of vectors to their tensor product $$ \otimes : U \times V \rightarrow U \otimes V . $$

In coordinates, the outer product of two coordinate vectors is the matrix of all pairwise products of their components $$ \vec{a} \otimes \vec{b} = \vec{a} \vec{b}^T = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix} \begin{bmatrix} b_1 & b_2 & \ldots & b_n \end{bmatrix} = \begin{bmatrix} a_1 b_1 & a_1 b_2 & \ldots & a_1 b_n \\ a_2 b_1 & a_2 b_2 & \ldots & a_2 b_n \\ \vdots & \vdots & & \vdots \\ a_m b_1 & a_m b_2 & \ldots & a_m b_n \end{bmatrix} $$ $$ A_{ij} = a_i b_j . $$ A matrix $\mat{A}$ built this way represents a bilinear form $B : V \times V \rightarrow F$ $$ B(\vec{u}, \vec{v}) = \vec{u}^T \vec{a} \vec{b}^T \vec{v} = \vec{u}^T \mat{A} \vec{v} = \sum_i \sum_j A_{ij} u^i v^j . $$ Conversely, any matrix product can be written as a sum of outer products of the columns of the first factor with the rows of the second $$ \mat{A} \mat{B} = \sum_i \vec{A}_{\cdot i} \otimes \vec{B}_{i \cdot} . $$
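
The outer product and the sum-of-outer-products form of the matrix product are easy to verify numerically (an illustrative sketch with arbitrary random data):

    import numpy as np

    rng = np.random.default_rng(3)
    a = rng.standard_normal(4)
    b = rng.standard_normal(3)

    # outer product: (a (x) b)_{ij} = a_i b_j
    assert np.allclose(np.outer(a, b), a[:, None] * b[None, :])

    # matrix product as a sum of outer products of columns of A with rows of B
    A = rng.standard_normal((4, 5))
    B = rng.standard_normal((5, 3))
    AB = sum(np.outer(A[:, i], B[i, :]) for i in range(5))
    assert np.allclose(AB, A @ B)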

Normed Space

Vectorspaces are motivated by a simple model of physical space; however, the vectorspace axioms do not capture any notion of angle, length, or distance. Additional functions must be defined to endow a vectorspace with geometric structure.

In Euclidean geometry the length of a vector can be computed by repeated applications of the Pythagorean theorem $$ \boxed{ \norm{\vec{u}}_2 = \bb{\sum_i u_i^2}^{\q{1}{2}} } $$ A norm $\norm{\cdot} : V \rightarrow \mathbb{R}^{+0}$ maps elements of a vectorspace $V$ to the non-negative real numbers and obeys the following properties $$ \begin{align} & \norm{\vec{u} \alpha} = \norm{\vec{u}} \nn{\alpha} & \text{absolutely homogeneous} \\ & \norm{\vec{u} + \vec{v}} \leq \norm{\vec{u}} + \norm{\vec{v}} & \text{triangle inequality} \\ & \norm{\vec{u}} \geq 0 \quad\text{and}\quad \norm{\vec{u}} = 0 \Leftrightarrow \vec{u} = \vec{0} & \text{positive-definite} \end{align} $$ A normed vectorspace is a vectorspace equipped with a norm.

A unit vector is a vector with norm $1$. Any nonzero vector can be normalized into a unit vector $$ \boxed{ \uvec{u} \equiv \q{\vec{u}}{\norm{\vec{u}}} } $$

The p-norm generalizes the Euclidean norm $$ \mm{\vec{u}}_p \equiv \pp{\sum_i \nn{u_i}^p }^{ \q{1}{p} } , \quad p \geq 1 . $$
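
The p-norms are available directly in NumPy. A small sketch (illustrative) compares a hand-rolled p-norm with np.linalg.norm and spot-checks the triangle inequality:

    import numpy as np

    def p_norm(u, p):
        # ||u||_p = (sum_i |u_i|^p)^(1/p)
        return np.sum(np.abs(u) ** p) ** (1.0 / p)

    u = np.array([3.0, -4.0, 12.0])
    v = np.array([1.0, 2.0, -2.0])

    for p in (1, 2, 3):
        assert np.isclose(p_norm(u, p), np.linalg.norm(u, ord=p))
        # triangle inequality
        assert p_norm(u + v, p) <= p_norm(u, p) + p_norm(v, p) + 1e-12

    # the Euclidean (2-)norm and a unit vector
    assert np.isclose(np.linalg.norm(u), 13.0)
    u_hat = u / np.linalg.norm(u)
    assert np.isclose(np.linalg.norm(u_hat), 1.0)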

Distance

The Euclidean distance $d : \setR^n \times \setR^n \rightarrow \setR^{+0}$ between points $\vec{u}, \vec{u}' \in \setR^n$ is the norm of their difference $$ d(\vec{u}, \vec{u}') = \norm{\vec{u} - \vec{u}'} . $$ More generally, a distance (metric) $d : V \times V \rightarrow \setR^{+0}$ satisfies $$ \begin{align} &d(\vec{u}, \vec{u}') = d(\vec{u}', \vec{u}) & \text{symmetric} \\ &d(\vec{u}, \vec{u}') = 0 \Leftrightarrow \vec{u} = \vec{u}' & \text{identity of indiscernibles} \\ &d(\vec{u}, \vec{u}'') \leq d(\vec{u}, \vec{u}') + d(\vec{u}', \vec{u}'') & \text{triangle inequality} \end{align} $$ Every norm induces a distance in this way.

Inner Product Space

Dot Product

The law of cosines applied to the triangle with sides $\vec{u}$, $\vec{v}$, and $\vec{u} - \vec{v}$ gives $$ \mm{\vec{u} - \vec{v}}^2 = \mm{\vec{u}}^2 + \mm{\vec{v}}^2 - 2 \mm{\vec{u}} \mm{\vec{v}} \cos \theta $$ which motivates defining the dot product $$ \boxed{ \vec{u} \dot \vec{v} \equiv \mm{\vec{u}} \mm{\vec{v}} \cos \theta } . $$ In components $$ \begin{split} \vec{u} \dot \vec{v} &= \q{1}{2} [\mm{\vec{u}}^2 + \mm{\vec{v}}^2 - \mm{\vec{u} - \vec{v}}^2] \\ &= \q{1}{2} \bb{\sum_i u_i^2 + \sum_i v_i^2 - \sum_i [u_i - v_i]^2} \\ &= \q{1}{2} \bb{\sum_i u_i^2 + \sum_i v_i^2 - \sum_i [u_i^2 + v_i^2 - 2 u_i v_i]} \\ &= \sum_i u_i v_i \end{split} $$ so the dot product $\dot: V(\mathbb{R}^3) \times V(\mathbb{R}^3) \rightarrow \mathbb{R}$ is $$ \boxed{ \vec{u} \dot \vec{v} = \sum_i u_i v_i } $$

Axioms

An inner product $\aa{\cdot,\cdot} : V \times V \rightarrow F$ satisfies $$ \begin{align} & \aa{\vec{u}, \vec{v}} = \aa{\vec{v}, \vec{u}}^\ast & \text{conjugate symmetric} \\ & \aa{\vec{u}, \vec{v} \alpha + \vec{w} \beta} = \aa{\vec{u}, \vec{v}} \alpha + \aa{\vec{u}, \vec{w}} \beta & \text{linear in the second argument} \\ & \aa{\vec{u}, \vec{u}} \geq 0 \quad\text{and}\quad \aa{\vec{u}, \vec{u}} = 0 \Leftrightarrow \vec{u} = \vec{0} & \text{positive-definite} \end{align} $$ An inner product vectorspace is a vectorspace equipped with an inner product.

Complex Inner Product

$$ \boxed{ \aa{\vec{u}, \vec{v}} = \sum_i u_i^\ast v_i } $$

Induced Norm

$$ \boxed{ \norm{\vec{u}} = \aa{\vec{u}, \vec{u}}^{\q{1}{2}} } $$

Projection

The projection coefficient of $\vec{v}$ onto $\vec{u} \neq \vec{0}$ is $$ \fn{proj}(\vec{u}, \vec{v}) \equiv \q{\aa{\vec{u}, \vec{v}}}{\aa{\vec{u}, \vec{u}}} $$ so that $\vec{u} \, \fn{proj}(\vec{u}, \vec{v})$ is the component of $\vec{v}$ along $\vec{u}$.

Cauchy-Schwarz Inequality

For $\vec{u} \neq \vec{0}$, let $\alpha = \fn{proj}(\vec{u}, \vec{v}) = \aa{\vec{u},\vec{v}} / \aa{\vec{u},\vec{u}}$ and expand the squared norm of the residual $\vec{v} - \vec{u} \alpha$ $$ \begin{align*} 0 &\leq \mm{\vec{v} - \vec{u} \alpha}^2 \\ &= \aa{\vec{v} - \vec{u} \alpha, \vec{v} - \vec{u} \alpha} \\ &= \aa{\vec{v}, \vec{v}} - \aa{\vec{v}, \vec{u}} \alpha - \aa{\vec{u}, \vec{v}} \conj{\alpha} + \aa{\vec{u}, \vec{u}} \conj{\alpha} \alpha \\ &= \mm{\vec{v}}^2 - \aa{\vec{v}, \vec{u}} \bb{\q{\aa{\vec{u},\vec{v}}}{\aa{\vec{u},\vec{u}}}} - \aa{\vec{u}, \vec{v}} \conj{\bb{\q{\aa{\vec{u},\vec{v}}}{\aa{\vec{u},\vec{u}}}}} + \mm{\vec{u}}^2 \nn{\q{\aa{\vec{u},\vec{v}}}{\aa{\vec{u},\vec{u}}}}^2 \\ &= \mm{\vec{v}}^2 - \q{\nn{\aa{\vec{u},\vec{v}}}^2}{\mm{\vec{u}}^2} - \q{\nn{\aa{\vec{u},\vec{v}}}^2}{\mm{\vec{u}}^2} + \q{\nn{\aa{\vec{u},\vec{v}}}^2}{\mm{\vec{u}}^2} \\ &= \mm{\vec{v}}^2 - \q{\nn{\aa{\vec{u},\vec{v}}}^2}{\mm{\vec{u}}^2} \end{align*} $$ $$ \boxed{ \nn{\aa{\vec{u}, \vec{v}}} \leq \mm{\vec{u}} \mm{\vec{v}} } $$ The case $\vec{u} = \vec{0}$ is trivial. The inequality is an equality exactly when $\vec{u}$ and $\vec{v}$ are linearly dependent, that is, when either vector is zero or one is a scalar multiple of the other.

Types of Bases

Orthonormalization

    Result{Orthonormal basis $\uvec{b}_i \in B$}
    for{$\vec{u}_i \in U$}{
        $\vec{b}_i = \vec{u}_i - \sum_{j<i} \uvec{b}_j \fn{proj}(\uvec{b}_j, \vec{u}_i)$
        $\uvec{b}_i = \vec{b}_i / \mm{\vec{b}_i}$
    }

$$ \aa{\uvec{b}_j, \uvec{b}_i} = \delta_{ji} $$ The classical Gram-Schmidt procedure above is numerically unstable in floating point arithmetic; the modified Gram-Schmidt variant, which subtracts each projection as soon as it is computed, preserves orthogonality much better and is preferred in practice.
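
A minimal NumPy sketch of modified Gram-Schmidt (illustrative; it assumes real, linearly independent vectors stored as the columns of a matrix):

    import numpy as np

    def modified_gram_schmidt(U):
        # orthonormalize the columns of U (assumed linearly independent)
        U = U.astype(float)
        n = U.shape[1]
        B = np.zeros_like(U)
        for i in range(n):
            b = U[:, i]
            for j in range(i):
                # subtract the projection onto each earlier unit vector immediately
                b = b - B[:, j] * (B[:, j] @ b)
            B[:, i] = b / np.linalg.norm(b)
        return B

    rng = np.random.default_rng(4)
    U = rng.standard_normal((5, 3))
    B = modified_gram_schmidt(U)
    # <b_j, b_i> = delta_ji
    assert np.allclose(B.T @ B, np.eye(3))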

Orthonormal Basis Expansion

$$ \vec{u} = \sum_i \uvec{e}_i u_i $$ $$ \aa{\uvec{e}_i, \uvec{e}_j} = \delta_{ij} $$ $$ \aa{\uvec{e}_i, \vec{u}} = \sum_j \aa{\uvec{e}_i, \uvec{e}_j} u_j = \sum_j \delta_{ij} u_j = u_i $$ $$ \vec{u} = \sum_i \uvec{e}_i \aa{\uvec{e}_i, \vec{u}} $$

Dual Basis

Exterior Algebra

Cross Product

The cross product $\cross: V(\mathbb{R}^3) \times V(\mathbb{R}^3) \rightarrow V(\mathbb{R}^3)$ is defined geometrically as $$ \boxed{ \vec{u} \cross \vec{v} \equiv \un \mm{\vec{u}} \mm{\vec{v}} \sin \theta } $$ where $\un$ is a unit vector normal to both $\vec{u}$ and $\vec{v}$. Its squared norm can be written in components $$ \begin{split} \mm{\vec{u} \cross \vec{v}}^2 &= \mm{\un}^2 \mm{\vec{u}}^2 \mm{\vec{v}}^2 \sin^2 \theta \\ &= \mm{\vec{u}}^2 \mm{\vec{v}}^2 [1 - \cos^2 \theta] \\ &= \mm{\vec{u}}^2 \mm{\vec{v}}^2 \bb{1 - \q{[\vec{u} \dot \vec{v}]^2}{\mm{\vec{u}}^2 \mm{\vec{v}}^2}} \\ &= \mm{\vec{u}}^2 \mm{\vec{v}}^2 - [\vec{u} \dot \vec{v}]^2 \\ &= \sum_i u_i^2 \sum_j v_j^2 - \sum_i u_i v_i \sum_j u_j v_j \\ &= \sum_i \sum_j [u_i^2 v_j^2 - u_i v_j u_j v_i] \\ &= [u_y^2 v_z^2\!-\!2 u_y v_z u_z v_y\!+\!u_z^2 v_y^2] \!+\![u_z^2 v_x^2\!-\!2 u_z v_x u_x v_z\!+\!u_x^2 v_z^2] \!+\![u_x^2 v_y^2\!-\!2 u_x v_y u_y v_x\!+\!u_y^2 v_x^2] \\ &= [u_y v_z - u_z v_y]^2 + [u_z v_x - u_x v_z]^2 + [u_x v_y - u_y v_x]^2 \end{split} $$ which is the squared norm of the vector $$ \pm \vec{w} = \pm [\ux [u_y v_z - u_z v_y] + \uy [u_z v_x - u_x v_z] + \uz [u_x v_y - u_y v_x]] . $$ This vector is orthogonal to both $\vec{u}$ and $\vec{v}$ $$ \vec{u} \dot \vec{w} = u_x [u_y v_z - u_z v_y] + u_y [u_z v_x - u_x v_z] + u_z [u_x v_y - u_y v_x] = 0 $$ $$ \vec{v} \dot \vec{w} = v_x [u_y v_z - u_z v_y] + v_y [u_z v_x - u_x v_z] + v_z [u_x v_y - u_y v_x] = 0 $$ so it is the cross product in components $$ \boxed{ \vec{u} \cross \vec{v} = \ux [u_y v_z - u_z v_y] + \uy [u_z v_x - u_x v_z] + \uz [u_x v_y - u_y v_x] } $$ Note the sign ambiguity, which is resolved by the right hand rule and makes the cross product a pseudovector.

The (pseudo)scalar triple product is invariant under cyclic permutation $$ \vec{u} \dot [\vec{v} \cross \vec{w}] = \vec{w} \dot [\vec{u} \cross \vec{v}] = \vec{v} \dot [\vec{w} \cross \vec{u}] $$ The vector triple product satisfies $$ \vec{u} \cross [\vec{v} \cross \vec{w}] = \vec{v} [\vec{u} \dot \vec{w}] - \vec{w} [\vec{u} \dot \vec{v}] $$
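
The component formula and the triple product identities can be spot-checked numerically (an illustrative sketch with arbitrary random vectors):

    import numpy as np

    rng = np.random.default_rng(5)
    u, v, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

    c = np.cross(u, v)
    # the cross product is orthogonal to both factors
    assert np.isclose(u @ c, 0.0, atol=1e-12) and np.isclose(v @ c, 0.0, atol=1e-12)
    # Lagrange identity: ||u x v||^2 = ||u||^2 ||v||^2 - (u . v)^2
    assert np.isclose(c @ c, (u @ u) * (v @ v) - (u @ v) ** 2)

    # scalar triple product is invariant under cyclic permutation
    assert np.isclose(u @ np.cross(v, w), w @ np.cross(u, v))
    # vector triple product (BAC-CAB rule)
    assert np.allclose(np.cross(u, np.cross(v, w)), v * (u @ w) - w * (u @ v))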

Determinant

The determinant is the linear realization of (signed) length, area, volume, etc. It is uniquely characterized by

  1. multilinear in the columns of $\mat{A}$
  2. alternating (zero whenever two columns are equal)
  3. $\det(\mat{1}) = 1$

Consider a square matrix $\mat{A}$ $$ \mat{A} = \bb{\vec{A}_{\cdot 1} | \vec{A}_{\cdot 2} | \ldots | \vec{A}_{\cdot n}} $$ The determinant can be evaluated as $$ \det(\mat{A}) = \det\pp{\bb{\vec{A}_{\cdot 1} | \vec{A}_{\cdot 2} | \ldots | \vec{A}_{\cdot n}}} $$ Expand each column vector in the standard basis $$ \det(\mat{A}) = \det\pp{\bb{ \sum_{i_1} \uvec{e}_{i_1} A_{\cdot 1}^{i_1} \bigg| \sum_{i_2} \uvec{e}_{i_2} A_{\cdot 2}^{i_2} \bigg| \ldots \bigg| \sum_{i_n} \uvec{e}_{i_n} A_{\cdot n}^{i_n}}} $$ By multilinearity, this expression can be rewritten as $$ \det(\mat{A}) = \sum_{i_1} \sum_{i_2} \ldots \sum_{i_n} A_{\cdot 1}^{i_1} A_{\cdot 2}^{i_2} \ldots A_{\cdot n}^{i_n} \det\pp{\bb{ \uvec{e}_{i_1} \bigg| \uvec{e}_{i_2} \bigg| \ldots \bigg| \uvec{e}_{i_n}}} $$ These summations are simplified by applying the alternating property, which prescribes that the determinant is $0$ whenever any column is repeated. The only non-zero terms in the summation consist of permutations of the standard basis $$ \det(\mat{A}) = \sum_{\sigma \in S_n} A_{\cdot 1}^{\sigma(1)} A_{\cdot 2}^{\sigma(2)} \ldots A_{\cdot n}^{\sigma(n)} \det\pp{\bb{ \uvec{e}_{\sigma(1)} \bigg| \uvec{e}_{\sigma(2)} \bigg| \ldots \bigg| \uvec{e}_{\sigma(n)}}} $$ A consequence of the alternating property is that if any two columns are exchanged, the sign of the determinant flips. This fact is used to rearrange the matrix into the identity matrix at the expense of multiplying by the sign of the permutation $$ \det(\mat{A}) = \sum_{\sigma \in S_n} A_{\cdot 1}^{\sigma(1)} A_{\cdot 2}^{\sigma(2)} \ldots A_{\cdot n}^{\sigma(n)} \fn{sgn}(\sigma) \det\pp{\mat{1}} $$ By definition, the determinant of the identity matrix equals $1$. Rearranging the sum results in the component form of the determinant $$ \boxed{ \det(\mat{A}) = \sum_{\sigma \in S_n} \fn{sgn}(\sigma) \prod_i A_{\cdot i}^{\sigma(i)} } $$ This shows that the determinant exists and is uniquely determined by the three properties.

The determinant of a $2 \times 2$ matrix is $$ \det(\mat{A}) = A_{11} A_{22} - A_{12} A_{21} . $$ The scalar triple product is the determinant of a $3 \times 3$ matrix $$ \det(\mat{A}) = \vec{A}_{\cdot 1} \dot [\vec{A}_{\cdot 2} \cross \vec{A}_{\cdot 3}] . $$ The transpose and the determinant characterize the orthogonal and special orthogonal groups $$ O(N) = \cc{\mat{A} \in \setR^{N \times N} | \mat{A}^T = \mat{A}^{-1}} $$ $$ SO(N) = \cc{\mat{A} \in O(N) | \det(\mat{A}) = 1} $$
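
The boxed Leibniz formula can be implemented directly and checked against NumPy's determinant. This brute-force sketch (illustrative, only practical for small n) also makes the sign of the permutation explicit.

    import numpy as np
    from itertools import permutations

    def sgn(sigma):
        # sign of a permutation given as a tuple, by counting inversions
        inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                  if sigma[i] > sigma[j])
        return -1 if inv % 2 else 1

    def det_leibniz(A):
        # det(A) = sum over permutations sigma of sgn(sigma) * prod_i A[sigma(i), i]
        n = A.shape[0]
        return sum(sgn(s) * np.prod([A[s[i], i] for i in range(n)])
                   for s in permutations(range(n)))

    rng = np.random.default_rng(6)
    A = rng.standard_normal((4, 4))
    assert np.isclose(det_leibniz(A), np.linalg.det(A))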

Wedge Product

A multivector ($k$-vector) is a linear combination of wedge products of $k$ vectors; a decomposable $k$-vector is called a $k$-blade.

The wedge product is bilinear, associative, and alternating, capturing a linear notion of oriented measure (length, area, volume) $$ \wedge_{p,q} : {\bigwedge}^p \times {\bigwedge}^q \rightarrow {\bigwedge}^{p+q} $$ It concatenates blades, and reordering the factors by a permutation $\sigma$ introduces the sign of the permutation $$ [\vec{b}_{i_1} \wedge \ldots \wedge \vec{b}_{i_p}] \wedge_{p,q} [\vec{b}_{i_{p+1}} \wedge \ldots \wedge \vec{b}_{i_{p+q}}] = \fn{sgn}(\sigma) [\vec{b}_{\sigma(i_1)} \wedge \ldots \wedge \vec{b}_{\sigma(i_{p+q})}] $$

Hodge Star

The Hodge star maps a $p$-vector to its complementary $[n-p]$-vector $$ \star : {\bigwedge}^p \rightarrow {\bigwedge}^{n - p} $$ On an inner product space with unit volume form $\gvec{\omega}$ it is defined by $$ \alpha \wedge \star \beta = \aa{\alpha, \beta} \, \gvec{\omega} . $$

Matrices

Working entirely in components with subscript indices, expand $\vec{u}$ in a basis $$ \vec{u} = \sum_i \vec{b}_i u_i $$ and apply a linear transformation $T$ $$ \vec{u}' = T(\vec{u}) = T\pp{\sum_i \vec{b}_i u_i} = \sum_i T(\vec{b}_i) u_i = \sum_i \vec{b}'_i u_i $$ where $\vec{b}'_i \equiv T(\vec{b}_i)$. Expanding both $\vec{u}'$ and the image vectors in the original basis, $\vec{b}'_i = \sum_j \vec{b}_j b'_{ji}$, $$ \sum_j \vec{b}_j u'_j = \sum_i \bb{\sum_j \vec{b}_j b'_{ji}} u_i = \sum_j \vec{b}_j \sum_i b'_{ji} u_i $$ $$ u'_j = \sum_i b'_{ji} u_i $$

A linear system of $m$ equations in $n$ unknowns is $$ \begin{cases} \begin{matrix} v_1 &= &A_{11} u_1 &+ &A_{12} u_2 &+ &\;\;\dots\;\; &+ &A_{1n} u_n \\ v_2 &= &A_{21} u_1 &+ &A_{22} u_2 &+ &\;\;\dots\;\; &+ &A_{2n} u_n \\ \vdots & &\vdots & &\vdots & & & &\vdots \\ v_m &= &A_{m1} u_1 &+ &A_{m2} u_2 &+ &\;\;\dots\;\; &+ &A_{mn} u_n \end{matrix} \end{cases} $$

$$ v_j = \sum_i A_{ji} u_i $$ $$ \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_m \end{bmatrix} = \begin{bmatrix} A_{11} &A_{12} &\;\;\dots\;\; &A_{1n} \\ A_{21} &A_{22} &\;\;\dots\;\; &A_{2n} \\ \vdots & \vdots& & \vdots \\ A_{m1} &A_{m2} &\;\;\dots\;\; &A_{mn} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} $$ $$ \vec{v} = \mat{A} \vec{u} $$ $$ \vec{v} = \vec{A}_{\cdot 1} u_1 + \vec{A}_{\cdot 2} u_2 + \dots + \vec{A}_{\cdot n} u_n $$ $$ w_k = \sum_j B_{kj} v_j = \sum_j B_{kj} \bb{\sum_i A_{ji} u_i} = \sum_i \bb{\sum_j B_{kj} A_{ji}} u_i = \sum_i C_{ki} u_i $$ $$ C_{ki} = \sum_j B_{kj} A_{ji} $$ $$ \mat{C} = \mat{B} \mat{A} $$

Transpose

The transpose swaps rows and columns $$ [\mat{A}^T]_{ij} = A_{ji} $$ and reverses the order of matrix products $$ [\mat{B} \mat{A}]^T = \mat{A}^T \mat{B}^T $$ The Hermitian (conjugate) transpose is $$ \mat{A}^H \equiv [\mat{A}^T]^\ast $$

Rank-Nullity

For $\mat{A} \in \mathbb{F}^{m \times n}$ $$ \fn{colspace}(\mat{A}) \equiv \fn{span}(\cc{\vec{A}_{\cdot i}}) $$ $$ \fn{rowspace}(\mat{A}) \equiv \fn{span}(\cc{\vec{A}_{i \cdot}}) $$ $$ \fn{rank}(\mat{A}) \equiv \dim(\fn{colspace}(\mat{A})) = \dim(\fn{rowspace}(\mat{A})) $$ $$ \fn{nullspace}(\mat{A}) \equiv \cc{\vec{u} \in \mathbb{F}^n | \mat{A} \vec{u} = \vec{0}} $$ $$ \fn{nullity}(\mat{A}) \equiv \dim(\fn{nullspace}(\mat{A})) $$ The rank-nullity theorem states $$ n = \fn{rank}(\mat{A}) + \fn{nullity}(\mat{A}) . $$ These spaces determine whether the linear system $\mat{A} \vec{u} = \vec{v}$ has a solution ($\vec{v}$ must lie in the column space) and whether that solution is unique (the nullspace must be trivial), both for invertible square matrices and in general.
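
Rank and nullity are easy to compute numerically. The sketch below (illustrative) builds a rank-deficient matrix, extracts a nullspace basis from the SVD, and checks the rank-nullity theorem.

    import numpy as np

    rng = np.random.default_rng(7)
    # a 4x6 matrix of rank 2: product of 4x2 and 2x6 factors
    A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))
    n = A.shape[1]

    rank = np.linalg.matrix_rank(A)

    # the rows of Vh beyond the rank span the nullspace (their singular values vanish)
    _, _, Vh = np.linalg.svd(A)
    null_basis = Vh[rank:, :].T        # columns form a nullspace basis
    nullity = null_basis.shape[1]

    assert np.allclose(A @ null_basis, 0.0, atol=1e-10)
    assert rank + nullity == n         # rank-nullity theorem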

Gaussian Elimination

The identity transformation maps every vector to itself $$ \mat{I} \vec{u} = \vec{u} $$

The inverse transformation, when it exists, undoes the action of $\mat{A}$ $$ \mat{A} \mat{A}^{-1} = \mat{A}^{-1} \mat{A} = \mat{I} $$ $$ [\mat{A} \mat{B}]^{-1} = \mat{B}^{-1} \mat{A}^{-1} $$ $$ \mat{A}^{-1} = \q{\fn{adj}(\mat{A})}{\det(\mat{A})} $$

The inverse commutes with the transpose $$ [\mat{A}^T]^{-1} = [\mat{A}^{-1}]^T $$

Inverse

Eigenvalue Problem

An eigenvector $\vec{u} \neq \vec{0}$ of $\mat{A}$ is scaled by its eigenvalue $\lambda$ $$ \mat{A} \vec{u} = \lambda \vec{u} $$ $$ [\mat{A} - \lambda \mat{I}] \vec{u} = \vec{0} $$ Nontrivial solutions exist only when $\mat{A} - \lambda \mat{I}$ is singular, which yields the characteristic equation $$ \det(\mat{A} - \lambda \mat{I}) = 0 . $$
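
Eigenpairs can be computed with np.linalg.eig and checked against the defining equation (an illustrative sketch; a symmetric matrix is used so the eigenvalues are real):

    import numpy as np

    rng = np.random.default_rng(8)
    M = rng.standard_normal((4, 4))
    A = M + M.T                       # symmetric, so the eigenvalues are real

    eigvals, eigvecs = np.linalg.eig(A)

    for k in range(4):
        lam = eigvals[k]
        u = eigvecs[:, k]             # eigenvectors are stored as columns
        # A u = lambda u
        assert np.allclose(A @ u, lam * u)
        # the characteristic equation: det(A - lambda I) = 0
        assert np.isclose(np.linalg.det(A - lam * np.eye(4)), 0.0, atol=1e-8)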

Matrix Factorization

Eigendecomposition

$$ \mat{A} = \mat{Q} \mat{\Lambda} \mat{Q}^{-1} $$ where the columns of $\mat{Q}$ are the eigenvectors of $\mat{A}$ and $\mat{\Lambda}$ is the diagonal matrix of eigenvalues. The decomposition exists only when $\mat{A}$ has a full set of linearly independent eigenvectors.

pseudoinverse

Singular Value Decomposition

$$ \mat{A} = \mat{U} \mat{\Sigma} \mat{V}^H $$ where $\mat{U}$ and $\mat{V}$ are unitary and $\mat{\Sigma}$ is a diagonal matrix of non-negative singular values $\sigma_i$.

Condition Number

$$ \kappa(\mat{A}) \equiv \q{\sigma_{\fn{max}}(\mat{A})}{\sigma_{\fn{min}}(\mat{A})} $$ The condition number bounds how much the relative error in $\vec{v}$ can be amplified in the solution of $\mat{A} \vec{u} = \vec{v}$.
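
Both quantities are available through np.linalg. A closing sketch (illustrative) reconstructs a matrix from its SVD, compares the singular value ratio with np.linalg.cond, and forms the pseudoinverse from the same factors:

    import numpy as np

    rng = np.random.default_rng(9)
    A = rng.standard_normal((5, 3))

    # thin SVD: A = U Sigma V^H
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    assert np.allclose(U @ np.diag(s) @ Vh, A)

    # condition number: ratio of largest to smallest singular value
    kappa = s.max() / s.min()
    assert np.isclose(kappa, np.linalg.cond(A))

    # pseudoinverse via the SVD (full column rank case): A^+ = V Sigma^+ U^H
    A_pinv = Vh.T @ np.diag(1.0 / s) @ U.T
    assert np.allclose(A_pinv, np.linalg.pinv(A))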