Lecture Note: §3 Determinants

Last Update: 2026-04-09


Determinants emerged historically before matrices, a curious contrast to modern linear algebra curricula, which introduce matrices first. Determinants arose independently of matrices to solve practical problems, and their theory was fully developed nearly two centuries before matrices became a subject of independent study.

Indeed, from a functional perspective, determinants are far more indispensable in calculus than in linear algebra, where they serve primarily as a computational tool rather than an essential structural component.


§3.1 Introduction to Determinants

  • It is worth noting that
    • Three definitions of the determinant
      • Add up $n!$ terms (times $1$ or $-1$)
      • Multiply the $n$ pivots (times $1$ or $-1$)
      • Combine $n$ smaller determinants (times $1$ or $-1$)
    • Given $n$ vectors $\vec{v}_1,\vec{v}_2,\ldots, \vec{v}_n$ in $\mathbb{R}^n$, the absolute value of the determinant of $\begin{bmatrix}\vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n\end{bmatrix}$ is the $n$-dimensional volume of the parallelepiped determined by these vectors.
  • Recursive Definition of Determinant ($A$: $n\times n$ matrix) \(\det A =a_{11}\det A_{11}-a_{12}\det A_{12}+\cdots+(-1)^{1+n}a_{1n}\det A_{1n} = \sum\limits_{j=1}^{n}{(-1)^{1+j}}a_{1j}\det A_{1j}\)
    • $A_{ij}$ denotes the submatrix formed by deleting the $i$th row and $j$th column of $A$
    • Denote the cofactor $C_{ij}=(-1)^{i+j}\det A_{ij}$; then the cofactor expansion across the first row of $A$ is \(\det A=a_{11}C_{11}+a_{12}C_{12}+\cdots+a_{1n}C_{1n}\)
  • Cofactor Expansion
    • The determinant of an $n \times n$ matrix $A$ can be computed by a cofactor expansion across any row or down any column. The cofactor expansion across the $i$th row is \(\det A=a_{i1}C_{i1}+a_{i2}C_{i2}+\cdots+a_{in}C_{in}\) The cofactor expansion down the $j$th column is \(\det A = a_{1j}C_{1j} +a_{2j}C_{2j}+\cdots +a_{nj}C_{nj}\)
  • Theorem
    • If $A$ is a triangular matrix, then $\det A$ is the product of the entries on the main diagonal of $A$. \(\det A = a_{11} a_{22} a_{33} \cdots a_{nn}\)
    • Note: In general (that is, unless the matrix is triangular or has some other special form), computing a determinant by cofactor expansion is not efficient.
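The recursive definition above translates directly into code. The following is a minimal sketch (the function name `det_cofactor` and the sample matrix are my own illustration); its exponential running time is exactly why cofactor expansion is inefficient for general matrices:

```python
def det_cofactor(A):
    """Determinant by cofactor expansion across the first row.

    A is a square matrix given as a list of equal-length rows.
    Runs in O(n!) time -- fine for small matrices only.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # A_1j: the submatrix with row 1 and column j+1 deleted.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # (-1)**j supplies the alternating sign (-1)^(1+j).
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

# Triangular matrix: determinant is the product of the diagonal entries.
T = [[2, 5, 7],
     [0, 3, 1],
     [0, 0, 4]]
print(det_cofactor(T))  # 2 * 3 * 4 = 24
```

For the triangular example, the expansion collapses to the product of the diagonal entries, consistent with the theorem above.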

§3.2 Properties of Determinants

  • Theorem: Row Operations Let $A$ be a square matrix
    1. If a multiple of one row of $A$ is added to another row to produce a matrix $B$, then $\det B=\det A$.
    2. If two rows of $A$ are interchanged to produce $B$, then $\det B=-\det A$.
    3. If one row of $A$ is multiplied by $k$ to produce $B$, then $\det B=k\det A$.
  • Row Operations (Matrix Form) If $A$ is an $n\times n$ matrix and $E$ is an $n\times n$ elementary matrix, then \(\det E A=(\det E)(\det A)\) where \(\det E=\begin{cases} 1 & \text{ if } E \text{ is a row replacement}\\ -1 & \text{ if } E \text{ is an interchange}\\ k & \text{ if } E \text{ is a scale by } k \end{cases}\)
    • Suppose a square matrix $A$ has been reduced to an echelon form $U$ by row replacements and $r$ row interchanges (without scaling). \(\det A = (-1)^{r} \det U\)
  • Invertible Matrix Theorem. A square matrix $A$ is invertible if and only if $\det A\ne 0$.
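The formula $\det A = (-1)^r \det U$ gives the practical algorithm: reduce to echelon form with replacements and interchanges only, then multiply the pivots and flip the sign once per interchange. A sketch (the function name and sample matrix are mine; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def det_by_elimination(A):
    """Determinant via row reduction without scaling.

    Reduces A to an echelon form U using row replacements and r row
    interchanges, then returns (-1)**r times the product of the
    diagonal entries of U.
    """
    U = [[Fraction(x) for x in row] for row in A]
    n = len(U)
    swaps = 0
    for k in range(n):
        # Find a nonzero pivot in column k at or below row k.
        pivot = next((i for i in range(k, n) if U[i][k] != 0), None)
        if pivot is None:
            return Fraction(0)      # no pivot in this column: A is singular
        if pivot != k:
            U[k], U[pivot] = U[pivot], U[k]
            swaps += 1              # each interchange flips the sign
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            # Row replacement leaves the determinant unchanged.
            U[i] = [U[i][j] - m * U[k][j] for j in range(n)]
    prod = Fraction(1)
    for k in range(n):
        prod *= U[k][k]
    return (-1) ** swaps * prod

A = [[0, 1, 2],
     [1, 0, 3],
     [4, -3, 8]]
print(det_by_elimination(A))  # -2
```

Here one interchange is needed (the (1,1) entry is zero), and the pivots of $U$ are $1, 1, 2$, so $\det A = (-1)^1 \cdot 2 = -2$.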

  • Determinants of Transpose. If $A$ is an $n\times n$ matrix, then $\det A^{T}=\det A$.

  • Column Operations Let $A$ be a square matrix
    1. If a multiple of one column of $A$ is added to another column to produce a matrix $B$, then $\det B=\det A$.
    2. If two columns of $A$ are interchanged to produce $B$, then $\det B=-\det A$.
    3. If one column of $A$ is multiplied by $k$ to produce $B$, then $\det B=k\det A$.
  • Multiplicative Property. If $A$ and $B$ are $n\times n$ matrices, then $\det AB=\det A\det B$.
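A quick numerical check of the multiplicative property on $2\times 2$ matrices (a minimal sketch; the helper names `det2` and `matmul2` and the sample matrices are my own):

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# det(AB) = det(A) * det(B): here 4 == (-2) * (-2).
print(det2(matmul2(A, B)) == det2(A) * det2(B))  # True
```

Note that no analogous identity holds for sums: in general $\det(A+B) \ne \det A + \det B$.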

  • A Linearity Property Denote $A_i(\vec{x})=\begin{bmatrix}\vec{a}_1 & \cdots & \vec{a}_{i-1} & \vec{x} & \vec{a}_{i+1} & \cdots & \vec{a}_n\end{bmatrix}$, and define a transformation $T$ from $\mathbb{R}^{n}$ to $\mathbb{R}$ by $T(\vec{x}) = \det A_i(\vec{x})$. Then $T$ is a linear transformation, that is
    • $T(c\vec{x}) = cT(\vec{x})$
    • $T(\vec{u}+\vec{v}) = T(\vec{u}) + T(\vec{v})$

§3.3 Cramer’s Rule

  • Cramer’s Rule
    • Let $A$ be an invertible $n\times n$ matrix. For any $\vec{b}$ in $\mathbb{R}^{n}$, the unique solution $\vec{x}$ of $A\vec{x}=\vec{b}$ has entries given by \(x_i=\frac{\det A_i(\vec{b})}{\det A}, \ \ i = 1,2,\cdots,n\)
    • A Formula for $A^{-1}$ \(A^{-1}= \frac{1}{\det A}\operatorname{adj} A = \frac{1}{\det A}\begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & & \vdots\\ C_{1n} & C_{2n} & \cdots & C_{nn} \\ \end{bmatrix}\) The matrix of cofactors is called the adjugate (or classical adjoint) of $A$, denoted by $\operatorname{adj} A$.
      • Note: the identity $A \operatorname{adj}A = \det A\cdot I$ holds for every $n\times n$ matrix $A$, invertible or not.
  • Geometrical Interpretation of Determinants
    • Given $n$ vectors $\vec{v}_1,\vec{v}_2,\ldots, \vec{v}_n$ in $\mathbb{R}^n$, the absolute value of the determinant of $\begin{bmatrix}\vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n\end{bmatrix}$ is the $n$-dimensional volume of the parallelepiped determined by these vectors.
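Cramer's rule translates directly into code: replace column $i$ of $A$ with $\vec{b}$, take the determinant, and divide by $\det A$. A minimal sketch (the function names `det` and `cramer` and the sample system are mine; `Fraction` keeps the entries exact):

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion across the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def cramer(A, b):
    """Solve A x = b for invertible A by Cramer's rule:
    x_i = det A_i(b) / det A, where A_i(b) is A with column i
    replaced by b."""
    d = det(A)
    n = len(A)
    x = []
    for i in range(n):
        Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        x.append(Fraction(det(Ai), d))
    return x

A = [[2, 1], [1, 3]]
b = [3, 5]
print(cramer(A, b))  # [Fraction(4, 5), Fraction(7, 5)]
```

Checking the answer: $2 \cdot \tfrac{4}{5} + 1 \cdot \tfrac{7}{5} = 3$ and $1 \cdot \tfrac{4}{5} + 3 \cdot \tfrac{7}{5} = 5$. In practice Cramer's rule is mainly of theoretical interest; Gaussian elimination is far cheaper for solving systems.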