Lecture Note: §1 Linear Equations in Linear Algebra

Last Update: 2026-03-22


Systems of linear equations lie at the heart of linear algebra, and this chapter uses them to introduce some of its central concepts in a simple and concrete setting. Large systems of equations arise in engineering, science, and finance, and reliably extracting information from many linear equations at once requires linear algebra. A common pattern is to reduce a complex scientific or engineering problem to a system of linear equations and then solve it with the tools developed here.

§1.1 Systems of Linear Equations

This section presents a systematic method for solving systems of linear equations.

  • Definition
    • System of linear equations
    • Matrix Notation
      • The variable names are essentially irrelevant to the solution set, so matrix notation eliminates the need to even give them names!
  • Solutions of the linear system
    • A system of linear equations has
      1. no solution, or
      2. exactly one solution, or
      3. infinitely many solutions.
    • Existence and Uniqueness Questions
      1. Is the system consistent; that is, does at least one solution exist?
      2. If a solution exists, is it the only one; that is, is the solution unique?
  • Elementary Row Operations
    • Replacement: Replace one row by the sum of itself and a multiple of another row.
    • Scaling: Multiply all the entries in a row by a nonzero constant.
    • Row Interchange: Interchange two rows.
    • Row equivalent: two matrices are row equivalent if there is a sequence of elementary row operations that transforms one into the other. If the augmented matrices of two linear systems are row equivalent, then the two systems have the same solution set.
  • Solving a Linear System (Gaussian Elimination)
    1. Write the augmented matrix of the system of linear equations.
    2. Use elementary row operations to reduce the augmented matrix to upper triangular form.
    3. Using back substitution (if a solution exists!), solve the equivalent system that corresponds to the row-reduced matrix.
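The three-step procedure above can be sketched in Python. This is a minimal illustration, not the textbook's code: the helper names (`gaussian_eliminate`, `back_substitute`) and the example system are my own, and the back-substitution step assumes the system has exactly one solution.

```python
def gaussian_eliminate(aug):
    """Reduce an augmented matrix [A | b] to upper triangular form in place,
    using the three elementary row operations."""
    m, n = len(aug), len(aug[0])
    row = 0
    for col in range(n - 1):              # last column is the right-hand side
        # find a nonzero entry to use as a pivot, interchanging rows if needed
        pivot = next((r for r in range(row, m) if aug[r][col] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        aug[row], aug[pivot] = aug[pivot], aug[row]
        # replacement operations: create zeros below the pivot
        for r in range(row + 1, m):
            factor = aug[r][col] / aug[row][col]
            aug[r] = [x - factor * y for x, y in zip(aug[r], aug[row])]
        row += 1
    return aug

def back_substitute(aug):
    """Solve the triangular system, assuming a unique solution exists."""
    n = len(aug)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][-1] - s) / aug[i][i]
    return x

# x1 - 2*x2 + x3 = 0,  2*x2 - 8*x3 = 8,  5*x1 - 5*x3 = 10
aug = [[1.0, -2.0, 1.0, 0.0], [0.0, 2.0, -8.0, 8.0], [5.0, 0.0, -5.0, 10.0]]
print(back_substitute(gaussian_eliminate(aug)))  # → [1.0, 0.0, -1.0]
```

Substituting $(1, 0, -1)$ back into all three equations confirms the solution.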

§1.2 Row Reduction and Echelon Forms

This section refines the method of Section 1.1 into a row reduction algorithm that will enable us to analyze matrices and any system of linear equations.

  • Some Important Definitions
    • Echelon Form (Row Echelon Form, REF)
    • Reduced Echelon Form (Reduced Row Echelon Form, RREF)
      • Uniqueness of the RREF. Each matrix is row equivalent to one and only one reduced echelon matrix.
    • Pivot Position and Pivot Column. These can be read off from an echelon form of a matrix $A$.
  • The Row Reduction Algorithm (Very Important!)
    1. Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
    2. Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position.
    3. Use row replacement operations to create zeros in all positions below the pivot.
    4. Cover (or ignore) the row containing the pivot position and cover all rows, if any, above it. Apply steps 1 to 3 to the submatrix that remains. Repeat the process until there are no nonzero rows left to modify.
    5. Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. If a pivot is not $1$, make it $1$ by a scaling operation.
  • Parametric Descriptions of Solution Sets
    • basic variable: the variables corresponding to pivot columns in the matrix.
    • free variable: the other variables.
    • Express the solution set of a system of linear equations as a parametric form, where each basic variable is expressed explicitly in terms of the free variables.
  • Existence and Uniqueness Theorem
    • A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column - that is, if and only if an echelon form of the augmented matrix has no row of the form \(\begin{bmatrix} 0 & \cdots & 0 & b \end{bmatrix} \quad \text{ with } b \text{ nonzero }\)
    • If a linear system is consistent, then the solution set contains either
      1. a unique solution, when there are no free variables, or
      2. infinitely many solutions, when there is at least one free variable.
  • Using Row Reduction to Solve a Linear System (Gauss-Jordan Elimination)
    1. Write the augmented matrix of the system of linear equations.
    2. Use elementary row operations to reduce the augmented matrix to reduced row echelon form.
    3. If the resulting system is consistent, solve for the leading variables in terms of any remaining free variables.
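As a concrete sketch of steps 1-5 of the row reduction algorithm and of the existence test above, here is a small reduced-row-echelon-form routine in Python. The function names and the example system are illustrative, not from the text; exact rational arithmetic via `fractions.Fraction` sidesteps rounding issues. For brevity this version scales each pivot to 1 and clears entries above and below it in a single pass, rather than deferring step 5 to a separate backward pass; the resulting RREF is the same.

```python
from fractions import Fraction

def rref(matrix):
    """Row reduce to reduced row echelon form, returning the reduced
    matrix and the list of pivot columns."""
    A = [[Fraction(x) for x in row] for row in matrix]
    m, n = len(A), len(A[0])
    pivot_cols = []
    row = 0
    for col in range(n):
        # steps 1-2: find a nonzero entry in this column at or below `row`
        pivot = next((r for r in range(row, m) if A[r][col] != 0), None)
        if pivot is None:
            continue                              # not a pivot column
        A[row], A[pivot] = A[pivot], A[row]       # interchange
        A[row] = [x / A[row][col] for x in A[row]]  # scale the pivot to 1
        # steps 3 and 5 combined: zero out every other entry in the column
        for r in range(m):
            if r != row and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[row])]
        pivot_cols.append(col)
        row += 1
    return A, pivot_cols

# Augmented matrix of: x1 + x2 = 3,  2*x1 + 2*x2 = 6
R, pivots = rref([[1, 1, 3], [2, 2, 6]])
n_vars = 2
consistent = n_vars not in pivots    # rightmost column must not be a pivot column
free_vars = [j for j in range(n_vars) if j not in pivots]
print(consistent, free_vars)         # → True [1]
```

Here the system is consistent with one free variable ($x_2$), so by the Existence and Uniqueness Theorem it has infinitely many solutions.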

§1.3 Vector Equations

  • Definition of vectors in $\mathbb{R}^{n}$.
    • Linear operations: sum and scalar multiple.
  • Linear Combinations. Given vectors $\vec{v}_1, \vec{v}_2,\ldots,\vec{v}_p \in \mathbb{R}^{n}$ and scalars $c_1, c_2, \ldots, c_p$, the vector \(\vec{y} =c_1\vec{v}_1+c_2\vec{v}_2+\cdots+c_p\vec{v}_p\) is called a linear combination of $\vec{v}_1,\ldots,\vec{v}_p$ with weights $c_1,\ldots,c_p$.

  • Vector Equation: $x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n=\vec{b}$
    • Has the same solution set as the linear system whose augmented matrix is $ \begin{bmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n & \vec{b} \end{bmatrix} $
  • Span. $\mathrm{Span}\{\vec{v}_1,\dots,\vec{v}_p\}$ is the collection of all vectors that can be written in the form \(c_1\vec{v}_1+c_2\vec{v}_2 + \cdots + c_p\vec{v}_p\) with $c_1,\ldots,c_p$ scalars.
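A small sketch of the linear-combination definition in Python (the helper name and the example vectors are my own):

```python
def linear_combination(scalars, vectors):
    """Return c1*v1 + ... + cp*vp for vectors given as lists of numbers."""
    n = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(scalars, vectors)) for i in range(n)]

v1, v2 = [1, -2, -5], [2, 5, 6]
y = linear_combination([3, -2], [v1, v2])  # y = 3*v1 - 2*v2
print(y)  # → [-1, -16, -27]
```

Asking whether a given $\vec{b}$ lies in $\mathrm{Span}\{\vec{v}_1,\dots,\vec{v}_p\}$ is exactly asking whether some choice of weights makes this function return $\vec{b}$, i.e. whether the corresponding vector equation is consistent.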

§1.4 The Matrix Equation $A\vec{x} = \vec{b}$

  • Matrix-Vector Product. If $A$ is an $m\times n$ matrix, with columns $\vec{a}_1,\dots,\vec{a}_n$, and if $\vec{x}$ is in $\mathbb{R}^{n}$, then \(A\vec{x}=\begin{bmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n \end{bmatrix}\begin{bmatrix} x_1\\ \vdots\\ x_n \end{bmatrix}=x_1\vec{a}_1+\cdots+x_n\vec{a}_n\)
    • It is a linear operation.
    • Matrix transformation, $\vec{x} \mapsto A\vec{x}$. The matrix $A$ as an object that “acts” on a vector $\vec{x}$ by multiplication to produce a new vector called $A\vec{x}$.
  • Matrix Equation. The matrix equation \(A\vec{x}=\vec{b}\) has the same solution set as the vector equation \(x_1\vec{a}_1+\cdots+x_n\vec{a}_n=\vec{b}\) which, in turn, has the same solution set as the system of linear equations whose augmented matrix is \(\begin{bmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n & \vec{b} \end{bmatrix}\)

  • An Important Theorem. Let $A$ be an $m\times n$ matrix. Then the following statements are logically equivalent.
    • $\forall \vec{b}\in\mathbb{R}^{m}$, the equation $A\vec{x}=\vec{b}$ has (at least) a solution.
    • Each $\vec{b}\in\mathbb{R}^{m}$ is a linear combination of the columns of $A$.
    • The columns of $A$ span $\mathbb{R}^{m}$.
    • $A$ has a pivot position in every row.
  • Row-Vector Rule for Computing $A\vec{x}$. If the product $A\vec{x}$ is defined, then the $i$th entry of $A\vec{x}$ is the sum of the products of corresponding entries from row $i$ of $A$ and from $\vec{x}$.
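The two views of the product — as a linear combination of the columns of $A$, and via the row-vector rule — can be checked against each other in a few lines of Python (function names and the example matrix are illustrative):

```python
def matvec_by_columns(A, x):
    """A x as x1*a1 + ... + xn*an: scale each column of A and add."""
    m, n = len(A), len(A[0])
    y = [0] * m
    for j in range(n):          # for each column a_j ...
        for i in range(m):      # ... add x_j * a_j into the accumulator
            y[i] += x[j] * A[i][j]
    return y

def matvec_by_rows(A, x):
    """Row-vector rule: entry i is the dot product of row i of A with x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2, -1], [0, -5, 3]]
x = [4, 3, 7]
print(matvec_by_columns(A, x))  # → [3, 6]
print(matvec_by_rows(A, x))     # → [3, 6]
```

Both computations agree, as they must; the column view is the conceptual definition, while the row-vector rule is usually the faster way to compute by hand.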

§1.5 Solution Sets of Linear Systems

  • Homogeneous Linear Systems $A\vec{x} = \vec{0}$
    • The key question: does a nontrivial (nonzero) solution exist?
    • The homogeneous equation $A\vec{x}= \vec{0}$ has a nontrivial solution if and only if the equation has at least one free variable.
  • Parametric Vector Form. Whenever a solution set is described explicitly as a linear combination of vectors, with the free variables as parameters, the solution is said to be in parametric vector form.

  • Nonhomogeneous Systems $A\vec{x} = \vec{b}$
    • Suppose the equation $A\vec{x} = \vec{b}$ is consistent for some given $\vec{b}$, and let $\vec{p}$ be a solution. Then the solution set of $A\vec{x} = \vec{b}$ is the set of all vectors of the form $\vec{w} = \vec{p} + \vec{v}_h$, where $\vec{v}_h$ is any solution of the homogeneous equation $A\vec{x} = \vec{0}$.
  • Writing a solution set in parametric vector form
    1. Row reduce the augmented matrix to reduced echelon form.
    2. Express each basic variable in terms of any free variables appearing in an equation.
    3. Write a typical solution $\vec{x}$ as a vector whose entries depend on the free variables, if any.
    4. Decompose $\vec{x}$ into a linear combination of vectors using the free variables as parameters.
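The translation theorem above ($\vec{w} = \vec{p} + \vec{v}_h$) can be checked numerically. In this sketch the matrix, the particular solution $\vec{p}$, and the homogeneous solution $\vec{v}_h$ are made-up illustrative values:

```python
def matvec(A, x):
    """Row-vector rule for the product A x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 0, -4], [0, 1, 2]]   # coefficient matrix, already in reduced echelon form
b = [3, 5]
p = [3, 5, 0]                  # particular solution (free variable x3 = 0)
v_h = [4, -2, 1]               # spans the solution set of A x = 0

t = 7                          # any parameter value works
w = [pi + t * vi for pi, vi in zip(p, v_h)]
print(matvec(A, w) == b)       # → True
```

Since $A\vec{p} = \vec{b}$ and $A\vec{v}_h = \vec{0}$, linearity gives $A(\vec{p} + t\vec{v}_h) = \vec{b}$ for every $t$, so the full solution set in parametric vector form is $\vec{x} = \vec{p} + t\,\vec{v}_h$.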