Lecture Note: §1 Linear Equations in Linear Algebra
Last Update: 2026-03-22
Systems of linear equations lie at the heart of linear algebra, and this chapter uses them to introduce some of the central concepts of linear algebra in a simple and concrete setting.
- Large systems of equations occur in areas such as engineering, science, and finance; to reliably extract information from many linear equations at once, we need linear algebra.
- More generally, many complex scientific and engineering problems can be solved by modeling them with linear equations and applying linear algebra.
§1.1 System of Linear Equations
Present a systematic method for solving systems of linear equations.
- Definition
- System of linear equations
- Matrix Notation
- The variable names are essentially irrelevant to the solution set, so matrix notation eliminates the need to even give them names!
- Solutions of the linear system
- A system of linear equations has
- no solution, or
- exactly one solution, or
- infinitely many solutions.
- Existence and Uniqueness Questions
- Is the system consistent; that is, does at least one solution exist?
- If a solution exists, is it the only one; that is, is the solution unique?
- Elementary Row Operations
- Replacement: Replace one row by the sum of itself and a multiple of another row.
- Scaling: Multiply all the entries in a row by a nonzero constant.
- Row Interchange: Interchange two rows.
- Row equivalent: two matrices are row equivalent if there is a sequence of elementary row operations that transforms one matrix into the other.
- Solving a Linear System (Gaussian Elimination)
- Write the augmented matrix of the system of linear equations.
- Use elementary row operations to reduce the augmented matrix to upper triangular form.
- Using back substitution (if a solution exists!), solve the equivalent system that corresponds to the row-reduced matrix.
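The three steps above can be sketched in code. This is a minimal illustration on a made-up 3×3 system (the coefficients are hypothetical), not a numerically robust solver; it assumes every pivot encountered is nonzero, so no row interchanges are needed:

```python
import numpy as np

# Hypothetical system (coefficients chosen for illustration):
#  x + 2y +  z = 4
# 2x +  y + 3z = 6
#  x -  y + 4z = 4
aug = np.array([[1., 2., 1., 4.],    # step 1: augmented matrix
                [2., 1., 3., 6.],
                [1., -1., 4., 4.]])
n = 3

# Step 2: forward elimination to upper triangular form
for i in range(n):
    for j in range(i + 1, n):
        factor = aug[j, i] / aug[i, i]   # assumes a nonzero pivot
        aug[j] -= factor * aug[i]        # replacement row operation

# Step 3: back substitution, from the last equation upward
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (aug[i, -1] - aug[i, i + 1:n] @ x[i + 1:]) / aug[i, i]
```

For this system the procedure yields the unique solution $x = y = z = 1$.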
§1.2 Row Reduction and Echelon Forms
This section refines the method of Section 1.1 into a row reduction algorithm that will enable us to analyze matrices and any system of linear equations.
- Some Important Definitions
- Echelon Form (Row Echelon Form, REF)
- Reduced Echelon Form (Reduced Row Echelon Form, RREF)
- Uniqueness of the RREF. Each matrix is row equivalent to one and only one reduced echelon matrix.
- Pivot Position and Pivot Column. These can be read off from an echelon form of the matrix $A$.
- The Row Reduction Algorithm (Very Important!)
- Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.
- Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position.
- Use row replacement operations to create zeros in all positions below the pivot.
- Cover (or ignore) the row containing the pivot position and cover all rows, if any, above it. Apply steps 1 to 3 to the submatrix that remains. Repeat the process until there are no more nonzero rows to modify.
- Beginning with the rightmost pivot and working upward and to the left, create zeros above each pivot. If a pivot is not $1$, make it $1$ by a scaling operation.
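The five steps above can be turned directly into a small routine. This is a sketch, not library-grade code: for step 2 it picks the largest-magnitude entry as the pivot (a common choice for numerical stability; the algorithm only requires a nonzero entry):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce A to reduced row echelon form via the algorithm above."""
    A = A.astype(float).copy()
    m, n = A.shape
    pivot_row = 0
    pivot_cols = []
    for col in range(n):                        # step 1: scan columns left to right
        if pivot_row >= m:
            break
        # step 2: choose a nonzero entry as pivot (largest magnitude here)
        r = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[r, col]) < tol:
            continue                            # no pivot in this column
        A[[pivot_row, r]] = A[[r, pivot_row]]   # interchange rows
        # step 3: create zeros below the pivot with replacement operations
        for i in range(pivot_row + 1, m):
            A[i] -= (A[i, col] / A[pivot_row, col]) * A[pivot_row]
        pivot_cols.append(col)
        pivot_row += 1                          # step 4: repeat on the submatrix
    # step 5: rightmost pivot first, scale to 1 and clear entries above
    for i in reversed(range(len(pivot_cols))):
        col = pivot_cols[i]
        A[i] /= A[i, col]
        for j in range(i):
            A[j] -= A[j, col] * A[i]
    return A, pivot_cols
```

For example, `rref` applied to $\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}$ returns the reduced echelon form $\begin{bmatrix}1&0&-1\\0&1&2\\0&0&0\end{bmatrix}$ with pivot columns 0 and 1, illustrating the uniqueness of the RREF.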
- Parametric Descriptions of Solution Sets
- basic variable: the variables corresponding to pivot columns in the matrix.
- free variable: the other variables.
- Express the solution set of a system of linear equations as a parametric form, where each basic variable is expressed explicitly in terms of the free variables.
- Existence and Uniqueness Theorem
- A linear system is consistent if and only if the rightmost column of the augmented matrix is not a pivot column - that is, if and only if an echelon form of the augmented matrix has no row of the form \(\begin{bmatrix} 0 & \cdots & 0 & b \end{bmatrix} \quad \text{ with } b \text{ nonzero }\)
- If a linear system is consistent, then the solution set contains either
- a unique solution, when there are no free variables,
- infinitely many solutions, when there is at least one free variable.
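The theorem can be checked mechanically once an echelon form is in hand. The following sketch (the function name and the tolerance are assumptions) classifies a system from an echelon form of its augmented matrix:

```python
import numpy as np

def classify(echelon_aug, tol=1e-12):
    """Classify a linear system given an echelon form of its augmented
    matrix: 'inconsistent', 'unique', or 'infinite'."""
    m, n = echelon_aug.shape            # n - 1 variables; last column is b
    for row in echelon_aug:
        # a row [0 ... 0 | b] with b nonzero means the system is inconsistent
        if np.all(np.abs(row[:-1]) < tol) and abs(row[-1]) > tol:
            return 'inconsistent'
    # in echelon form, each nonzero row contributes one pivot;
    # fewer pivots than variables means at least one free variable
    pivots = sum(bool(np.any(np.abs(row[:-1]) > tol)) for row in echelon_aug)
    return 'unique' if pivots == n - 1 else 'infinite'
```

For instance, `classify` reports `'inconsistent'` for an echelon form containing the row $[\,0\ \ 0\ \mid 5\,]$, `'unique'` when every variable column has a pivot, and `'infinite'` otherwise.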
- Using Row Reduction to Solve a Linear System (Gauss-Jordan Elimination)
- Write the augmented matrix of the system of linear equations.
- Use elementary row operations to reduce the augmented matrix to reduced row echelon form.
- If the resulting system is consistent, solve for the leading variables in terms of any remaining free variables.
§1.3 Vector Equations
- Definition of $\mathbb{R}^{n}$ vectors.
- Linear operations: sum and scalar multiple.
- Linear Combinations. For vectors $\vec{v}_1, \vec{v}_2,\ldots,\vec{v}_p \in \mathbb{R}^{n}$ and given scalars $c_1, c_2, \ldots, c_p$, the linear combination is \(\vec{y} =c_1\vec{v}_1+c_2\vec{v}_2+\cdots+c_p\vec{v}_p.\)
- Vector Equation: $x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n=\vec{b}$
- Has the same solution set as the linear system whose augmented matrix is $ \begin{bmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n & \vec{b} \end{bmatrix} $
- Span. $\text{Span}\{\vec{v}_1,\dots,\vec{v}_p\}$ is the collection of all vectors that can be written in the form \(c_1\vec{v}_1+c_2\vec{v}_2 + \cdots + c_p\vec{v}_p.\)
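Asking whether $\vec{b}$ lies in $\text{Span}\{\vec{v}_1,\dots,\vec{v}_p\}$ is the same as asking whether the vector equation $c_1\vec{v}_1+\cdots+c_p\vec{v}_p=\vec{b}$ is consistent. A small sketch (the helper name `in_span` and the example vectors are made up for illustration):

```python
import numpy as np

def in_span(vectors, b, tol=1e-9):
    """Check whether b is a linear combination of the given vectors,
    i.e. whether b lies in their span."""
    A = np.column_stack(vectors)                    # vectors as columns
    c, *_ = np.linalg.lstsq(A, b, rcond=None)       # best-fit coefficients
    return np.allclose(A @ c, b, atol=tol)          # exact combination exists?

v1 = np.array([1., 0., 2.])
v2 = np.array([0., 1., 1.])
```

Here $2\vec{v}_1 + 3\vec{v}_2 = (2,3,7)$, so $(2,3,7)$ is in the span, while $(0,0,1)$ is not (its first two entries force $c_1 = c_2 = 0$).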
§1.4 The Matrix Equation $A\vec{x} = \vec{b}$
- Matrix-Vector Product. If $A$ is an $m\times n$ matrix, with columns $\vec{a}_1,\dots,\vec{a}_n$, and if $\vec{x}$ is in $\mathbb{R}^{n}$, then \(A\vec{x}=\begin{bmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n \end{bmatrix}\begin{bmatrix} x_1\\ \vdots\\ x_n \end{bmatrix}=x_1\vec{a}_1+\cdots+x_n\vec{a}_n\)
- It is a linear operation.
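The column definition of the product can be verified numerically on a small made-up example: computing $A\vec{x}$ directly agrees with forming the linear combination of the columns of $A$ weighted by the entries of $\vec{x}$.

```python
import numpy as np

A = np.array([[1., 2.],
              [0., 1.],
              [3., 4.]])
x = np.array([2., -1.])

# A @ x equals x1*a1 + x2*a2: a linear combination of the columns of A
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
```

Both computations give $(0, -1, 2)$, and the same identity underlies the linearity properties $A(\vec{u}+\vec{v}) = A\vec{u}+A\vec{v}$ and $A(c\vec{u}) = c\,A\vec{u}$.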
- Matrix transformation, $\vec{x} \mapsto A\vec{x}$. The matrix $A$ as an object that “acts” on a vector $\vec{x}$ by multiplication to produce a new vector called $A\vec{x}$.
Matrix Equation. The matrix equation \(A\vec{x}=\vec{b}\) has the same solution set as the vector equation \(x_1\vec{a}_1+\cdots+x_n\vec{a}_n=\vec{b}\) which, in turn, has the same solution set as the system of linear equations whose augmented matrix is \(\begin{bmatrix} \vec{a}_1 & \vec{a}_2 & \cdots & \vec{a}_n & \vec{b} \end{bmatrix}\)
- An Important Theorem. Let $A$ be an $m\times n$ matrix. Then the following statements are logically equivalent.
- $\forall \vec{b}\in\mathbb{R}^{m}$, the equation $A\vec{x}=\vec{b}$ has (at least) a solution.
- Each $\vec{b}\in\mathbb{R}^{m}$ is a linear combination of the columns of $A$.
- The columns of $A$ span $\mathbb{R}^{m}$.
- $A$ has a pivot position in every row.
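Since the number of pivots of $A$ equals its rank, the last condition can be tested numerically: the columns of $A$ span $\mathbb{R}^{m}$ exactly when $\operatorname{rank}(A) = m$. The matrices below are made-up examples:

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 1., 3.]])   # rank 2 = m: pivot in every row, columns span R^2
B = np.array([[1., 2.],
              [2., 4.]])       # rank 1 < 2: no pivot in row 2, columns do not span R^2

spans_Rm = np.linalg.matrix_rank(A) == A.shape[0]
```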
- Row-Vector Rule for Computing $A\vec{x}$.
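The row-vector rule says the $i$-th entry of $A\vec{x}$ is the dot product of row $i$ of $A$ with $\vec{x}$. A quick check on a hypothetical example:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [3., -1., 4.]])
x = np.array([2., 1., -1.])

# Row-vector rule: entry i of A @ x is (row i of A) dotted with x
by_rows = np.array([A[i] @ x for i in range(A.shape[0])])
```

Both the row-by-row dot products and `A @ x` give $(4, 1)$, so the rule agrees with the column-combination definition of the product.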
§1.5 Solution Sets of Linear Systems
- Homogeneous Linear Systems $A\vec{x} = \vec{0}$
- Key question: does a nonzero (nontrivial) solution exist?
- The homogeneous equation $A\vec{x}= \vec{0}$ has a nontrivial solution if and only if the equation has at least one free variable.
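A free variable exists exactly when $A$ has fewer pivots than columns, i.e. when $\operatorname{rank}(A) < n$, so the criterion can be checked numerically (the matrices below are illustrative):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])   # rank 1 < 3 columns: free variables, nontrivial solutions
B = np.array([[1., 0.],
              [0., 1.]])       # rank 2 = 2 columns: only the trivial solution

has_nontrivial = np.linalg.matrix_rank(A) < A.shape[1]
```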
- Parametric Vector Form. Whenever a solution set is described explicitly with vectors, the solution is said to be in parametric vector form.
- Nonhomogeneous Systems $A\vec{x} = \vec{b}$
- Suppose the equation $A\vec{x} = \vec{b}$ is consistent for some given $\vec{b}$, and let $\vec{p}$ be a solution. Then the solution set of $A\vec{x} = \vec{b}$ is the set of all vectors of the form $\vec{w} = \vec{p} + \vec{v}_h$, where $\vec{v}_h$ is any solution of the homogeneous equation $A\vec{x} = \vec{0}$.
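This structure, particular solution plus any homogeneous solution, is easy to verify numerically. In the made-up singular example below, $\vec{p}$ solves $A\vec{x}=\vec{b}$ and $\vec{v}_h$ solves $A\vec{x}=\vec{0}$, so every $\vec{p} + t\,\vec{v}_h$ is again a solution:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])          # singular matrix: nontrivial homogeneous solutions
b = np.array([3., 6.])

p = np.array([3., 0.])           # a particular solution: A @ p == b
vh = np.array([-2., 1.])         # a homogeneous solution: A @ vh == 0

# every translate p + t*vh solves A x = b
for t in (-1.0, 0.0, 2.5):
    assert np.allclose(A @ (p + t * vh), b)
```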
- Writing a solution set in parametric vector form
- Row reduce the augmented matrix to reduced echelon form.
- Express each basic variable in terms of any free variables appearing in an equation.
- Write a typical solution $\vec{x}$ as a vector whose entries depend on the free variables, if any.
- Decompose $\vec{x}$ into a linear combination of vectors using the free variables as parameters.
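The four steps above can be traced on a small hypothetical system whose augmented matrix is already in reduced echelon form, $x_1 - 2x_3 = 1$ and $x_2 + x_3 = 2$: the basic variables are $x_1, x_2$, the free variable is $x_3 = t$, and decomposing gives $\vec{x} = \vec{p} + t\,\vec{v}$.

```python
import numpy as np

# Steps 1-2 (already done by hand):
#   x1 = 1 + 2*x3,  x2 = 2 - x3,  x3 = t free
# Steps 3-4: write x = p + t*v
p = np.array([1., 2., 0.])       # particular solution (t = 0)
v = np.array([2., -1., 1.])      # direction vector attached to the free variable

A = np.array([[1., 0., -2.],
              [0., 1., 1.]])
b = np.array([1., 2.])

# every choice of the parameter t gives a solution of A x = b
for t in (0.0, 1.0, -3.0):
    assert np.allclose(A @ (p + t * v), b)
```

Geometrically, the solution set is the line through $\vec{p}$ in the direction of $\vec{v}$.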
