Linear Systems

Gaussian Elimination

Row-reduce the augmented matrix [A|b] to row echelon form, then back-substitute to find the solutions. Three cases are possible: a unique solution, infinitely many solutions, or no solution.

Examples

Example 1. Solve x+y=3, 2x−y=0.

Solution. Add the two equations to eliminate y: 3x = 3, so x = 1 and y = 3 − x = 2.
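The hand computation can be checked directly (a minimal pure-Python sketch; the variable names are just illustrative):

```python
# System: x + y = 3, 2x - y = 0.
# Adding the equations eliminates y: 3x = 3.
x = 3 / 3       # from 3x = 3
y = 3 - x       # back-substitute into x + y = 3
print(x, y)     # 1.0 2.0
# sanity check against the second equation
assert 2 * x - y == 0
```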

In Depth

A linear system \(A\mathbf{x}=\mathbf{b}\) has three possible outcomes: no solution (inconsistent), exactly one solution (unique), or infinitely many solutions. The augmented matrix \([A|\mathbf{b}]\) row-reduced to echelon form reveals which case applies and gives the solution(s).

Gaussian elimination transforms \(A\) to upper triangular form using row operations (swap, scale, add a multiple of one row to another). Back substitution then solves the triangular system. With partial pivoting, this is numerically stable in practice and costs roughly \(2n^3/3\) flops, i.e. \(O(n^3)\).
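The procedure can be sketched in a few lines of pure Python (illustrative only; the function name `solve` is an assumption, and production code would use a tuned library routine):

```python
# Gaussian elimination with partial pivoting, then back substitution.
def solve(A, b):
    n = len(A)
    # work on an augmented copy [A | b]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring the row with the largest |pivot| to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # eliminate column k below the pivot
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve([[1.0, 1.0], [2.0, -1.0]], [3.0, 0.0]))  # [1.0, 2.0]
```

Note that the same system as Example 1 is recovered, confirming the elimination by hand.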

Cramer's rule expresses each solution component as a ratio of determinants: \(x_i=\det(A_i)/\det(A)\), where \(A_i\) replaces column \(i\) of \(A\) with \(\mathbf{b}\). While theoretically elegant, it is computationally impractical for large systems (\(O(n!)\) if the determinants are expanded by cofactors, versus \(O(n^3)\) for Gaussian elimination).
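For a 2x2 system the rule is easy to write out explicitly (a sketch for illustration only, not a method to use at scale):

```python
# Cramer's rule for a 2x2 system: x_i = det(A_i) / det(A),
# where A_i has column i replaced by b.
def cramer_2x2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("singular matrix: no unique solution")
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det  # det(A_0)/det(A)
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det  # det(A_1)/det(A)
    return [x0, x1]

print(cramer_2x2([[1.0, 1.0], [2.0, -1.0]], [3.0, 0.0]))  # [1.0, 2.0]
```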

Iterative methods (Jacobi, Gauss–Seidel, conjugate gradient) solve large sparse systems without forming \(A^{-1}\). The conjugate gradient method solves symmetric positive definite systems in at most \(n\) steps in exact arithmetic, and converges faster when the eigenvalues are clustered. Preconditioning improves convergence.
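Jacobi iteration is the simplest of these: each sweep updates \(x_i \leftarrow (b_i - \sum_{j\neq i} a_{ij}x_j)/a_{ii}\) using only the previous iterate. A minimal sketch, assuming \(A\) is diagonally dominant so the iteration converges (the example system and iteration count are illustrative):

```python
# Jacobi iteration: simultaneous update from the previous iterate.
def jacobi(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # the comprehension reads the old x throughout, then rebinds it
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# diagonally dominant example: 4x + y = 6, x + 3y = 7, solution (1, 2)
sol = jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0])
```

Gauss–Seidel differs only in using each updated component immediately within a sweep, which typically converges faster.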

The condition number \(\kappa(A)=\|A\|\|A^{-1}\|\) measures sensitivity: a perturbation \(\delta\mathbf{b}\) of the right-hand side causes a relative error in the solution bounded by \(\kappa(A)\cdot\|\delta\mathbf{b}\|/\|\mathbf{b}\|\). Ill-conditioned systems (large \(\kappa\)) require careful numerical treatment.
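The definition can be evaluated directly for a 2x2 matrix using the infinity norm (maximum absolute row sum); the example matrices are illustrative:

```python
# Condition number kappa(A) = ||A|| * ||A^{-1}|| in the infinity norm,
# for a 2x2 matrix (pure-Python sketch using the explicit inverse).
def inf_norm(M):
    return max(abs(a) + abs(b) for a, b in M)

def cond_inf(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    Minv = [[d / det, -b / det], [-c / det, a / det]]
    return inf_norm(M) * inf_norm(Minv)

well = cond_inf([[1.0, 0.0], [0.0, 1.0]])     # identity: kappa = 1
ill  = cond_inf([[1.0, 1.0], [1.0, 1.0001]])  # nearly singular: kappa huge
```

The nearly singular matrix has a condition number around \(4\times 10^4\): a right-hand-side perturbation can be amplified by that factor in the solution.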

Key Properties & Applications

Direct methods (LU; Cholesky for symmetric positive definite matrices; QR) solve \(A\mathbf{x}=\mathbf{b}\) in \(O(n^3)\) operations, exactly up to rounding error. Iterative methods (CG, GMRES, BiCGSTAB) are preferred for large sparse systems: they exploit sparsity and can achieve high accuracy in far fewer than \(n\) iterations with good preconditioning.
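As one concrete direct method, a Cholesky factorization \(A = LL^T\) can be sketched as follows (illustrative pure Python; solving the system afterwards takes two triangular solves with \(L\) and \(L^T\)):

```python
import math

# Cholesky factorization A = L L^T for a symmetric positive definite A.
def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # diagonal entry; sqrt fails if A is not positive definite
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

L = cholesky([[4.0, 2.0], [2.0, 3.0]])  # L = [[2, 0], [1, sqrt(2)]]
```

Cholesky costs about half as much as LU because it exploits symmetry, which is why it is the standard direct method for symmetric positive definite systems.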

The conjugate gradient method minimizes the quadratic \(\frac{1}{2}\mathbf{x}^TA\mathbf{x}-\mathbf{b}^T\mathbf{x}\) over Krylov subspaces \(\mathcal{K}_k=\text{span}\{\mathbf{r}_0,A\mathbf{r}_0,\ldots,A^{k-1}\mathbf{r}_0\}\). In exact arithmetic it terminates in at most \(n\) steps; practical convergence is governed by the eigenvalue distribution of \(A\), with clustered eigenvalues giving rapid convergence.
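The standard (unpreconditioned) iteration can be sketched in pure Python; the dense matrix-vector product here stands in for the sparse one used in practice, and the tolerance and iteration cap are arbitrary:

```python
# Conjugate gradient for symmetric positive definite A.
def cg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]            # residual r = b - A x  (x = 0 initially)
    p = r[:]            # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = mv(p)
        alpha = rs / dot(p, Ap)                          # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # A-conjugate
        rs = rs_new
    return x

x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])  # exact solution (1/11, 7/11)
```

For this 2x2 system the method terminates after two steps, matching the at-most-\(n\) bound.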

Overdetermined systems (more equations than unknowns) are solved by least squares. Underdetermined systems (fewer equations than unknowns) have infinitely many solutions; the minimum-norm solution is \(\mathbf{x}=A^T(AA^T)^{-1}\mathbf{b}\) (the right pseudoinverse applied to \(\mathbf{b}\), valid when \(A\) has full row rank).
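For a single-equation underdetermined system \(\mathbf{a}\cdot\mathbf{x}=\beta\), the matrix \(AA^T\) is just the scalar \(\mathbf{a}\cdot\mathbf{a}\), so the formula reduces to a one-liner (a sketch assuming \(\mathbf{a}\neq\mathbf{0}\); the function name is illustrative):

```python
# Minimum-norm solution x = A^T (A A^T)^{-1} b for one equation a . x = beta.
def min_norm(a, beta):
    s = sum(ai * ai for ai in a)        # A A^T reduces to the scalar a . a
    return [ai * beta / s for ai in a]  # A^T (A A^T)^{-1} b

x = min_norm([1.0, 1.0], 2.0)  # x + y = 2 -> minimum-norm solution [1.0, 1.0]
```

Geometrically, the minimum-norm solution is the point on the solution line x + y = 2 closest to the origin.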

Further Reading & Context

The study of linear systems underlies much of applied mathematics: least-squares fitting, discretized differential equations, optimization, and network analysis all reduce to solving \(A\mathbf{x}=\mathbf{b}\). The foundational definitions and theorems here provide the basis for that advanced work.

Historical development: elimination methods appear as early as the ancient Chinese text The Nine Chapters on the Mathematical Art; Gauss systematized the procedure in the nineteenth century, and the modern matrix formulation, error analysis, and iterative methods developed alongside twentieth-century numerical computing.

In modern mathematics, linear systems appear throughout pure and applied work: numerical linear algebra, optimization, partial differential equations, and machine learning all depend on solving \(A\mathbf{x}=\mathbf{b}\) efficiently and stably. Connections to computer science, physics, and engineering make this a versatile and important area of study.

Recommended next steps: work through the standard theorems with full proofs, explore the connections to related topics, and practice with a variety of problems ranging from computational exercises to theoretical proofs.

Deep Dive: Linear Systems

This section revisits the core ideas of linear systems with more rigorous reasoning, attention to edge cases (singular and ill-conditioned matrices), and framing for applications in linear algebra.

Practice Set

Practice. Derive one main result from this page (for example, the conditioning bound or the conjugate gradient minimization property) and validate it with a numeric or geometric check.

Goal. Confirm the assumptions, each transformation step, and the final interpretation.

References & Editorial Notes


Last editorial review: 2026-04-14.