Vectors in ℝⁿ
Operations
| Operation | Definition |
|---|---|
| Addition | \((\mathbf{u}+\mathbf{v})_i=u_i+v_i\) (component-wise) |
| Scalar multiplication | \((c\mathbf{v})_i=cv_i\) (scale each component) |
| Dot product | \(\mathbf{u}\cdot\mathbf{v}=\sum u_iv_i\) |
| Norm | \(\|\mathbf{v}\|=\sqrt{\mathbf{v}\cdot\mathbf{v}}\) |
Examples
Example 1. Dot product of (1,2,3) and (4,5,6).
Solution. \(1\cdot 4+2\cdot 5+3\cdot 6=4+10+18=32\).
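The computation in Example 1 can be checked with a few lines of Python; the helper name `dot` is just for illustration.

```python
def dot(u, v):
    """Dot product: sum of component-wise products."""
    return sum(ui * vi for ui, vi in zip(u, v))

result = dot((1, 2, 3), (4, 5, 6))  # 1*4 + 2*5 + 3*6 = 32
```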
In Depth
Vectors in \(\mathbb{R}^n\) are ordered \(n\)-tuples of real numbers. They support addition (componentwise) and scalar multiplication, making \(\mathbb{R}^n\) a vector space. The standard basis vectors \(\mathbf{e}_1,\ldots,\mathbf{e}_n\) (with a 1 in position \(i\) and 0s elsewhere) span \(\mathbb{R}^n\).
The Euclidean norm \(\|\mathbf{v}\|=\sqrt{\sum v_i^2}\) measures length. The dot product \(\mathbf{u}\cdot\mathbf{v}=\sum u_iv_i=\|\mathbf{u}\|\|\mathbf{v}\|\cos\theta\) encodes both length and angle. The cross product in \(\mathbb{R}^3\) gives a vector perpendicular to both inputs with magnitude equal to the parallelogram area.
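A quick NumPy sketch of these identities: the angle formula recovers \(\cos\theta\) from the dot product and norms, and the cross product of two vectors in \(\mathbb{R}^3\) is perpendicular to both.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# cos(theta) from u.v = ||u|| ||v|| cos(theta)
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# The cross product is perpendicular to both inputs;
# its norm is the area of the parallelogram spanned by u and v.
w = np.cross(u, v)
area = np.linalg.norm(w)
```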
Linear combinations \(c_1\mathbf{v}_1+\cdots+c_k\mathbf{v}_k\) and their spans are the building blocks of linear algebra. A set of vectors is linearly independent if no vector is a linear combination of the others — equivalently, \(\sum c_i\mathbf{v}_i=\mathbf{0}\) implies all \(c_i=0\).
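The independence criterion above translates directly into a rank test: stack the vectors into a matrix and compare its rank to the number of vectors. The function name `independent` is illustrative.

```python
import numpy as np

def independent(vectors):
    """Vectors are linearly independent iff the stacked matrix has full rank."""
    M = np.vstack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

independent([[1, 0], [0, 1]])  # True: the standard basis of R^2
independent([[1, 2], [2, 4]])  # False: the second vector is 2x the first
```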
Projections: the projection of \(\mathbf{b}\) onto \(\mathbf{a}\) is \(\text{proj}_{\mathbf{a}}\mathbf{b}=(\mathbf{a}\cdot\mathbf{b}/\|\mathbf{a}\|^2)\mathbf{a}\). The projection onto a subspace spanned by columns of \(A\) is \(P=A(A^TA)^{-1}A^T\). Projections satisfy \(P^2=P\) (idempotent) and \(P^T=P\) (symmetric).
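The formula \(P=A(A^TA)^{-1}A^T\) and its two defining properties can be verified numerically; the matrix \(A\) below is an arbitrary example with independent columns.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])  # columns span a 2D subspace of R^3

P = A @ np.linalg.inv(A.T @ A) @ A.T  # projection onto col(A)

idempotent = np.allclose(P @ P, P)  # P^2 = P
symmetric = np.allclose(P, P.T)     # P^T = P
```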
In machine learning, data points are vectors in \(\mathbb{R}^n\) (feature vectors). Distance metrics (Euclidean, cosine similarity) measure similarity. Dimensionality reduction (PCA, t-SNE) projects high-dimensional vectors to lower-dimensional spaces while preserving structure.
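As a sketch of the distance metrics mentioned above, cosine similarity is the cosine of the angle between two feature vectors (1 for identical directions, 0 for orthogonal ones); the helper name is hypothetical.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

same = cosine_similarity(np.array([1.0, 2.0]), np.array([2.0, 4.0]))  # parallel
orth = cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # orthogonal
```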
Key Properties & Applications
The \(\ell^p\) norms \(\|\mathbf{v}\|_p=(\sum|v_i|^p)^{1/p}\) generalize the Euclidean norm (\(p=2\)). The \(\ell^1\) norm (Manhattan distance) promotes sparsity in optimization (LASSO). The \(\ell^\infty\) norm \(\max|v_i|\) measures the largest component. All \(\ell^p\) norms on \(\mathbb{R}^n\) are equivalent.
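The three norms named above, computed for a small example vector:

```python
import numpy as np

v = np.array([3.0, -4.0, 0.0])

l1 = np.sum(np.abs(v))      # Manhattan norm: 3 + 4 + 0
l2 = np.sqrt(np.sum(v**2))  # Euclidean norm: sqrt(9 + 16)
linf = np.max(np.abs(v))    # largest component in absolute value
```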
The outer product \(\mathbf{u}\mathbf{v}^T\) is an \(m\times n\) rank-1 matrix. Every matrix is a sum of rank-1 matrices (its SVD). The outer product is used in neural networks (weight updates in gradient descent are outer products of activations and error signals).
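A minimal check that the outer product produces a rank-1 matrix:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0])

M = np.outer(u, v)  # 3x2 matrix; every row is a multiple of v
rank = np.linalg.matrix_rank(M)
```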
Concentration of measure: in high dimensions, most of the volume of a unit ball is near its surface, and most points on the surface are near the equator. Random vectors in \(\mathbb{R}^n\) are nearly orthogonal with high probability. This 'curse of dimensionality' affects nearest-neighbor search and density estimation.
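The near-orthogonality claim can be illustrated empirically: for independent unit Gaussian vectors in dimension \(n\), the cosine between them concentrates around 0 at scale roughly \(1/\sqrt{n}\). The seed below is arbitrary, chosen only to make the sketch deterministic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # high dimension

u = rng.standard_normal(n)
v = rng.standard_normal(n)
u /= np.linalg.norm(u)
v /= np.linalg.norm(v)

# Typically |cos(theta)| is on the order of 1/sqrt(n) ~ 0.01 here.
cos_theta = np.dot(u, v)
```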
Further Reading & Context
The study of vectors in \(\mathbb{R}^n\) connects to many areas of mathematics and its applications. Understanding the foundational definitions and theorems provides the basis for advanced work in analysis, algebra, and applied mathematics.
Historical development: modern vector notation grew out of Hamilton's quaternions and Grassmann's extension theory in the nineteenth century, with Gibbs and Heaviside distilling the vector analysis now standard in physics and engineering. The modern axiomatic treatment of vector spaces provides rigor, while computational tools enable practical application.
In modern mathematics, this topic appears in graduate courses and research across pure and applied mathematics. Connections to computer science, physics, and engineering make it a versatile and important area of study. Mastery of the core results and techniques opens doors to research in number theory, analysis, geometry, and beyond.
Recommended next steps: work through the standard theorems with full proofs, explore the connections to related topics listed above, and practice with a variety of problems ranging from computational exercises to theoretical proofs. The interplay between different areas of mathematics is one of the subject's greatest rewards.