## Determining Definiteness: Two Paths
For a symmetric matrix $A$, two classical methods determine whether it is positive definite or negative definite.
### The Eigenvalue Criterion
**$A$ is positive definite if and only if all eigenvalues are positive, and negative definite if and only if all eigenvalues are negative.** This method reveals the spectrum of $A$—the principal axes and stretching factors of the ellipsoid that the quadratic form defines. Eigenvalue algorithms have matured into robust numerical tools, making this approach reliable for computation.
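As a minimal sketch of the eigenvalue test, using NumPy's `eigvalsh` (which is designed for symmetric matrices) and a small tolerance to guard against floating-point noise:

```python
import numpy as np

def classify_by_eigenvalues(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix, ascending
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    return "neither"

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # eigenvalues 1 and 3
print(classify_by_eigenvalues(A))       # positive definite
```

The example matrix and tolerance are illustrative choices, not part of the criterion itself.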
### Sylvester's Criterion
This method examines the **leading principal minors**—the determinants of the upper-left $1 \times 1$, $2 \times 2$, ..., $n \times n$ submatrices, denoted $D_1, D_2, \ldots, D_n$.
- **Positive definiteness** requires all minors to be positive: $D_k > 0$ for all $k$.
- **Negative definiteness** requires the minors to alternate in sign, starting negative: $D_1 < 0$, $D_2 > 0$, $D_3 < 0$, and so on.
Sylvester's criterion offers early termination—if the first minor fails, you stop. However, it can suffer from numerical instability, since computing determinants of large submatrices amplifies rounding error.
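Both sign patterns above can be checked directly. A sketch, where the alternating condition $D_1 < 0, D_2 > 0, \ldots$ is expressed compactly as $(-1)^k D_k > 0$:

```python
import numpy as np

def sylvester_positive_definite(A):
    """Positive definite iff every leading principal minor D_k > 0.
    Stops at the first failing minor (early termination)."""
    n = A.shape[0]
    for k in range(1, n + 1):
        if np.linalg.det(A[:k, :k]) <= 0:
            return False
    return True

def sylvester_negative_definite(A):
    """Negative definite iff the minors alternate: D_1 < 0, D_2 > 0, ..."""
    n = A.shape[0]
    for k in range(1, n + 1):
        if (-1) ** k * np.linalg.det(A[:k, :k]) <= 0:
            return False
    return True
```

For a $2 \times 2$ example, $A = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$ has $D_1 = -2 < 0$ and $D_2 = 3 > 0$, so the alternating test passes.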
---
## Quadratic Forms and Matrix Definiteness
A subtle but important distinction: the quadratic form $x^T A x$ and the matrix $A$ do **not** always share the same definiteness.
### The Symmetric Part Theorem
For any matrix $A$, the quadratic form $x^T A x$ produces a scalar. Since a scalar equals its own transpose:
$x^T A x = x^T A^T x$
Adding these yields:
$2(x^T A x) = x^T (A + A^T) x$
Therefore:
$x^T A x = x^T \left( \frac{A + A^T}{2} \right) x$
**The quadratic form sees only the symmetric part of $A$.** The skew-symmetric part $\frac{A - A^T}{2}$ contributes nothing—it vanishes identically in the quadratic form.
When $A$ is symmetric, the definiteness of $x^T A x$ and $A$ coincide perfectly. When $A$ is not symmetric, the quadratic form's definiteness belongs to $\frac{A + A^T}{2}$, not to $A$ itself. This is why definiteness, as a concept, attaches naturally to symmetric matrices or to quadratic forms—but not to arbitrary square matrices.
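The identity is easy to confirm numerically. A quick check with a randomly generated non-symmetric matrix (the seed and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a generic non-symmetric matrix
S = (A + A.T) / 2                 # symmetric part
K = (A - A.T) / 2                 # skew-symmetric part
x = rng.standard_normal(4)

# The quadratic form sees only S; the skew-symmetric part contributes zero.
print(np.isclose(x @ A @ x, x @ S @ x))   # True
print(np.isclose(x @ K @ x, 0.0))         # True
```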
---
## Why Definiteness Matters
Definiteness answers a question that geometry, physics, and optimization all ask: **does this quadratic form have a unique bottom?**
A positive definite matrix $A$ makes $x^T A x = 0$ only at the origin, growing positive everywhere else. The level sets form closed ellipsoids nested around the origin—the origin sits at the bottom of a bowl. A negative definite matrix inverts this: the origin becomes a peak.
### Optimization
The Hessian matrix of second partial derivatives, evaluated at a critical point, determines local shape:
- **Positive definite Hessian** → local minimum (the bowl)
- **Negative definite Hessian** → local maximum (the peak)
- **Indefinite Hessian** → saddle point
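The three cases above can be sketched as a second-derivative test driven by the Hessian's eigenvalues (the tolerance and the sample function are illustrative assumptions):

```python
import numpy as np

def classify_critical_point(H, tol=1e-10):
    """Classify a critical point from its symmetric Hessian's eigenvalues."""
    w = np.linalg.eigvalsh(H)
    if np.all(w > tol):
        return "local minimum"
    if np.all(w < -tol):
        return "local maximum"
    if np.any(w > tol) and np.any(w < -tol):
        return "saddle point"
    return "inconclusive"  # a zero eigenvalue: the second-order test fails

# f(x, y) = x^2 - y^2 has a critical point at the origin; its Hessian there:
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(classify_critical_point(H))   # saddle point
```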
### Inner Products and Geometry
A positive definite matrix $A$ creates an inner product:
$\langle x, y \rangle = x^T A y$
Definiteness guarantees $\langle x, x \rangle > 0$ for nonzero $x$—**length remains meaningful**. Orthogonality, projection, and least squares all rest on this foundation.
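A minimal sketch of an $A$-induced inner product and the norm it defines, using an example positive definite matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # positive definite (eigenvalues 1 and 3)

def inner(x, y):
    """Inner product induced by the positive definite matrix A."""
    return x @ A @ y

x = np.array([3.0, -1.0])
norm_x = np.sqrt(inner(x, x))   # the A-norm of x; positive for nonzero x
print(norm_x > 0)               # True
```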
### Statistics
Covariance matrices must be positive semi-definite because variance cannot go negative. In principal component analysis, the eigenvalues of the covariance matrix—guaranteed non-negative by semi-definiteness—measure how much variance each principal axis captures.
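A small demonstration with synthetic data (the sample size, dimension, and seed are arbitrary): the eigenvalues of a sample covariance matrix are the variances along the principal axes, and none can be negative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))   # 200 samples, 3 features
C = np.cov(X, rowvar=False)         # 3x3 sample covariance matrix

w = np.linalg.eigvalsh(C)           # variance captured by each principal axis
print(np.all(w >= 0))               # True: covariance is positive semi-definite
```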
### Physics
Energy functions often take quadratic form. A system rests in stable equilibrium when small perturbations cost energy—when the potential surface curves upward in every direction. Positive definiteness encodes stability; indefiniteness signals instability.
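As a toy illustration, assuming a hypothetical stiffness matrix $K$ for a pair of coupled springs: the potential energy of a small displacement $x$ from equilibrium is $V(x) = \frac{1}{2} x^T K x$, and positive definiteness of $K$ means every perturbation costs energy.

```python
import numpy as np

# Hypothetical stiffness matrix for two coupled springs (positive definite:
# eigenvalues 2 and 4), chosen here purely for illustration.
K = np.array([[3.0, -1.0],
              [-1.0, 3.0]])

x = np.array([0.1, -0.05])   # small perturbation from equilibrium
V = 0.5 * x @ K @ x          # energy cost of the perturbation
print(V > 0)                 # True: the equilibrium is stable
```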
### Numerical Computation
Positive definite systems admit **Cholesky decomposition**:
$A = LL^T$
where $L$ is lower triangular. This halves the work of Gaussian elimination and maintains numerical stability. Iterative methods like conjugate gradient converge reliably when positive definiteness holds.
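A sketch using NumPy's built-in factorization; note that attempting the factorization doubles as a practical definiteness test, since `np.linalg.cholesky` raises `LinAlgError` when the matrix is not positive definite:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])         # positive definite

L = np.linalg.cholesky(A)          # lower triangular factor
print(np.allclose(L @ L.T, A))     # True: A = L L^T

try:
    np.linalg.cholesky(np.array([[1.0, 2.0],
                                 [2.0, 1.0]]))   # indefinite matrix
except np.linalg.LinAlgError:
    print("not positive definite")
```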
---
## Summary
Definiteness is not an abstract classification. It is the signature of **whether a center holds**—whether a point is a true minimum, whether lengths make sense, whether equilibrium is stable, whether algorithms converge. The eigenvalues' signs encode this, and that encoding propagates through mathematics wherever quadratic structure appears.