# (draft) Conjugate Gradient

An iterative optimization method for quadratic problems of the form \[ \text{minimize}_\mathbf{x} f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{b}^T\mathbf{x}+c\]

where \(\mathbf{A}\) is symmetric positive definite, so that \(f\) is strictly convex and has a unique global minimum. The method builds a set of directions that are mutually conjugate with respect to \(\mathbf{A}\), i.e. vectors satisfying \(\langle \mathbf{d}_i,\mathbf{d}_j\rangle_\mathbf{A} := \mathbf{d}_i^T\mathbf{A}\mathbf{d}_j = 0\) for all \(i\neq j\), where \(\langle\cdot,\cdot\rangle_\mathbf{A}\) is a valid inner product precisely because \(\mathbf{A}\) is symmetric positive definite.
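
A quick numerical sketch of conjugacy (the matrix here is a hypothetical example, not from the text): starting from an arbitrary vector, one Gram–Schmidt step in the \(\mathbf{A}\)-inner product produces a direction conjugate to the first.

```python
import numpy as np

# Hypothetical SPD matrix, chosen only for illustration.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

d1 = np.array([1.0, 0.0])

# Gram-Schmidt step in the A-inner product: remove from v its
# A-projection onto d1, leaving a direction A-conjugate to d1.
v = np.array([0.0, 1.0])
d2 = v - (d1 @ A @ v) / (d1 @ A @ d1) * d1

print(d1 @ A @ d2)  # ≈ 0, i.e. d1 and d2 are conjugate w.r.t. A
```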

As a descent-direction-based approach, the first direction is the steepest descent direction \(\mathbf{d}_1 = -\nabla f(\mathbf{x}_1) = -(\mathbf{A}\mathbf{x}_1 + \mathbf{b})\); each subsequent direction is formed by taking the new negative gradient and subtracting its \(\mathbf{A}\)-projection onto the previous direction, so that all search directions remain mutually conjugate.
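
The update above can be sketched as follows. This is a minimal illustration, assuming the sign convention \(f(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{b}^T\mathbf{x} + c\) from the definition, so the gradient is \(\mathbf{A}\mathbf{x} + \mathbf{b}\) and the minimizer solves \(\mathbf{A}\mathbf{x} = -\mathbf{b}\); the function and variable names are mine, not from the text.

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Minimize f(x) = 0.5 x^T A x + b^T x + c for SPD A.

    Equivalent to solving A x = -b; converges in at most n steps
    in exact arithmetic, since the directions are A-conjugate.
    """
    x = x0.astype(float)
    g = A @ x + b                      # gradient of f at x
    d = -g                             # first direction: steepest descent
    for _ in range(len(b)):
        if np.linalg.norm(g) < tol:
            break
        Ad = A @ d
        alpha = -(g @ d) / (d @ Ad)    # exact line search along d
        x = x + alpha * d
        g = A @ x + b                  # new gradient
        beta = (g @ Ad) / (d @ Ad)     # A-projection coefficient
        d = -g + beta * d              # new direction, A-conjugate to d
    return x
```

For example, with \(\mathbf{A} = \begin{pmatrix}4 & 1\\ 1 & 3\end{pmatrix}\) and \(\mathbf{b} = (-1, -2)^T\), the method reaches the exact minimizer in two iterations.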

## Thoughts

- (Kochenderfer and Wheeler 2019) is a bit vague on the difference between conjugate gradient and non-linear conjugate gradient.