Bayesian Optimal Experimental Design (BOED)

A model-based approach for choosing a design \(d \in \mathcal{D}\) that maximizes the information gained about the model parameters \(\theta\) from the outcome \(y\) of running experiment \(d\). Given a predictive model \(\mathbb{P}(y|\theta,d)\) and a prior \(\mathbb{P}(\theta)\), the objective is to choose the design \(d\) that yields the greatest reduction in our uncertainty about \(\theta\), which as a function of \(d\) and \(y\) can be formulated as \[ \text{IG}(y,d) = \mathbb{H}[\mathbb{P}(\theta)] - \mathbb{H}[\mathbb{P}(\theta|y,d)], \]

where \(\mathbb{H}\) is the entropy operator. This quantity is called the Information Gain (IG), and taking its expectation over \(y \sim \mathbb{P}(y|d)\) gives the Expected Information Gain (EIG): \[ \text{EIG}(d) = \mathbb{E}_{\mathbb{P}(y,\theta|d)}\Big[\log \frac{\mathbb{P}(y|\theta,d)}{\mathbb{P}(y|d)}\Big]. \]
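One concrete (illustrative, not from the original note) way to compute this is a nested Monte Carlo estimate of the expectation above: sample \((\theta_n, y_n)\) pairs from the joint, and approximate the marginal \(\mathbb{P}(y_n|d)\) with an inner average over fresh prior samples. The sketch below assumes a toy conjugate linear-Gaussian model; the names (`SIGMA0`, `SIGMA`, `eig_nmc`) and all numerical settings are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model (an assumption, not from the note):
# theta ~ N(0, SIGMA0^2),  y | theta, d ~ N(d * theta, SIGMA^2),  scalar design d.
SIGMA0 = 1.0  # prior standard deviation of theta
SIGMA = 0.5   # observation noise standard deviation


def log_lik(y, theta, d):
    """log P(y | theta, d) under the Gaussian observation model."""
    return -0.5 * np.log(2 * np.pi * SIGMA**2) - 0.5 * ((y - d * theta) / SIGMA) ** 2


def eig_nmc(d, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo estimate of EIG(d) = E[log P(y|theta,d) - log P(y|d)]."""
    theta = rng.normal(0.0, SIGMA0, size=n_outer)             # theta_n ~ P(theta)
    y = rng.normal(d * theta, SIGMA)                           # y_n ~ P(y | theta_n, d)
    theta_inner = rng.normal(0.0, SIGMA0, size=(n_inner, 1))   # theta'_m ~ P(theta)

    log_cond = log_lik(y, theta, d)                            # log P(y_n | theta_n, d)
    # log P(y_n | d) ~= log( (1/M) sum_m P(y_n | theta'_m, d) ), computed stably in log space
    inner = log_lik(y[None, :], theta_inner, d)                # shape (n_inner, n_outer)
    log_marg = np.logaddexp.reduce(inner, axis=0) - np.log(n_inner)
    return float(np.mean(log_cond - log_marg))
```

For this particular conjugate model the EIG is available in closed form, \(\text{EIG}(d) = \tfrac{1}{2}\log(1 + d^2\sigma_0^2/\sigma^2)\), which gives a useful sanity check on the estimator.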

This is exactly the mutual information between \(\theta\) and \(y\) given \(d\). The Bayes optimal design \(d^*\) is then defined as \[ d^* = \arg\max_{d\in \mathcal{D}} \text{EIG}(d). \]
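Continuing the same hypothetical sketch, when \(\mathcal{D}\) is a small finite set the argmax can be taken by brute force, estimating the EIG for each candidate design (the grid below is made up for illustration):

```python
designs = np.linspace(0.0, 3.0, 13)              # candidate designs D
eig_values = [eig_nmc(d) for d in designs]
d_star = designs[int(np.argmax(eig_values))]     # Bayes optimal design d*
print(f"d* = {d_star:.2f}, estimated EIG = {max(eig_values):.3f}")
# For this conjugate model EIG(d) = 0.5 * log(1 + d^2 * SIGMA0^2 / SIGMA^2) is
# increasing in |d|, so d_star should come out as the largest candidate, 3.0.
```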

Thoughts

  • A bit vague here to be honest.

Author: Nazaal

Created: 2022-03-13 Sun 21:44
