## SRN – Bayesian regret of Thompson Sampling

Hello 2021! This is my second Sunday Reading Notes post of the year. This week I had the chance to read about Thompson sampling, a topic I have been interested in since 2018. The paper “On Thompson Sampling for Smoother-than-Lipschitz Bandits” by Grant and Leslie, from Lancaster University, concerns regret bounds for Thompson sampling on continuum-armed bandits. The regret bound is stated in terms of the eluder dimension and the ball-width function of the function class $\mathcal{F}$ containing the reward function $f$.

Both the eluder dimension and the ball-width function are new concepts to me. I was able to read more about them in Russo and Van Roy (2014) and was intrigued by the concept of the $\epsilon$-eluder dimension, which measures the complexity of a set of functions. An action $a$ is $\epsilon$-independent of actions $\{a_1,\ldots,a_n\}$ with respect to $\mathcal{F}$ if there exist two functions in $\mathcal{F}$ that have similar function values at $\{a_1,\ldots,a_n\}$ but very different function values at $a$. Based on this notion of $\epsilon$-independence, the $\epsilon$-eluder dimension is defined in a similar fashion to how dimension is defined through linear independence in linear algebra.

A key assumption of this framework is that the model is well-specified: the true reward function $f$ is not only included in the function class $\mathcal{F}$, but is also a sample from the prior distribution $p_0$. Being a Bayesian statistician, I am naturally curious about the misspecified case. Would the regret bound decompose into a term for the distance from $f$ to $\mathcal{F}$ and a term for the complexity of $\mathcal{F}$? Is the analysis feasible, and what techniques should we use?

Reference:

Grant, J., & Leslie, D. (2020). On Thompson Sampling for Smoother-than-Lipschitz Bandits. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research, 108:2612-2622.

Russo, D., & Van Roy, B. (2014). Learning to optimize via posterior sampling. Mathematics of Operations Research, 39(4), 1221-1243.

## SRN – the magical 0.574

Hello 2021! Here is the first post in the Sunday Reading Notes series of 2021. This week, I keep reading about the optimal scaling of Markov chain Monte Carlo algorithms and move on to a new class of algorithms: MALA. This time, the magical number is 0.574.

Roberts and Rosenthal (1998, JRSSB) concerns the optimal scaling of the Metropolis-adjusted Langevin algorithm (MALA) for target distributions of the form ${\pi^d(x) = \prod_{i=1}^d f(x_i)}$. Unlike the random walk Metropolis algorithm, MALA's proposal distribution uses the gradient of ${\log \pi^d}$ at ${x_t}$:

$\displaystyle x^{\star} = x_t + \sigma_d Z_{t+1} + \frac{\sigma_d^2}{2} \nabla \log (\pi^d(x_t)).$

Here ${\sigma_d^2}$ is the step variance and ${Z_{t+1} \sim \mathcal{N}(0,I_d)}$ is a $d$-dimensional standard normal. Following this proposal, we perform an accept/reject step, so that the resulting Markov chain has the desired stationary distribution:
$\displaystyle \alpha_d(x_t,x^{\star}) = \min \left\{ 1, \frac{\pi^d(x^{\star})}{\pi^d(x_t)} \frac{q_d(x^{\star},x_t)}{q_d(x_t,x^{\star})}\right\}.$
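As a concrete illustration, here is a minimal MALA sketch in Python for a one-dimensional standard normal target; the step size and chain length are my own choices, not from the paper:

```python
import math
import random

random.seed(1)

def log_pi(x):
    # standard normal target, up to an additive constant
    return -0.5 * x * x

def grad_log_pi(x):
    return -x

def log_q(x_new, x_old, sigma):
    # log-density of the MALA proposal N(x_old + (sigma^2/2) * grad, sigma^2)
    mean = x_old + 0.5 * sigma ** 2 * grad_log_pi(x_old)
    return -0.5 * (x_new - mean) ** 2 / sigma ** 2

sigma = 1.0
x = 0.0
samples = []
for _ in range(50000):
    z = random.gauss(0.0, 1.0)
    x_star = x + 0.5 * sigma ** 2 * grad_log_pi(x) + sigma * z
    # Metropolis-Hastings correction with the asymmetric proposal density
    log_alpha = (log_pi(x_star) - log_pi(x)
                 + log_q(x, x_star, sigma) - log_q(x_star, x, sigma))
    if math.log(random.random()) < log_alpha:
        x = x_star
    samples.append(x)

mean_est = sum(samples) / len(samples)
var_est = sum(s * s for s in samples) / len(samples)
```

Note that the $q$ ratio must be kept in the acceptance probability, since the Langevin proposal is not symmetric; dropping it would target the wrong distribution.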

There are some smoothness assumptions on the log-density ${g = \log f}$: the absolute values of the first eight derivatives of ${g}$ are bounded, ${g'}$ is Lipschitz continuous, and all moments of ${f}$ exist.

Theorem 3 (diffusion limit of first component)

Consider the chain ${\{X^d\}}$ started in stationarity and following the MALA algorithm with step variance ${\sigma_d^2 = l^2 d^{-1/3}}$ for some ${l > 0}$. Let ${\{J_t\}}$ be a Poisson process with rate ${d^{1/3}}$, set ${\Gamma^d_t = X^d_{J_t}}$, and let ${U^d}$ be the first component of ${\Gamma^d}$, i.e. ${U^d_t = X^d_{J_t,1}}$. Then, as ${d \to \infty}$, the process ${U^d \to U}$, where ${U}$ is a Langevin diffusion defined by
$\displaystyle dU_t = \sqrt{h(l)} dB_t + \frac{1}{2} h(l) \frac{d}{dx} \log (\pi_1(U_t))dt.$

Here ${h(l) = 2 l^2 \Phi(-Kl^3/2)}$ is the speed measure and ${K = \sqrt{\mathbb{E} \left[\frac{5g'''(X)^3 - 3g''(X)}{48}\right]} > 0}$.
The skeleton of the proof is similar to that of the diffusion limit for random walk Metropolis. We find sets ${F_d^{\star} \subseteq \mathbb{R}^d}$ such that ${\mathbb{P}\left(\Gamma_s^d \in F_d^{\star} \ \forall \ 0 \le s \le t\right) \to 1}$, and on these sets the generators converge uniformly.

Theorem 4 (convergence of acceptance rate)
$\displaystyle \lim_{d \to \infty} a_d(l) = a(l),$ where ${a(l) = 2 \Phi(-Kl^3/2).}$
As a result, when ${h(l)}$ is maximized, the acceptance rate is ${a(\hat{l}) \approx 0.574}$, and the step variance should be tuned to ${\sigma_d^2 = (\hat{l})^2 d^{-1/3}.}$
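The 0.574 figure can be recovered numerically from the speed measure alone. A short sketch (setting K = 1 for concreteness; the optimal acceptance rate turns out not to depend on K):

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def h(l, K=1.0):
    # speed measure of the limiting Langevin diffusion
    return 2.0 * l ** 2 * Phi(-K * l ** 3 / 2.0)

# grid search for the l that maximizes the speed
grid = [0.5 + 0.0001 * i for i in range(15000)]
l_star = max(grid, key=h)
# limiting acceptance rate at the optimal step size
alpha_star = 2.0 * Phi(-l_star ** 3 / 2.0)
```

Substituting $u = K l^3$ shows that the maximizing $u$ does not depend on $K$, so the optimal acceptance rate is universal; only the optimal step size scales with $K$.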

MALA enjoys a faster mixing rate, of order ${d^{1/3}}$, compared to order ${d}$ for RWM. But let's also keep in mind that MALA requires an extra step to compute the gradient ${\nabla \log \pi^d(x)}$, whose ${i}$-th component is ${g'(x_i)}$, and this takes ${O(d)}$ operations.

Reference:

Roberts, G. O., & Rosenthal, J. S. (1998). Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1), 255-268.

## SRN – about the magical 0.234 acceptance rate

The Sunday Reading Notes series is back: let's understand the magical rule of ‘tuning your MH algorithm so that the acceptance rate is roughly 25%’ together!

‘Tune your MH algorithm so that the acceptance rate is roughly 25%’ is general advice given to students in Bayesian statistics classes. It has been almost 4 years since I first read about it in the book Bayesian Data Analysis, but I had never read the original paper where this result first appeared. This Christmas, I decided to read the paper ‘Weak Convergence and Optimal Scaling of Random Walk Metropolis Algorithms’ by Roberts, Gelman and Gilks, and to resume my Sunday Reading Notes series with a short exposition of this paper.

In Roberts, Gelman and Gilks (1997), the authors obtain a weak convergence result: the sequence of algorithms targeting the sequence of distributions ${\pi_d(x^d) = \prod_{i=1}^{d} f(x_i^d)}$ converges to a Langevin diffusion. The asymptotic optimal scaling problem then becomes a matter of optimizing the speed of the Langevin diffusion, which is related to the asymptotic acceptance rate of proposed moves.

A one-sentence summary of the paper would be

if you have a d-dimensional target that is independent in each coordinate, then choose the step size of the random walk kernel to be 2.38 / sqrt(d), or tune your acceptance rate to be around 1/4.

Unfortunately, in practice the ‘if’ condition is often overlooked, and people tune the acceptance rate to 0.25 whenever the proposal is a random walk, no matter what the target distribution is. It has been more than 20 years since the publication of the 0.234 result, and we are witnessing the use of MCMC algorithms on more complicated target distributions, for example parameter inference for state-space models. I feel this is a good time to revisit and appreciate the classical results while re-educating ourselves on their limitations.
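To see the rule in action, here is a small simulation sketch: random walk Metropolis on a d-dimensional standard normal target with step size 2.38/sqrt(d); the dimension, seed, and chain length are my own choices:

```python
import math
import random

random.seed(7)

d = 100
# the asymptotically optimal scaling for an i.i.d. N(0,1) target
sigma = 2.38 / math.sqrt(d)

def log_pi(x):
    # product of standard normal densities, up to a constant
    return -0.5 * sum(xi * xi for xi in x)

x = [random.gauss(0.0, 1.0) for _ in range(d)]  # start in stationarity
accepted = 0
n_iters = 20000
for _ in range(n_iters):
    y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
    if math.log(random.random()) < log_pi(y) - log_pi(x):
        x = y
        accepted += 1

acc_rate = accepted / n_iters
```

At d = 100 the empirical acceptance rate should already be close to the limiting value 0.234.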

Reference:

Roberts, G. O., Gelman, A., & Gilks, W. R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. The Annals of Applied Probability, 7(1), 110-120.

——–TECHNICAL EXPOSITION——-

Assumption 1 The marginal density of each component $f$ is such that ${f'/f}$ is Lipschitz continuous and

$\displaystyle \mathbb{E}_f\left[\left(\frac{f'(X)}{f(X)}\right)^8\right] = M < \infty, \ \ \ \ \ (1)$

$\displaystyle \mathbb{E}_f\left[\left(\frac{f''(X)}{f(X)}\right)^4\right] < \infty. \ \ \ \ \ (2)$

Roberts et al. (1997) consider the random walk proposal ${y^d - x^d \sim \mathcal{N}(0,\sigma_d^2 I_d)}$ where ${\sigma_d^2 = l^2 / (d-1).}$ We use ${X^d = (X_0^d,X_1^d,\ldots)}$ to denote the Markov chain and define another Markov process ${(Z^d)}$ with ${Z_t^d = X_{[dt]}^d}$, a sped-up version of ${X^d}$, where ${[a]}$ denotes the floor of ${a \in \mathbb{R}}$. Define ${U^d_t= X^d_{[dt],1}}$, the first component of ${X_{[dt]}^d = Z^d_t}$.

Theorem 1 (diffusion limit of the first component) Suppose ${f}$ is positive and in ${C^2}$ and that (1)-(2) hold. Let ${X_0^{\infty} = (X^1_{0,1},X^{2}_{0,2},\ldots)}$ be such that all components are distributed according to ${f}$, and assume ${X^{i}_{0,j} = X^{j}_{0,j}}$ for all ${i \ge j}$. Then as ${d \to \infty}$,
$\displaystyle U^d \to U.$

Here ${U_0 \sim f}$ and ${U}$ satisfies the Langevin SDE
$\displaystyle dU_t = (h(l))^{1/2}dB_t + h(l)\frac{f'(U_t)}{2f(U_t)}dt \ \ \ \ \ (3)$

and
$\displaystyle h(l) = 2 l^2 \Phi(-l\sqrt{I}/2)$

with ${\Phi}$ being the standard normal cdf and
$\displaystyle I = \mathbb{E}_f\left[\left(f'(X)/ f(X)\right)^2\right].$

Here ${h(l)}$ is the speed measure of the diffusion process, and the ‘most efficient’ asymptotic diffusion is the one with the largest speed measure. The quantity ${I}$ measures the ‘roughness’ of ${f}$.

Example 1 If ${f}$ is normal, then ${f(x) = (2\pi\sigma^2_f)^{-1/2}\exp(-x^2/(2\sigma_f^2)).}$
$\displaystyle I = \mathbb{E}_f\left[\left(f'(X) / f(X) \right)^2\right] = (\sigma_f)^{-4}\mathbb{E}_f\left[X^2\right] = 1/\sigma^2_f.$

So when the target density ${f}$ is normal, then the optimal value of ${l}$ is scaled by ${1 / \sqrt{I}}$, which coincides with the standard deviation of ${f}$.
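A quick Monte Carlo check of this identity; the value of sigma_f, the seed, and the sample size are my own choices:

```python
import random

random.seed(3)

sigma_f = 2.0
# for f normal with variance sigma_f^2 and mean 0, f'(x)/f(x) = -x / sigma_f^2
n = 100000
I_hat = sum((random.gauss(0.0, sigma_f) / sigma_f ** 2) ** 2
            for _ in range(n)) / n
# the theoretical value is 1 / sigma_f^2
```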

Proof: (of Theorem 1) This is a proof sketch. The strategy is to prove that the generator of ${Z^d}$, defined by

$\displaystyle G_d V(x^d) = d\, \mathbb{E}\left[\left(V(Y^d) - V(x^d)\right) \left( 1 \wedge \frac{\pi_d(Y^d)}{\pi_d(x^d)}\right)\right],$

converges to the generator of the limiting Langevin diffusion, defined by
$\displaystyle GV(x) = h(l) \left[\frac{1}{2} V''(x) + \frac{1}{2} \frac{d}{dx}(\log f)(x) V'(x)\right].$

Here the function ${V}$ is a function of the first component only.
First define a set

$\displaystyle F_d = \{|R_d(x_2,\ldots,x_d) - I| < d^{-1/8}\} \cap \{|S_d(x_2,\ldots,x_d) - I| < d^{-1/8}\},$

where
$\displaystyle R_d(x_2,\ldots,x_d) = (d-1)^{-1} \sum_{i=2}^d \left[(\log f(x_i))'\right]^2$

and
$\displaystyle S_d(x_2,\ldots,x_d) = - (d-1)^{-1} \sum_{i=2}^d \left[(\log f(x_i))''\right].$

For fixed ${t}$, one can show that ${\mathbb{P}\left(Z^d_s \in F_d , 0 \le s \le t\right)}$ goes to 1 as ${d \to \infty}$. On these sets ${\{F_d\}}$, we have
$\displaystyle \sup_{x^d \in F_d} |G_d V(x^d) - G V(x_1)| \to 0 \quad \text{as } d \to \infty ,$

which essentially says ${G_d \to G}$, because we have uniform convergence for vectors contained in a set of limiting probability 1.
$\Box$

Corollary 2 (heuristics for RWMH) Let
$\displaystyle a_d(l) = \int \int \pi_d(x^d)\alpha(x^d,y^d)q_d(x^d,y^d)dx^d dy^d$

be the average acceptance rate of the random walk MH in ${d}$ dimensions.
We must have ${\lim_{d\to\infty} a_d(l) = a(l)}$ where ${a(l) = 2 \Phi(-l\sqrt{I}/2)}$.
${h(l)}$ is maximized by ${l = \hat{l} = 2.38 / \sqrt{I}}$, with ${a(\hat{l}) \approx 0.234}$ and ${h(\hat{l}) \approx 1.3 / I.}$
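These constants are easy to check numerically from the formulas above; a short sketch (the value of I is arbitrary, since the acceptance rate at the optimum does not depend on it):

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

I = 1.0                      # roughness of f
l_hat = 2.38 / math.sqrt(I)  # optimal step size scaling
# limiting acceptance rate and speed at l_hat
a_hat = 2.0 * Phi(-l_hat * math.sqrt(I) / 2.0)
h_hat = 2.0 * l_hat ** 2 * Phi(-l_hat * math.sqrt(I) / 2.0)
```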
The authors consider two extensions of the target density ${\pi_d}$ for which the convergence and optimal scaling properties still hold. The first extension concerns the case where the ${f_i}$'s are different, but a law of large numbers holds for these density functions. The second extension concerns the case ${\pi_d(x^d) = f_1(x_1) \prod_{i=2}^{d} P(x_{i-1}, x_{i})}$, with some conditions on ${P}$.

## Troubleshooting: build packages in RStudio after MacOS update

I have never considered myself a good developer. Coding activities such as writing functions in Rcpp for an R package and building my personal website (nianqiaoju.github.io) already make me proud.

Last night, I had some trouble building a package in RStudio. At the beginning, I thought it was because I accidentally included a test file in the ProjectDirectory/R folder, but that was not the case. So I had no choice but to read the error messages carefully.

I did not keep a record of this, but important bits of the messages look like:

ld: unknown option: -platform_version
clang: error: linker command failed with exit code 1 (use -v to see invocation)

That’s when I started to realize that this had to do with a recent macOS update and that my package was probably fine. Interestingly, I did not recall authorizing a system update and learned about it only by running sessionInfo() in the console. These are some troubleshooting steps I took in the RStudio console:

• check if devtools works: devtools::load_all() → leads to error messages
• check if Rcpp works: Rcpp::evalCpp("2+2") → leads to error messages.

It seemed like the operating system update had ‘disabled’ my R compiler tools for Rcpp. Unfortunately, despite the huge amount of information on Stack Overflow and GitHub issues, I did not see any error messages matching exactly what I saw, and there was some conflicting information. Moreover, I did not want to install the whole Xcode app or cause any irreversible damage. After an hour of trial and error, below is what I think solved my crisis.

• run xcode-select --install in the terminal; it turns out my command line tools were already installed, since I got the message xcode-select: error: command line tools are already installed, use "Software Update" to install updates.
• run gcc --version in the terminal, to confirm that the command gcc is recognized.
• set the file  ~/.R/Makevars to point to the system headers. I did it with vim: add the following lines to the file and comment out what’s already there.
# clang: start
CFLAGS=-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
CCFLAGS=-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
CXXFLAGS=-isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk
# clang: end


I believe that the last step did the magic for me and I was able to build packages from source without any trouble.

In this process, this Stack Overflow discussion and this blog post were the most helpful.

## Getting started with RStudio on AWS in 4 steps

This blog post is a quick guide to setting up an RStudio server on Amazon Web Services. Credit goes to Louis Aslett for setting up the Amazon Machine Images and making life easier for all of us.

Step 1: creating an AWS account

A free account gives access to the t2.micro servers, which means that you can run one for 750 hours a month.

Step 2: choosing an Amazon Machine Image (AMI) prepared by Louis Aslett

It can be painful to boot your own virtual machine and install R and RStudio yourself. Louis Aslett has already done the job for us, and this is a huge service to the community!

He has prepared some virtual machines with R, RStudio, LaTeX, etc. pre-installed. You can choose one of the AMIs based on your location, and the links will direct you to choose an instance type (the default is the free tier t2.micro). We can rely on the default settings until “Step 6: Configure Security Group”. Click on “Add Rule” and select “HTTP” to allow internet traffic to this instance, and change “Source” to specify one IP address or an IP address range. Setting “Source” to “Anywhere” allows all IP addresses to access the instance; you can also set it to “My IP”. You can also create a new security group on this page.

While this is not secure, we can “proceed without a key pair” to launch the instance.

Step 3: changing password for the RStudio Server

It can take a few minutes to launch the instance. Once the instance is “running”, you can copy and paste the “Public IPv4 address” from the instance summary into your browser. You will see an RStudio Server login page. The initial username is “rstudio” and the password is “<Your instance ID>”.

This AMI comes with the package “RStudioAMI” pre-installed and loaded. We can use the function passwd() to change the password for the RStudio server.

Step 4: Link Dropbox or Github repository

If you use Dropbox to sync files, you can get started by running the function linkDropbox(). Of course, we do not want to sync our entire Dropbox account here. We can sync just a project folder by running excludeSyncDropbox("*") followed by includeSyncDropbox("projectFolder"). It seems to me that we can only sync directories directly under Dropbox, not subdirectories like “projectFolder/subDir”.

## [TM] Stein’s lemma

This post is about Stein’s lemma, which first appeared in Stein’s landmark 1981 paper. This lemma leads to Stein’s unbiased risk estimator and is useful for proving central limit theorems. It has also been called ‘Gaussian integration by parts’, which is in fact a high-level description of the proof.

Lemma 1 (Stein’s lemma) If ${x}$ follows the standard normal distribution, then

$\displaystyle \mathbb{E}\left[x f(x) \right] = \mathbb{E}\left[f'(x)\right],$

if the expectations are well-defined.

Proof: If ${f}$ is ‘nice’, then

$\displaystyle \begin{array}{rcl} \mathbb{E}\left[f'(x)\right] &=& \int_{-\infty}^{\infty} f'(x) \frac{\exp(-x^2/2)}{\sqrt{2\pi}} dx\\ &=& 0 - \int_{-\infty}^{\infty} f(x) \left(-x \frac{\exp(-x^2/2)}{\sqrt{2\pi}} \right) dx \\ &=& \mathbb{E}\left[x f(x)\right]. \end{array}$

$\Box$

It is also convenient to denote the standard normal density by ${\phi(x)}$ and remember that

$\displaystyle \phi'(x) = -x \phi(x).$
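A quick Monte Carlo sanity check of the lemma, using the test function $f(x) = x^3$ (my own choice), for which both sides equal $\mathbb{E}[x^4] = \mathbb{E}[3x^2] = 3$:

```python
import random

random.seed(11)

n = 1000000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
lhs = sum(x * x ** 3 for x in xs) / n    # E[x f(x)] with f(x) = x^3
rhs = sum(3.0 * x ** 2 for x in xs) / n  # E[f'(x)] with f'(x) = 3 x^2
# both estimates should be close to 3
```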

Stein’s lemma can be generalized to exponential family distributions. In particular, for multivariate normals, if ${y \sim \mathrm{MVN}_{p}(\mu, \sigma^2 I)}$ and ${\hat{\mu} = r(y)}$ is any differentiable estimator, then we have

$\displaystyle \textrm{cov}\left(\hat{\mu}_i, y_i\right) = \sigma^2 \mathbb{E}\left[\frac{\partial \hat{\mu}_i}{\partial y_i}\right].$

This is Equation (12.56) in ‘Computer Age Statistical Inference’ by Efron and Hastie.

This post is the second in the category ‘trivial matters’, where I formally write down some notes to myself about identities, tricks, and facts that I repeatedly use but (unfortunately) need to re-derive every time I use them. Although these posts are short, they discuss important topics. The term ‘trivial matters’ is used sarcastically, because my blood boils every time I see terms like ‘obviously’ or ‘it is trivial to show that …’ when I grade students’ homework.

## [TM] Change of Variables in MCMC

This post is about change of variables in Markov chain Monte Carlo (MCMC), which is used quite often when the target distribution is supported on a subset of ${\mathbb{R}^n}$. For example, the Exponential distribution and the Log-Normal distribution are only supported on positive reals.

Consider a target distribution ${\pi(x)}$ supported on a subset ${S \subset \mathbb{R}^n}$. If we use a random walk proposal ${q_X(x' \mid x) = \mathrm{MVN}(x' ; x,\Sigma)}$, then we might end up with a proposal ${x'}$ such that ${\pi(x') = 0}$, and this might cause too few acceptances in the MCMC chain. If we can find a transformation ${h: S \to \mathbb{R}^n}$ that is one-to-one, differentiable, and maps ${S}$ onto ${\mathbb{R}^n}$, then we can consider the proposal ${x' = h^{-1}(y')}$ where ${y' \sim q_Y(\cdot \mid y = h(x))}$. This always yields a proposal ${x'}$ such that ${\pi(x') > 0.}$

Of course, when we employ such a transformation in the proposal kernel, we need to be careful about evaluating the proposal densities. We know that the acceptance probability is ${\alpha(x,x') = 1 \wedge \frac{\pi(x') q(x \mid x')}{\pi(x) q(x' \mid x)}}$, and it should be no surprise that ${q_X(x' \mid x) \not= q_Y(y' \mid y)}$ unless ${h}$ is the identity map.

Let’s work out the acceptance ratio together carefully. Recall how change of variables works: if ${Y \sim f_Y(y) }$ and we consider the transformation ${x = h^{-1}(y)}$, then the pdf of ${X = h^{-1}(Y)}$ is

$\displaystyle f_X(x) = f_Y(h(x))|J_{h}(x)|.$

When we apply this to the kernels ${q_Y}$ and ${q_X}$ we get that

$\displaystyle q_X(x' \mid x ) = q_Y(h(x') \mid h(x)) \cdot |J_{h}(x')|.$

Example 1 (Symmetric proposal on transformed space) If ${q_Y(y' \mid y)}$ is a symmetric proposal, then the acceptance probability becomes

$\displaystyle \alpha(x,x') = 1 \wedge \frac{\pi(x') |J_{h}(x)|}{\pi(x)|J_{h}(x')|} .$

Here are two common transformations.

Example 2 (Log-transformation for ${x}$ supported on ${\mathbb{R}_+}$)

If ${h = \log}$, then ${|J_{h}(x)| = 1 / x}$ and acceptance probability is

$\displaystyle \alpha(x,x') = 1 \wedge \frac{\pi(x')x'}{\pi(x)x}.$
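A minimal sketch of this log-transform proposal in Python, targeting an Exponential(1) distribution; the target, step size, and chain length are my own choices:

```python
import math
import random

random.seed(5)

def pi(x):
    # Exponential(1) density on the positive reals
    return math.exp(-x) if x > 0 else 0.0

s = 1.0  # random walk step size on the log scale
x = 1.0
samples = []
for _ in range(50000):
    # symmetric random walk on y = log(x), mapped back to x
    x_star = math.exp(math.log(x) + s * random.gauss(0.0, 1.0))
    # acceptance ratio with the Jacobian correction: pi(x') x' / (pi(x) x)
    alpha = min(1.0, pi(x_star) * x_star / (pi(x) * x))
    if random.random() < alpha:
        x = x_star
    samples.append(x)

mean_est = sum(samples) / len(samples)  # E[X] = 1 for Exponential(1)
```

Every proposal lands in the support, and the chain never needs to reject for positivity reasons alone.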

Example 3 (Logit transformation for ${x}$ supported on ${(0,1)}$) If ${h(x) = \log(\frac{x}{1 - x})}$, then the inverse transformation is ${h^{-1}(y) = \frac{\exp(y)}{1 + \exp(y)}.}$ The acceptance probability is

$\displaystyle \alpha(x,x') = 1 \wedge \frac{\pi(x')x'(1-x')}{\pi(x)x(1-x)}.$

This post is the first in the category ‘trivial matters’, where I formally write down some notes to myself about tricks and facts that I repeatedly use but (unfortunately) need to re-derive every time I use them.

## SRN – “Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets” by Cornish et al.

When faced with large datasets, standard MCMC methods have a cost of O(n) per step due to evaluating the log-likelihood for n data points. Can we make it faster?

My COVID-19 self-isolation plan includes trying to finish as many to-read articles as possible and to blog my progress here as much as possible. This weekend, I read the paper “Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets” by Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, and Arnaud Doucet. This is an ICML 2019 paper, and it is a dense one. It attempts to solve one problem: when faced with large datasets, standard MCMC methods have a cost of O(n) per step due to evaluating the log-likelihood for n data points. Can we make it faster? In a typical MH step (with symmetric proposals), we accept the proposal with probability $\alpha(\theta,\theta') = \min(1, \frac{\pi(\theta')}{\pi(\theta)})$. This is typically implemented as

log_alpha <- log_post(theta_proposal) - log_post(theta_current)
if (log(runif(1)) < log_alpha) {
  theta_current <- theta_proposal
}


The authors realize that it is wasteful to evaluate the log-likelihood at all data points before rejecting a proposal. Their solution, Scalable Metropolis-Hastings (SMH), is based on a combination of three ideas: (1) factorization of the posterior density (the trivial factorization has 1 + n factors), (2) fast simulation of the rejection step using thinning of a Poisson point process, and (3) posterior concentration around the mode when n is large.

I first got excited about this paper upon seeing the fast simulation of Bernoulli random variables (in Algorithm 1), because there is a very similar problem on simulating a Poisson process with inhomogeneous rates in Stat 210, for which I was a Teaching Fellow (TF). I am very happy to see this result being useful in cutting-edge research. The fast simulation relies on access to a lower bound for each factor of the posterior distribution, and the tighter the lower bound, the more efficient the implementation. Suppose the target distribution has m factors; then, once we pay a once-off O(m) cost, each acceptance step can be implemented in O(upper bound of the negative log-posterior) steps on average.
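The thinning trick behind that fast simulation can be sketched on its own: to draw from an inhomogeneous Poisson process with rate λ(t) bounded above by λ_max, draw a homogeneous Poisson(λ_max) process and keep each point t independently with probability λ(t)/λ_max. The rate function, horizon, and constants below are my own illustrative choices, not the paper’s:

```python
import math
import random

random.seed(2)

lam_max = 10.0
T = 2.0 * math.pi

def lam(t):
    # inhomogeneous rate, bounded above by lam_max
    return lam_max * math.sin(t) ** 2

def thinned_process():
    # homogeneous Poisson(lam_max) points on [0, T], thinned to rate lam(t)
    points = []
    t = 0.0
    while True:
        t += random.expovariate(lam_max)
        if t > T:
            break
        if random.random() < lam(t) / lam_max:
            points.append(t)
    return points

# the expected number of kept points is the integral of lam over [0, T],
# which is lam_max * T / 2 for this rate function
n_reps = 500
avg_count = sum(len(thinned_process()) for _ in range(n_reps)) / n_reps
```

The tighter the bound λ_max hugs λ(t), the fewer candidate points are wasted, mirroring the role of tight factor bounds in SMH.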

Another highlight of the paper is the smart application of the Bernstein-von Mises (BvM) phenomenon, which states that the posterior distribution concentrates around the mode at rate $1/\sqrt{n}$ as n goes to infinity. BvM was an important topic in Stat 213, another class that I TFed. I have seen BvM used (1) to select the step size in random-walk MH, (2) to initialize an MH chain, and (3) to motivate the Laplace approximation of the posterior distribution. In this paper, it motivates a Taylor expansion of the potential function near the posterior mode, which subsequently leads to the factorization behind the SMH kernel. The SMH kernel leaves the target distribution invariant, and, if I am not mistaken, this property explains the use of “exact” in the title. More importantly, since the acceptance probability in SMH is factorizable, we can implement the acceptance step with the aforementioned fast simulation, provided that the log-density can be differentiated enough times. Intuitively, the higher the differentiability, the closer the Taylor expansion, the tighter the bound on the acceptance rate, and the fewer evaluations we need. In fact, as later justified by Theorem 3.1, the average computational cost decays geometrically with the order of the Taylor expansion. With a second-order Taylor expansion, which requires third-order differentiability, the expected computational cost behaves like $O(1/\sqrt{n})$. Hooray! I am always a big fan of “the bigger the problem, the easier the solution”.

I learned a lot from this paper, especially from the opening paragraph of Section 3 and from Section 3.2. But there is a question that I keep asking myself: if n is very large and we have an estimate of the posterior mode within $O(1/\sqrt{n})$ distance of the true mode, how long does it take for SMH to beat simply declaring the posterior Gaussian by invoking BvM and sleeping on the result?

## SRN – my second attempt to understand ‘Controlled SMC’ by Heng et al.

It has been two years since I last tried to understand the paper ‘Controlled Sequential Monte Carlo’ by Jeremy Heng et al. Over the past two years, I picked this paper up many times, and every time I read it, I understood it a little better. I am finally ready to share my understanding of this paper on this blog.


This paper opens with an introduction to the Feynman-Kac path measure. A triple of (1) initial distribution, (2) a collection of Markov kernels, and (3) potential functions determines a Feynman-Kac path measure. For readers familiar with state-space models, this is not something new: the triple can correspond to (1) the initial distribution of hidden process, (2) Markov kernels of the hidden process and (3) observation densities.

For each Feynman-Kac path measure, there is a sequential Monte Carlo (SMC) algorithm (a.k.a. a particle filter) that provides a particle approximation of the path measure. Furthermore, Feynman-Kac path measures can be twisted: although all the kernels and potentials change, the path measure at the terminal time is the same, and the normalizing constant at the terminal time $Z_T$ stays the same.

To infer state-space models, we are interested in the smoothing distribution of $x_{0:T}$ given $y_{0:T}$ and the marginal likelihood $p(y_{0:T})$. The smoothing distribution has Feynman-Kac representations, with the marginal likelihood as the normalizing constant. The most natural one is described above, and it corresponds to the bootstrap particle filter. But it is not unique: for example, the path measure associated with a 1-step-look-ahead kernel corresponds to the popular ‘fully-adapted auxiliary particle filter’. As mentioned above, each of these path measures can be twisted. Proposition 1 defines an optimal policy with respect to each path measure, where optimality means approximating the normalizing constant almost surely. The optimal policy with respect to the path measure corresponding to the bootstrap particle filter is the backward information filter $\phi^{\star}(x_t) = p(y_{t:T} \mid x_t)$.
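To make the correspondence with the bootstrap particle filter concrete, here is a minimal sketch on a linear-Gaussian state-space model, where the Kalman filter provides the exact marginal likelihood for comparison. The model parameters, particle count, and time horizon are my own choices:

```python
import math
import random

random.seed(4)

a, q, r = 0.9, 0.5, 1.0  # transition coefficient, state and observation variances
T = 20

# simulate data from the model: x_0 ~ N(0,1), x_t = a x_{t-1} + N(0,q), y_t = x_t + N(0,r)
x_true = random.gauss(0.0, 1.0)
ys = []
for t in range(T):
    if t > 0:
        x_true = a * x_true + random.gauss(0.0, math.sqrt(q))
    ys.append(x_true + random.gauss(0.0, math.sqrt(r)))

def kalman_log_lik(ys):
    # exact log p(y_{0:T-1}) via the Kalman predictive decomposition
    m_pred, p_pred = 0.0, 1.0
    ll = 0.0
    for y in ys:
        s = p_pred + r
        ll += -0.5 * math.log(2.0 * math.pi * s) - 0.5 * (y - m_pred) ** 2 / s
        k = p_pred / s
        m = m_pred + k * (y - m_pred)
        p = (1.0 - k) * p_pred
        m_pred, p_pred = a * m, a ** 2 * p + q
    return ll

def bootstrap_pf_log_lik(ys, n_particles=5000):
    # bootstrap particle filter: propagate with the prior kernel, weight by g(y|x)
    xs = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    ll = 0.0
    for t, y in enumerate(ys):
        if t > 0:
            xs = [a * x + random.gauss(0.0, math.sqrt(q)) for x in xs]
        ws = [math.exp(-0.5 * (y - x) ** 2 / r) / math.sqrt(2.0 * math.pi * r)
              for x in xs]
        ll += math.log(sum(ws) / n_particles)
        xs = random.choices(xs, weights=ws, k=n_particles)  # multinomial resampling
    return ll

log_z_exact = kalman_log_lik(ys)
log_z_pf = bootstrap_pf_log_lik(ys)
```

The particle estimate of $\log Z_T$ should be close to the exact Kalman value; twisting the kernels and potentials, as in the paper, aims to reduce the variance of this estimate further.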

The optimal policies $\phi^{\star}$ are usually intractable, but they can be approximated. After running any SMC algorithm, we have a particle approximation of the path measure, which gives us a particle approximation of the optimal policy. We can use regression to learn a simple policy $\phi$ that minimizes the difference between $\log\phi$ and the particle approximation of $\log\phi^{\star}$. This regression is done recursively, starting at $t = T$ and ending at $t = 0$, and is called approximate dynamic programming in the paper. Given this policy, we have a new path measure, which means that we can run another particle filter according to it and find the optimal policy with respect to the new measure. This process is called policy refinement. According to the authors, in practice only a few iterations of policy refinement are necessary. They also offer performance-monitoring criteria based on effective sample size and regression residuals.

What fascinates (and puzzles) me is that errors do not accumulate over iterations of policy refinement or of approximate dynamic programming. I hope to read this paper in greater detail and report back to my readers soon.