SRN – Controlled Sequential Monte Carlo by Heng et al. (Part 1)

Returning to my Sunday Reading Notes series from Thanksgiving recess, I want to discuss the paper ‘Controlled Sequential Monte Carlo’ by Jeremy Heng, Adrian N. Bishop, George Deligiannidis and Arnaud Doucet. You can read Xi’an’s blog post discussing this paper.

As pointed out in Xi’an’s blog, this is a massive paper, and I have only managed to go through Section 3. The paper starts by introducing Feynman-Kac models and twisted Feynman-Kac models, motivated by the design of observation-dependent kernels for sequential Monte Carlo (SMC) methods. At the terminal time T the twisted model and the original model coincide, and at all times t = 0,1,\cdots,T the two models share the same normalizing constant, which is the quantity we try to approximate with SMC. Given a current policy \psi, there is a policy refinement \psi^\star = \psi \cdot \phi^\star that is an admissible policy; \psi^\star is referred to as the optimal policy by the authors because of its connection with a Kullback-Leibler optimal control problem.
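
To make the twisting construction concrete, here is my reading of it (notation: \mu is the initial distribution, M_t the Markov transition kernels and G_t the potential functions of the Feynman-Kac model; any indexing slips are mine). A policy \psi = (\psi_0, \cdots, \psi_T) of positive functions defines the twisted initial distribution and kernels

\mu^\psi(dx_0) = \frac{\mu(dx_0)\,\psi_0(x_0)}{\mu(\psi_0)}, \qquad M_t^\psi(x_{t-1}, dx_t) = \frac{M_t(x_{t-1}, dx_t)\,\psi_t(x_t)}{M_t(\psi_t)(x_{t-1})},

together with twisted potentials

G_0^\psi(x_0) = \frac{\mu(\psi_0)\, G_0(x_0)\, M_1(\psi_1)(x_0)}{\psi_0(x_0)}, \qquad G_t^\psi(x_t) = \frac{G_t(x_t)\, M_{t+1}(\psi_{t+1})(x_t)}{\psi_t(x_t)}, \qquad G_T^\psi(x_T) = \frac{G_T(x_T)}{\psi_T(x_T)}

for 0 < t < T. The \psi factors cancel telescopically, which is why the twisted model retains the normalizing constant of the original model.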

The policy refinements can be found using the backward recursion \phi^\star_T = G_T^\psi and \phi^\star_t = Q_t^\psi \phi^\star_{t+1} for t = 0,\cdots,T-1, where the Q_t^\psi are Bellman operators. Because the optimal policies are intractable in general, the authors introduce a projection in logarithmic scale as a means of approximation. This is the Approximate Dynamic Programming (ADP) procedure that the authors outline in Algorithm 2. The final algorithm (Algorithm 3) alternates between running a twisted SMC and performing ADP to find the next policy refinement.
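
To make the ADP step concrete, here is a minimal sketch in Python of the backward pass, under assumptions of mine rather than the paper's implementation: a one-dimensional state, log-policies parameterized as quadratics (so the fitted policies are unnormalized Gaussians once exponentiated), and hypothetical inputs particles, log_G_twisted and mc_expectation that stand in for quantities produced by the twisted SMC run.

```python
import numpy as np

def fit_log_quadratic(x, log_vals):
    """'Logarithmic projection': least-squares fit of log phi(x) ~ a*x**2 + b*x + c."""
    X = np.column_stack([x**2, x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, log_vals, rcond=None)
    return coef

def adp_backward_pass(particles, log_G_twisted, mc_expectation):
    """Sketch of the ADP backward recursion (in the spirit of Algorithm 2).

    particles[t]     -- particle locations x_t^{1:N} from the twisted SMC run
    log_G_twisted[t] -- log G_t^psi evaluated at particles[t]
    mc_expectation   -- callable (t, coef) -> estimate of log M_{t+1}^psi(phi_{t+1})
                        at particles[t], given the fitted coefficients of phi_{t+1}
    """
    T = len(particles) - 1
    coefs = [None] * (T + 1)
    coefs[T] = fit_log_quadratic(particles[T], log_G_twisted[T])  # phi*_T = G_T^psi
    for t in range(T - 1, -1, -1):                                # phi*_t = Q_t^psi phi*_{t+1}
        log_vals = log_G_twisted[t] + mc_expectation(t, coefs[t + 1])
        coefs[t] = fit_log_quadratic(particles[t], log_vals)
    return coefs
```

Fitting on the log scale keeps the fitted policies positive once exponentiated; the refined policy is then the product of the current \psi and the fitted \phi.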


References:

Heng, J., Bishop, A. N., Deligiannidis, G., & Doucet, A. (2017). Controlled Sequential Monte Carlo. arXiv preprint arXiv:1708.08396.

SRN – The Iterated Auxiliary Particle Filter by Guarniero et al.

In this week’s Sunday Reading Post, I want to discuss the paper ‘The Iterated Auxiliary Particle Filter’ (iAPF) by Pieralberto Guarniero, Adam Johansen and Antony Lee.  

The algorithm proposed, iAPF, approximates an idealized particle filter, which would have zero variance when estimating the marginal likelihood of a hidden Markov model. The motivation is that if we could use the idealized particle filter, we would only need a single particle instead of a whole particle system for marginal likelihood estimation; the idealized particle filter corresponds to backward smoothing on top of a perfect forward-filtering step.
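
For intuition (my own gloss, not a quote from the paper): in a hidden Markov model the idealized filter has access to \psi^\star_t(x_t) = p(y_{t:T}|x_t), the likelihood of all future observations given the current state. Then

Z = p(y_{1:T}) = \int_X \mu(x_1)\, \psi^\star_1(x_1)\, dx_1,

and under \psi^\star every incremental weight is constant, so the estimator has zero variance and a single particle already recovers the exact marginal likelihood.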

This paper builds upon previous studies of particle filters, including the twisted particle filter and the Auxiliary Particle Filter (APF). The idea is to introduce a family of ‘twisted’ models through functions \psi in order to define a new model with the same marginal likelihood as the original model. Because the optimal choice \psi^\star, which corresponds to the idealized particle filter, is intractable in general, the authors use a sequence of functional approximations \psi^0, \psi^1, \cdots, \psi^l to approximate \psi^\star.
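
Concretely, here is my sketch of the twisting in hidden-Markov-model notation (boundary conventions may differ slightly from the paper's). Writing f(x, \psi) = \int_X f(x, x')\psi(x')\, dx', a sequence \psi = (\psi_1, \cdots, \psi_T) defines the twisted initial density and transitions

\mu^\psi(x_1) = \frac{\mu(x_1)\psi_1(x_1)}{\mu(\psi_1)}, \qquad f^\psi(x_{t-1}, x_t) = \frac{f(x_{t-1}, x_t)\psi_t(x_t)}{f(x_{t-1}, \psi_t)},

with weight functions

g_1^\psi(x_1) = \mu(\psi_1)\frac{p(y_1|x_1)\, f(x_1, \psi_2)}{\psi_1(x_1)}, \qquad g_t^\psi(x_t) = \frac{p(y_t|x_t)\, f(x_t, \psi_{t+1})}{\psi_t(x_t)}, \qquad g_T^\psi(x_T) = \frac{p(y_T|x_T)}{\psi_T(x_T)}

for 1 < t < T. Multiplying everything out, the \psi factors cancel and the twisted model has exactly the same marginal likelihood as the original one.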

The key question to me is how, and how well, we can approximate \psi^\star. The ‘how to’ question is addressed in Section 3.3. The backward recursions used to approximate \psi^\star rely on the relationship \psi^\star_t(x_t) = p(y_t|x_t) f(x_t,\psi^\star_{t+1}) = p(y_t|x_t)\int_X f(x_t,x_{t+1})\psi^\star_{t+1}(x_{t+1})\, dx_{t+1}. In Algorithm 3, the authors suggest choosing \psi_t on the basis of the particles x_t^{1:N} and the values \psi_t^{1:N}. Later, in Section 5, they mention that this step is implemented with a parametric optimization step in their experiments; in the ‘Discussion’ section they mention alternatives such as nonparametric estimates, which would come at a much higher cost. More important is the ‘how well’ part, because it is possible that the class of functions \Psi we consider does not contain the optimal function \psi^\star. I think this is a very interesting and important case, and the authors report that the iAPF still performs ‘fairly well’ in it.
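
As an illustration of how these values can be estimated at the particle locations, here is a sketch under my own assumptions rather than the paper's code: a one-dimensional state, the integral replaced by an M-sample Monte Carlo average, and sample_transition / log_psi_next as hypothetical stand-ins for the model's transition sampler and the already-fitted log \psi_{t+1}.

```python
import numpy as np
from scipy.special import logsumexp

def backward_psi_values(x_t, log_g_t, sample_transition, log_psi_next, M=32):
    r"""Monte Carlo estimate, at each particle x, of
    log psi*_t(x) = log p(y_t|x) + log \int f(x, x') psi*_{t+1}(x') dx'."""
    log_vals = np.empty(len(x_t))
    for i, x in enumerate(x_t):
        xp = sample_transition(x, M)  # M draws x' ~ f(x, .)
        log_vals[i] = log_g_t[i] + logsumexp(log_psi_next(xp)) - np.log(M)
    return log_vals
```

Choosing \psi_t then amounts to regressing these values on x_t^{1:N} within the chosen class \Psi, which is the parametric optimization step used in the experiments of Section 5.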

As demonstrated in the ‘Applications and Examples’ section, the iAPF can provide substantially better estimates of the marginal likelihood than the Bootstrap Particle Filter (BPF) at the same computational cost.

Log-likelihood estimates using the BPF with N = 1000, 10000 particles and the iAPF with N = 100 particles. The widths of the box plots show that the iAPF achieves smaller variance than the BPF while using fewer particles. Source: Guarniero et al. (2017).

References:

  • Guarniero, P., Johansen, A. M., & Lee, A. (2017). The iterated auxiliary particle filter. Journal of the American Statistical Association, 112(520), 1636-1647.
  • Pitt, M. K., & Shephard, N. (1999). Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446), 590-599.
  • Whiteley, N., & Lee, A. (2014). Twisted particle filters. The Annals of Statistics, 42(1), 115-141.