Returning to my Sunday Reading Notes series after the Thanksgiving recess, I want to discuss the paper ‘Controlled Sequential Monte Carlo’ by Jeremy Heng, Adrian N. Bishop, George Deligiannidis and Arnaud Doucet. You can also read Xi’an’s blog post discussing this paper.
As pointed out in Xi’an’s blog, this is a massive paper, and I have only managed to go through Section 3. The paper starts by introducing Feynman-Kac models and twisted Feynman-Kac models, motivated by the goal of designing observation-dependent kernels for sequential Monte Carlo (SMC) methods. At the terminal time T, the twisted model and the original model coincide, and at all times the two models have the same normalizing constant, which is what we try to approximate with SMC. Given a current policy ψ, there is a policy refinement ψ* that is an admissible policy; the authors refer to it as the optimal policy because of its connection with a Kullback-Leibler optimal control problem.
The policy refinements can be found by backward recursion: ψ*_T is set directly from the terminal potential, and each ψ*_t is obtained from ψ*_{t+1} for t = T−1, …, 0 by applying Bellman operators. Because the exact policies are intractable, the authors introduce a logarithmic projection as a means of approximation. This is the Approximate Dynamic Programming (ADP) procedure that the authors outline in Algorithm 2. The final algorithm (Algorithm 3) alternates between running twisted SMC and using ADP to find the next policy refinement.
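To make the backward recursion concrete, here is a minimal sketch on an assumed two-state toy model. All of the matrices, observations, and names below are my own illustration, not taken from the paper: the terminal policy is the terminal potential, and each earlier policy applies a Bellman-type operator (potential times expected next policy) to the next one.

```python
# Toy two-state model (all numbers assumed for illustration).
M = [[0.8, 0.2], [0.3, 0.7]]   # Markov transition kernel M(x, x')
G = [[0.9, 0.1], [0.2, 0.8]]   # potentials G_t(x) = g(y_t | x), with y in {0, 1}
mu = [0.5, 0.5]                # initial distribution
ys = [0, 1, 1, 0]              # observed sequence

def optimal_policy(ys):
    """Backward recursion: psi_T = G_T, and psi_t = G_t * M(psi_{t+1}),
    i.e. a Bellman-type operator applied to the next policy."""
    T = len(ys)
    psi = [None] * T
    psi[T - 1] = [G[x][ys[T - 1]] for x in range(2)]
    for t in range(T - 2, -1, -1):
        psi[t] = [G[x][ys[t]] * sum(M[x][xn] * psi[t + 1][xn] for xn in range(2))
                  for x in range(2)]
    return psi

psi = optimal_policy(ys)
# Under the optimal policy, the normalizing constant is available exactly
# from the initial distribution alone -- no particle system needed.
Z = sum(mu[x] * psi[0][x] for x in range(2))
```

In this discrete toy case the recursion is exact, which is precisely what breaks down in general state spaces and forces the ADP approximation.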
Heng, J., Bishop, A. N., Deligiannidis, G., & Doucet, A. (2017). Controlled Sequential Monte Carlo. arXiv preprint arXiv:1708.08396.
In this week’s Sunday Reading Notes post, I want to discuss the paper ‘The Iterated Auxiliary Particle Filter’ (iAPF) by Pieralberto Guarniero, Adam Johansen and Anthony Lee.
The proposed algorithm, iAPF, approximates an idealized particle filter, which would have zero variance as an estimator of the marginal likelihood of a hidden Markov model. The motivation is that with an idealized particle filter we would only need one particle, instead of a system of particles, for marginal likelihood estimation. The idealized particle filter would correspond to backward smoothing given a perfect forward-filtering step.
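For contrast, here is a minimal bootstrap particle filter for an assumed linear-Gaussian toy model (the model, parameter values, and function name are my own illustration, not from the paper). Its marginal likelihood estimate is unbiased but has nonzero variance, which is exactly what the idealized filter would remove:

```python
import math
import random

def bootstrap_pf(ys, n_particles=200, phi=0.9, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter estimate of the log marginal likelihood for
    x_t = phi * x_{t-1} + N(0, sigma_x^2), y_t = x_t + N(0, sigma_y^2),
    with x_0 ~ N(0, sigma_x^2). (Illustrative toy model only.)"""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, sigma_x) for _ in range(n_particles)]
    log_z = 0.0
    for y in ys:
        # Weight each particle by the observation density (the potential).
        ws = [math.exp(-0.5 * ((y - x) / sigma_y) ** 2) / (sigma_y * math.sqrt(2 * math.pi))
              for x in xs]
        log_z += math.log(sum(ws) / n_particles)
        # Multinomial resampling, then propagation through the transition kernel.
        xs = rng.choices(xs, weights=ws, k=n_particles)
        xs = [phi * x + rng.gauss(0.0, sigma_x) for x in xs]
    return log_z
```

Running this repeatedly with different seeds gives visibly different log-likelihood estimates; an idealized (perfectly twisted) filter would return the same value every time.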
This paper builds upon previous studies of particle filters, including the twisted particle filter and the Auxiliary Particle Filter (APF). The idea is to introduce a family of ‘twisted’ models through a sequence of positive functions ψ, in order to define a new model that has the same marginal likelihood as the original one. Because the optimal sequence ψ*, which corresponds to the idealized particle filter, is intractable in general, the authors use a series of functional approximations to approximate it.
The key question to me is how, and how well, we can approximate ψ*. The ‘how to’ question is addressed in Section 3.3. The backward recursions used to approximate ψ* rely on the iterative relationship between ψ*_t and ψ*_{t+1}. In Algorithm 3, the authors suggest choosing each ψ_t on the basis of the particles at time t and of ψ_{t+1}. Later, in Section 5, they mention that this step is implemented with a parametric optimization step in their experiments. In the ‘Discussion’ section, the authors mention alternatives such as nonparametric estimates, which would come at a much higher cost. More important is the ‘how well’ part, because it is possible that the class of functions we consider does not contain the optimal function ψ*. I think this is a very interesting and important case, and the authors report that iAPF performs ‘fairly well’ in it.
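The parametric optimization step can be sketched as follows: if ψ_t is restricted to an assumed log-quadratic (i.e. Gaussian-shaped) family, choosing it reduces to a least-squares fit of the log ψ values at the particle locations. The family, solver, and names here are my own illustration of the idea, not the authors’ implementation:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_log_quadratic(xs, log_psi):
    """Least-squares fit of log psi(x) ~ a*x^2 + b*x + c at particle locations xs,
    via the normal equations. Returns (a, b, c)."""
    feats = [[x * x, x, 1.0] for x in xs]
    A = [[sum(f[i] * f[j] for f in feats) for j in range(3)] for i in range(3)]
    b = [sum(f[i] * v for f, v in zip(feats, log_psi)) for i in range(3)]
    return solve3(A, b)
```

If the true log ψ*_t happens to be quadratic, this projection recovers it exactly; otherwise we are in the ‘how well’ regime discussed above, where the chosen class does not contain ψ*.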
As demonstrated in the ‘Applications and Examples’ section, “iAPF can provide substantially better estimates of the marginal likelihood than the Bootstrap Particle Filter (BPF) at the same computational cost.”
Guarniero, P., Johansen, A. M., & Lee, A. (2017). The iterated auxiliary particle filter. Journal of the American Statistical Association, 112(520), 1636-1647.
Pitt, M. K., & Shephard, N. (1999). Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446), 590-599.
Whiteley, N., & Lee, A. (2014). Twisted particle filters. The Annals of Statistics, 42(1), 115-141.