SRN – Informed proposals for local MCMC in discrete spaces by Zanella (Part II)

This is a much-delayed Sunday Reading Notes post, in which I come back to finish discussing the informed proposals paper by Giacomo Zanella. In Part I, I reviewed pointwise informed proposals and the locally-balanced criterion. In Part II, I focus on the asymptotic optimality of locally-balanced proposals and the simulation studies.

The author uses Peskun ordering to compare the efficiency of MH schemes. He shows that 'locally-balanced proposals are asymptotically optimal in terms of Peskun ordering' as the dimensionality of the underlying state space increases. The conditions for asymptotic optimality (Proposition 1) are mild and are satisfied by the three illustrative examples: 1) independent binary components, 2) weighted permutations and 3) the Ising model.

In Section 4, Zanella points out a connection between balancing functions and acceptance probability functions (APFs) for MH algorithms, which I find very interesting. He also shows that the optimal proposal for independent binary variables uses the Barker choice g(t) = \frac{t}{1+t}. The proof proceeds by finding the limiting continuous-time process of the MH chain and identifying the optimal g for that limiting process.
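
As a quick sanity check on that connection, here is a small Python snippet (my own illustration, not from the paper) verifying numerically that three familiar acceptance probability functions, Barker's t/(1+t), the Metropolis choice \min(1,t) and \sqrt{t}, all satisfy the balancing condition g(t) = t\, g(1/t):

```python
import numpy as np

# My own illustration: check numerically that common acceptance probability
# functions are balancing functions, i.e. satisfy g(t) = t * g(1/t).
balancing = {
    "barker":     lambda t: t / (1.0 + t),
    "metropolis": lambda t: np.minimum(1.0, t),
    "sqrt":       np.sqrt,
}

t = np.exp(np.linspace(-5, 5, 101))   # ratios pi(y)/pi(x) on a log grid
for name, g in balancing.items():
    assert np.allclose(g(t), t * g(1.0 / t)), name
print("all three satisfy g(t) = t g(1/t)")
```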

The simulation studies use the illustrative examples of weighted permutations and Ising models. The comparisons are in terms of 1) acceptance rate, 2) number of successful flips per unit of computation time and 3) mixing of some summary statistics. The second criterion captures the trade-off between computational cost (of calculating the informed proposal) and statistical efficiency (of producing effective moves). For simple target distributions (such as the uniform), locally-balanced proposals do not bring much benefit, but they achieve a much higher number of flips per unit time for more complicated and 'rough' targets. See the second row of Figure 1 of the paper.

[Screenshot: Figure 1 of the paper.]

I find it interesting that globally-balanced proposals (aka 'naively-informed', i.e. g(t) = t) are extremely sensitive to initialization. Looking at the effective sample size (ESS) per unit time, the GB chains behave much more stably when initialized from stationarity; see GB vs. GB (station.) in Figure 2. But in Figure 5 this phenomenon does not show up for Ising models: initializing from stationarity does not bring the ESS performance up to that of the LB proposals.

[Screenshot: Figure 2 of the paper.]

In the simulation studies section, the author emphasizes the cost-versus-efficiency trade-off, which I find very important. I feel I have ignored this aspect of designing MCMC algorithms and should think more about it in my future studies. The author also notes that 'computations required to sample from locally-balanced proposals are trivially parallelizable'. This is also something very interesting to me, and I hope to learn more about multi-core computation during my PhD.

In his discussion, the author makes a reference to Multiple-Try Metropolis (MTM). The connection between LB proposals and MTM is not entirely obvious to me, but I intuitively agree with the author's comment that the weight function in MTM serves a purpose very similar to the balancing term g\left(\frac{\pi(y)}{\pi(x)}\right) in LB proposals.

Side note:
In my downloaded version of the paper, the transition kernel in the first equation on page 1 is wrong, because the Q(x,dy) term needs to be multiplied by its acceptance probability a(x,y).

References:

  • Zanella, G. (2017). Informed proposals for local MCMC in discrete spaces. arXiv preprint arXiv:1711.07424.
  • Liu, J. S., Liang, F., & Wong, W. H. (2000). The multiple-try method and local optimization in Metropolis sampling. Journal of the American Statistical Association, 95(449), 121-134.

SRN – Informed proposals for local MCMC in discrete spaces by Zanella (Part I)

This week I am reading 'Informed proposals for local MCMC in discrete spaces' by Giacomo Zanella. This paper is about designing MCMC algorithms for discrete-valued high-dimensional parameters, and its goal is similar to the papers discussed in previous posts (the Hamming ball sampler & auxiliary-variable HMC). I decided to split the Sunday Reading Notes on this paper into two parts, because I find many interesting ideas in it.

In this paper, Zanella comes up with locally-balanced proposals. Suppose \pi(x) is the target density and K_{\sigma}(x,dy) is an uninformed proposal; we assume that as \sigma \to 0 the kernel K_{\sigma}(x,dy) converges to the delta measure. Zanella seeks to modify this uninformed proposal so that it incorporates information about the target \pi and is biased towards areas with higher density. An example of a locally-balanced proposal is Q_{\sqrt{\pi}}(x,dy) = \frac{\sqrt{\pi(y)}\, K_{\sigma}(x,dy)}{(\sqrt{\pi} * K_{\sigma})(x)}. This kernel is reversible with respect to \sqrt{\pi(x)}(\sqrt{\pi} * K_{\sigma})(x), which converges to \pi(x)dx as \sigma \to 0. [Note that the normalizing constant is the convolution (\sqrt{\pi} * K_{\sigma})(x) = \int \sqrt{\pi(y)} K_{\sigma}(x,dy).]

More generally, Zanella considers a class of pointwise informed proposals of the form Q_{g,\sigma}(x,dy) = \frac{1}{Z_{g}}\, g\left(\frac{\pi(y)}{\pi(x)}\right) K_{\sigma}(x,dy). It is suggested that the function g satisfy the balancing condition g(t) = t\, g(1/t).
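
To make the pointwise informed proposal concrete, here is a minimal sketch (my own, not the paper's code) of one MH step with a locally-balanced proposal for a toy target with independent binary components, using the Barker balancing function and an uninformed kernel that flips a single coordinate uniformly at random. For a balancing g, the acceptance ratio simplifies to Z_g(x)/Z_g(y):

```python
import numpy as np

# Minimal sketch, assuming a target with independent binary components:
# pi(x) = prod_i p_i^{x_i} (1 - p_i)^{1 - x_i}, and an uninformed kernel
# K that flips one coordinate chosen uniformly at random.

def barker(t):
    return t / (1.0 + t)

def lb_weights(x, p, g=barker):
    """Weights g(pi(y)/pi(x)) for each single-flip neighbour y of x."""
    odds = p / (1.0 - p)                       # pi(x_i = 1) / pi(x_i = 0)
    ratios = np.where(x == 1, 1.0 / odds, odds)
    return g(ratios)

def lb_step(x, p, rng, g=barker):
    """One MH step with the locally-balanced proposal Q_g."""
    w_x = lb_weights(x, p, g)
    Z_x = w_x.sum()
    i = rng.choice(len(x), p=w_x / Z_x)        # propose flipping coordinate i
    y = x.copy(); y[i] = 1 - y[i]
    Z_y = lb_weights(y, p, g).sum()
    # for a balancing g (g(t) = t g(1/t)) the MH ratio reduces to Z_g(x)/Z_g(y)
    if rng.random() < min(1.0, Z_x / Z_y):
        return y
    return x

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=50)           # hypothetical target probabilities
x = (rng.random(50) < 0.5).astype(int)
for _ in range(1000):
    x = lb_step(x, p, rng)
```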

I will save the discussion of locally-balanced proposals and Peskun optimality for Part II. In this part, I want to discuss Section 5: the connection to MALA and gradient-based MCMC. In continuous spaces, the pointwise informed proposal Q_{g,\sigma} would be infeasible to sample from because of the term g\left(\frac{\pi(y)}{\pi(x)}\right). If we take a first-order Taylor expansion, we get Q_{g,\sigma}^{(1)}(x,dy) \propto g\left( \exp\left( \nabla \log \pi(x)^{T}(y-x) \right) \right) K_{\sigma}(x,dy). If we choose g(t) = \sqrt{t} and K_{\sigma}(x,\cdot) = N(x,\sigma^2 I), this is exactly the MALA proposal.
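
To see this concretely (an informal calculation of my own, assuming K_{\sigma}(x,\cdot) = N(x,\sigma^2 I)): with g(t) = \sqrt{t}, the proposal density is proportional to \exp\left(\frac{1}{2}\nabla \log \pi(x)^{T}(y-x)\right) \exp\left(-\frac{\|y-x\|^2}{2\sigma^2}\right) \propto \exp\left(-\frac{1}{2\sigma^2}\left\|y - x - \frac{\sigma^2}{2}\nabla \log \pi(x)\right\|^2\right), which is the density of N\left(x + \frac{\sigma^2}{2}\nabla \log \pi(x),\, \sigma^2 I\right), i.e. the MALA proposal.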

I find this connection very interesting, although I do not have good intuition about where it comes from. One way to explain it is that gradient-based MCMC in continuous spaces also uses local information about the target to design informed proposals. In the conclusions, the author mentions that this connection could improve the robustness of gradient-based MCMC schemes and help with parameter tuning.

References:

  • Zanella, G. (2017). Informed proposals for local MCMC in discrete spaces. arXiv preprint arXiv:1711.07424.

SRN – The Hamming Ball Sampler by Titsias and Yau

My Sunday Reading Notes (SRN) this semester will mostly be about Bayesian computation. This week's post is on The Hamming Ball Sampler proposed by Titsias and Yau. The Hamming ball sampler is an MCMC algorithm for high-dimensional discrete-valued vectors or matrices. While reading this paper, I also found a blog post about it on Xi'an's Og, which provides some high-level intuition and background knowledge.

The paper considers a state space model with discrete hidden states X, parameters \theta and observations y. The factorial Hidden Markov Model (fHMM) is an example of such a model. In state space models, the complete-data likelihood factorizes as p(y,X,\theta) = p(X,\theta) \prod_{i=1}^N p(y_i|X,\theta). Given some prior, we want to sample from the posterior distribution of X,\theta | y.

When the dimension of X is large, we suffer from 'the curse of dimensionality'. Using a Gibbs sampler, we can iteratively sample \theta \sim \cdot \,|\, X,y and X \sim \cdot \,|\, \theta,y. Because the dimension of X is high, we should also consider blocked Gibbs sampling on X, for example updating one row (or column) of X at a time. While this is conceptually straightforward and potentially also easy to implement, as the authors point out:

Conditional sampling may lead to an inability to escape from local modes in the posterior distribution particularly if the elements of X exhibit strong correlations with each other and together with \theta.

The Hamming Ball Sampler (HBS) introduces an auxiliary variable U that has the same dimension as the latent variable X. The augmented joint distribution factorizes as p(y,X,\theta,U) = p(U|X)\, p(y,X,\theta). The conditional distribution p(U|X) is chosen to be uniform over a neighborhood set \mathcal{H}_m(X). This set \mathcal{H}_m(X) is a Hamming ball: if U and X are K \times N matrices, each column of U can differ from the corresponding column of X in at most m of its K entries. With the auxiliary variable U, the Hamming Ball Sampler alternates between the steps U \sim p(U|X) and (\theta, X) \sim p(\theta,X|U,y).

The Hamming Ball Sampler is like slice sampling in discrete spaces, with each Hamming ball \mathcal{H}_m(X) playing the role of a slice. Introducing the auxiliary variable injects random exploration and makes it easier to escape from local modes. In the simplest example where X is a K \times N matrix and the Hamming distance counts the number of differing elements in each column, setting m = K/2 means we can potentially change all elements of X in one update, since X can differ from U in at most m positions per column and U can differ from the previous X in at most m. But as m grows, the computational cost of each update also increases.
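
As a concrete illustration of the auxiliary step U \sim p(U|X), here is a minimal sketch (my own, not the authors' code) that samples U uniformly from a column-wise Hamming ball \mathcal{H}_m(X), assuming entries take values in \{0,\dots,S-1\}:

```python
import math
import numpy as np

# Minimal sketch of the U-step of the Hamming Ball Sampler, assuming X is a
# K x N matrix with entries in {0, ..., S-1} and the ball is column-wise:
# each column of U differs from the corresponding column of X in at most m places.

def sample_hamming_ball_column(x_col, m, S, rng):
    """Uniform draw from {u : Hamming(u, x_col) <= m}."""
    K = len(x_col)
    # number of vectors at exact distance d: C(K, d) * (S - 1)^d
    counts = np.array([math.comb(K, d) * (S - 1) ** d for d in range(m + 1)], dtype=float)
    d = rng.choice(m + 1, p=counts / counts.sum())
    u = x_col.copy()
    flip = rng.choice(K, size=d, replace=False)   # positions to perturb
    for i in flip:
        # new value uniform over the S - 1 values different from x_col[i]
        u[i] = (x_col[i] + rng.integers(1, S)) % S
    return u

def sample_U(X, m, S, rng):
    return np.column_stack([sample_hamming_ball_column(X[:, j], m, S, rng)
                            for j in range(X.shape[1])])

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(4, 10))              # toy example: K=4, N=10, S=3
U = sample_U(X, m=2, S=3, rng=rng)
```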

In this paper the authors provide several numerical examples comparing the Hamming Ball Sampler with blocked Gibbs samplers. In the fHMM examples (Figure 4 in the paper) we can see that HBS with m = 2 or 3 reaches regions of high joint posterior density much faster than the blocked Gibbs samplers. They also conclude that HB-2 best balances computational time and sampling efficiency.

References:

Titsias, M. K., & Yau, C. (2017). The Hamming ball sampler. Journal of the American Statistical Association, 112(520), 1598-1611.

Ghahramani, Z., & Jordan, M. I. (1996). Factorial hidden Markov models. In Advances in Neural Information Processing Systems (pp. 472-478).