This week I am reading 'Informed proposals for local MCMC in discrete spaces' by Giacomo Zanella. This paper is about designing MCMC algorithms for discrete-valued high-dimensional parameters, and the goal is similar to the papers discussed in previous posts (Hamming ball sampler & auxiliary-variable HMC). I decided to split the Sunday Reading Notes on this paper into two parts, because I find many interesting ideas in this paper.

In this paper, Zanella comes up with *locally-balanced proposals*. Suppose $\pi$ is the target density and $K_\sigma(x, \mathrm{d}y)$ is an uninformed proposal. We assume that as $\sigma \to 0$ the kernel $K_\sigma$ converges to the delta measure $\delta_x(\mathrm{d}y)$. Zanella seeks to modify this uninformed proposal so that it incorporates information about the target and is biased towards areas with higher density. An example of *locally-balanced proposals* is $$Q_\sigma(x, \mathrm{d}y) = \frac{\sqrt{\pi(y)}\, K_\sigma(x, \mathrm{d}y)}{\left(\sqrt{\pi} * K_\sigma\right)(x)}.$$ This kernel is reversible with respect to $\sqrt{\pi(x)}\left(\sqrt{\pi} * K_\sigma\right)(x)\,\mathrm{d}x$, which converges to $\pi(x)\,\mathrm{d}x$ as $\sigma \to 0$. [Note the normalizing constant is the convolution $\left(\sqrt{\pi} * K_\sigma\right)(x) = \int \sqrt{\pi(y)}\, K_\sigma(x, \mathrm{d}y)$.]
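To make the reversibility claim concrete, here is a minimal numerical sketch on a finite state space. The target $\pi$ and the cyclic nearest-neighbour kernel are my own toy choices for illustration, not from the paper; the check itself is that $Q(x,y) \propto \sqrt{\pi(y)}\,K(x,y)$ satisfies detailed balance with respect to $\mu(x) \propto \sqrt{\pi(x)}\,Z(x)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target pi on a finite state space (arbitrary positive values; assumption
# for illustration only -- any positive density works).
n = 6
pi = rng.random(n) + 0.1
pi /= pi.sum()

# Uninformed symmetric kernel K: uniform over the two neighbours (cyclic walk).
K = np.zeros((n, n))
for x in range(n):
    K[x, (x - 1) % n] = 0.5
    K[x, (x + 1) % n] = 0.5

# Locally-balanced proposal: Q(x, y) = sqrt(pi(y)) K(x, y) / Z(x),
# where the normalizing constant Z(x) is the convolution (sqrt(pi) * K)(x).
Z = (np.sqrt(pi)[None, :] * K).sum(axis=1)
Q = np.sqrt(pi)[None, :] * K / Z[:, None]

# Q is reversible with respect to mu(x) proportional to sqrt(pi(x)) Z(x):
mu = np.sqrt(pi) * Z
mu /= mu.sum()
flow = mu[:, None] * Q            # flow[x, y] = mu(x) Q(x, y)
print(np.allclose(flow, flow.T))  # detailed balance holds: prints True
```

The detailed-balance check works because $\mu(x) Q(x,y) = \sqrt{\pi(x)}\sqrt{\pi(y)}\,K(x,y)$ (the $Z(x)$ cancels), which is symmetric in $x$ and $y$ whenever $K$ is.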

More generally, Zanella considers a class of *pointwise informed proposals* that have the structure $$Q_{g,\sigma}(x, \mathrm{d}y) = \frac{g\!\left(\frac{\pi(y)}{\pi(x)}\right) K_\sigma(x, \mathrm{d}y)}{Z_g(x)},$$ where $Z_g(x)$ is the normalizing constant. It is suggested that the function $g$ satisfy $g(t) = t\, g(1/t)$.
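Several familiar functions satisfy the balancing identity $g(t) = t\, g(1/t)$; the three below appear as examples in the paper, and the snippet verifies the identity numerically on a grid:

```python
import numpy as np

# Balancing functions satisfying g(t) = t * g(1/t): the square root,
# the Barker-style t/(1+t), and the Metropolis-style min(1, t).
balancing = {
    "sqrt": lambda t: np.sqrt(t),
    "barker": lambda t: t / (1.0 + t),
    "min": lambda t: np.minimum(1.0, t),
}

t = np.linspace(0.1, 10.0, 200)
for name, g in balancing.items():
    # Check the balancing identity pointwise on the grid.
    assert np.allclose(g(t), t * g(1.0 / t)), name
print("all balancing identities hold")
```

For $g(t) = \sqrt{t}$ the identity is immediate: $t \sqrt{1/t} = \sqrt{t}$, which recovers the locally-balanced example above.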

I will save the discussion on locally-balanced proposals and Peskun optimality for Part II. In this part, I want to discuss Section 5: connections to MALA and gradient-based MCMC. In continuous spaces, the pointwise informed proposal would be infeasible to sample from because of the term $g\!\left(\frac{\pi(y)}{\pi(x)}\right)$. If we take a first-order Taylor expansion of $\log \pi$, we would have $$g\!\left(\frac{\pi(y)}{\pi(x)}\right) \approx g\!\left(e^{\nabla \log \pi(x)^{T}(y - x)}\right).$$ If we choose $g(t) = \sqrt{t}$ and $K_\sigma(x, \mathrm{d}y) = N\!\left(y;\, x, \sigma^2 I\right)\mathrm{d}y$, this is the MALA proposal $N\!\left(x + \frac{\sigma^2}{2}\nabla \log \pi(x),\; \sigma^2 I\right)$.
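The algebra behind that last step is just completing the square, and it can be checked numerically. The sketch below uses a one-dimensional standard-normal target (my own assumption for illustration, so $\nabla \log \pi(x) = -x$) and compares the Taylor-approximated informed proposal with $g(t) = \sqrt{t}$ against the MALA proposal density on a grid:

```python
import numpy as np

# Toy setup: 1D standard-normal target, so grad log pi(x) = -x (assumption
# for illustration). Step size sigma and current point x are arbitrary.
sigma, x = 0.5, 1.3
grad_log_pi = -x

y = np.linspace(-3.0, 3.0, 400)

# Informed proposal after the first-order Taylor expansion, with g = sqrt:
# g(exp(grad * (y - x))) * K_sigma(x, y), up to normalization on the grid.
informed = np.exp(0.5 * grad_log_pi * (y - x)) \
    * np.exp(-(y - x) ** 2 / (2 * sigma ** 2))
informed /= informed.sum()

# MALA proposal: N(x + (sigma^2 / 2) grad log pi(x), sigma^2).
mean = x + 0.5 * sigma ** 2 * grad_log_pi
mala = np.exp(-(y - mean) ** 2 / (2 * sigma ** 2))
mala /= mala.sum()

print(np.allclose(informed, mala))  # the two densities coincide: prints True
```

Completing the square shows why: $\exp\!\big(\tfrac{1}{2}\nabla \log \pi(x)(y-x) - \tfrac{(y-x)^2}{2\sigma^2}\big) \propto \exp\!\big(-\tfrac{(y - x - \frac{\sigma^2}{2}\nabla \log \pi(x))^2}{2\sigma^2}\big)$.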

I find this connection very interesting, although I do not have a good intuition about where this connection comes from. One way to explain it is that gradient-based MCMC in continuous space is using local information to design informed proposals. In the conclusions, the author mentions that this connection should improve robustness of gradient-based MCMC schemes and help with parameter tuning.

References:

- Zanella, G. (2017). Informed proposals for local MCMC in discrete spaces. *arXiv preprint arXiv:1711.07424*.