My Sunday Reading Notes (SRN) this semester will mostly be about Bayesian computation. This week’s post is on The Hamming Ball Sampler proposed by Titsias and Yau. The Hamming ball sampler is an MCMC algorithm for high-dimensional discrete-valued vectors or matrices. While reading this paper, I also found a blog post about it on Xi’an’s Og, which provided some high-level intuition and background knowledge.
The paper considers a state space model with a discrete hidden space $X = (x_1, \ldots, x_N)$, parameters $\theta$, and observations $y = (y_1, \ldots, y_N)$. The Factorial Hidden Markov Model (fHMM) is an example of such a model. In state space models, the complete data likelihood can be factorized as
$p(y, X \mid \theta) = \prod_{n=1}^{N} p(y_n \mid x_n, \theta) \, p(x_n \mid x_{n-1}, \theta).$
Given some prior $p(\theta)$, we want to sample from the posterior distribution
$p(X, \theta \mid y) \propto p(y, X \mid \theta) \, p(\theta).$
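As a concrete illustration of such a model, here is a minimal Python sketch that simulates data from a small binary fHMM. The specific choices ($K = 3$ chains, self-transition probability 0.95, Gaussian emissions whose mean is a weighted sum of the active chains) are assumptions made for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

K, N = 3, 200        # K binary hidden chains, N time points (assumed)
p_stay = 0.95        # self-transition probability of each chain (assumed)
w = np.array([1.0, 2.0, 4.0])   # per-chain contribution to the mean (assumed)
sigma = 0.5          # observation noise (assumed)

# Hidden states X: a K x N binary matrix whose rows are independent Markov chains.
X = np.zeros((K, N), dtype=int)
X[:, 0] = rng.integers(0, 2, size=K)
for n in range(1, N):
    stay = rng.random(K) < p_stay
    X[:, n] = np.where(stay, X[:, n - 1], 1 - X[:, n - 1])

# Observations y: Gaussian around a linear combination of the active chains.
y = w @ X + sigma * rng.standard_normal(N)
```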
When the dimension of $X$ is large, we would suffer from ‘the curse of dimensionality’. Using a Gibbs sampler, we can iteratively sample $X \sim \cdot \mid \theta, y$ and $\theta \sim \cdot \mid X, y$. Because the dimension of $X$ is high, we should also consider blocked Gibbs sampling on $X$, for example by updating one row (or column) of $X$ at a time. While this is conceptually straightforward and potentially also easy to implement, as the authors pointed out:
Conditional sampling may lead to an inability to escape from local modes in the posterior distribution, particularly if the elements of $X$ exhibit strong correlations with each other and together with $\theta$.
The Hamming Ball Sampler (HBS) introduces an auxiliary variable $U$ that has the same dimension as the latent space $X$. The augmented joint probability can be factorized as
$p(U, X, \theta \mid y) = p(U \mid X) \, p(X, \theta \mid y).$
The conditional distribution $p(U \mid X)$ is chosen to be uniform over a neighborhood set
$\mathcal{H}_m(X) = \{U : h(u_n, x_n) \le m \ \text{for all } n\},$
where $u_n$ and $x_n$ denote the $n$-th columns of $U$ and $X$ and $h$ is the Hamming distance between them.
This set $\mathcal{H}_m(X)$ is a Hamming Ball, and it basically says that if $X$ and $U$ are $K \times N$ matrices, then within each column $X$ and $U$ can be different on at most $m$ positions among the $K$ rows. With the auxiliary variable $U$, the Hamming Ball Sampler alternates between the steps
$U \sim p(U \mid X)$
and
$(X, \theta) \sim p(X, \theta \mid U, y).$
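To make the two steps concrete, here is a minimal Python sketch of one Hamming Ball scan over the columns of a discrete matrix $X$. It assumes the columns are conditionally independent given $\theta$, so each column’s restricted conditional can be evaluated by enumerating its ball; in the fHMM setting the paper instead runs a forward-backward recursion restricted to the ball. The helper `log_col_weight` is a hypothetical stand-in for the model-specific likelihood and prior terms.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def hamming_ball(center, m, n_states):
    """Enumerate all columns within Hamming distance m of `center`."""
    K = len(center)
    ball = []
    for d in range(m + 1):
        for pos in itertools.combinations(range(K), d):
            # All ways to set the chosen positions to a *different* value.
            choices = [[s for s in range(n_states) if s != center[k]] for k in pos]
            for vals in itertools.product(*choices):
                u = center.copy()
                u[list(pos)] = vals
                ball.append(u)
    return ball

def hamming_ball_scan(X, m, n_states, log_col_weight):
    """One Hamming Ball update of X (a K x N integer matrix), column by column.

    log_col_weight(n, col) returns an unnormalized log conditional weight for
    column n taking value `col` (hypothetical model-specific helper).
    """
    X = X.copy()
    K, N = X.shape
    for n in range(N):
        # Step 1: sample U_n uniformly from the Hamming ball around X_n.
        ball_x = hamming_ball(X[:, n], m, n_states)
        u = ball_x[rng.integers(len(ball_x))]
        # Step 2: resample X_n from its conditional restricted to the ball around U_n.
        ball_u = hamming_ball(u, m, n_states)
        log_w = np.array([log_col_weight(n, col) for col in ball_u])
        prob = np.exp(log_w - log_w.max())
        prob /= prob.sum()
        X[:, n] = ball_u[rng.choice(len(ball_u), p=prob)]
    return X
```

A $\theta$-update from $p(\theta \mid X, y)$ would follow as usual. Note that because $U$ is within distance $m$ of the old column and the new column is within distance $m$ of $U$, a single scan can move each column by up to $2m$ elements.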
The Hamming Ball Sampler is like slice sampling in discrete spaces, and each Hamming Ball is a slice. Introducing the slice brings in random exploration and makes it easier to escape from local modes. For the simplest example where $X$ is a $K \times N$ matrix and the Hamming distance is defined as the number of different elements in each column, if we set $m = K$ then we can potentially change all elements of $X$ in one update. But when $m$ is large, the algorithm’s complexity also increases.
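For a rough sense of this cost, assume each of the $K$ entries in a column can take one of $S$ values. Then the number of columns inside a radius-$m$ Hamming ball is
$|\mathcal{H}_m(x_n)| = \sum_{d=0}^{m} \binom{K}{d} (S-1)^d,$
so any update that enumerates the ball, or runs a ball-restricted forward-backward recursion as in the fHMM case, becomes expensive quickly as $m$ grows.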
In this paper the authors provided several numerical examples comparing the Hamming Ball Sampler with block Gibbs samplers. In the fHMM examples (Figure 4 in the paper) we can see that the HB samplers achieve high joint posterior density much faster than the block Gibbs samplers. They also conclude that HB-2 best balances computational time and sampling efficiency.
Reference:
Titsias, M. K., & Yau, C. (2017). The Hamming ball sampler. Journal of the American Statistical Association, 112(520), 1598-1611.
Ghahramani, Z., & Jordan, M. I. (1996). Factorial hidden Markov models. In Advances in Neural Information Processing Systems (pp. 472-478).