SRN – Racing Thompson by Zhou et al.

While browsing the accepted papers list of ICML 2018, I discovered the paper ‘Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors‘ by Zhou, Zhu, and Zhuo. Thompson sampling is a popular algorithm for exploration-exploitation tradeoff problems and is also known as Bayesian bandits. I decided to write my Sunday Reading Notes post on this paper because I have been interested in the exploration-exploitation tradeoff for a while and have explored this topic through Bayesian optimization and my WSDM’19 paper on sequential A/B testing.

Suppose we want to identify the best arm among K arms and we have some prior knowledge about their rewards \mu \sim \pi. Thompson sampling (TS) balances exploring unexplored arms against collecting rewards from arms that already yield high rewards, by choosing the i-th arm according to its posterior probability of being the optimal arm, P_{it} = \pi\left( \mu_i = \max_j \mu_j \mid X(1:t)\right). The computational challenge is to compute the probabilities P_{it}. Because TS is often used as an online algorithm, efficient calculation of these posterior probabilities is very important. In the conjugate prior case, this calculation is done in O(K). With non-conjugate priors, I have seen people in the literature use Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC). The authors recognize this probability as an expectation and propose an importance sampling (IS) step combined with the Gumbel-Max trick, which transforms the sampling problem into an optimization problem, to sample an arm at time t according to the probabilities \pi\left( \mu_i = \max_j \mu_j \right) = \mathbb{E}_{\mu\sim \pi(\cdot|X(1:t))}\left[\mathbb{I}[\mu_i = \max_j {\mu_j}] \right] = \mathbb{E}_{\mu\sim B_t}\left[\mathbb{I}[\mu_i = \max_j {\mu_j}] \frac{\pi(\mu|X(1:t))}{B_t(\mu)}\right].

The benefit of this IS step comes from the flexibility to choose B_t at each time step; in addition, the authors leverage the stopping rule of racing algorithms to determine the number of IS samples needed to approximate the expectation.
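
To make this concrete, here is a minimal Python sketch (mine, not the authors’ code) of one arm-selection step: approximate the unnormalized probabilities by importance sampling from a proposal B_t and pick the arm with the Gumbel-Max trick, which conveniently works with unnormalized estimates. For simplicity, a fixed number of IS samples replaces the racing stopping rule, and all function names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_arm_is_gumbel(log_prior, log_lik, proposal_sample, proposal_logpdf, n_samples=1000):
    """One Thompson-sampling arm selection via importance sampling + Gumbel-Max.

    Draws mu ~ B_t, weights each draw by pi(mu | X(1:t)) / B_t(mu) (up to a
    constant), accumulates weighted indicators of 'arm i has the largest mean',
    then picks an arm by adding Gumbel noise to the log unnormalized estimates.
    """
    mus = proposal_sample(n_samples)                  # (n_samples, K) draws from B_t
    log_w = log_prior(mus) + log_lik(mus) - proposal_logpdf(mus)
    w = np.exp(log_w - log_w.max())                   # stabilized unnormalized weights
    K = mus.shape[1]
    best = mus.argmax(axis=1)                         # indicator mu_i = max_j mu_j
    est = np.array([w[best == i].sum() for i in range(K)])
    # Gumbel-Max trick: argmax_i (log p_i + G_i) ~ Categorical(p), no normalization needed
    return int(np.argmax(np.log(est + 1e-300) + rng.gumbel(size=K)))

# Toy usage: 3 Bernoulli arms, uniform prior on each mean, proposal = prior.
succ, fail = np.array([5, 3, 1]), np.array([2, 4, 6])
arm = sample_arm_is_gumbel(
    log_prior=lambda m: np.zeros(len(m)),             # Uniform(0,1) prior, constant density
    log_lik=lambda m: (succ * np.log(m) + fail * np.log1p(-m)).sum(axis=1),
    proposal_sample=lambda n: rng.uniform(size=(n, 3)),
    proposal_logpdf=lambda m: np.zeros(len(m)),
)
```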

The resulting algorithm, which combines the benefits of importance sampling, the Gumbel-Max trick, and the racing algorithm, is proved to be (\delta,\sigma)-PAC, which is asymptotically good in the sense that the total variation distance between the true value P_{it} and its estimate converges to zero.

Regret of (b) Bernoulli bandits with a non-conjugate prior and (c) Gaussian bandits with a non-conjugate prior. Source: Figure 2 from Zhou et al.

What I find very interesting in the regret analysis section is that the racing TS in this paper can achieve much lower regret than Thompson sampling and prior swapping (PS), even though it uses many fewer particles than SMC and PS. It was not intuitive to me why this should happen. But upon a little further investigation, I found that the priors used for TS and for PS & Racing are different in both plots: for ease of implementation, the authors chose a conjugate prior for TS. This leaves me wondering what the results would be if we used MCMC or SMC with more particles as the baseline for the regret analysis.

References:

  • Zhou, Y., Zhu, J., & Zhuo, J. (2018). Racing Thompson: an efficient algorithm for Thompson sampling with non-conjugate priors. Proceedings of the 35th International Conference on Machine Learning, PMLR 80:6000-6008.

SRN – Scalable Bayesian Inference for Excitatory Point Process Networks by Linderman and Adams

The topic of this week’s SRN is latent network discovery. The paper ‘Scalable Bayesian inference for excitatory point process networks‘ by Scott Linderman and Ryan Adams was suggested to me by Gonzalo Mena.

Hawkes process vs. Poisson process, by Steven Morse. Source: https://stmorse.github.io/journal/Hawkes-python.html.


In a multivariate Hawkes process, there are K individual point processes, and events on one constituent process are more likely to happen right after an event on a neighboring process. In other words, events ‘induce’ subsequent events according to a network describing their interactions. Imagine a network of neurons: if neuron A and neuron B are connected, then when neuron A fires it can excite neuron B, so neuron B has a higher chance of firing. When we want to infer connectivity, if neuron B consistently fires after neuron A fires, we might infer that there is likely an excitatory connection between them.

In this work, Linderman and Adams extend previous work on continuous-time Hawkes processes and study the discrete-time network Hawkes model. The discretization is motivated by computational concerns. In Simma and Jordan (2010), the authors introduce auxiliary variables z_{k,n} denoting the origin of the n-th event on process k, so the number of auxiliary variables needed is large for highly active networks, which can be a computational bottleneck. Working in the discrete model with a time scale \Delta t, we can ‘bin’ events and ignore interactions within the same bin. This reduction in the number of auxiliary variables improves the run time of inference algorithms such as the Gibbs sampler.
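
As a toy illustration of the model class (my own sketch, not the authors’ parametrization or code), here is a discrete-time network Hawkes simulator in which bin counts are conditionally Poisson given the excitation from earlier bins; mu, W, and beta are made-up parameters.

```python
import numpy as np

def simulate_discrete_hawkes(T, mu, W, beta, dt=1.0, seed=0):
    """Simulate binned event counts from a discrete-time network Hawkes model.

    mu:   (K,) baseline rates;  W: (K, K) excitation weights (W[j, k] is j -> k);
    beta: decay rate of an exponential impulse response, discretized over bins.
    """
    rng = np.random.default_rng(seed)
    K = len(mu)
    counts = np.zeros((T, K), dtype=int)
    for t in range(T):
        lam = mu.copy()
        for s in range(t):                       # contributions from earlier bins
            decay = np.exp(-beta * (t - s) * dt)
            lam += decay * (counts[s] @ W)       # counts[s] @ W sums the j -> k influence
        counts[t] = rng.poisson(lam * dt)        # events within a bin are conditionally Poisson
    return counts

counts = simulate_discrete_hawkes(T=200, mu=np.full(3, 0.1),
                                  W=0.2 * np.eye(3) + 0.1, beta=1.0)
```

With an exponential kernel, the excitation term can be updated recursively from bin to bin, so the quadratic loop above is only for clarity.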

Comparison of run time per Gibbs sweep. Source: Fig 2 of Linderman and Adams (2015) https://arxiv.org/pdf/1507.03228.pdf

Besides the Gibbs sampler, Linderman and Adams also derive a stochastic variational inference algorithm for the discrete-time network Hawkes model. This work focuses on scaling to long durations T; the problem of scaling with the network size K remains open.

References:

  • Linderman, S. W., & Adams, R. P. (2015). Scalable bayesian inference for excitatory point process networks. arXiv preprint arXiv:1507.03228.
  • Linderman, S., & Adams, R. (2014, January). Discovering latent network structure in point process data. In International Conference on Machine Learning (pp. 1413-1421).
  • Simma, A., & Jordan, M. I. (2012). Modeling events with cascades of Poisson processes. arXiv preprint arXiv:1203.3516.
  • Steven Morse (2017, June). Python class for Hawkes Process. [Blog post] https://stmorse.github.io/journal/Hawkes-python.html.

SRN – Bayesian Calibration of Microsimulation Models by Rutter et al.

The Sunday Reading Notes paper for this week is ‘Bayesian Calibration of Microsimulation Models’ by Carolyn Rutter, Diana Miglioretti and James Savarino. This is a 2009 JASA Applications and Case Studies paper.

According to the 2011 review paper by Rutter et al.,

Microsimulation models (MSMs) for health outcomes simulate individual event histories associated with key components of a disease process; these simulated life histories can be aggregated to estimate population-level effects of treatment on disease outcomes and the comparative effectiveness of treatments. Although MSMs are used to address a wide range of research questions, methodological improvements in MSM approaches have been slowed by the lack of communication among modelers. In addition, there are few resources to guide individuals who may wish to use MSM projections to inform decisions.

In this paper, the authors propose a Bayesian method to calibrate microsimulation models using Markov chain Monte Carlo. The case study is the history of colorectal cancer (CRC). The authors assume all CRCs arise from an adenoma, and the history of CRC consists of four components: 1) adenoma risk, 2) adenoma growth, 3) transition from adenoma to preclinical cancer, and 4) transition from preclinical cancer to clinical cancer. These four components are not observed directly, and the calibration data consist of the prevalence of adenomas and preclinical cancers and the size and/or number of adenomas, from many (independent) studies from different years, about different subpopulations, and using different colonoscopy methods.

The authors use \theta to denote the MSM parameters; these parameters can be separated into K components with independent priors: \pi(\theta) = \prod_{i=1}^K \pi_i(\theta_i). The calibration data come from N independent sources, and each follows some distribution y_j|\theta \sim f_j(g_j(\theta)) parametrized by some unknown function of the MSM parameters. The likelihood is therefore f(Y | \theta) = f(Y|g(\theta)) = \prod_{j=1}^N f_j(y_j|g_j(\theta)).

What makes the calibration problem hard is the unknown function g(\theta). Suppose we want to simulate from the posterior distribution of \theta|Y using a Metropolis-Hastings (MH) algorithm; then we need to evaluate the ratio r(\theta,\theta^*) = \frac{\pi(\theta^*)f(y|g(\theta^*))}{\pi(\theta)f(y|g(\theta))}. But because g is unknown, the MH step cannot be performed.

So the authors propose an approximate MH algorithm that includes a step to estimate g(\theta) at both the current value \theta and the proposed value \theta^*. To me this feels like an ‘EM’ step: simulate m copies of \tilde{y}_{ji} from f_j(y_j| g_j(\theta)) and calculate the MLE \hat{g}_j(\theta). With this approximation, the acceptance probability \hat{\alpha}(\theta,\theta^*) for the Metropolis-within-Gibbs step on \theta_i is based on the ratio \frac{\pi(\theta_i^*)}{\pi(\theta_i)}\cdot \frac{\prod_{j=1}^N f_j(y_j|\hat{g}_j(\theta^*_i, \theta_{(-i)}))}{\prod_{j=1}^N f_j(y_j|\hat{g}_j(\theta))}. In the Appendix, the authors prove that this approximation satisfies the detailed-balance condition in the limit as m goes to infinity.
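
Here is a hedged Python sketch of one such approximate MH step as I understand it; simulate_g_hat, log_lik_given_g, and propose are hypothetical placeholders for the microsimulation, likelihood, and proposal components. The key point is that \hat{g} is re-estimated at both the current and the proposed values in every step.

```python
import numpy as np

rng = np.random.default_rng(1)

def approx_mh_step(theta, y, log_prior, simulate_g_hat, log_lik_given_g, propose, m=50):
    """One approximate Metropolis-Hastings step for MSM calibration.

    simulate_g_hat(theta, m) runs the microsimulation m times at theta and
    returns an estimate of g(theta) (e.g. an MLE from the simulated outputs).
    Detailed balance only holds in the limit m -> infinity.
    """
    theta_star = propose(theta)
    g_hat = simulate_g_hat(theta, m)              # re-estimate g at the current value
    g_hat_star = simulate_g_hat(theta_star, m)    # ... and at the proposed value
    log_r = (log_prior(theta_star) + log_lik_given_g(y, g_hat_star)
             - log_prior(theta) - log_lik_given_g(y, g_hat))
    return theta_star if np.log(rng.uniform()) < log_r else theta
```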

I think this paper provides an interesting example of how to incorporate data from multiple sources. As the authors point out,

how closely the model should calibrate to observed data is unclear, especially when calibration data are variable and may provide conflicting information. […] It depends on how modelers trade off concerns about possibly overparameterizing and overfitting calibration data relative to the importance of exactly replicating observed or expected results.


References:

  • Rutter, C. M., Miglioretti, D. L., & Savarino, J. E. (2009). Bayesian calibration of microsimulation models. Journal of the American Statistical Association, 104(488), 1338-1350.
  • Rutter, C. M., Zaslavsky, A. M., & Feuer, E. J. (2011). Dynamic microsimulation models for health outcomes: a review. Medical Decision Making, 31(1), 10-18.

SRN – A Geometric Interpretation of the Metropolis-Hastings Algorithm by Billera and Diaconis

Coming back to the Sunday Reading Notes, this week I discuss the paper ‘A Geometric Interpretation of the Metropolis-Hastings Algorithm’ by Louis J. Billera and Persi Diaconis, published in Statistical Science. This paper was suggested to me by Joe Blitzstein.

In Section 4 of ‘Informed proposals for local MCMC in discrete spaces’ by Giacomo Zanella (see my SRN Part I and Part II), Zanella mentions that the Metropolis-Hastings acceptance probability function (APF) \min\left(1,\frac{\pi(y)p(y,x)}{\pi(x)p(x,y)}\right) is not the only APF that makes the resulting kernel \pi-reversible: any APF that satisfies detailed balance will do. This came as a ‘surprise’ to me at first, as I had never seen another APF in practice. But very quickly I realized that this fact was mentioned in both Stat 213 and Stat 220 at Harvard, and that I had read about it in Section 5.3, ‘Why Does the Metropolis Algorithm Work?’, of ‘Monte Carlo Strategies in Scientific Computing’ by Jun S. Liu. Unfortunately, I did not pay enough attention. Joe suggested this article to me after I posted on Facebook about being upset with not knowing such a basic fact.

In this paper, Billera and Diaconis focus on a finite state space X and consider the MH kernel as a projection from the set of stochastic matrices (row sums all equal to 1 and all entries non-negative, denoted by \mathcal{S}(X)) onto the set of \pi-reversible Markov chains (stochastic matrices that satisfy detailed balance \pi(x)M(x,y) = \pi(y)M(y,x), denoted by R(\pi)). They introduce a metric on the stochastic matrices: d(K,K') = \sum_{x} \sum_{y\not=x} \pi(x) |K(x,y)-K'(x,y)|.

The key result of the paper is Theorem 1. The authors prove that the Metropolis map M(K)(x,y) = \min\left( K(x,y), \frac{\pi(y)}{\pi(x)}K(y,x)\right) (for x \not= y) minimizes the distance d from the proposal kernel K to R(\pi); moreover, M(K) is the unique closest element of R(\pi) that is coordinate-wise smaller than K on the off-diagonal entries. So M(K) is, in this sense, the closest reversible kernel to the original kernel K.
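
The construction is easy to poke at numerically. Below is a small Python check (my own, with an arbitrary \pi and K) that builds the Metropolis map and verifies that the result is \pi-reversible.

```python
import numpy as np

def metropolis_map(K, pi):
    """Project a stochastic matrix K onto R(pi) via the Metropolis map:
    M(x, y) = min(K(x, y), pi(y)/pi(x) * K(y, x)) off the diagonal,
    with the diagonal adjusted so each row sums to one."""
    M = np.minimum(K, (pi[None, :] / pi[:, None]) * K.T)
    np.fill_diagonal(M, 0.0)
    np.fill_diagonal(M, 1.0 - M.sum(axis=1))
    return M

pi = np.array([0.5, 0.3, 0.2])
K = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.5, 0.2]])
M = metropolis_map(K, pi)
# detailed balance: pi(x) M(x,y) == pi(y) M(y,x) for all x, y
assert np.allclose(pi[:, None] * M, (pi[:, None] * M).T)
```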

I think this geometric interpretation offers great intuition about how the MH algorithm works: we start with a kernel K and change it to another kernel with stationary distribution \pi. And the change must occur as follows:

from x, choose y from K(x,y) and decide to move to y or stay at x; this last choice may be stochastic, with acceptance probability F(x,y) \in [0,1]. This gives a new chain with transition probabilities K(x,y) F(x,y) for x \not= y. The diagonal entries are changed so that each row sums to 1.

Indeed, the above procedure describes how the MH algorithm works. If we insist on \pi-reversibility, we must have 0 \leq F(x,y) \leq \min(1,R(x,y)), where R(x,y) = \frac{\pi(y)K(y,x)}{\pi(x)K(x,y)}. So the MH choice of APF is the one that maximizes the chance of moving from x to y. The resulting MH kernel M has the largest spectral gap (1 minus the second largest eigenvalue) and, by Peskun’s theorem, must have the minimum asymptotic variance for estimating additive functionals.

In Remark 3.2, the authors point out that if we consider only APFs that are functions of R(x,y), then the function must satisfy g(x) = x g(1/x), which is exactly the characterization of balancing functions in Zanella’s ‘informed proposals’ paper.

This paper allowed me to study the Metropolis-Hastings algorithm from another angle and to review facts I had neglected in my coursework.

References:

  • Billera, L. J., & Diaconis, P. (2001). A geometric interpretation of the Metropolis-Hastings algorithm. Statistical Science, 335-339.
  • Zanella, G. (2017). Informed proposals for local MCMC in discrete spaces. arXiv preprint arXiv:1711.07424.
  • Liu, J. S. (2008). Monte Carlo strategies in scientific computing. Springer Science & Business Media.

SRN – Informed proposals for local MCMC in discrete spaces by Zanella (Part II)

This is a super delayed Sunday Reading Notes post; today I come back to finish discussing the informed proposals paper by Giacomo Zanella. In Part I of my discussion, I reviewed pointwise informed proposals and the locally-balanced criterion. In Part II, I focus on the asymptotic optimality of locally-balanced proposals and the simulation studies.

The author uses the Peskun ordering to compare the efficiency of MH schemes. He deduces that ‘locally-balanced proposals are asymptotically optimal in terms of Peskun ordering’ as the dimensionality of the underlying state space increases. The conditions for asymptotic optimality (Proposition 1) are indeed mild for the three illustrative examples: 1) independent binary components, 2) weighted permutations, and 3) the Ising model.

In Section 4, Zanella points out a connection between balancing functions and acceptance probability functions (APFs) for MH algorithms, which I find very interesting. He also shows that the optimal proposal for independent binary variables is the Barker choice g(t) = \frac{t}{1+t}. The proof goes by finding the limiting continuous-time process of the MH chain and finding the optimal g for the limiting process.
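
As a one-line sanity check, the Barker choice does satisfy the balancing condition g(t) = t\,g(1/t): t\, g(1/t) = t\cdot\frac{1/t}{1+1/t} = \frac{1}{1+1/t} = \frac{t}{1+t} = g(t).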

The simulation studies use the illustrative examples of weighted permutations and Ising models. The comparisons are in terms of 1) acceptance rate, 2) number of successful flips per unit of computation time, and 3) mixing of some summary statistics. The second criterion concerns the trade-off between computational cost (of calculating the informed proposal) and statistical efficiency (producing effective moves). For simple target distributions (such as the uniform), using locally-balanced proposals does not bring much benefit, but it achieves a much higher number of flips per unit time for more complicated and ‘rough’ targets. See the second row of Figure 1 of the paper.


I find it interesting that globally-balanced proposals (aka ‘naively-informed’, g(t) = t) are extremely sensitive to initialization. Looking at effective sample size (ESS) per unit time, the chains from GB have much more stable behavior if initialized from stationarity; see GB vs. GB (station.) in Figure 2. But in Figure 5 this phenomenon does not show up for Ising models: initializing from stationarity does not yield ESS performance comparable to that of the LB proposals.


In the simulation studies section, the author emphasizes the cost-versus-efficiency trade-off, which I find very important. I feel I have ignored this aspect of designing MCMC algorithms and should think more about it in my future studies. The author notes that ‘computations required to sample from locally-balanced proposals are trivially parallelizable’. This is also very interesting to me, and I hope to learn more about multi-core computations during my PhD.

In his discussion, the author makes a reference to Multiple-Try Metropolis (MTM). The connection between LB proposals and MT proposals is not entirely obvious to me, but I intuitively agree with the author’s comment that the weight function in MTM serves a very similar purpose as the balancing term g\left(\frac{\pi(y)}{\pi(x)}\right) in LB proposals.

Side note:
In my downloaded version of the paper, the transition kernel in the first equation on page 1 is wrong: the Q(x,dy) term needs to be multiplied by its acceptance probability a(x,y).

References:

  • Zanella, G. (2017). Informed proposals for local MCMC in discrete spaces. arXiv preprint arXiv:1711.07424.
  • Liu, J. S., Liang, F., & Wong, W. H. (2000). The multiple-try method and local optimization in Metropolis sampling. Journal of the American Statistical Association, 95(449), 121-134.

SRN – Informed proposals for local MCMC in discrete spaces by Zanella (Part I)

This week I am reading ‘Informed proposals for local MCMC in discrete spaces‘ by Giacomo Zanella. This paper is about designing MCMC algorithms for discrete-valued high-dimensional parameters, and the goal is similar to that of papers discussed in previous posts (the Hamming ball sampler & auxiliary-variable HMC). I decided to split the Sunday Reading Notes on this paper into two parts, because I find many interesting ideas in it.

In this paper, Zanella comes up with locally-balanced proposals. Suppose \pi(x) is the target density and K_{\sigma}(x,dy) is an uninformed proposal, where we assume that K_{\sigma}(x,dy) converges to the delta measure as \sigma \to 0. Zanella seeks to modify this uninformed proposal so that it incorporates information about the target \pi and is biased towards areas of higher density. An example of a locally-balanced proposal is Q_{\sqrt{\pi}} (x,dy) = \frac{\sqrt{\pi(y) }K_{\sigma}(x,dy)}{(\sqrt{\pi} * K_{\sigma})(x)}, where the normalizing constant is the convolution (\sqrt{\pi} * K_{\sigma})(x) = \int \sqrt{\pi(y)}\, K_{\sigma}(x,dy). This kernel is reversible with respect to \sqrt{\pi(x)}(\sqrt{\pi} * K_{\sigma})(x), which converges to \pi(x)dx as \sigma \to 0.

More generally, Zanella considers a class of pointwise informed proposals of the form Q_{g,\sigma}(x,dy) = \frac{1}{Z_{g}}\, g\left(\frac{\pi(y)}{\pi(x)}\right) K_{\sigma}(x,dy). It is suggested that the function g satisfy the balancing condition g(t) = t g(1/t).
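
For intuition, here is a minimal Python sketch (mine, not Zanella’s code) of a pointwise informed proposal on binary vectors: the uninformed kernel flips one coordinate uniformly, and the informed version re-weights each candidate flip by g(\pi(y)/\pi(x)) with the Barker function. A full sampler would still apply an MH correction using the returned forward proposal probability and the corresponding reverse one; the d target evaluations per step are exactly the extra cost behind the cost-versus-efficiency trade-off discussed in Part II.

```python
import numpy as np

rng = np.random.default_rng(2)

def locally_balanced_flip(x, log_pi, g=lambda t: t / (1.0 + t)):
    """Propose flipping one coordinate of the binary vector x, with each flip
    weighted by g(pi(y)/pi(x)); here g is the Barker balancing function.
    Returns the proposal and its forward proposal probability."""
    d = len(x)
    ratios = np.empty(d)
    for i in range(d):                      # one target evaluation per neighbor
        y = x.copy()
        y[i] = 1 - y[i]
        ratios[i] = np.exp(log_pi(y) - log_pi(x))
    weights = g(ratios)
    probs = weights / weights.sum()
    i = rng.choice(d, p=probs)
    y = x.copy()
    y[i] = 1 - y[i]
    return y, probs[i]
```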

I will save the discussion of locally-balanced proposals and Peskun optimality for Part II. In this part, I want to discuss Section 5: the connection to MALA and gradient-based MCMC. In continuous spaces, the pointwise informed proposal Q_{g,\sigma} would be infeasible to sample from because of the term g\left(\frac{\pi(y)}{\pi(x)}\right). If we take a first-order Taylor expansion, we get Q_{g,\sigma}^{(1)} \propto g \left( \exp ( \nabla \log \pi(x)^T (y-x)) \right) K_{\sigma}(x,dy). If we choose g(t) = \sqrt{t} and K_{\sigma}(x,\cdot) = N(x,\sigma^2), this is the MALA proposal.

I find this connection very interesting, although I do not have a good intuition about where it comes from. One way to explain it is that gradient-based MCMC in continuous spaces uses local information to design informed proposals. In the conclusions, the author mentions that this connection should improve the robustness of gradient-based MCMC schemes and help with parameter tuning.

References:

  • Zanella, G. (2017). Informed proposals for local MCMC in discrete spaces. arXiv preprint arXiv:1711.07424.

SRN – Auxiliary-variable Exact HMC samplers for Binary Distributions by Pakman and Paninski

It’s time for Sunday Reading Notes again! This week I am discussing another computational statistics paper: ‘Auxiliary-variable Exact Hamiltonian Monte Carlo Samplers for Binary Distributions’ by Ari Pakman and Liam Paninski from Columbia University. The paper was published at NIPS 2013.

In the previous SRN post on the Hamming ball sampler, the algorithm used data augmentation to sample from discrete distributions. In this week’s paper, the goal is to sample from generic binary distributions via data augmentation with continuous variables.

Let’s say we want to sample from the distribution p(s), defined over s \in \{\pm 1 \}^d, given an un-normalized density f(s). The authors propose augmenting with a continuous variable y \in \mathbb{R}^d with joint density p(s,y) = p(s)p(y|s), where p(y|s) is a density we can design, subject to the constraint s_i = \mathrm{sgn}(y_i) for all i = 1,\cdots,d. As a result of this constraint, the marginal density of y is p(y) = p(s = \mathrm{sgn}(y))\, p(y|s = \mathrm{sgn}(y)). At this point we have transformed a d-dimensional binary problem on s into a d-dimensional continuous problem on y.

To sample y, the authors suggest Hamiltonian Monte Carlo: the potential energy is U(y) = - \log p(y) = -\log p(y|s) - \log f(s) and the kinetic energy term is K(q) = \langle q,q\rangle/2. The HMC sampler simulates a trajectory of y that preserves the Hamiltonian H(y,q) = U(y) + K(q); typically leap-frog simulation is used. With the constraint in p(y|s), the potential function is only defined piecewise, and we need to be careful when the trajectory crosses regions. To this end, the authors insist we choose p(y|s) such that \log p(y|s) is quadratic, so that the trajectory can be computed exactly and approximation methods are not necessary.

Because U(y) has a jump at y_i = 0, the value of the momentum q_i should change when we cross the boundary. This is, in my opinion, the most interesting part of the paper. Suppose at time t_j we have y_j = 0; then a change in the trajectory must happen. Let the momenta just before and after the hit be q_j(t_j^-) and q_j(t_j^+). If y_j < 0 before the hit, conservation of energy says we must have q_j^2(t_j^+)/2 + U(y_j = 0, s_j = +1) = q_j^2(t_j^-)/2 + U(y_j = 0, s_j = -1). From this equation, if q_j^2(t_j^+) > 0 then we continue the trajectory across the boundary, with q_j(t_j^+) keeping the sign of q_j(t_j^-) and its magnitude determined by the energy equation; however, if q_j^2(t_j^+) < 0, the particle is reflected from a wall at y_j = 0 and the trajectory bounces back with q_j(t_j^+) = - q_j(t_j^-).
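
The boundary rule fits in a few lines. Below is a sketch (mine) of the decision for a single coordinate hitting y_j = 0, under the assumption that delta_U is the jump in potential energy across the wall.

```python
import numpy as np

def cross_or_reflect(q_minus, delta_U):
    """Momentum update when coordinate y_j hits zero during the trajectory.

    q_minus: momentum just before the hit; delta_U: the jump in potential
    energy across the wall. By conservation of energy,
    q_plus^2 / 2 = q_minus^2 / 2 - delta_U.
    """
    q_plus_sq = q_minus**2 - 2.0 * delta_U
    if q_plus_sq > 0:
        # enough kinetic energy: cross the wall, keep direction, adjust magnitude
        return np.sign(q_minus) * np.sqrt(q_plus_sq)
    # not enough energy: reflect off the wall, reversing the momentum
    return -q_minus
```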

The two augmentations mentioned in this paper are the Gaussian augmentation and the exponential augmentation; both result in piecewise quadratic log-likelihoods. The Gaussian augmentation is very interesting because each coordinate y_j reaches zero in a fixed order, with successive hits occurring at t_j + n \pi. The authors make the observation that:

Interestingly, the rate at which wall y_j = 0 is crossed coincides with the acceptance rate in a Metropolis algorithm that samples uniformly a value for i and makes a proposal of flipping the binary variable s_i.

To me this is a sanity check rather than a surprise, because each coordinate hits the boundary the same number of times, and making a decision to continue or to bounce back in y_j is the same as deciding whether we should flip the sign of s_j. But I think the authors give a very helpful comment pointing out that, although the acceptance probability is the same, the proposed method is still different from Metropolis because

in HMC the order in which the walls are hit is fixed given the initial velocity, and the values of q_i^2 at successive hits of y_i = 0 within the same iteration are not independent.

What’s interesting for the exponential augmentation method is that

particles move away faster from areas of lower probability.

This is certainly a nice feature to have, so that the sampler mixes well.

In the simulation examples, the authors compare Gaussian HMC with Metropolis on 1d and 2d Ising models and show that:

  1. ‘the HMC sampler explores the sample space faster once the chain has reached the equilibrium distribution.’
  2. ‘the HMC sampler is faster in reaching the equilibrium distribution.’

I think the takeaways from this paper are the continuous data augmentation used to sample discrete variables and the treatment of the piecewise-defined potential function.


Reference:

  • Pakman, A., & Paninski, L. (2013). Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions. In Advances in Neural Information Processing Systems (pp. 2490-2498).
  • Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11), 2.

SRN – The Bayesian Lasso by Park and Casella

‘The Bayesian Lasso’ paper by Park and Casella has been on my desk for a while. Thanks to the Sunday Reading Notes (SRN) series, I finally sat down to read and think about it.

Lasso (least absolute shrinkage and selection operator) is a regression method in which, compared to ordinary least squares, some coefficient estimates are shrunk towards zero while others are set exactly to zero. The lasso thus performs both variable selection (previously done with, e.g., stepwise regression) and shrinkage to avoid overfitting (as in ridge regression) at the same time. The lasso objective is \min_{\beta} (y-X\beta)^T(y-X\beta) + \lambda \sum_{j=1}^p |\beta_j|. In the 1996 paper, Tibshirani points out that lasso estimates can be interpreted as posterior mode (MAP) estimates when we put i.i.d. Laplace priors on the coefficients: \pi(\beta) \propto \prod_{j=1}^p \exp(-\lambda |\beta_j|).

When I took the Bayesian Data Analysis class (Stat 220 at Harvard), our textbook BDA3 contained an exercise on lasso regularization claiming that ‘a fully Bayesian lasso is the same model as lasso but assigning a hyperprior to \lambda.’ The ‘Bayesian Lasso’ paper sets up the hierarchy via a different route:

[Graphical model of the Bayesian lasso hierarchy: conditional on \sigma^2 and \tau_1^2,\ldots,\tau_p^2, the coefficients are \beta \sim N(0, \sigma^2 D_{\tau}) with D_{\tau} = \mathrm{diag}(\tau_1^2,\ldots,\tau_p^2), and the \tau_j^2 have independent \mathrm{Exp}(\lambda^2/2) priors, so that marginally each \beta_j has a Laplace prior.]

The full conditional distributions of \beta, \sigma^2, and \tau_1^2,\ldots,\tau_p^2 have closed forms and are provided in Section 2 of the paper. In Section 3, the authors discuss two ways of choosing the lasso parameter: 1) empirical Bayes by marginal maximum likelihood, iteratively updating \lambda^{(k)} = \sqrt{\frac{2p}{\sum_{j=1}^p \mathbb{E}_{\lambda^{(k-1)}} [\tau_j^2|y] }}; and 2) a hyperprior on \lambda^2.
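
Here is a compact Python sketch of the Gibbs sampler built from those full conditionals as I read them (normal for \beta, inverse-gamma for \sigma^2, and inverse-Gaussian for 1/\tau_j^2). Treat it as an illustration with a fixed \lambda rather than a vetted implementation; it assumes y is centered and X standardized.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, seed=3):
    """Gibbs sampler for the Bayesian lasso, following my reading of
    Park & Casella's full conditionals."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2, inv_tau2 = np.zeros(p), 1.0, np.ones(p)
    XtX, Xty = X.T @ X, X.T @ y
    samples = np.empty((n_iter, p))
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + D_tau^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | rest ~ InvGamma((n-1)/2 + p/2, ||y - X beta||^2/2 + beta' D^{-1} beta / 2)
        resid = y - X @ beta
        shape = (n - 1) / 2 + p / 2
        scale = resid @ resid / 2 + beta @ (inv_tau2 * beta) / 2
        sigma2 = scale / rng.gamma(shape)         # inverse-gamma via reciprocal gamma
        # 1/tau_j^2 | rest ~ InverseGaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu_ig = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        inv_tau2 = rng.wald(mu_ig, lam**2)
        samples[it] = beta
    return samples
```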

The examples in this paper use the diabetes dataset from the Tibshirani 1996 paper, which can be found in the ‘lars‘ R package. On this dataset, the results from the Bayesian lasso are very similar to those of the regular lasso, as seen in Figures 1 & 2 of the paper, or in the plot below, where I borrowed code from the Rdocumentation of the ‘monomvn‘ R package.

[Plot: posterior samples from the Bayesian lasso alongside lasso and ordinary least squares estimates for the diabetes data, produced with the ‘monomvn‘ R package.]

In the plot above, we can see both the shrinkage effect and the variable selection happening. Compared to ordinary least squares (green), both the lasso estimates and the Bayesian lasso samples shrink towards zero (except for b.3), and the lasso estimate fixes some parameters (b.1, 2, 5, 6, 8, 10) at zero. Unlike the lasso, for which we have no uncertainty estimates without resorting to the bootstrap, the Bayesian lasso provides uncertainty estimates through the posterior samples.

References:

  • Hastie, T., & Efron, B. (2013). lars: Least angle regression, lasso and forward stagewise.
  • Gramacy, R. B. (2013). Package ‘monomvn’.
  • Park, T., & Casella, G. (2008). The Bayesian lasso. Journal of the American Statistical Association, 103(482), 681-686.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 267-288.

SRN – The Hamming Ball Sampler by Titsias and Yau

My Sunday Reading Notes (SRN) posts this semester will mostly be about Bayesian computation. This week’s post is on the Hamming ball sampler, proposed by Titsias and Yau. The Hamming ball sampler is an MCMC algorithm for high-dimensional discrete-valued vectors or matrices. While reading this paper, I also found a post about it on Xi’an’s Og, which provides some high-level intuition and background knowledge.

The paper considers state space models with a discrete hidden space X, parameters \theta, and observations y. The factorial hidden Markov model (fHMM) is an example of such a model. In state space models, the complete data likelihood can be factorized as p(y,X,\theta) = p(X,\theta) \prod_{i=1}^N p(y_i|X,\theta). Given some prior, we want to sample from the posterior distribution of X,\theta | y.

When the dimension of X is large, we suffer from the ‘curse of dimensionality’. Using a Gibbs sampler, we can iteratively sample \theta \sim \cdot | X,y and X \sim \cdot | \theta,y. Because the dimension of X is high, we should also consider blocked Gibbs sampling on X, for example updating one row (or column) of X at a time. While this is conceptually straightforward and potentially also easy to implement, as the authors point out:

Conditional sampling may lead to an inability to escape from local modes in the posterior distribution particularly if the elements of X exhibit strong correlations with each other and together with \theta.

The Hamming ball sampler (HBS) introduces an auxiliary variable U that has the same dimension as the latent space X. The augmented joint probability can be factorized as p(y,X,\theta,U) = p(U|X) p(y,X,\theta). The conditional distribution p(U|X) is chosen to be uniform over a neighborhood set \mathcal{H}_m(X). This set \mathcal{H}_m(X) is a Hamming ball; it basically says that if U and X are K \times N matrices, then each column of U can differ from the corresponding column of X in at most m of its K entries. With the auxiliary variable U, the Hamming ball sampler alternates between the steps U \sim p(U|X) and (\theta, X) \sim p(\theta,X|U,y).

The Hamming ball sampler is like slice sampling in discrete spaces, with each Hamming ball \mathcal{H}_m(X) acting as a slice. Introducing the auxiliary variable injects random exploration and makes it easier to escape from local modes. In the simplest example, where X is a K \times N matrix and the Hamming distance is defined as the number of differing elements in each column, setting m = K/2 means we can potentially change all elements of X in one update. But as m grows, the computational cost of each update also increases.
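
The U-update is simple to sketch for binary X (my own sketch; the paper handles general discrete alphabets): for each column, sample the Hamming distance j with probability proportional to the number of configurations at that distance, \binom{K}{j}, then flip j uniformly chosen positions.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)

def sample_hamming_ball(X, m):
    """Draw U uniformly from the Hamming ball H_m(X) for a binary K x N matrix:
    each column of U differs from the corresponding column of X in at most m
    of its K entries."""
    K, N = X.shape
    U = X.copy()
    # P(distance = j) proportional to the number of points at that distance
    weights = np.array([comb(K, j) for j in range(m + 1)], dtype=float)
    probs = weights / weights.sum()
    for col in range(N):
        j = rng.choice(m + 1, p=probs)
        flip = rng.choice(K, size=j, replace=False)
        U[flip, col] = 1 - U[flip, col]
    return U
```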

The authors provide several numerical examples comparing the Hamming ball sampler with blocked Gibbs samplers. In the fHMM examples (Figure 4 in the paper), we can see that HBS with m = 2 or 3 reaches regions of high joint posterior density much faster than the blocked Gibbs samplers. They also conclude that HB-2 best balances computational time and sampling efficiency.

Reference:

  • Titsias, M. K., & Yau, C. (2017). The Hamming ball sampler. Journal of the American Statistical Association, 112(520), 1598-1611.
  • Ghahramani, Z., & Jordan, M. I. (1996). Factorial hidden Markov models. In Advances in Neural Information Processing Systems (pp. 472-478).

SRN – Should I follow the crowd? by Canamares and Castells from SIGIR’18

It’s time for Sunday Reading Notes again. This past week I have been reading the paper ‘Should I Follow the Crowd? A Probabilistic Analysis of the Effectiveness of Popularity in Recommender Systems‘ by Rocío Cañamares and Pablo Castells. It won the Best Paper Award at SIGIR 2018!

Current state-of-the-art recommender systems tend to recommend popular items to their users. As the paper points out in its abstract:

The fundamental question remains open though whether popularity is really a bias we should avoid or not, whether it could be a useful and reliable signal in recommendation, or it may be unfairly rewarded by the experimentation biases.

This paper is concerned with non-personalized recommendations evaluated in offline experiments. The dataset consists of rated items only and is randomly split into training and test sets. The ratings take binary values (positive rating / negative rating). For every item, popularity is defined in terms of the number of positive ratings, which is different from the average rating. In order to make recommendations, we want to measure ‘relevance‘, a binary random variable with rel(i) = 1 if the user likes item i. From this set-up, we have pop(i) \sim p(rel,rated|i) and avg(i) \sim p(rel|rated,i). But we cannot directly measure p(rel|i).

The authors use ‘expected precision’ as the metric for recommendations and emphasize that the true precision (which cannot be measured in offline experiments) is different from the observed precision (measured with ratings from the test set). The authors derive the optimal ranking functions for both: the true precision \mathbb{E}[P@1|f] is optimized by ranking on f(i) = p(rel|i)\frac{1-\rho\, p(rated|rel,i)}{1-\rho\, p(rated|i)}, and, as a result of the experimental design, the expected observed precision \mathbb{E}[\hat{P}@1|f] is optimized by \hat{f}(i) = p(rel|i)\frac{p(rated|rel,i)}{1-\rho\, p(rated|i)}. As we can see, popularity has an advantage in optimizing the expected observed precision.

Because customers have a tendency to rate items they like, the rating information is not missing at random. To model rating given relevance and item, the authors consider two simplifying conditions: conditional item independence and conditional relevance independence. Under both conditions, ranking by pop(i) optimizes the expected observed precision and ranking by avg(i) optimizes the expected true precision.

Because we can only measure the expected observed precision in offline experiments, and because popularity has the advantage in optimizing \mathbb{E}(\hat{P}@1), current recommender systems based on offline evaluations favor popularity. Although not all recommender systems are designed the way described in this paper, I believe this elegant framework provides intuition about the ‘popularity of popularity’.

Acknowledging that a user can rate an item only if he or she has bought it, the authors further consider ‘discovery bias’: rating depends on relevance, discovery, and the item all at once. These dependencies are characterized in Figure 2 of the paper.


In a realistic scenario, our data should fall into category 4, ‘no assumption’. In this case the expected precision can be approximated with Monte Carlo integration.
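
To see the popularity-versus-average tension in a sandbox, here is a toy Monte Carlo simulation (entirely mine, and much cruder than the paper’s model): discovery varies across items and users mostly rate what they like, so the number of positive ratings confounds relevance with discovery, while the average rating tracks the true relevance ordering more closely.

```python
import numpy as np

rng = np.random.default_rng(5)

def toy_popularity_vs_average(n_items=30, n_users=5000):
    """Toy illustration: ratings are missing not at random, so popularity
    (count of positive ratings) and average rating can disagree with the
    true relevance ordering."""
    p_rel = rng.uniform(0.1, 0.9, n_items)           # true P(rel | i)
    p_disc = rng.uniform(0.05, 0.6, n_items)         # item-dependent discovery
    rel = rng.uniform(size=(n_users, n_items)) < p_rel
    disc = rng.uniform(size=(n_users, n_items)) < p_disc
    # users rate discovered items, far more often when they liked them
    rated = disc & (rng.uniform(size=(n_users, n_items)) < np.where(rel, 0.8, 0.2))
    pos = (rel & rated).sum(axis=0)                  # number of positive ratings
    tot = np.maximum(rated.sum(axis=0), 1)
    rank_pop = np.argsort(-pos)                      # ranking by popularity
    rank_avg = np.argsort(-(pos / tot))              # ranking by average rating
    rank_true = np.argsort(-p_rel)
    print("pop top-5:", rank_pop[:5], " avg top-5:", rank_avg[:5],
          " true top-5:", rank_true[:5])

toy_popularity_vs_average()
```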

The key results and experiments are summarized in Table 1 and Figure 5 of the paper.


Besides the probabilistic framework, what I really like about this paper is that the authors designed and collected a crowdsourced dataset that makes real-data experiments with relevance-independent discovery and item-independent discovery possible. After going through the math and the experiments, I feel quite convinced that

average rating may be in fact a better, safer, more robust signal than the number of positive ratings in terms of true achieved accuracy in most general situations.

Although the metric \mathbb{E}(P@1|R) seems too simple, because essentially it is as if we could only recommend one item to a user, it is a good and tractable measure to start with. The authors suggest that it empirically generalizes consistently to other metrics such as nDCG@k. I am not sure how much I agree with this point, because nDCG@k cares about much more than the top-ranked item.

Overall I really like this paper: it touches on many fundamental problems around popularity bias and treats them with enough mathematical clarity. I wonder whether this paper suggests we should run more online experiments for recommender systems, since true accuracy cannot be measured in offline experiments. I am also eager to see what the authors have to say about temporal data splitting. Lastly, I hope the authors will discuss what we should do with the very common ‘5-star’ rating system.


Reference:

  1. Cañamares, R., & Castells, P. (2018). Should I Follow the Crowd? A Probabilistic Analysis of the Effectiveness of Popularity in Recommender Systems. In Proceedings of ACM SIGIR ’18, July 8–12, 2018, Ann Arbor, MI, USA. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3209978.3210014