Conjugate gradient-accelerated Gibbs sampler for "large n and large p" sparse Bayesian logistic regression

Probability Seminar

Aki Nishimura

Thursday, August 23, 2018, 3:15pm to 4:15pm
Location: 235 Physics

In a modern observational study based on healthcare databases, the number of observations is typically on the order of 10^5 to 10^6 and the number of predictors on the order of 10^4 to 10^5. Despite the large sample size, the data rarely provide enough information to reliably estimate such a large number of parameters. Sparse regression provides a potential solution to this problem. There is a rich literature on the desirable theoretical properties of the Bayesian approach based on shrinkage priors. On the other hand, the development of scalable methods for the required posterior computation has largely been limited to the p >> n case. While shrinkage priors are designed to make the posterior amenable to Gibbs sampling, a major computational bottleneck still arises from the need to sample from a high-dimensional Gaussian distribution at each iteration. A closed-form expression for the precision matrix $\Phi$ is available, but computing and factorizing such a large matrix is nonetheless computationally expensive. In this talk, we present a novel algorithm to speed up this bottleneck based on the following observation: we can cheaply generate a random vector $b$ such that the solution of the linear system $\Phi \beta = b$ has the desired Gaussian distribution. An accurate solution of the linear system can then be found by the conjugate gradient algorithm with only a small number of matrix-vector multiplications by $\Phi$, without ever explicitly inverting $\Phi$. We apply our algorithm to a large-scale observational study with n = 72,489 and p = 22,175, designed to assess the relative risk of intracranial hemorrhage from two alternative blood anticoagulants. Our algorithm demonstrates an order-of-magnitude speed-up in the posterior computation.
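To make the sampling trick concrete, below is a minimal sketch in Python/NumPy. It assumes, purely for illustration, that the conditional precision takes the common form $\Phi = X^\top \Omega X + \Lambda$, with $\Omega$ a diagonal matrix of augmentation variables and $\Lambda$ a diagonal prior precision, as arises under Polya-Gamma augmentation of logistic regression; the abstract does not specify this form, and the function name and variable names here are hypothetical.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sample_gaussian_via_cg(X, omega, prior_prec, c, rng):
    """Draw beta ~ N(Phi^{-1} c, Phi^{-1}) where
    Phi = X.T @ diag(omega) @ X + diag(prior_prec),
    without ever forming or factorizing Phi.
    (Illustrative form of Phi; an assumption, not from the abstract.)"""
    n, p = X.shape
    # Key observation: b = c + X' Omega^{1/2} eta_1 + Lambda^{1/2} eta_2,
    # with eta_1, eta_2 standard normal, has covariance
    # X' Omega X + Lambda = Phi, so the solution of Phi beta = b is
    # exactly a draw from the desired Gaussian distribution.
    b = (c
         + X.T @ (np.sqrt(omega) * rng.standard_normal(n))
         + np.sqrt(prior_prec) * rng.standard_normal(p))
    # CG only needs matrix-vector products with Phi, which reduce to
    # products with X and X.T; these are cheap when X is sparse.
    Phi = LinearOperator(
        (p, p), matvec=lambda v: X.T @ (omega * (X @ v)) + prior_prec * v
    )
    beta, info = cg(Phi, b)
    if info != 0:
        raise RuntimeError(f"CG did not converge (info={info})")
    return beta

# Example usage on a small synthetic problem:
rng = np.random.default_rng(0)
n, p = 500, 200
X = rng.standard_normal((n, p))
omega = rng.gamma(2.0, 1.0, size=n)  # stand-in augmentation variables
prior_prec = np.ones(p)              # stand-in prior precisions
beta = sample_gaussian_via_cg(X, omega, prior_prec, np.zeros(p), rng)
```

The design matrix X can equally be a scipy.sparse matrix, in which case each CG iteration costs only on the order of the number of nonzeros of X, which is what makes the approach attractive for large n and large p.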
