Efficient posterior sampling for high-dimensional imbalanced logistic regression

Deborshee Sen, Matthias Sachs, Jianfeng Lu, David B. Dunson

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Classification with high-dimensional data is of widespread interest and often involves dealing with imbalanced data. Bayesian classification approaches are hampered by the fact that current Markov chain Monte Carlo algorithms for posterior computation become inefficient as the number p of predictors or the number n of subjects to classify gets large, because of the increasing computational time per step and worsening mixing rates. One strategy is to employ a gradient-based sampler to improve mixing while using data subsamples to reduce the per-step computational complexity. However, the usual subsampling breaks down when applied to imbalanced data. Instead, we generalize piecewise-deterministic Markov chain Monte Carlo algorithms to include importance-weighted and mini-batch subsampling. These maintain the correct stationary distribution with arbitrarily small subsamples and substantially outperform current competitors. We provide theoretical support for the proposed approach and demonstrate its performance gains in simulated data examples and an application to cancer data.
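The subsampling idea described in the abstract can be illustrated in a simpler setting. The sketch below (Python, with hypothetical names) shows an importance-weighted minibatch estimate of the logistic-regression log-likelihood gradient: observations are drawn with non-uniform probabilities and each sampled term is reweighted by one over (batch size times its sampling probability), which keeps the estimator unbiased for the full-data gradient even with arbitrarily small subsamples. The paper's actual algorithms are piecewise-deterministic Markov processes whose event rates are estimated by such subsamples; the particular choice of sampling probabilities here (proportional to predictor norms) is only one illustrative option, not necessarily the one used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def full_gradient(beta, X, y):
    # Gradient of the logistic negative log-likelihood,
    # summed over all n observations.
    return X.T @ (sigmoid(X @ beta) - y)

def iw_gradient(beta, X, y, probs, batch_size, rng):
    # Draw a minibatch with non-uniform probabilities `probs`, then
    # reweight each sampled term by 1 / (batch_size * probs[i]).
    # This makes the estimator unbiased:
    #   E[ghat] = sum_i probs_i * (g_i / probs_i) = sum_i g_i.
    idx = rng.choice(len(y), size=batch_size, replace=True, p=probs)
    resid = sigmoid(X[idx] @ beta) - y[idx]
    scale = resid / (batch_size * probs[idx])
    return X[idx].T @ scale
```

Choosing sampling probabilities that place more mass on influential observations (for example, rare-class points or points with large gradient bounds) reduces the variance of the estimator on imbalanced data, which is precisely the regime where uniform subsampling breaks down.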

Original language: English
Pages (from-to): 1005-1012
Number of pages: 8
Journal: Biometrika
Volume: 107
Issue number: 4
DOIs
Publication status: Published - 1 Dec 2020

Bibliographical note

Publisher Copyright:
© 2020 Biometrika Trust.

Keywords

  • Imbalanced data
  • Logistic regression
  • Piecewise-deterministic Markov process
  • Scalable inference
  • Subsampling

ASJC Scopus subject areas

  • Statistics and Probability
  • General Mathematics
  • Agricultural and Biological Sciences (miscellaneous)
  • General Agricultural and Biological Sciences
  • Statistics, Probability and Uncertainty
  • Applied Mathematics
