SQS: Bayesian DNN Compression through Sparse Quantized Sub-distributions

1Purdue University, IN, USA. 2University of Texas at El Paso, TX, USA.
Emails: {wang4538, guanglin, qfsong}@purdue.edu, njiang@utep.edu

Figure 1: SQS achieves high compression with minimal performance degradation by jointly pruning and quantizing model weights through variational learning. We use a spike-and-GMM variational distribution to approximate the full-precision weights: the spike component promotes sparsity for pruning, while the slab component (a GMM) models a quantized weight distribution.

Abstract

Compressing large-scale neural networks is essential for deploying models on resource-constrained devices. Most existing methods apply weight pruning or low-bit quantization individually, often yielding suboptimal compression rates in order to keep the performance drop acceptable. We introduce SQS, a unified framework for simultaneous pruning and low-bit quantization via Bayesian variational learning, which achieves higher compression rates than prior baselines while maintaining comparable performance. The key idea is to employ a spike-and-slab prior to induce sparsity and to model the quantized weights with Gaussian Mixture Models (GMMs) to enable low-bit precision. On the theoretical side, we establish a consistency result showing that our variational approach converges to a sparse, quantized deep neural network. Extensive experiments on compressing ResNet, BERT-base, Llama3, and Qwen2.5 models show that our method achieves higher compression rates than a range of existing baselines with comparable performance drops.
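
To give a rough intuition (this is an illustrative sketch, not the authors' implementation), the spike-and-GMM variational distribution can be pictured as follows: each weight is either pruned, i.e., drawn from a point mass at zero, with some spike probability, or drawn from a Gaussian mixture whose component means act as shared quantization levels. The Python snippet below samples weights from such a distribution; all names and numerical values (pi_spike, levels, probs, sigma) are hypothetical.

# Illustrative sketch only (assumed names and values), not the SQS implementation.
import numpy as np

rng = np.random.default_rng(0)

pi_spike = 0.7                            # probability a weight is pruned (spike at zero)
levels   = np.array([-0.5, 0.0, 0.5])     # hypothetical GMM means acting as quantization levels
probs    = np.array([0.3, 0.4, 0.3])      # mixture weights over the quantization levels
sigma    = 0.01                           # component std; shrinking it concentrates weights on the levels

def sample_weight():
    """Draw one weight: pruned with prob. pi_spike, otherwise a noisy quantization level."""
    if rng.random() < pi_spike:
        return 0.0                                # spike: weight pruned
    k = rng.choice(len(levels), p=probs)          # slab: pick a GMM component
    return rng.normal(levels[k], sigma)           # sample near that quantization level

samples = np.array([sample_weight() for _ in range(10000)])
print("fraction pruned:", np.mean(samples == 0.0))

In this picture, the spike probability controls the sparsity level used for pruning, while assigning the surviving weights to their nearest mixture mean yields the low-bit quantized network.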

BibTeX

@article{ziyi_wang_2025,
  title={SQS: Bayesian DNN Compression through Sparse Quantized Sub-distributions},
  author={Ziyi Wang and Nan Jiang and Guang Lin and Qifan Song},
  journal={arXiv preprint},
  year={2025},
  url={https://github.com/comeusr/SQS_private}
}