SQS: Bayesian DNN Compression through Sparse Quantized Sub-distributions
Abstract
Compressing large-scale neural networks is essential for deploying models on resource-constrained devices. Most existing methods apply weight pruning or low-bit quantization in isolation, which often forces a trade-off between compression rate and acceptable performance loss. We introduce a unified framework for simultaneous pruning and low-bit quantization via Bayesian variational learning (SQS), which achieves higher compression rates than prior baselines while maintaining comparable performance. The key idea is to employ a spike-and-slab prior to induce sparsity and to model quantized weights with Gaussian Mixture Models (GMMs) to enable low-bit precision. On the theoretical side, we establish a consistency result showing that our variational approach converges to a sparse, quantized deep neural network. Extensive experiments on compressing ResNet, BERT-base, Llama3, and Qwen2.5 models show that our method achieves higher compression rates than a range of existing methods with comparable performance drops.
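To make the idea concrete, here is a minimal, purely illustrative sketch (not the paper's implementation) of the kind of variational family the abstract describes: each weight is gated by a relaxed spike-and-slab variable for pruning and softly assigned to a small set of Gaussian mixture centers for low-bit quantization. All names, shapes, and the logistic/concrete relaxation below are assumptions made for illustration.

# Illustrative sketch only (hypothetical names; not the SQS implementation):
# a relaxed spike-and-slab gate for pruning combined with a soft assignment
# to K Gaussian mixture centers for low-bit quantization.
import torch

def _sample_logistic(shape):
    # Logistic noise for the concrete-style relaxation of the binary gate.
    u = torch.rand(shape).clamp(1e-6, 1 - 1e-6)
    return torch.log(u) - torch.log(1 - u)

def sample_sparse_quantized_weight(theta, keep_logit, mu, log_sigma, tau=0.1):
    """theta:      full-precision weights (any shape)
       keep_logit: logit of the slab ("keep") probability, same shape as theta
       mu:         (K,) Gaussian component centers (the quantization levels)
       log_sigma:  (K,) log standard deviations of the components
       tau:        relaxation temperature
    """
    # Relaxed spike-and-slab gate: ~0 prunes the weight, ~1 keeps it.
    gate = torch.sigmoid((keep_logit + _sample_logistic(theta.shape)) / tau)

    # Responsibility of each Gaussian component for every weight.
    log_prob = -(theta.unsqueeze(-1) - mu) ** 2 / (2 * torch.exp(2 * log_sigma))
    resp = torch.softmax(log_prob / tau, dim=-1)          # shape (..., K)

    # Soft quantized value: responsibility-weighted average of the centers.
    return gate * (resp * mu).sum(dim=-1)

# Example usage with made-up shapes: four levels (2-bit) quantization.
theta = torch.randn(128, 64)
mu = torch.tensor([-0.4, -0.1, 0.1, 0.4])
w = sample_sparse_quantized_weight(theta, torch.zeros_like(theta), mu,
                                   torch.full((4,), -2.0))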
Figure 2: Impact of different window strategies. For the compressed weight distributions in the first attention layer of the Llama3.2-1B model, SQS with the outlier-aware window strategy (left) preserves the full-precision weight distribution better than the equal-sized window strategy (middle). The advantage is also visible in the left-tail region (right).
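For intuition about the two window strategies compared in Figure 2, the toy sketch below (my own illustration, not the paper's algorithm) contrasts equal-sized windows over the raw weight range with a simple outlier-aware partition that reserves dedicated windows for the distribution's tails; the tail fraction and quantile-based splitting are assumptions.

# Toy illustration of window partitioning (hypothetical; not the SQS code).
import numpy as np

def equal_windows(w, n_windows):
    # Split [min, max] into equally sized intervals.
    return np.linspace(w.min(), w.max(), n_windows + 1)

def outlier_aware_windows(w, n_windows, tail=0.01):
    # Reserve one window for each tail (the outliers) and split the bulk
    # of the distribution by quantiles rather than by raw range.
    inner = np.linspace(tail, 1.0 - tail, n_windows - 1)
    return np.quantile(w, np.concatenate(([0.0], inner, [1.0])))

w = np.random.randn(100_000)
print(equal_windows(w, 8))          # wide, sparsely populated tail windows
print(outlier_aware_windows(w, 8))  # tails get their own narrow windows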
Figure 3: Comparison of inference accuracy on the CIFAR-100 dataset using ResNet-18 (left) and ResNet-50 (right). With the same number of Gaussian components, SQS with Bayesian averaging incurs a smaller accuracy drop than the greedy approach.
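To illustrate the comparison in Figure 3, the sketch below (illustrative only; function names are hypothetical) contrasts a greedy rule that snaps each weight to its nearest Gaussian center with an averaging rule that weights the centers by their posterior responsibilities, the simple form of Bayesian averaging the caption refers to.

# Greedy vs. averaged quantization (hypothetical names, for intuition only).
import torch

def greedy_quantize(theta, mu):
    # Hard assignment: snap every weight to its nearest component center.
    idx = (theta.unsqueeze(-1) - mu).abs().argmin(dim=-1)
    return mu[idx]

def averaged_quantize(theta, mu, log_sigma):
    # Average the centers, weighting each by its posterior responsibility
    # under the Gaussian mixture.
    log_prob = -(theta.unsqueeze(-1) - mu) ** 2 / (2 * torch.exp(2 * log_sigma))
    resp = torch.softmax(log_prob, dim=-1)
    return (resp * mu).sum(dim=-1)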
BibTeX
@article{ziyi_wang_2025,
  title   = {{SQS}: {Bayesian} {DNN} Compression through Sparse Quantized Sub-distributions},
  author  = {Ziyi Wang and Nan Jiang and Guang Lin and Qifan Song},
  journal = {arXiv preprint},
  year    = {2025},
  url     = {https://github.com/comeusr/SQS_private}
}