Open Access

ARTICLE


A Novel Quantization and Model Compression Approach for Hardware Accelerators in Edge Computing

Fangzhou He1,3, Ke Ding1,2, Dingjiang Yan3, Jie Li3,*, Jiajun Wang1,2, Mingzhe Chen1,2

1 State Key Laboratory of Intelligent Vehicle Safety Technology, Chongqing, 401133, China
2 Foresight Technology Institute, Chongqing Changan Automobile Co., Ltd., Chongqing, 400023, China
3 School of Computer Science and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China

* Corresponding Author: Jie Li.

Computers, Materials & Continua 2024, 80(2), 3021-3045. https://doi.org/10.32604/cmc.2024.053632

Abstract

The massive computational complexity and memory requirements of artificial intelligence models impede their deployability on edge computing devices of the Internet of Things (IoT). While Power-of-Two (PoT) quantization has been proposed to improve the efficiency of edge inference for Deep Neural Networks (DNNs), existing PoT schemes require extensive bit-wise manipulation and incur large memory overhead, so their efficiency is bounded by computation latency and memory footprint. To tackle this challenge, we present an efficient inference approach based on PoT quantization and model compression. An integer-only scalar PoT quantization (IOS-PoT) is designed jointly with a distribution-loss regularizer that minimizes quantization errors and training disturbances. Additionally, a two-stage model compression scheme is developed to effectively reduce memory requirements and alleviate bandwidth usage in communications of networked heterogeneous learning systems. The product look-up table (P-LUT) inference scheme replaces bit-shifting with only indexing and addition operations, achieving low-latency computation and enabling efficient edge accelerators. Finally, comprehensive experiments on Residual Networks (ResNets) and efficient architectures with the Canadian Institute for Advanced Research (CIFAR), ImageNet, and Real-world Affective Faces Database (RAF-DB) datasets indicate that our approach achieves a 2×∼10× reduction in both weight size and computation cost compared to state-of-the-art methods. A P-LUT accelerator prototype is implemented on the Xilinx KV260 Field Programmable Gate Array (FPGA) platform for accelerating convolution operations; performance results show that P-LUT reduces memory footprint by 1.45× and achieves more than 3× higher power efficiency and 2× higher resource efficiency than the conventional bit-shifting scheme.
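To make the core mechanism concrete, the NumPy sketch below illustrates the general idea behind PoT quantization and P-LUT inference as summarized in the abstract: weights are rounded to signed powers of two, and the products of each activation with every possible power-of-two level are precomputed once, so a dot product reduces to table indexing and additions. The function names, the level count, and the |w| ≤ 1 assumption are illustrative choices for this sketch, not the paper's IOS-PoT formulation or its FPGA implementation.

```python
import numpy as np

# Illustrative sketch only: hypothetical helpers showing the general flavor
# of PoT quantization and a product look-up table (P-LUT), not the paper's
# IOS-PoT scheme or its hardware accelerator.

def pot_quantize(w, num_levels=8):
    """Map each weight to a signed power of two: w ~ sign(w) * 2^e.
    Assumes |w| <= 1, so exponents lie in [-(num_levels - 1), 0]."""
    sign = np.sign(w)
    exp = np.round(np.log2(np.abs(w) + 1e-12))
    exp = np.clip(exp, -(num_levels - 1), 0)
    return sign, exp.astype(int)

def plut_dot(x, sign, exp, num_levels=8):
    """Dot product with PoT weights via a product look-up table.
    The products x_i * 2^e are precomputed for every exponent level, so the
    accumulation loop needs only table indexing and addition (no multiplies
    beyond the one-off table build)."""
    levels = 2.0 ** np.arange(-(num_levels - 1), 1)   # [2^-7, ..., 2^0]
    table = np.outer(levels, x)                       # the P-LUT: (levels, len(x))
    rows = exp + (num_levels - 1)                     # exponent -> table row
    prods = table[rows, np.arange(len(x))]            # pure indexing
    return np.sum(np.where(sign < 0, -prods, prods))  # sign via add/subtract

# Example: the PoT approximation lands close to the exact dot product.
w = np.array([0.5, -0.25, 0.9, -0.06])
x = np.array([1.0, 2.0, 3.0, 4.0])
sign, exp = pot_quantize(w)
print(plut_dot(x, sign, exp), "vs exact", x @ w)
```

In hardware, reusing one small precomputed table in place of per-operand bit-shifters is what trades shift logic for memory lookups, which is consistent with the resource and power savings the abstract reports.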


Cite This Article

APA Style
He, F., Ding, K., Yan, D., Li, J., Wang, J. et al. (2024). A novel quantization and model compression approach for hardware accelerators in edge computing. Computers, Materials & Continua, 80(2), 3021-3045. https://doi.org/10.32604/cmc.2024.053632
Vancouver Style
He F, Ding K, Yan D, Li J, Wang J, Chen M. A novel quantization and model compression approach for hardware accelerators in edge computing. Comput Mater Contin. 2024;80(2):3021-3045. https://doi.org/10.32604/cmc.2024.053632
IEEE Style
F. He, K. Ding, D. Yan, J. Li, J. Wang, and M. Chen, “A Novel Quantization and Model Compression Approach for Hardware Accelerators in Edge Computing,” Comput. Mater. Contin., vol. 80, no. 2, pp. 3021-3045, 2024. https://doi.org/10.32604/cmc.2024.053632



Copyright © 2024 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.