Open Access

ARTICLE


A Resource-Efficient Convolutional Neural Network Accelerator Using Fine-Grained Logarithmic Quantization

by Hadee Madadum*, Yasar Becerikli

Department of Computer Engineering, Kocaeli University, Kocaeli, 41380, Turkey

* Corresponding Author: Hadee Madadum. Email: email

Intelligent Automation & Soft Computing 2022, 33(2), 681-695. https://doi.org/10.32604/iasc.2022.023831

Abstract

Convolutional Neural Network (ConNN) implementations on Field Programmable Gate Arrays (FPGAs) are being studied as the computational capabilities of FPGAs have improved in recent years. Model compression is required to deploy a ConNN on resource-constrained FPGA devices. Logarithmic quantization is an efficient compression method that can compress a model to a very low bit-width without significant deterioration in performance. It is also hardware-friendly because multiplication can be performed with bitwise operations. However, logarithmic quantization suffers from low resolution at large input values due to its exponential nature. We therefore propose a modified logarithmic quantization method with fine resolution to compress a neural network model. In experiments, the quantized models achieve a negligible loss of accuracy without any retraining steps. In addition, we propose a resource-efficient hardware accelerator for running ConNN inference. Our design replaces multipliers entirely with bit shifters and adders. Throughput is measured in Giga Operations Per Second (GOP/s), and hardware utilization efficiency is expressed as GOP/s per Digital Signal Processing (DSP) block and per thousand Look-Up Tables (kLUTs). The results show that the accelerator achieves a resource efficiency of 9.38 GOP/s/DSP and 3.33 GOP/s/kLUT.
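The core idea behind the hardware-friendliness claimed in the abstract, multiplying by a power-of-two weight via a bit shift, can be sketched as follows. This is an illustrative example of plain (not fine-grained) logarithmic quantization under assumed parameters; the function names and bit-width are hypothetical and not taken from the paper:

```python
import math

def log_quantize(w, bit_width=4):
    """Quantize a weight to the nearest power of two, sign preserved.

    Returns (sign, exp) so that the quantized value is sign * 2**exp.
    A sketch only: the paper's fine-grained variant refines the
    coarse resolution this plain scheme has at large magnitudes.
    """
    if w == 0:
        return 0, 0
    sign = 1 if w > 0 else -1
    exp = round(math.log2(abs(w)))
    # Clamp the exponent to the range representable in bit_width bits.
    max_exp, min_exp = 0, -(2 ** (bit_width - 1) - 1)
    exp = max(min_exp, min(max_exp, exp))
    return sign, exp

def shift_multiply(activation, sign, exp):
    """Multiply an integer activation by sign * 2**exp using only shifts.

    This replaces a hardware multiplier with a bit shifter, as the
    proposed accelerator does.
    """
    shifted = activation >> -exp if exp < 0 else activation << exp
    return sign * shifted

# Example: weight 0.3 quantizes to 2**-2 = 0.25, so multiplying an
# activation of 16 reduces to a right shift by 2, giving 4.
sign, exp = log_quantize(0.3)
product = shift_multiply(16, sign, exp)
```

Because every weight is stored only as a sign and an exponent, the multiply-accumulate units in the accelerator need no DSP multipliers, which is what drives the GOP/s/DSP figure reported above.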

Keywords


Cite This Article

APA Style
Madadum, H., & Becerikli, Y. (2022). A resource-efficient convolutional neural network accelerator using fine-grained logarithmic quantization. Intelligent Automation & Soft Computing, 33(2), 681-695. https://doi.org/10.32604/iasc.2022.023831
Vancouver Style
Madadum H, Becerikli Y. A resource-efficient convolutional neural network accelerator using fine-grained logarithmic quantization. Intell Automat Soft Comput. 2022;33(2):681-695. https://doi.org/10.32604/iasc.2022.023831
IEEE Style
H. Madadum and Y. Becerikli, “A Resource-Efficient Convolutional Neural Network Accelerator Using Fine-Grained Logarithmic Quantization,” Intell. Automat. Soft Comput., vol. 33, no. 2, pp. 681-695, 2022. https://doi.org/10.32604/iasc.2022.023831



Copyright © 2022 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.