Hadee Madadum*, Yasar Becerikli
Intelligent Automation & Soft Computing, Vol.33, No.2, pp. 681-695, 2022, DOI:10.32604/iasc.2022.023831
- 08 February 2022
Abstract Convolutional Neural Network (ConNN) implementations on Field Programmable Gate Arrays (FPGAs) are being studied since the computational capabilities of FPGAs have improved recently. Model compression is required to enable ConNN deployment on resource-constrained FPGA devices. Logarithmic quantization is an efficient compression method that can compress a model to very low bit-widths without significant deterioration in performance. It is also hardware-friendly, replacing multiplication with bitwise operations. However, logarithmic quantization suffers from low resolution for large-magnitude inputs due to the exponential spacing of its levels. Therefore, we propose a modified logarithmic quantization method with a fine resolution.
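To illustrate the idea behind the baseline technique the abstract describes, here is a minimal sketch of power-of-two (logarithmic) quantization in NumPy. All function names are hypothetical, and this is a generic illustration, not the paper's modified method: each weight is approximated as sign(w) * 2^e with a clipped integer exponent, so multiplication reduces to a bit shift (emulated here with `ldexp`). Note how the gap between representable values doubles at each step, which is the low-resolution-at-high-inputs problem the abstract mentions.

```python
import numpy as np

def log2_quantize(w, bits=4):
    """Approximate each weight as sign(w) * 2**e with an integer exponent.

    `bits` bounds the exponent range; levels are dense near zero and
    sparse at large magnitudes (e.g. ... 4, 8, 16 ...), which is the
    resolution issue logarithmic quantization has for high inputs.
    """
    sign = np.sign(w)
    # round log2 of the magnitude to the nearest integer exponent
    e = np.round(np.log2(np.abs(w) + 1e-12))
    # clip the exponent to the range representable in `bits` bits
    e = np.clip(e, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return sign, e.astype(int)

def shift_multiply(x, sign, e):
    """Multiply x by the quantized weight sign * 2**e.

    Multiplying by a power of two is a bit shift in integer hardware;
    np.ldexp(x, e) emulates x * 2**e here.
    """
    return sign * np.ldexp(x, e)
```

For example, a weight of 0.25 quantizes to sign = 1, exponent = -2, and multiplying an activation of 8.0 by it becomes a right shift by two positions, yielding 2.0.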