Open Access
ARTICLE
Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding
1 Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, School of Electrical Engineering, Northeast Electric Power University, Jilin, 132012, China
2 School of Electrical Engineering, Northeast Electric Power University, Jilin, 132012, China
3 School of Aeronautical Engineering, Jilin Institute of Chemical Technology, Jilin, 132022, China
* Corresponding Author: Wukai Liu. Email:
(This article belongs to the Special Issue: Machine Vision Detection and Intelligent Recognition)
Computers, Materials & Continua 2024, 79(1), 1441-1461. https://doi.org/10.32604/cmc.2024.048136
Received 28 November 2023; Accepted 08 March 2024; Issue published 25 April 2024
Abstract
A novel image fusion network framework with an autonomous encoder and decoder is proposed to improve the quality of infrared and visible image fusion and thereby enhance the visual impression of the fused images. The network comprises an encoder module, a fusion layer, a decoder module, and an edge enhancement module (EEM). The encoder uses an improved Inception module for shallow feature extraction and then combines Res2Net and Transformer blocks to jointly extract deep local and global features from the source images. The EEM is designed to extract salient edge features. A modal maximum difference fusion strategy is introduced to improve the adaptive representation of information in different regions of the source images, thereby increasing the contrast of the fused image. Features extracted by the encoder and the EEM are combined in the fusion layer, and the decoder reconstructs the fused image. The proposed algorithm was evaluated on three datasets. Experimental results demonstrate that the network effectively preserves background and detail information from both infrared and visible images, yielding superior results in both subjective and objective evaluations.
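The abstract's "modal maximum difference fusion strategy" can be illustrated with a minimal sketch. The function below is a hypothetical interpretation, not the paper's implementation: it weights each pixel toward the modality whose value deviates more from that modality's global mean, so that regions where one source image carries more distinctive information dominate the fused result. The function name, the plain nested-list representation, and the mean-deviation weighting are all assumptions made for illustration.

```python
def max_difference_fusion(ir, vis, eps=1e-8):
    """Illustrative pixelwise fusion of two 2-D images (nested lists of floats).

    Each fused pixel is a weighted average of the infrared (ir) and
    visible (vis) pixels; the weight favors the modality whose pixel
    deviates more from that modality's global mean intensity.
    """
    flatten = lambda img: [p for row in img for p in row]
    m_ir = sum(flatten(ir)) / len(flatten(ir))    # global mean of IR image
    m_vis = sum(flatten(vis)) / len(flatten(vis)) # global mean of visible image

    fused = []
    for row_ir, row_vis in zip(ir, vis):
        row = []
        for a, b in zip(row_ir, row_vis):
            d_a = abs(a - m_ir)   # IR deviation from its mean
            d_b = abs(b - m_vis)  # visible deviation from its mean
            w = d_a / (d_a + d_b + eps)  # weight toward the larger deviation
            row.append(w * a + (1.0 - w) * b)
        fused.append(row)
    return fused
```

Because each output pixel is a convex combination of the two input pixels, the fused value always lies between the corresponding infrared and visible values, which keeps intensities in a valid range without clipping.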
This work is licensed under a Creative Commons Attribution 4.0 International License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.