Search Results (6)
  • Open Access

    ARTICLE

    BDPartNet: Feature Decoupling and Reconstruction Fusion Network for Infrared and Visible Image

    Xuejie Wang1, Jianxun Zhang1,*, Ye Tao2, Xiaoli Yuan1, Yifan Guo1

    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 4621-4639, 2024, DOI:10.32604/cmc.2024.051556 - 20 June 2024

    Abstract While single-modal visible light images or infrared images provide limited information, infrared imaging captures significant thermal radiation data, whereas visible light excels at presenting detailed texture information. Combining images from both modalities leverages their respective strengths and mitigates their individual limitations, yielding high-quality images with enhanced contrast and rich texture detail. Such capabilities hold promise for advanced visual tasks including target detection, instance segmentation, military surveillance, and pedestrian detection. This paper introduces a novel approach, a dual-branch decomposition fusion network based on AutoEncoder (AE), which decomposes multi-modal features into intensity…

  • Open Access

    ARTICLE

    Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding

    Chunming Wu1, Wukai Liu2,*, Xin Ma3

    CMC-Computers, Materials & Continua, Vol.79, No.1, pp. 1441-1461, 2024, DOI:10.32604/cmc.2024.048136 - 25 April 2024

    Abstract A novel image fusion network framework with an autonomous encoder and decoder is proposed to increase the visual impression of fused images by improving the quality of infrared and visible image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module uses an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy…

  • Open Access

    ARTICLE

    Fusion of Infrared and Visible Images Using Fuzzy Based Siamese Convolutional Network

    Kanika Bhalla1, Deepika Koundal2,*, Surbhi Bhatia3, Mohammad Khalid Imam Rahmani4, Muhammad Tahir4

    CMC-Computers, Materials & Continua, Vol.70, No.3, pp. 5503-5518, 2022, DOI:10.32604/cmc.2022.021125 - 11 October 2021

    Abstract Traditional image fusion techniques struggle to integrate complementary or heterogeneous infrared (IR)/visible (VS) images. The dissimilarities among the various kinds of features in these images are vital to preserve in the single fused image, so preserving both aspects simultaneously is a challenging task. Moreover, most existing methods rely on manual feature extraction and complicated manually designed fusion rules, which result in blurry artifacts in the fused image. Therefore, this study proposes a hybrid algorithm for integrating multiple features from two heterogeneous images.…

  • Open Access

    ARTICLE

    Facial Expression Recognition Based on the Fusion of Infrared and Visible Image

    Jiancheng Zou1, Jiaxin Li1,*, Juncun Wei1, Zhengzheng Li1, Xin Yang2

    Journal on Artificial Intelligence, Vol.3, No.3, pp. 123-134, 2021, DOI:10.32604/jai.2021.027069 - 25 January 2022

    Abstract Facial expression recognition is a research hot spot in the fields of computer vision and pattern recognition. However, existing facial expression recognition models are mainly developed for the visible light environment; they have insufficient generalization ability and low recognition accuracy, and are vulnerable to environmental changes such as illumination and distance. To solve these problems, we combine the advantages of infrared and visible images captured simultaneously by an array device we developed with two infrared and two visible lenses, so that the fused image not only has the texture information of visible…

  • Open Access

    ARTICLE

    Infrared and Visible Image Fusion Based on NSST and RDN

    Peizhou Yan1, Jiancheng Zou2,*, Zhengzheng Li1, Xin Yang3

    Intelligent Automation & Soft Computing, Vol.28, No.1, pp. 213-225, 2021, DOI:10.32604/iasc.2021.016201 - 17 March 2021

    Abstract Within the application of driving assistance systems, the detection of the driver's facial features in the cab across a spectrum of luminosities is mission critical. One method that addresses this concern is infrared and visible image fusion. Its purpose is to generate an aggregate image which can granularly and systematically illustrate scene details in a range of lighting conditions. Our study introduces a novel approach to this method with marked improvements. We utilize the non-subsampled shearlet transform (NSST) to obtain the low and high frequency sub-bands of infrared and visible imagery. For the low frequency sub-band fusion,…
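    The decompose–fuse–reconstruct pipeline this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: a simple box blur stands in for the NSST decomposition (real NSST produces multi-scale directional sub-bands), and the fusion rules (mean for low frequency, max-absolute for high frequency) are generic textbook choices, not the ones proposed in the paper.

    ```python
    import numpy as np

    def split_bands(img, ksize=5):
        """Crude low/high frequency split via a box blur
        (a stand-in for the NSST decomposition)."""
        pad = ksize // 2
        padded = np.pad(img, pad, mode="edge")
        # Box blur = local mean -> low-frequency sub-band.
        low = np.zeros_like(img, dtype=float)
        for dy in range(ksize):
            for dx in range(ksize):
                low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        low /= ksize * ksize
        high = img - low  # residual = high-frequency sub-band
        return low, high

    def fuse(ir, vis):
        """Average the low-frequency bands (overall brightness/energy),
        keep the larger-magnitude coefficient in the high-frequency
        bands (edges and texture), then reconstruct by summation."""
        ir_lo, ir_hi = split_bands(ir.astype(float))
        vis_lo, vis_hi = split_bands(vis.astype(float))
        lo = 0.5 * (ir_lo + vis_lo)
        hi = np.where(np.abs(ir_hi) >= np.abs(vis_hi), ir_hi, vis_hi)
        return lo + hi
    ```

    Because reconstruction is just low + high, fusing an image with itself returns the image unchanged, which is a handy sanity check for any decomposition-based fusion rule.
    
    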

  • Open Access

    ARTICLE

    Intelligent Fusion of Infrared and Visible Image Data Based on Convolutional Sparse Representation and Improved Pulse-Coupled Neural Network

    Jingming Xia1, Yi Lu1, Ling Tan2,*, Ping Jiang3

    CMC-Computers, Materials & Continua, Vol.67, No.1, pp. 613-624, 2021, DOI:10.32604/cmc.2021.013457 - 12 January 2021

    Abstract Multi-source information can be obtained through the fusion of infrared images and visible light images, whose information is complementary. However, existing methods for producing fusion images suffer from disadvantages such as blurred edges, low contrast, and loss of detail. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency sub-bands using the non-subsampled shearlet transform (NSST). The low-frequency sub-bands are fused by convolutional sparse representation (CSR), and the high-frequency sub-bands are fused by an improved pulse…
