Source Camera Identification Algorithm Based on Multi-Scale Feature Fusion
1 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
2 Shangyu Institute of Science and Engineering, Hangzhou Dianzi University, Shaoxing, 312300, China
3 Key Laboratory of Public Security Information Application Based on Big-Data Architecture, Ministry of Public Security, Zhejiang Police College, Hangzhou, 310000, China
4 Faculty of Artificial Intelligence, Menoufia University, Shebin El-Koom, 32511, Egypt
* Corresponding Author: Mahmoud Emam. Email:
Computers, Materials & Continua 2024, 80(2), 3047-3065. https://doi.org/10.32604/cmc.2024.053680
Received 07 May 2024; Accepted 15 July 2024; Issue published 15 August 2024
Abstract
The widespread availability of digital multimedia data has led to a new challenge in digital forensics. Traditional source camera identification algorithms usually rely on various traces left during the capturing process. However, these traces have become increasingly difficult to extract due to the wide availability of various image processing algorithms. Convolutional Neural Network (CNN)-based algorithms have demonstrated good discriminative capabilities for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, which results in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps of different scales and then fuses them to obtain a comprehensive feature representation. This representation is then fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer Blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
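To make the fusion idea described in the abstract concrete, the following is a minimal sketch, assuming a PyTorch implementation: feature maps from several backbone stages (dimensions chosen to match typical Swin-T stage outputs) are projected to a common size, treated as graph nodes, and passed through a simple graph-convolution step before classification. The module names, the learnable adjacency, and the fusion-by-summation strategy are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch only: multi-scale feature fusion with a toy GCN step.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """Propagate node features with a row-normalized adjacency matrix,
    then apply a linear projection (a basic graph-convolution step)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, N, in_dim); adj: (N, N), assumed row-normalized
        return torch.relu(self.proj(adj @ x))


class MultiScaleFusion(nn.Module):
    """Project feature maps from several backbone stages to a common shape,
    treat the spatial positions as graph nodes, fuse them, and classify."""
    def __init__(self, stage_dims=(96, 192, 384, 768), fused_dim=256,
                 num_devices=10, grid=7):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(d, fused_dim, kernel_size=1) for d in stage_dims])
        self.pool = nn.AdaptiveAvgPool2d(grid)          # common grid x grid map
        self.gcn = SimpleGCNLayer(fused_dim, fused_dim)
        self.adj = nn.Parameter(torch.eye(grid * grid)) # learnable adjacency (toy choice)
        self.head = nn.Linear(fused_dim, num_devices)   # candidate source devices

    def forward(self, stage_feats):
        # stage_feats: list of (B, C_i, H_i, W_i) maps from the backbone stages
        fused = sum(self.pool(conv(f)) for conv, f in zip(self.reduce, stage_feats))
        nodes = fused.flatten(2).transpose(1, 2)        # (B, H*W, C) graph nodes
        nodes = self.gcn(nodes, torch.softmax(self.adj, dim=-1))
        return self.head(nodes.mean(dim=1))             # per-device logits
```

In this toy setup the fused representation plays the role of the comprehensive feature fed to the camera fingerprint classification network; the actual method additionally uses Transformer Blocks within the Swin-T backbone, which are omitted here for brevity.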
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.