Search Results (25)
  • Open Access

    REVIEW

    From Identification to Obfuscation: A Survey of Cross-Network Mapping and Anti-Mapping Methods

    Shaojie Min1, Yaxiao Luo1, Kebing Liu1, Qingyuan Gong2, Yang Chen1,*

    CMC-Computers, Materials & Continua, Vol.86, No.2, pp. 1-23, 2026, DOI:10.32604/cmc.2025.073175 - 09 December 2025

    Abstract User identity linkage (UIL) across online social networks seeks to match accounts belonging to the same real-world individual. This cross-platform mapping enables accurate user modeling but also raises serious privacy risks. Over the past decade, the research community has developed a wide range of UIL methods, from structural embeddings to multimodal fusion architectures. However, corresponding adversarial and defensive approaches remain fragmented and comparatively understudied. In this survey, we provide a unified overview of both mapping and anti-mapping methods for UIL. We categorize representative mapping models by learning paradigm and data modality, and systematically compare them… More >

  • Open Access

    ARTICLE

    Gradient-Guided Assembly Instruction Relocation for Adversarial Attacks Against Binary Code Similarity Detection

    Ran Wei*, Hui Shu

    CMC-Computers, Materials & Continua, Vol.86, No.1, pp. 1-23, 2026, DOI:10.32604/cmc.2025.069562 - 10 November 2025

    Abstract Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incur high computational costs and offer limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since token IDs of instructions are… More >
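
    The abstract describes splitting the detection model into tokenization, embedding, and encoding layers so that gradients can be computed despite discrete instruction token IDs. Below is a minimal sketch of that gradient-guided scoring idea; the model split mirrors the abstract, but the function names, shapes, and the use of a per-instruction gradient norm are assumptions for illustration, not AIMA's actual procedure.

```python
# Hypothetical sketch: score instructions by the gradient of a BCSD
# similarity score with respect to their embeddings. The tokenize ->
# embed -> encode split follows the abstract; everything else (names,
# shapes, the encoder itself) is assumed for illustration.
import torch

def score_instructions(embed_layer, encoder, token_ids, target_vec):
    """Return one saliency score per instruction token."""
    emb = embed_layer(token_ids)            # (seq_len, dim), differentiable
    emb.retain_grad()                       # keep gradients on this non-leaf tensor
    func_vec = encoder(emb.unsqueeze(0)).squeeze(0)   # pooled function embedding
    sim = torch.cosine_similarity(func_vec, target_vec, dim=0)
    sim.backward()                          # gradient of similarity w.r.t. each embedding
    # Instructions with large gradient norms influence the match the most,
    # making them natural candidates for relocation.
    return emb.grad.norm(dim=-1)
```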

  • Open Access

    ARTICLE

    DSGNN: Dual-Shield Defense for Robust Graph Neural Networks

    Xiaohan Chen1, Yuanfang Chen1,*, Gyu Myoung Lee2, Noel Crespi3, Pierluigi Siano4

    CMC-Computers, Materials & Continua, Vol.85, No.1, pp. 1733-1750, 2025, DOI:10.32604/cmc.2025.067284 - 29 August 2025

    Abstract Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. The expansion of their application scope imposes higher requirements on the robustness of GNNs. However, as GNNs are applied to more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that simultaneously affect both structures and features, which are significantly more challenging than isolated attacks. These disruptions, caused… More >
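
    As a point of reference for the joint perturbations the abstract mentions, the toy sketch below applies a small structural change (edge flips) and a bounded feature change to the same graph. The graph, budgets, and noise scale are arbitrary and unrelated to DSGNN itself.

```python
# Toy illustration of a joint perturbation on graph data: flip a few edges
# (structure) and add bounded noise to node features. All values are
# arbitrary and serve only to show the combined threat model.
import numpy as np

rng = np.random.default_rng(0)
n = 6
adj = (rng.random((n, n)) < 0.3).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                      # undirected graph, no self-loops
feats = rng.normal(size=(n, 4))        # node feature matrix

# Structure perturbation: flip a small budget of node pairs
for _ in range(2):
    i, j = rng.choice(n, size=2, replace=False)
    adj[i, j] = adj[j, i] = 1.0 - adj[i, j]

# Feature perturbation: small additive noise on every node
feats_adv = feats + 0.05 * rng.normal(size=feats.shape)
```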

  • Open Access

    ARTICLE

    Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing

    Hyeong-Gyeong Kim1, Sang-Min Choi2, Hyeon Seo2, Suwon Lee2,*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 4381-4397, 2025, DOI:10.32604/cmc.2025.067024 - 30 July 2025

    Abstract Adversarial attacks pose a significant threat to artificial intelligence systems by exposing vulnerabilities in deep learning models. Existing defense mechanisms often suffer from drawbacks such as the need for model retraining, significant inference-time overhead, and limited effectiveness against specific attack types. Achieving perfect defense against adversarial attacks remains elusive, emphasizing the importance of mitigation strategies. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the image is randomly cropped to vary its dimensions and then placed… More >
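
    A minimal sketch of the input-transformation defense outlined above: randomly crop the image, resize it back to its original dimensions, and smooth it with a Gaussian filter. The crop ratio and blur radius below are illustrative choices, not the paper's settings.

```python
# Sketch of a randomized crop-and-smooth preprocessing defense.
# Crop ratio and blur radius are illustrative assumptions.
from PIL import Image, ImageFilter
import random

def randomized_smooth(img: Image.Image, min_ratio: float = 0.85) -> Image.Image:
    w, h = img.size
    ratio = random.uniform(min_ratio, 1.0)
    cw, ch = int(w * ratio), int(h * ratio)
    left = random.randint(0, w - cw)
    top = random.randint(0, h - ch)
    cropped = img.crop((left, top, left + cw, top + ch))
    resized = cropped.resize((w, h))          # restore original dimensions
    return resized.filter(ImageFilter.GaussianBlur(radius=1))
```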

  • Open Access

    ARTICLE

    DEMGAN: A Machine Learning-Based Intrusion Detection System Evasion Scheme

    Dawei Xu1,2,3, Yue Lv1, Min Wang1, Baokun Zheng4,*, Jian Zhao1,3, Jiaxuan Yu5

    CMC-Computers, Materials & Continua, Vol.84, No.1, pp. 1731-1746, 2025, DOI:10.32604/cmc.2025.064833 - 09 June 2025

    Abstract Network intrusion detection systems (IDS) are a prevalent method for safeguarding network traffic against attacks. However, existing IDS primarily depend on machine learning (ML) models, which are vulnerable to evasion through adversarial examples. In recent years, the Wasserstein Generative Adversarial Network (WGAN), based on Wasserstein distance, has been extensively utilized to generate adversarial examples. Nevertheless, several challenges persist: (1) WGAN experiences the mode collapse problem when generating multi-category network traffic data, leading to subpar quality and insufficient diversity in the generated data; (2) Due to unstable training processes, the authenticity of the data produced by… More >
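
    For context, the sketch below shows the standard WGAN critic and generator losses built on the Wasserstein distance that the abstract refers to. DEMGAN's specific modifications for multi-category traffic data are not reproduced, and the critic network and data here are placeholders.

```python
# Standard WGAN losses on which Wasserstein-distance GANs are built.
# The critic and data below are placeholders for illustration only.
import torch

def critic_loss(critic, real, fake):
    # Critic maximizes E[D(real)] - E[D(fake)]; we minimize the negative.
    return -(critic(real).mean() - critic(fake).mean())

def generator_loss(critic, fake):
    # Generator tries to raise the critic's score on generated samples.
    return -critic(fake).mean()

# Tiny usage example with a linear critic and random stand-in data.
critic = torch.nn.Linear(10, 1)
real, fake = torch.randn(32, 10), torch.randn(32, 10)
loss = critic_loss(critic, real, fake)
```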

  • Open Access

    ARTICLE

    Improving Security-Sensitive Deep Learning Models through Adversarial Training and Hybrid Defense Mechanisms

    Xuezhi Wen1, Eric Danso2,*, Solomon Danso2

    Journal of Cyber Security, Vol.7, pp. 45-69, 2025, DOI:10.32604/jcs.2025.063606 - 08 May 2025

    Abstract Deep learning models have achieved remarkable success in healthcare, finance, and autonomous systems, yet their security vulnerabilities to adversarial attacks remain a critical challenge. This paper presents a novel dual-phase defense framework that combines progressive adversarial training with dynamic runtime protection to address evolving threats. Our approach introduces three key innovations: multi-stage adversarial training with TRADES (Tradeoff-inspired Adversarial Defense via Surrogate-loss minimization) loss that progressively scales perturbation strength, maintaining 85.10% clean accuracy on CIFAR-10 (Canadian Institute for Advanced Research 10-class dataset) while improving robustness; a hybrid runtime defense integrating feature manipulation, statistical anomaly detection, and… More >
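
    Below is a hedged sketch of a TRADES-style training step with a perturbation budget that grows across epochs, in the spirit of the progressive scaling described above. The schedule, beta, and PGD settings are illustrative assumptions rather than the paper's values.

```python
# Sketch of a TRADES-style step: clean cross-entropy plus a KL robustness
# term on PGD-perturbed inputs, with an epsilon that ramps up over epochs.
# Schedule, beta, and PGD settings are assumptions for illustration.
import torch
import torch.nn.functional as F

def trades_step(model, x, y, epoch, max_epochs, eps_max=8/255, beta=6.0, steps=10):
    eps = eps_max * min(1.0, (epoch + 1) / max_epochs)   # progressive perturbation budget
    step_size = 2.5 * eps / steps
    clean_logits = model(x)
    x_adv = (x + eps * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1)
    for _ in range(steps):                               # inner maximization (PGD on KL)
        x_adv = x_adv.detach().requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(clean_logits.detach(), dim=1),
                      reduction="batchmean")
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = (x_adv + step_size * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    x_adv = x_adv.detach()
    robust_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                         F.softmax(clean_logits, dim=1), reduction="batchmean")
    return F.cross_entropy(clean_logits, y) + beta * robust_kl   # TRADES trade-off loss
```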

  • Open Access

    ARTICLE

    Enhancing Adversarial Example Transferability via Regularized Constrained Feature Layer

    Xiaoyin Yi1,2, Long Chen1,3,4,*, Jiacheng Huang1, Ning Yu1, Qian Huang5

    CMC-Computers, Materials & Continua, Vol.83, No.1, pp. 157-175, 2025, DOI:10.32604/cmc.2025.059863 - 26 March 2025

    Abstract Transfer-based Adversarial Attacks (TAAs) can deceive a victim model even without prior knowledge of it. This is achieved by leveraging a key property of adversarial examples: when generated from a surrogate model, they remain effective against other models owing to their good transferability. However, adversarial examples often exhibit overfitting, as they are tailored to exploit the particular architecture and feature representation of source models. Consequently, when attempting black-box transfer attacks on different target models, their effectiveness decreases. To solve this problem, this study proposes an approach based on a Regularized Constrained Feature More >
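
    A hedged sketch of the general feature-layer idea follows: perturb the input so that its intermediate features on a surrogate model drift away from the clean features, which tends to reduce overfitting to the surrogate's decision layer and improve black-box transferability. The layer choice, loss, and step schedule are assumptions; the paper's specific regularized constraint is not reproduced.

```python
# Generic feature-space transfer attack sketch: maximize the distance
# between adversarial and clean intermediate features of a surrogate.
# surrogate_feat, eps, and the step schedule are illustrative assumptions.
import torch

def feature_attack(surrogate_feat, x, eps=8/255, steps=10):
    """surrogate_feat maps an image batch to an intermediate feature map."""
    step = eps / steps
    with torch.no_grad():
        clean_feat = surrogate_feat(x)
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        # Feature-space distortion to maximize
        loss = (surrogate_feat(x_adv) - clean_feat).pow(2).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + step * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv.detach()
```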

  • Open Access

    ARTICLE

    Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition

    Donghyeok Park1, Sumin Yeon2, Hyeon Seo2, Seok-Jun Buu2, Suwon Lee2,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.142, No.3, pp. 2725-2737, 2025, DOI:10.32604/cmes.2025.061732 - 03 March 2025

    Abstract Recent research on adversarial attacks has primarily focused on white-box attack techniques, with limited exploration of black-box attack methods. Furthermore, in many black-box research scenarios, it is assumed that the output label and probability distribution can be observed without imposing any constraints on the number of attack attempts. Unfortunately, this disregard for the real-world practicality of attacks, particularly their potential for human detectability, has left a gap in the research landscape. Considering these limitations, our study focuses on using a similar color attack method, assuming access only to the output label, limiting the number of More >
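
    An illustrative sketch of a label-only, query-budgeted attack in the spirit described above: propose small per-channel color shifts and accept a candidate only if the returned label changes, stopping once the query budget is spent. The query interface, shift range, and budget are assumptions, not the paper's method.

```python
# Label-only, query-limited attack sketch. query_label is a hypothetical
# callable returning only the predicted class; budget and shift range are
# illustrative assumptions.
import numpy as np

def label_only_color_attack(query_label, image, true_label, budget=100, shift=8):
    rng = np.random.default_rng(0)
    for _ in range(budget):                              # hard cap on queries
        delta = rng.integers(-shift, shift + 1, size=3)  # per-channel color shift
        candidate = np.clip(image.astype(int) + delta, 0, 255).astype(np.uint8)
        if query_label(candidate) != true_label:
            return candidate                             # misclassified within budget
    return None                                          # no success within budget
```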

  • Open Access

    ARTICLE

    Secure Channel Estimation Using Norm Estimation Model for 5G Next Generation Wireless Networks

    Khalil Ullah1,*, Song Jian1, Muhammad Naeem Ul Hassan1, Suliman Khan2, Mohammad Babar3,*, Arshad Ahmad4, Shafiq Ahmad5

    CMC-Computers, Materials & Continua, Vol.82, No.1, pp. 1151-1169, 2025, DOI:10.32604/cmc.2024.057328 - 03 January 2025

    Abstract The emergence of next generation networks (NextG), including 5G and beyond, is reshaping the technological landscape of cellular and mobile networks. These networks are sufficiently scaled to interconnect billions of users and devices. Researchers in academia and industry are focusing on technological advancements to achieve high-speed transmission, cell planning, and latency reduction to facilitate emerging applications such as virtual reality, the metaverse, smart cities, smart health, and autonomous vehicles. NextG continuously improves its network functionality to support these applications. Multiple input multiple output (MIMO) technology offers spectral efficiency, dependability, and overall performance in conjunction with More >

  • Open Access

    ARTICLE

    Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection

    Chengsheng Yuan1,2, Baojie Cui1,2, Zhili Zhou3, Xinting Li4,*, Qingming Jonathan Wu5

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 899-914, 2024, DOI:10.32604/cmc.2023.045854 - 30 January 2024

    Abstract In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image, causing the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of adversarial attacks. In addition, the perturbation added… More >
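
    For context, the sketch below shows the general variance-tuned gradient idea behind attacks that try to escape poor local optima and improve transferability. The sampling radius, sample count, and loss are illustrative, and the paper's local adaptive variant is not reproduced.

```python
# Sketch of a variance-tuned gradient: correct the current gradient with
# the average gradient from a sampled neighborhood, a common way to avoid
# poor local optima in transfer attacks. All settings are assumptions.
import torch
import torch.nn.functional as F

def variance_tuned_grad(model, x_adv, y, eps, n_samples=5):
    x_adv = x_adv.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # Estimate the gradient variance from points around x_adv
    neighbor_grads = []
    for _ in range(n_samples):
        x_n = x_adv + eps * torch.empty_like(x_adv).uniform_(-1.5, 1.5)
        x_n = x_n.detach().requires_grad_(True)
        loss_n = F.cross_entropy(model(x_n), y)
        neighbor_grads.append(torch.autograd.grad(loss_n, x_n)[0])
    variance = torch.stack(neighbor_grads).mean(0) - grad
    return grad + variance       # stabilized direction fed to the usual momentum update
```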

Displaying results 1-10 of 25 on page 1.