Search Results (16)
  • Open Access

    ARTICLE

    Local Adaptive Gradient Variance Attack for Deep Fake Fingerprint Detection

    Chengsheng Yuan1,2, Baojie Cui1,2, Zhili Zhou3, Xinting Li4,*, Qingming Jonathan Wu5

    CMC-Computers, Materials & Continua, Vol.78, No.1, pp. 899-914, 2024, DOI:10.32604/cmc.2023.045854 - 30 January 2024

    Abstract In recent years, deep learning has been the mainstream technology for fingerprint liveness detection (FLD) tasks because of its remarkable performance. However, recent studies have shown that these deep fake fingerprint detection (DFFD) models are not resistant to attacks by adversarial examples, which are generated by introducing subtle perturbations into the fingerprint image, causing the model to make false judgments. Most existing adversarial example generation methods are based on gradient optimization, which easily falls into local optima, resulting in poor transferability of adversarial attacks. In addition, the perturbation added…
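As an aside, the gradient-based perturbation idea the abstract critiques can be sketched in a few lines. The toy numpy example below applies a one-step gradient-sign (FGSM-style) attack to a logistic-regression scorer; the scorer, weights, and epsilon are illustrative assumptions, not the paper's LAGV method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step gradient-sign attack on a logistic-regression scorer.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; stepping in its sign direction
    increases the loss, pushing the scorer toward a wrong decision,
    while clipping keeps the result a valid image.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy "live fingerprint" feature vector and scorer weights (illustrative only).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = np.clip(rng.normal(0.5, 0.1, size=16), 0.0, 1.0)
y = 1.0  # true label: live

x_adv = fgsm_perturb(x, w, b, y, eps=0.05)
# The perturbation is bounded by eps, yet the "live" score drops.
```

A single-step attack like this is exactly the kind that transfers poorly between models, which is the weakness the paper's locally adaptive variant targets.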

  • Open Access

    ARTICLE

    Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study

    Shahad Alzahrani1, Hatim Alsuwat2, Emad Alsuwat3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.2, pp. 1635-1654, 2024, DOI:10.32604/cmes.2023.044718 - 29 January 2024

    Abstract Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework…
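For intuition, the simplest form of poisoning detection on an incoming data stream is a distribution-shift check against a trusted baseline. The sketch below compares empirical distributions of a binary variable with total-variation distance; it is a generic illustration with made-up counts, not the latent-variable framework the paper proposes.

```python
import numpy as np

def tv_shift(baseline_counts, batch_counts):
    """Total-variation distance between two empirical distributions.

    A large distance between the trusted baseline and an incoming
    batch flags records that may have been injected by a poisoning
    attack before they reach the structure-learning algorithm.
    """
    p = baseline_counts / baseline_counts.sum()
    q = batch_counts / batch_counts.sum()
    return 0.5 * np.abs(p - q).sum()

baseline = np.array([480, 520])     # clean history: roughly balanced variable
clean_batch = np.array([47, 53])    # new batch, consistent with history
poisoned_batch = np.array([95, 5])  # injected records skew the distribution

shift_clean = tv_shift(baseline, clean_batch)
shift_poisoned = tv_shift(baseline, poisoned_batch)
```

A batch would be quarantined when the shift exceeds a threshold calibrated on clean data; per-variable checks like this are cheap but blind to attacks that preserve marginals, which is where model-aware detectors earn their keep.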

  • Open Access

    ARTICLE

    VeriFace: Defending against Adversarial Attacks in Face Verification Systems

    Awny Sayed1, Sohair Kinlany2, Alaa Zaki2, Ahmed Mahfouz2,3,*

    CMC-Computers, Materials & Continua, Vol.76, No.3, pp. 3151-3166, 2023, DOI:10.32604/cmc.2023.040256 - 08 October 2023

    Abstract Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms, adversarial detection,…

  • Open Access

    ARTICLE

    Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform

    Bhawna Goyal1,*, Ayush Dogra2, Rahul Khoond1, Dawa Chyophel Lepcha1, Vishal Goyal3, Steven L. Fernandes4

    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 311-327, 2023, DOI:10.32604/cmc.2023.038398 - 08 June 2023

    Abstract The synthesis of visual information from multiple medical imaging inputs into a single fused image without any loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fusing multimodal medical images utilizing anisotropic diffusion (AD) and non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into their base and detail layers to coarsely…
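The base/detail decomposition step mentioned in the abstract can be illustrated with classic Perona-Malik anisotropic diffusion: the diffused image is the base layer, and what diffusion removed is the detail layer. This is a minimal sketch with assumed parameters (kappa, gamma, iteration count) and wrap-around borders for brevity, not the paper's exact pipeline.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, gamma=0.2):
    """Perona-Malik anisotropic diffusion.

    Smooths homogeneous regions while the conductance term
    g = exp(-(|grad| / kappa)^2) suppresses diffusion across strong
    edges, so edges survive in the base layer.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (wrap-around border).
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Base/detail split: base = diffused image, detail = what diffusion removed.
img = np.random.default_rng(1).random((32, 32))
base = perona_malik(img)
detail = img - base
```

In a fusion pipeline, base layers from the two modalities would then be merged (here, via NSCT coefficients) while detail layers are combined with a rule that keeps the strongest features from each input.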

  • Open Access

    ARTICLE

    Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks

    Amitoj Bir Singh1, Lalit Kumar Awasthi1, Urvashi1, Mohammad Shorfuzzaman2, Abdulmajeed Alsufyani2, Mueen Uddin3,*

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2541-2555, 2023, DOI:10.32604/cmc.2023.032795 - 31 October 2022

    Abstract Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, yet they still lead to misclassification of the image. Researchers have demonstrated these attacks by making production self-driving cars misclassify Stop signs as 45 Miles Per Hour (MPH) speed-limit signs and a turtle as an AK-47. Three primary types of defense approaches can safeguard against such attacks, i.e., Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial…

  • Open Access

    ARTICLE

    Classification of Adversarial Attacks Using Ensemble Clustering Approach

    Pongsakorn Tatongjai1, Tossapon Boongoen2,*, Natthakan Iam-On2, Nitin Naik3, Longzhi Yang4

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2479-2498, 2023, DOI:10.32604/cmc.2023.024858 - 31 October 2022

    Abstract As more business transactions and information services are implemented via communication networks, both personal and organizational assets face a higher risk of attack. To safeguard these, a perimeter defence like a NIDS (network-based intrusion detection system) can be effective against known intrusions. The joint community of security and data science has paid a great deal of attention to improving machine-learning-based NIDS so that it remains accurate under adversarial attacks, where obfuscation techniques are applied to disguise patterns of intrusive traffic. The current research focuses on non-payload connections at the TCP (transmission…

  • Open Access

    ARTICLE

    Defending Adversarial Examples by a Clipped Residual U-Net Model

    Kazim Ali1,*, Adnan N. Qureshi1, Muhammad Shahid Bhatti2, Abid Sohail2, Mohammad Hijji3

    Intelligent Automation & Soft Computing, Vol.35, No.2, pp. 2237-2256, 2023, DOI:10.32604/iasc.2023.028810 - 19 July 2022

    Abstract Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are at risk in the presence of adversarial attacks. These attacks can quickly compromise deep learning models, e.g., the various convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet defense, Generative Adversarial Network Defense, Deep Regret Analytic Generative…
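The defense pattern behind residual denoisers like this can be reduced to three steps: predict the adversarial noise, subtract it, and clip back to the valid pixel range before classification. The sketch below uses an oracle stand-in for the residual predictor, so recovery is exact; a trained U-Net would only approximate it, and nothing here is the paper's CRU-Net architecture.

```python
import numpy as np

def residual_denoise(x_adv, predict_residual, clip_lo=0.0, clip_hi=1.0):
    """Residual-denoising defense: a model predicts the adversarial
    noise, the noise is subtracted, and the result is clipped back to
    the valid pixel range before it reaches the classifier."""
    residual = predict_residual(x_adv)
    return np.clip(x_adv - residual, clip_lo, clip_hi)

# Oracle "model": the residual is known exactly, so the clean image
# is recovered perfectly (illustrative only).
rng = np.random.default_rng(2)
x_clean = rng.random((8, 8))
noise = 0.03 * rng.standard_normal((8, 8))
x_adv = np.clip(x_clean + noise, 0.0, 1.0)

recovered = residual_denoise(x_adv, lambda x: x - x_clean)
```

The final clipping step matters in practice: without it, an imperfect residual prediction can push pixels outside the range the downstream classifier was trained on.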

  • Open Access

    ARTICLE

    An Optimised Defensive Technique to Recognize Adversarial Iris Images Using Curvelet Transform

    K. Meenakshi1,*, G. Maragatham2

    Intelligent Automation & Soft Computing, Vol.35, No.1, pp. 627-643, 2023, DOI:10.32604/iasc.2023.026961 - 06 June 2022

    Abstract Deep learning is one of the most popular computer science techniques, with applications in natural language processing, image processing, pattern identification, and various other fields. Despite the success of deep learning algorithms in scenarios such as spam detection, malware detection, object detection and tracking, face recognition, and automated driving, these algorithms and their associated training data are vulnerable to numerous security threats, which ultimately result in significant performance degradation. Moreover, supervised learning models are affected by manipulated data known as adversarial examples, which are images with a particular level…

  • Open Access

    ARTICLE

    An Overview of Adversarial Attacks and Defenses

    Kai Chen*, Jinwei Wang, Jiawei Zhang

    Journal of Information Hiding and Privacy Protection, Vol.4, No.1, pp. 15-24, 2022, DOI:10.32604/jihpp.2022.029006 - 17 June 2022

    Abstract In recent years, machine learning has become increasingly popular, and the continuous development of deep learning technology in particular has brought great revolutions to many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model on a large amount of data to complete various tasks, the model is vulnerable to examples that are artificially modified. This technique is called an adversarial attack, while the…

  • Open Access

    ARTICLE

    A Two Stream Fusion Assisted Deep Learning Framework for Stomach Diseases Classification

    Muhammad Shahid Amin1, Jamal Hussain Shah1, Mussarat Yasmin1, Ghulam Jillani Ansari2, Muhamamd Attique Khan3, Usman Tariq4, Ye Jin Kim5, Byoungchol Chang6,*

    CMC-Computers, Materials & Continua, Vol.73, No.2, pp. 4423-4439, 2022, DOI:10.32604/cmc.2022.030432 - 16 June 2022

    Abstract Due to the rapid development of Artificial Intelligence (AI) and Deep Learning (DL), it has become difficult to maintain the security and robustness of these techniques and algorithms owing to the emergence of adversarial sampling, to which such models are sensitive; fake samples thus cause AI and DL models to produce divergent results. Adversarial attacks that have been successfully implemented in real-world scenarios further highlight their practical relevance. In this regard, minor modifications of input images constitute "Adversarial Attacks" that dramatically alter model performance. Recently, such attacks and defensive strategies have been gaining a lot…
