Search Results (6)
  • Open Access

    ARTICLE

    Adversarial Defense Technology for Small Infrared Targets

    Tongan Yu1, Yali Xue1,*, Yiming He1, Shan Cui2, Jun Hong2

    CMC-Computers, Materials & Continua, Vol.81, No.1, pp. 1235-1250, 2024, DOI:10.32604/cmc.2024.056075 - 15 October 2024

    Abstract With the rapid development of deep learning-based detection algorithms, deep learning is widely used in infrared small target detection. However, well-designed adversarial samples can fool the recognition model while evading human visual perception, directly causing a serious decline in detection quality. In this paper, an adversarial defense technology for small infrared targets is proposed to improve model robustness. Adversarial samples with strong transferability not only improve the generalization of the defense technology but also save training cost. Therefore, this study adopts the concept of maximizing multidimensional feature distortion, applying noise…

  • Open Access

    ARTICLE

    Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks

    Amitoj Bir Singh1, Lalit Kumar Awasthi1, Urvashi1, Mohammad Shorfuzzaman2, Abdulmajeed Alsufyani2, Mueen Uddin3,*

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2541-2555, 2023, DOI:10.32604/cmc.2023.032795 - 31 October 2022

    Abstract Neural networks play a significant role in image classification. When an input image is modified by an adversarial attack, the changes are imperceptible to the human eye, yet they still cause the image to be misclassified. Researchers have demonstrated such attacks making production self-driving cars misclassify Stop signs as 45 Miles Per Hour (MPH) signs, and a turtle being misclassified as an AK47. Three primary types of defense approaches can safeguard against such attacks: Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial…

  • Open Access

    ARTICLE

    An Overview of Adversarial Attacks and Defenses

    Kai Chen*, Jinwei Wang, Jiawei Zhang

    Journal of Information Hiding and Privacy Protection, Vol.4, No.1, pp. 15-24, 2022, DOI:10.32604/jihpp.2022.029006 - 17 June 2022

    Abstract In recent years, machine learning has become increasingly popular, especially with the continuous development of deep learning technology, which has revolutionized many fields. In tasks such as image classification, natural language processing, information hiding, and multimedia synthesis, the performance of deep learning has far exceeded that of traditional algorithms. However, researchers have found that although deep learning can train an accurate model on a large amount of data to complete various tasks, the model is vulnerable to artificially modified examples. This technique is called an adversarial attack, while the…

  • Open Access

    ARTICLE

    VANET Jamming and Adversarial Attack Defense for Autonomous Vehicle Safety

    Haeri Kim1, Jong-Moon Chung1,2,*

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 3589-3605, 2022, DOI:10.32604/cmc.2022.023073 - 07 December 2021

    Abstract The development of Vehicular Ad-hoc Network (VANET) technology is helping Intelligent Transportation System (ITS) services become a reality. Vehicles can use VANETs to communicate safety messages on the road (while driving) and can report their location and share road-condition information in real time. However, intentional and unintentional (e.g., packet/frame collision) wireless signal jamming can occur, which degrades the quality of communication over the channel, prevents the reception of safety messages, and thereby poses a safety hazard to the vehicle's passengers. In this paper, VANET jamming detection applying Support Vector Machine (SVM) machine learning…

  • Open Access

    ARTICLE

    Deep Image Restoration Model: A Defense Method Against Adversarial Attacks

    Kazim Ali1,*, Adnan N. Qureshi1, Ahmad Alauddin Bin Arifin2, Muhammad Shahid Bhatti3, Abid Sohail3, Rohail Hassan4

    CMC-Computers, Materials & Continua, Vol.71, No.2, pp. 2209-2224, 2022, DOI:10.32604/cmc.2022.020111 - 07 December 2021

    Abstract Deep learning and computer vision are fast-growing fields in modern information technology. Deep learning algorithms and computer vision have achieved great success in applications like image classification, speech recognition, self-driving vehicles, disease diagnostics, and many more. Despite this success in various applications, these learning algorithms face severe threats from adversarial attacks. Adversarial examples are inputs, such as images in the computer vision field, that are intentionally and slightly changed or perturbed. These changes are imperceptible to humans but are misclassified by a model with high probability, severely…

  • Open Access

    ARTICLE

    Restoration of Adversarial Examples Using Image Arithmetic Operations

    Kazim Ali*, Adnan N. Qureshi

    Intelligent Automation & Soft Computing, Vol.32, No.1, pp. 271-284, 2022, DOI:10.32604/iasc.2022.021296 - 26 October 2021

    Abstract The current development of artificial intelligence is largely based on Deep Neural Networks (DNNs). Especially in the computer vision field, DNNs now occur in everything from autonomous vehicles to safety control systems. The Convolutional Neural Network (CNN), based on DNNs, is mostly used in computer vision applications, especially image classification and object detection. A CNN model takes photos as input and, after training, assigns each a suitable class by tuning trainable parameters like weights and biases. The CNN is inspired by the visual cortex of the human brain and sometimes performs even better than the human visual…

Displaying 1-6 on page 1 of 1.
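Several of the abstracts above describe the same core phenomenon: a perturbation too small for a human to notice can flip a model's prediction. A minimal sketch of that idea on a toy logistic classifier follows; the weights, input, and step size are made up purely for illustration (a hypothetical two-class model, not any of the papers' methods), using an FGSM-style step (perturbing along the sign of the input gradient of the loss):

```python
import numpy as np

# Hypothetical logistic "classifier": weights chosen only for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict_prob(x):
    """Probability of class 1 under a simple logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A clean input that the model classifies (just barely) as class 1.
x = np.array([0.2, 0.0, 0.3])
p_clean = predict_prob(x)  # slightly above 0.5

# FGSM-style perturbation: step along the sign of the input gradient
# of the logistic loss. For label y = 1, d(loss)/dx = (p - 1) * w.
eps = 0.15  # small L-infinity budget; each coordinate moves by at most eps
grad = (p_clean - 1.0) * w
x_adv = x + eps * np.sign(grad)

p_adv = predict_prob(x_adv)
print(p_clean > 0.5, p_adv > 0.5)  # → True False: the prediction flips
```

The point of the sketch is only that the perturbation is bounded (no coordinate moves by more than `eps`) yet the decision changes; the papers listed above study how to detect, restore, or train against exactly this kind of input.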