Advanced Search

Search Results (4)
  • Open Access

    ARTICLE

    A Gaussian Noise-Based Algorithm for Enhancing Backdoor Attacks

    Hong Huang, Yunfei Wang*, Guotao Yuan, Xin Li

    CMC-Computers, Materials & Continua, Vol.80, No.1, pp. 361-387, 2024, DOI:10.32604/cmc.2024.051633 - 18 July 2024

    Abstract Deep Neural Networks (DNNs) are integral to various aspects of modern life, enhancing work efficiency. Nonetheless, their susceptibility to diverse attack methods, including backdoor attacks, raises security concerns. We aim to investigate backdoor attack methods for image categorization tasks, to promote the development of DNNs towards higher security. Research on backdoor attacks currently faces significant challenges: the distinct and abnormal data patterns of malicious samples and the meticulous data screening performed by developers hinder practical attack implementation. To overcome these challenges, this study proposes a Gaussian Noise-Targeted Universal Adversarial Perturbation (GN-TUAP) algorithm. This approach…
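
The abstract breaks off before the GN-TUAP details, so the following is only a rough, hypothetical sketch of the general idea the name suggests: a fixed, Gaussian-noise-derived perturbation used as a universal backdoor trigger on a small fraction of the training set. The function names, the L-infinity budget `epsilon`, and the poison rate are illustrative assumptions, not the paper's method.

```python
# Hypothetical illustration only -- NOT the published GN-TUAP algorithm.
# A fixed Gaussian-noise-derived perturbation is used as a universal trigger
# and stamped onto a small, relabelled fraction of the training data.
import numpy as np

def make_gaussian_trigger(shape, epsilon=8 / 255, seed=0):
    """Sample Gaussian noise and clip it to an assumed L-infinity budget."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=epsilon / 2.0, size=shape)
    return np.clip(noise, -epsilon, epsilon)

def poison_dataset(images, labels, trigger, target_label, poison_rate=0.05, seed=0):
    """Add the trigger to a random subset of images and flip their labels."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)
    labels[idx] = target_label
    return images, labels

# Example with random stand-in data (images scaled to [0, 1]):
x = np.random.rand(100, 32, 32, 3).astype(np.float32)
y = np.random.randint(0, 10, size=100)
trigger = make_gaussian_trigger(x.shape[1:])
x_poisoned, y_poisoned = poison_dataset(x, y, trigger, target_label=0)
```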

  • Open Access

    ARTICLE

    A Degradation Type Adaptive and Deep CNN-Based Image Classification Model for Degraded Images

    Huanhua Liu, Wei Wang*, Hanyu Liu, Shuheng Yi, Yonghao Yu, Xunwen Yao

    CMES-Computer Modeling in Engineering & Sciences, Vol.138, No.1, pp. 459-472, 2024, DOI:10.32604/cmes.2023.029084 - 22 September 2023

    Abstract Deep Convolutional Neural Networks (CNNs) have achieved high accuracy in image classification tasks; however, most existing models are trained on high-quality images that are not subject to image degradation. In practice, images are often affected by various types of degradation, which can significantly impact the performance of CNNs. In this work, we investigate the influence of image degradation on three typical image classification CNNs and propose a Degradation Type Adaptive Image Classification Model (DTA-ICM) to improve the existing CNNs’ classification accuracy on degraded images. The proposed DTA-ICM comprises two key components: a Degradation Type Predictor…
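
The abstract stops mid-sentence at the Degradation Type Predictor, but the two-stage idea it describes (predict the degradation type, then hand the image to a classifier suited to that type) can be sketched as below. The degradation categories, the toy heuristic predictor, and the routing interface are assumptions for illustration, not the DTA-ICM implementation.

```python
# Illustrative sketch of degradation-type-adaptive routing; the real DTA-ICM
# trains a CNN-based Degradation Type Predictor rather than this toy heuristic.
from typing import Callable, Dict

import numpy as np

DEGRADATION_TYPES = ("clean", "blur", "noise")  # assumed categories

def predict_degradation(image: np.ndarray) -> str:
    """Toy stand-in for the Degradation Type Predictor (a trained CNN in the paper)."""
    local_variation = np.abs(np.diff(image, axis=0)).mean()
    if local_variation > 0.25:      # very rough noise indicator
        return "noise"
    if local_variation < 0.01:      # very rough blur indicator
        return "blur"
    return "clean"

def classify_adaptively(image: np.ndarray,
                        classifiers: Dict[str, Callable[[np.ndarray], int]]) -> int:
    """Route the image to the classifier associated with its predicted degradation."""
    kind = predict_degradation(image)
    return classifiers.get(kind, classifiers["clean"])(image)

# Example with dummy per-degradation classifiers:
dummy_classifiers = {k: (lambda img, k=k: len(k) % 10) for k in DEGRADATION_TYPES}
label = classify_adaptively(np.random.rand(32, 32), dummy_classifiers)
```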

  • Open Access

    ARTICLE

    Intelligent Beetle Antenna Search with Deep Transfer Learning Enabled Medical Image Classification Model

    Mohamed Ibrahim Waly*

    Computer Systems Science and Engineering, Vol.46, No.3, pp. 3159-3174, 2023, DOI:10.32604/csse.2023.035900 - 03 April 2023

    Abstract Recently, computer-assisted diagnosis (CAD) model creation has become increasingly dependent on medical image classification. It is often used to identify several conditions, including brain disorders, diabetic retinopathy, and skin cancer. Most traditional CAD methods relied on texture, colour, and shape features. Because many models are problem-specific, they lack the capacity to generalize and cannot capture high-level problem-domain concepts. Recent deep learning (DL) models have been published that provide a practical way to develop classifiers specifically for input medical images. This paper offers an intelligent beetle antenna search with deep transfer learning (IBAS-DTL) method for classifying medical…
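
The title pairs beetle antennae search (BAS) with deep transfer learning, though the truncated abstract does not say what BAS is used to optimize (for example, hyperparameters or selected features). The snippet below is only a generic BAS minimization loop on a toy objective; the parameter names and decay schedule are common defaults, not values from the paper.

```python
# Generic Beetle Antennae Search (BAS) sketch for minimization; not the
# paper's IBAS-DTL pipeline, only the underlying metaheuristic step.
import numpy as np

def beetle_antennae_search(f, x0, iters=200, d=1.0, step=1.0, decay=0.95, seed=0):
    """Minimize f over a real vector using the basic BAS update rule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        b = rng.normal(size=x.shape)
        b /= np.linalg.norm(b) + 1e-12                 # random unit "antenna" direction
        f_right, f_left = f(x + d * b), f(x - d * b)   # sense both antennae
        x = x - step * b * np.sign(f_right - f_left)   # step toward the better side
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        d, step = d * decay, step * decay              # shrink antenna length and step
    return best_x, best_f

# Example on a toy quadratic objective:
x_opt, f_opt = beetle_antennae_search(lambda v: float(np.sum(v ** 2)), np.ones(5) * 3)
```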

  • Open Access

    ARTICLE

    Chained Dual-Generative Adversarial Network: A Generalized Defense Against Adversarial Attacks

    Amitoj Bir Singh, Lalit Kumar Awasthi, Urvashi, Mohammad Shorfuzzaman, Abdulmajeed Alsufyani, Mueen Uddin*

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2541-2555, 2023, DOI:10.32604/cmc.2023.032795 - 31 October 2022

    Abstract Neural networks play a significant role in the field of image classification. When an input image is modified by adversarial attacks, the changes are imperceptible to the human eye, yet they still lead to misclassification of the image. Researchers have demonstrated such attacks causing production self-driving cars to misclassify Stop road signs as 45 Miles Per Hour (MPH) road signs, and a turtle to be misclassified as an AK47. Three primary types of defense approaches can safeguard against such attacks: Gradient Masking, Robust Optimization, and Adversarial Example Detection. Very few approaches use Generative Adversarial…
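
The abstract is cut off just as it turns to generative-adversarial-network-based defenses, and the chained dual-GAN architecture itself is not described in the visible text. As a stand-in, the sketch below shows the generic GAN-purification pattern such defenses build on: pass a (possibly adversarial) input through one or more generators trained on clean data, then classify the reconstruction. All names and the chaining are illustrative assumptions, not the paper's design.

```python
# Generic GAN-purification pattern for illustration only; this is not the
# paper's chained dual-GAN defense, which the truncated abstract does not detail.
from typing import Callable, Sequence

import numpy as np

Generator = Callable[[np.ndarray], np.ndarray]

def purify(image: np.ndarray, generators: Sequence[Generator]) -> np.ndarray:
    """Pass the input through a chain of generators, each refining the last output."""
    out = image
    for g in generators:
        out = g(out)
    return out

def defended_predict(image: np.ndarray,
                     generators: Sequence[Generator],
                     classifier: Callable[[np.ndarray], int]) -> int:
    """Classify the purified reconstruction instead of the raw (possibly attacked) input."""
    return classifier(purify(image, generators))

# Example with stand-in components (a smoothing "generator", constant classifier):
blur = lambda x: (x + np.roll(x, 1, axis=0)) / 2.0
pred = defended_predict(np.random.rand(32, 32, 3), [blur, blur], lambda x: 0)
```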
