
Susceptibility to Adversarial Attacks and Defense in Deep Learning Systems

Submission Deadline: 31 December 2022 (closed)

Guest Editors

Dr. Steven L. Fernandes, Creighton University, USA.
Prof. Yu-Dong Zhang, University of Leicester, UK.
Dr. João Manuel R. S. Tavares, University of Porto, Portugal. 

Summary

A recent market report predicted that, through 2022, 30% of all cyberattacks against systems powered by deep learning (DL) would leverage training-data poisoning or adversarial examples. Due to strong monetary incentives and the associated technological infrastructure, medical image analysis systems have recently been argued to be particularly susceptible to adversarial attacks: inputs crafted from raw data to fool a DL system into assigning an example to the wrong class, while the perturbation remains undetectable to the human eye.

 

Adversarial attacks are not the only kind of malicious manipulation of input to DL systems that changes their predictions. Adversarial attacks are manipulations that aim to preserve the semantic content of a given image, e.g., whether it shows healthy or diseased tissue, while changing the network's prediction for that image.
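As a minimal sketch of this idea, the fast gradient sign method (FGSM) perturbs an input in the sign direction of the loss gradient. The example below is purely illustrative: it uses a toy logistic-regression "network" with hypothetical fixed weights instead of a trained deep model, but the mechanism, a small per-pixel change that flips the prediction while barely altering the input, is the same.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM step on a logistic-regression classifier.

    For cross-entropy loss, the gradient w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM moves x by eps in its sign direction.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy 4-"pixel" input and illustrative, hand-picked weights.
w = np.array([1.0, -2.0, 1.5, 0.5])
b = 0.0
x = np.array([0.6, 0.1, 0.4, 0.2])  # clean input
y = 1.0                             # true label

clean_pred = bool(sigmoid(w @ x + b) > 0.5)   # model classifies x correctly
x_adv = fgsm_perturb(x, y, w, b, eps=0.4)
adv_pred = bool(sigmoid(w @ x_adv + b) > 0.5) # same model, flipped prediction
print(clean_pred, adv_pred)  # → True False
```

Each pixel moves by at most 0.4, yet the prediction flips from the correct class to the wrong one; on real images the budget eps is chosen small enough that the change is imperceptible to a human observer.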


Besides such attacks, images can also be modified to change their actual content. For example, signs of disease can be removed from a diseased image or added to a healthy one, again causing network predictions to change. Producing these synthetically altered images remains challenging, however, as it is difficult to guarantee that they look realistic and to control which image structures are altered. The generative algorithms involved can be difficult to train and require large training datasets. Nevertheless, cybercriminals invest substantial sums in motivating and training hackers to carry out such attacks.


The motivation of this special issue is to solicit ongoing research in the domain of adversarial attacks on deep learning models for medical image analysis. Effective defence mechanisms will play an important role in assisting researchers in designing firewalls and anti-spam systems. The special issue is keen to receive articles focused on the translational deep learning research needed to defend against adversarial attacks.


Keywords

● Foundations of adversarial deep learning
● Algorithms for attacking with adversarial learning
● Generative Adversarial Networks
● Adversarial Training and Generative Modelling
● Robust feature leakage
● Feature visualizations
● Infrastructural and algorithmic solutions for retroactive identification
● Hypothetical fraudulent illustrations
● Ubiquitous computing against emerging vulnerabilities
● E-health, m-health and e-patient records
● Modelling of vulnerabilities and threats and their evaluation
● IT infrastructure for adversarial attacks
● Protection and detection techniques against black-box, white-box, and gray-box adversarial attacks
● Robustness certification and property verification techniques
● Novel applications of adversarial learning and security
● Defenses against training/testing attacks
● Use of non-robust features for defence

Published Papers


  • Open Access

    ARTICLE

    Text-to-Sketch Synthesis via Adversarial Network

    Jason Elroy Martis, Sannidhan Manjaya Shetty, Manas Ranjan Pradhan, Usha Desai, Biswaranjan Acharya
    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 915-938, 2023, DOI:10.32604/cmc.2023.038847
    (This article belongs to the Special Issue: Susceptibility to Adversarial Attacks and Defense in Deep Learning Systems)
    Abstract In the past, sketches were a standard technique used for recognizing offenders and have remained a valuable tool for law enforcement and social security purposes. However, relying on eyewitness observations can lead to discrepancies in the depictions of the sketch, depending on the experience and skills of the sketch artist. With the emergence of modern technologies such as Generative Adversarial Networks (GANs), generating images using verbal and textual cues is now possible, resulting in more accurate sketch depictions. In this study, we propose an adversarial network that generates human facial sketches using such cues provided…

  • Open Access

    ARTICLE

    Medical Image Fusion Based on Anisotropic Diffusion and Non-Subsampled Contourlet Transform

    Bhawna Goyal, Ayush Dogra, Rahul Khoond, Dawa Chyophel Lepcha, Vishal Goyal, Steven L. Fernandes
    CMC-Computers, Materials & Continua, Vol.76, No.1, pp. 311-327, 2023, DOI:10.32604/cmc.2023.038398
    (This article belongs to the Special Issue: Susceptibility to Adversarial Attacks and Defense in Deep Learning Systems)
    Abstract The synthesis of visual information from multiple medical imaging inputs to a single fused image without any loss of detail and distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features to advance the clinical utility of medical imaging meant for the analysis and treatment of medical disorders. This study develops a novel approach to fuse multimodal medical images utilizing anisotropic diffusion (AD) and non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion for decomposing input images to their base and detail layers to coarsely…
