Submission Deadline: 31 December 2022 (closed)
A recent market report has predicted that through 2022, 30% of all cyberattacks against systems powered by deep learning (DL) will leverage training-data poisoning or adversarial examples. Owing to strong monetary incentives and the associated technological infrastructure, medical image analysis systems have recently been argued to be susceptible to adversarial attacks: inputs crafted from raw data to fool a DL system into assigning an example to the wrong class, while the manipulation remains undetectable to the human eye.
Adversarial attacks are not, however, the only kind of malicious input manipulation that changes the predictions of DL systems. Adversarial attacks are manipulations that aim to preserve the semantic content of a given image, e.g., whether it is healthy or diseased, while changing the network's prediction for it.
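As one concrete, well-known instance of such a manipulation, the Fast Gradient Sign Method (FGSM) nudges every pixel slightly in the direction that increases the classifier's loss. The sketch below assumes PyTorch and torchvision; the classifier and the random "image" are placeholders rather than a real medical model or scan.

```python
# Minimal FGSM sketch (illustrative only; model and image are placeholders).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()          # placeholder classifier
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "scan"
true_label = torch.tensor([0])                        # assumed true class

# Compute the loss of the model's prediction with respect to the true class.
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Step each pixel a small amount in the direction that increases the loss;
# a small epsilon keeps the change imperceptible to a human observer.
epsilon = 0.003
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The semantic content is unchanged, but the prediction may flip.
print(model(image).argmax(1), model(adversarial).argmax(1))
```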
Besides such attacks, images can also be modified to change their content: signs of disease can be removed from a diseased image or added to a healthy one, again causing the network's predictions to change. Producing these synthetically altered images remains challenging, however, as it is difficult to guarantee that they look realistic and to control which image structures are altered. The generative algorithms involved can be difficult to train and require large training datasets. Cybercriminals nevertheless invest large sums in motivating and training hackers to carry out such attacks.
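For illustration, the sketch below shows in toy form the adversarial training loop that typically underlies such generative alterations: a generator learns to produce images that a discriminator accepts as realistic. It assumes PyTorch; the tiny networks and random tensors are hypothetical stand-ins, not a working medical image editor.

```python
# Toy GAN-style image-to-image training step (illustrative only; all names
# and networks here are hypothetical stand-ins).
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps an input image to an altered image of the same size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class ToyDiscriminator(nn.Module):
    """Scores how 'real' an image looks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D = ToyGenerator(), ToyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 1, 64, 64)      # stand-in for real images
fake = G(torch.rand(8, 1, 64, 64))   # synthetically altered images

# Discriminator step: push real images toward 1, generated ones toward 0.
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator so the altered images look realistic.
loss_g = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Even this toy loop hints at why such attacks are costly: the two networks must be balanced carefully during training, which is one reason realistic content-altering attacks demand large datasets and expertise.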
The motivation of this special issue is to solicit ongoing research in the domain of adversarial attacks on deep learning models in medical image analysis. Robust defence mechanisms will play an important role in assisting researchers in designing firewalls and anti-spam systems. The special issue is keen to receive articles focused on the translational deep learning research needed to defend against such attacks.