Zhicong Qiu1,2, Xianmin Wang1,*, Huawei Ma1, Songcao Hou1, Jing Li1,2,*, Zuoyong Li2
Intelligent Automation & Soft Computing, Vol. 37, No. 2, pp. 1243-1256, 21 June 2023, DOI:10.32604/iasc.2023.038241
Abstract: Reweighting adversarial examples during training plays an essential role in improving the robustness of neural networks, since examples closer to the decision boundary are more vulnerable to attack and should be given larger weights. The probability margin (PM) method is a promising approach to measuring this closeness between an example and the decision boundary continuously and path-independently. However, the performance of PM is limited because it fails to effectively distinguish examples with only one misclassified category from those with multiple misclassified categories, where…
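The margin-based reweighting the abstract describes can be illustrated with a minimal sketch. The probability margin of an example is commonly defined as the predicted probability of the true class minus the largest probability among the other classes, so it is negative when the example is misclassified; a sigmoid then maps smaller margins to larger training weights. The function names, the `gamma` and `bias` parameters, and the sigmoid form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def probability_margin(probs, labels):
    """PM = p_true - max_{j != true} p_j; negative when misclassified."""
    idx = np.arange(len(labels))
    p_true = probs[idx, labels]
    masked = probs.copy()
    masked[idx, labels] = -np.inf       # exclude the true class
    p_other = masked.max(axis=1)
    return p_true - p_other

def margin_weights(pm, gamma=10.0, bias=0.0):
    """Sigmoid weighting (illustrative): smaller margins -> larger weights."""
    return 1.0 / (1.0 + np.exp(gamma * (pm - bias)))

# Three examples: confidently correct, near the boundary, far on the wrong side.
probs = np.array([[0.70, 0.20, 0.10],
                  [0.35, 0.40, 0.25],
                  [0.05, 0.05, 0.90]])
labels = np.array([0, 0, 0])
pm = probability_margin(probs, labels)
w = margin_weights(pm)
```

Here the near-boundary and misclassified examples receive larger weights than the confidently correct one, which is the behavior the abstract attributes to margin-aware reweighting.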