Maria Sameen1, Seong Oun Hwang2,*
CMC-Computers, Materials & Continua, Vol.73, No.3, pp. 4559-4576, 2022, DOI:10.32604/cmc.2022.031091
Published: 28 July 2022
Abstract: Machine Learning (ML) systems often involve a re-training process to make better predictions and classifications. This re-training process creates a loophole that poses a security threat to ML systems. Adversaries exploit this loophole to design data poisoning attacks against ML systems. Data poisoning attacks are a class of attack in which an adversary manipulates the training dataset to degrade the ML system's performance. Data poisoning attacks are challenging to detect, and even more difficult to respond to, particularly in the Internet of Things (IoT) environment. To address this problem, we propose DISTINÏCT, the first proactive …
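To make the threat concrete, the following is a minimal, hypothetical sketch of a label-flipping data poisoning attack against a toy nearest-centroid classifier. It is purely illustrative: the dataset, the classifier, and the flipped points are all invented for this example and are not the paper's DISTINÏCT method or its evaluation setup.

```python
# Illustrative label-flipping poisoning attack on a toy 1-D
# nearest-centroid classifier (not the paper's actual method).

def train_centroids(data):
    # data: list of (feature, label) pairs; returns per-class mean feature
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    # assign x to the class whose centroid is nearest
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(train_data, test_data):
    c = train_centroids(train_data)
    return sum(predict(c, x) == y for x, y in test_data) / len(test_data)

# Clean training set: class 0 clusters near 0, class 1 near 5
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (4.0, 1), (5.0, 1), (6.0, 1)]
test = [(1.0, 0), (3.5, 1), (5.0, 1)]

# The adversary flips the label of one training point near the boundary,
# dragging the class-0 centroid toward class 1's region
poisoned = [(x, 0) if x == 4.0 else (x, y) for x, y in clean]

print(accuracy(clean, test))     # clean model classifies all test points
print(accuracy(poisoned, test))  # poisoned model misclassifies near the boundary
```

Even this single flipped label shifts the learned decision boundary enough to misclassify a test point, which illustrates why poisoning a small fraction of re-training data can silently degrade an ML system.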