Search Results (4)
  • Open Access

    ARTICLE

    Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning

    Yongsheng Zhu1,2,*, Chong Liu3,4, Chunlei Chen5, Xiaoting Lyu3,4, Zheng Chen3,4, Bin Wang6, Fuqiang Hu3,4, Hanxi Li3,4, Jiao Dai3,4, Baigen Cai1, Wei Wang3,4

    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.2, pp. 1305-1325, 2024, DOI:10.32604/cmes.2024.054820 - 27 September 2024

    Abstract The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. However, despite its privacy benefits, federated learning systems are vulnerable to poisoning attacks, where adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially compromising the global model’s accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack method that perturbs model…
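
    The abstract is truncated, but the method's name suggests the attacker scales the coordinate-wise maximum of the updates it can observe by a perturbation coefficient. A minimal NumPy sketch of that reading follows; the function name `pmm_malicious_update`, the coefficient `gamma`, and the mean aggregation are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def pmm_malicious_update(benign_updates, gamma=-1.0):
    """Hypothetical PMM-style poisoned update (assumption from the name only):
    scale the coordinate-wise maximum of observed benign updates by a
    perturbation coefficient gamma."""
    stacked = np.stack(benign_updates)   # shape: (n_clients, n_params)
    coord_max = stacked.max(axis=0)      # coordinate-wise maximum value
    return gamma * coord_max             # perturbed malicious update

# Toy usage: three benign clients plus one attacker-crafted update,
# aggregated with plain federated averaging.
benign = [np.random.randn(10) for _ in range(3)]
malicious = pmm_malicious_update(benign, gamma=-2.0)
poisoned_aggregate = np.mean(benign + [malicious], axis=0)
```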

  • Open Access

    ARTICLE

    Evaluating the Efficacy of Latent Variables in Mitigating Data Poisoning Attacks in the Context of Bayesian Networks: An Empirical Study

    Shahad Alzahrani1, Hatim Alsuwat2, Emad Alsuwat3,*

    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.2, pp. 1635-1654, 2024, DOI:10.32604/cmes.2023.044718 - 29 January 2024

    Abstract Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework…
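
    The latent-variable framework itself is not spelled out in the truncated abstract. As a generic illustration of detecting poisoning against structure learning, the sketch below flags an incoming batch whose re-learned DAG drifts too far from a trusted baseline structure; `structural_hamming_distance`, `flag_poisoned_batch`, and the threshold are hypothetical stand-ins, not the authors' method.

```python
def structural_hamming_distance(edges_a, edges_b):
    """Count edge insertions/deletions between two learned DAG edge sets."""
    a, b = set(edges_a), set(edges_b)
    return len(a ^ b)  # symmetric difference of directed edges

def flag_poisoned_batch(baseline_edges, new_edges, threshold=2):
    """Hypothetical drift check: a large structural shift after one data
    batch is treated as evidence of poisoning (not the paper's framework)."""
    return structural_hamming_distance(baseline_edges, new_edges) > threshold

# Toy usage: the incoming batch reverses one dependency and adds another.
baseline = [("Rain", "WetGrass"), ("Sprinkler", "WetGrass")]
after_batch = [("WetGrass", "Rain"), ("Sprinkler", "WetGrass"),
               ("Rain", "Sprinkler")]
print(flag_poisoned_batch(baseline, after_batch))  # True
```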

  • Open Access

    ARTICLE

    DISTINÏCT: Data poISoning atTacks dectectIon usiNg optÏmized jaCcard disTance

    Maria Sameen1, Seong Oun Hwang2,*

    CMC-Computers, Materials & Continua, Vol.73, No.3, pp. 4559-4576, 2022, DOI:10.32604/cmc.2022.031091 - 28 July 2022

    Abstract Machine Learning (ML) systems often involve a re-training process to make better predictions and classifications. This re-training process creates a loophole and poses a security threat for ML systems. Adversaries leverage this loophole and design data poisoning attacks against ML systems. Data poisoning attacks are a type of attack in which an adversary manipulates the training dataset to degrade the ML system’s performance. Data poisoning attacks are challenging to detect, and even more difficult to respond to, particularly in the Internet of Things (IoT) environment. To address this problem, we proposed DISTINÏCT, the first proactive…
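
    The title indicates the detector is built on an optimized Jaccard distance. A minimal Python sketch of a Jaccard-based screen over re-training data follows; `looks_poisoned`, the record hashing, and the 0.5 threshold are assumptions for illustration, and the paper's optimization of the distance is not reproduced here.

```python
def jaccard_distance(a, b):
    """Jaccard distance between two sets: 1 - |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def looks_poisoned(trusted_records, incoming_records, threshold=0.5):
    """Hypothetical screen before re-training: hash each record and flag
    the incoming batch when it diverges too far from the trusted set."""
    trusted = {hash(r) for r in trusted_records}
    incoming = {hash(r) for r in incoming_records}
    return jaccard_distance(trusted, incoming) > threshold

# Toy usage with tuple-encoded IoT sensor readings (value, label).
trusted = [(21.5, 0), (22.0, 0), (35.0, 1)]
incoming = [(21.5, 1), (99.0, 1), (98.5, 1)]  # flipped labels / outliers
print(looks_poisoned(trusted, incoming))       # True
```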

  • Open Access

    ARTICLE

    Cryptographic Based Secure Model on Dataset for Deep Learning Algorithms

    Muhammad Tayyab1,*, Mohsen Marjani1, N. Z. Jhanjhi1, Ibrahim Abaker Targio Hashim2, Abdulwahab Ali Almazroi3, Abdulaleem Ali Almazroi4

    CMC-Computers, Materials & Continua, Vol.69, No.1, pp. 1183-1200, 2021, DOI:10.32604/cmc.2021.017199 - 04 June 2021

    Abstract Deep learning (DL) algorithms have been widely used in various security applications to enhance the performances of decision-based models. Malicious data added by an attacker can cause several security and privacy problems in the operation of DL models. The two most common active attacks are poisoning and evasion attacks, which can cause various problems, including wrong prediction and misclassification of decision-based models. Therefore, to design an efficient DL model, it is crucial to mitigate these attacks. In this regard, this study proposes a secure neural network (NN) model that provides data security during model training…
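
    The truncated abstract does not detail the cryptographic scheme. As a generic illustration of keeping a training dataset confidential and tamper-evident at rest, the sketch below uses authenticated encryption via the `cryptography` library's Fernet recipe; this is a common pattern, not the specific model proposed in the paper.

```python
from cryptography.fernet import Fernet
import json

# Hypothetical illustration: store the training set encrypted and decrypt
# records only at training time. Fernet authenticates ciphertexts, so any
# bit-flip by an attacker raises InvalidToken on decryption.
key = Fernet.generate_key()
vault = Fernet(key)

records = [{"features": [0.1, 0.7], "label": 1},
           {"features": [0.9, 0.2], "label": 0}]
ciphertext = vault.encrypt(json.dumps(records).encode())

# Decrypt just before feeding the model; tampering is detected here.
clean_records = json.loads(vault.decrypt(ciphertext).decode())
assert clean_records == records
```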
