Special Issues
Table of Contents

Securing Machine Learning Algorithms

Submission Deadline: 17 June 2024 (closed)

Guest Editors

Prof. Maode Ma, Qatar University, Qatar
Dr. Mian Muhammad Waseem Iqbal, Sultan Qaboos University, Oman

Summary

Machine learning (ML), which can be defined as the ability of machines to learn from data to solve a task without being explicitly programmed to do so, is currently the most developed and promising subfield of AI for industries and government infrastructures.

The widespread adoption and rapid development of ML algorithms have raised concerns about their security. The security of machine learning algorithms primarily involves two aspects: data security and model security. Data security encompasses privacy protection and defense against data tampering. For privacy protection, techniques such as data anonymization, differential privacy, and encryption can be employed to safeguard sensitive data. For defense against data tampering, measures must be taken to prevent malicious manipulation of training data, ensuring the accuracy and trustworthiness of the model. Model security concerns attacks on the model and defenses against them. Attacks on the model may involve adversarial samples, model reverse engineering, poisoning attacks, and more. To enhance the security of machine learning algorithms, a range of defense mechanisms, including adversarial training, model patching, and monitoring, need to be implemented to mitigate potential attack risks.
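Among the privacy-protection techniques mentioned above, differential privacy is the most precisely defined: a query result is perturbed with calibrated noise so that the presence or absence of any single record is statistically masked. The sketch below illustrates the classic Laplace mechanism for a count query; the function names and the example dataset are illustrative, not drawn from any particular library.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution
    using inverse-transform sampling on a uniform variate."""
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(values, threshold, epsilon=1.0, sensitivity=1.0):
    """Release a count query with epsilon-differential privacy.

    Adding or removing one record changes a count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    hides any individual's contribution to the published result.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller values of `epsilon` give stronger privacy but noisier answers; as `epsilon` grows large, the released count converges to the exact count, which is the usual privacy/utility trade-off guest authors in this area study.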


Keywords

- Machine learning algorithm security
- Data privacy
- Adversarial attacks
- Data tampering
- Differential privacy
- Encryption
- Model protection
- Adversarial samples
- Model reverse engineering
- Poisoning attacks
- Defense strategies
- Security measures

Published Papers


  • Open Access

    ARTICLE

    A New Framework for Software Vulnerability Detection Based on an Advanced Computing

    Bui Van Cong, Cho Do Xuan
    CMC-Computers, Materials & Continua, Vol.79, No.3, pp. 3699-3723, 2024, DOI:10.32604/cmc.2024.050019
    (This article belongs to the Special Issue: Securing Machine Learning Algorithms)
    Abstract The detection of software vulnerabilities written in the C and C++ languages attracts a lot of attention and interest today. This paper proposes a new framework called DrCSE to improve software vulnerability detection. It uses an intelligent computation technique based on the combination of two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of the source code for detecting abnormal behavior of software vulnerabilities. To do that, DrCSE performs a combination of three main processing techniques: (i) building the source code feature profiles, (ii) rebalancing data, and (iii) contrastive…
