Special Issues

Privacy-Preserving Technologies for Large-scale Artificial Intelligence

Submission Deadline: 15 October 2024 (closed)

Guest Editors

Prof. Jing Qiu, Guangzhou University, China
Prof. Xiong Li, University of Electronic Science and Technology of China
Assoc. Prof. Zhe Sun, Guangzhou University, China

Summary


The rapid growth of large-scale artificial intelligence (AI) technologies, particularly fueled by the advent of advanced deep learning models like GPT-4, has ushered in a new era of AI applications with remarkable capabilities. These models, trained on massive amounts of data, have demonstrated exceptional performance in various tasks such as natural language processing, image recognition, and recommendation systems. However, the success of these models has brought to the forefront the critical concerns of privacy and security.


As large-scale AI models continue to evolve and play an increasingly prominent role in our lives, it becomes imperative to address the privacy and security challenges they pose. The training process of these models often relies on vast amounts of personal and sensitive data, raising concerns about data breaches, unauthorized access, and potential misuse. Furthermore, deploying and using these models in real-world applications can inadvertently expose private user information, jeopardizing individual privacy rights.


While significant strides have been made in privacy-preserving techniques, there are still notable shortcomings and limitations to be addressed. One of the primary challenges lies in striking a delicate balance between the need to protect user privacy and the desire to leverage large-scale data for training high-performance AI models. Ensuring data privacy without sacrificing the utility and performance of AI systems remains an ongoing challenge.


We seek original research articles, reviews, and survey papers that address the latest developments, challenges, and solutions in this rapidly evolving field. Topics of interest include, but are not limited to:

· Privacy computing theories for large-scale AI models

· Privacy-preserving methods in fine-tuning and pre-training

· Differential privacy techniques for large-scale AI models

· Secure multi-party computation and federated learning

· Homomorphic encryption and secure inference

· Privacy-preserving data aggregation and anonymization

· Trustworthy and transparent AI systems

· Privacy-preserving transfer learning

· Privacy and fairness in large-scale AI applications

· Adversarial machine learning and privacy attacks

· Privacy-enhancing technologies (PETs) for AI in various domains (healthcare, finance, IoT, etc.)

· Regulation and policy considerations for privacy in large-scale AI


We encourage submissions that propose novel methodologies, frameworks, algorithms, and case studies that address the challenges of privacy and security in large-scale AI. Papers exploring techniques for privacy preservation in AI while maintaining utility and performance are of particular interest.


Keywords

Privacy-preserving Technologies, Large-scale Artificial Intelligence, Differential Privacy, Trustworthy AI, Privacy-preserving Transfer Learning

Published Papers


  • Open Access

    ARTICLE

    Privacy-Preserving Large-Scale AI Models for Intelligent Railway Transportation Systems: Hierarchical Poisoning Attacks and Defenses in Federated Learning

    Yongsheng Zhu, Chong Liu, Chunlei Chen, Xiaoting Lyu, Zheng Chen, Bin Wang, Fuqiang Hu, Hanxi Li, Jiao Dai, Baigen Cai, Wei Wang
    CMES-Computer Modeling in Engineering & Sciences, Vol.141, No.2, pp. 1305-1325, 2024, DOI:10.32604/cmes.2024.054820
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: The development of Intelligent Railway Transportation Systems necessitates incorporating privacy-preserving mechanisms into AI models to protect sensitive information and enhance system efficiency. Federated learning offers a promising solution by allowing multiple clients to train models collaboratively without sharing private data. However, despite its privacy benefits, federated learning systems are vulnerable to poisoning attacks, where adversaries alter local model parameters on compromised clients and send malicious updates to the server, potentially compromising the global model’s accuracy. In this study, we introduce PMM (Perturbation coefficient Multiplied by Maximum value), a new poisoning attack method that perturbs model…
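    As a rough illustration of the threat model this paper studies (this is a generic model-scaling poisoning sketch, not the authors' PMM method; all names and parameters are hypothetical):

    ```python
    import numpy as np

    def local_update(global_weights, grad, lr=0.1):
        """Honest client: one gradient step from the global model."""
        return global_weights - lr * grad

    def poisoned_update(global_weights, grad, lr=0.1, boost=10.0):
        """Malicious client: exaggerates its update direction so that
        federated averaging drags the global model off course."""
        honest = local_update(global_weights, grad, lr)
        return global_weights + boost * (honest - global_weights)

    def fed_avg(updates):
        """Server: plain federated averaging over client weight vectors."""
        return np.mean(updates, axis=0)

    w = np.zeros(3)                     # global model
    g = np.array([1.0, -1.0, 0.5])      # same gradient at every client
    honest = [local_update(w, g) for _ in range(9)]
    attacked = fed_avg(honest + [poisoned_update(w, g)])
    clean = fed_avg(honest + [local_update(w, g)])
    # A single scaled update shifts the average far more than one honest client could.
    ```

    Defenses such as the hierarchical ones the paper proposes typically aim to detect or bound exactly this kind of disproportionate per-client contribution before aggregation.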

  • Open Access

    ARTICLE

    Dynamic Hypergraph Modeling and Robustness Analysis for SIoT

    Yue Wan, Nan Jiang, Ziyu Liu
    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.3, pp. 3017-3034, 2024, DOI:10.32604/cmes.2024.051101
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: The Social Internet of Things (SIoT) integrates the Internet of Things (IoT) and social networks, taking into account the social attributes of objects and diversifying the relationship between humans and objects, which overcomes the limitations of the IoT’s focus on associations between objects. Artificial Intelligence (AI) technology is rapidly evolving. It is critical to build trustworthy and transparent systems, especially with system security issues coming to the surface. This paper emphasizes the social attributes of objects and uses hypergraphs to model the diverse entities and relationships in SIoT, aiming to build an SIoT hypergraph generation…

  • Open Access

    ARTICLE

    2P3FL: A Novel Approach for Privacy Preserving in Financial Sectors Using Flower Federated Learning

    Sandeep Dasari, Rajesh Kaluri
    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.2, pp. 2035-2051, 2024, DOI:10.32604/cmes.2024.049152
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: The increasing data pool in finance sectors forces machine learning (ML) to step into new complications. Banking data has significant financial implications and is confidential. Combining users data from several organizations for various banking services may result in various intrusions and privacy leakages. As a result, this study employs federated learning (FL) using a flower paradigm to preserve each organization’s privacy while collaborating to build a robust shared global model. However, diverse data distributions in the collaborative training process might result in inadequate model learning and a lack of privacy. To address this issue, the…


  • Open Access

    ARTICLE

    A Privacy Preservation Method for Attributed Social Network Based on Negative Representation of Information

    Hao Jiang, Yuerong Liao, Dongdong Zhao, Wenjian Luo, Xingyi Zhang
    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.1, pp. 1045-1075, 2024, DOI:10.32604/cmes.2024.048653
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: Due to the presence of a large amount of personal sensitive information in social networks, privacy preservation issues in social networks have attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information indicates features such as simplicity and efficiency, which is very suitable for preserving social network privacy. Therefore, we suggest a method to preserve the topology privacy and node attribute privacy of attribute social networks, called AttNetNRI. Specifically, a negative survey-based method is developed to disturb the relationship between nodes in the…

  • Open Access

    ARTICLE

    Deep Learning Social Network Access Control Model Based on User Preferences

    Fangfang Shan, Fuyang Li, Zhenyu Wang, Peiyu Ji, Mengyi Wang, Huifang Sun
    CMES-Computer Modeling in Engineering & Sciences, Vol.140, No.1, pp. 1029-1044, 2024, DOI:10.32604/cmes.2024.047665
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: A deep learning access control model based on user preferences is proposed to address the issue of personal privacy leakage in social networks. Firstly, social users and social data entities are extracted from the social network and used to construct homogeneous and heterogeneous graphs. Secondly, a graph neural network model is designed based on user daily social behavior and daily social data to simulate the dissemination and changes of user social preferences and user personal preferences in the social network. Then, high-order neighbor nodes, hidden neighbor nodes, displayed neighbor nodes, and social data nodes are…

  • Open Access

    ARTICLE

    KSKV: Key-Strategy for Key-Value Data Collection with Local Differential Privacy

    Dan Zhao, Yang You, Chuanwen Luo, Ting Chen, Yang Liu
    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 3063-3083, 2024, DOI:10.32604/cmes.2023.045400
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: In recent years, the research field of data collection under local differential privacy (LDP) has expanded its focus from elementary data types to include more complex structural data, such as set-value and graph data. However, our comprehensive review of existing literature reveals that there needs to be more studies that engage with key-value data collection. Such studies would simultaneously collect the frequencies of keys and the mean of values associated with each key. Additionally, the allocation of the privacy budget between the frequencies of keys and the means of values for each key does not…
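    For readers unfamiliar with LDP primitives, a minimal sketch of randomized response for the key-frequency half of this problem (this is a textbook LDP mechanism, not the paper's KSKV design; names are illustrative):

    ```python
    import math
    import random

    def perturb(has_key: bool, epsilon: float) -> bool:
        """Each user flips their one-bit report: truthful with
        probability e^eps / (e^eps + 1), inverted otherwise."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        return has_key if random.random() < p else not has_key

    def estimate_frequency(reports, epsilon):
        """Server-side unbiased correction of the noisy proportion:
        observed = f*p + (1-f)*(1-p), solved for the true rate f."""
        p = math.exp(epsilon) / (math.exp(epsilon) + 1)
        observed = sum(reports) / len(reports)
        return (observed + p - 1) / (2 * p - 1)

    random.seed(0)
    true_rate = 0.3
    reports = [perturb(random.random() < true_rate, epsilon=2.0)
               for _ in range(100_000)]
    est = estimate_frequency(reports, 2.0)  # close to 0.3 for large samples
    ```

    The budget-allocation question the abstract raises arises because a real key-value mechanism must split epsilon between this frequency estimate and a separate perturbation of the associated values.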

  • Open Access

    ARTICLE

    Privacy-Preserving Federated Deep Learning Diagnostic Method for Multi-Stage Diseases

    Jinbo Yang, Hai Huang, Lailai Yin, Jiaxing Qu, Wanjuan Xie
    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.3, pp. 3085-3099, 2024, DOI:10.32604/cmes.2023.045417
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: Diagnosing multi-stage diseases typically requires doctors to consider multiple data sources, including clinical symptoms, physical signs, biochemical test results, imaging findings, pathological examination data, and even genetic data. When applying machine learning modeling to predict and diagnose multi-stage diseases, several challenges need to be addressed. Firstly, the model needs to handle multimodal data, as the data used by doctors for diagnosis includes image data, natural language data, and structured data. Secondly, privacy of patients’ data needs to be protected, as these data contain the most sensitive and private information. Lastly, considering the practicality of the…

  • Open Access

    ARTICLE

    A Cloud-Fog Enabled and Privacy-Preserving IoT Data Market Platform Based on Blockchain

    Yurong Luo, Wei You, Chao Shang, Xiongpeng Ren, Jin Cao, Hui Li
    CMES-Computer Modeling in Engineering & Sciences, Vol.139, No.2, pp. 2237-2260, 2024, DOI:10.32604/cmes.2023.045679
    (This article belongs to the Special Issue: Privacy-Preserving Technologies for Large-scale Artificial Intelligence)
    Abstract: The dynamic landscape of the Internet of Things (IoT) is set to revolutionize the pace of interaction among entities, ushering in a proliferation of applications characterized by heightened quality and diversity. Among the pivotal applications within the realm of IoT, as a significant example, the Smart Grid (SG) evolves into intricate networks of energy deployment marked by data integration. This evolution concurrently entails data interchange with other IoT entities. However, there are also several challenges including data-sharing overheads and the intricate establishment of trusted centers in the IoT ecosystem. In this paper, we introduce a…
