Submission Deadline: 15 October 2024 (closed)
The rapid growth of large-scale artificial intelligence (AI) technologies, fueled in particular by advanced deep learning models such as GPT-4, has ushered in a new era of AI applications with remarkable capabilities. These models, trained on massive amounts of data, have demonstrated exceptional performance in tasks such as natural language processing, image recognition, and recommendation. However, their success has also brought critical concerns about privacy and security to the forefront.
As large-scale AI models continue to evolve and play an increasingly prominent role in our lives, it becomes imperative to address the privacy and security challenges they pose. The training process of these models often relies on vast amounts of personal and sensitive data, raising concerns about data breaches, unauthorized access, and potential misuse. Furthermore, deploying and using these models in real-world applications can inadvertently expose private user information, jeopardizing individual privacy rights.
While significant strides have been made in privacy-preserving techniques, there are still notable shortcomings and limitations to be addressed. One of the primary challenges lies in striking a delicate balance between the need to protect user privacy and the desire to leverage large-scale data for training high-performance AI models. Ensuring data privacy without sacrificing the utility and performance of AI systems remains an ongoing challenge.
We seek original research articles, reviews, and survey papers that address the latest developments, challenges, and solutions in this rapidly evolving field. Topics of interest include, but are not limited to:
· Privacy computing theories for large-scale AI models
· Privacy-preserving methods in fine-tuning and pre-training
· Differential privacy techniques for large-scale AI models
· Secure multi-party computation and federated learning
· Homomorphic encryption and secure inference
· Privacy-preserving data aggregation and anonymization
· Trustworthy and transparent AI systems
· Privacy-preserving transfer learning
· Privacy and fairness in large-scale AI applications
· Adversarial machine learning and privacy attacks
· Privacy-enhancing technologies (PETs) for AI in various domains (healthcare, finance, IoT, etc.)
· Regulation and policy considerations for privacy in large-scale AI
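To give prospective authors a concrete sense of one of the listed topics, the following minimal sketch shows the classic Laplace mechanism from differential privacy applied to a counting query. The helper names (`laplace_noise`, `dp_count`), the example data, and the choice of ε are purely illustrative and not tied to any particular submission or library.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the Laplace noise scale is
    sensitivity / epsilon = 1 / epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative use: a noisy count of records exceeding a threshold.
records = [3, 8, 12, 5, 20, 7]
noisy = dp_count(records, lambda v: v > 6, epsilon=1.0)
```

Smaller values of ε inject more noise and thus give a stronger privacy guarantee at the cost of accuracy, which is precisely the privacy-utility trade-off this special issue invites work on.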
We encourage submissions that propose novel methodologies, frameworks, algorithms, and case studies that address the challenges of privacy and security in large-scale AI. Papers exploring techniques for privacy preservation in AI while maintaining utility and performance are of particular interest.