Submission Deadline: 20 December 2023 (closed)
With the rapid development of 5G communication, edge devices, and artificial intelligence (AI), processing big data and mining its value with AI techniques has become highly worthwhile. AI has been applied in many fields, such as intelligent driving, recommendation systems, scientific computing, and the Smart Ocean. With the widespread adoption of AI, models are growing ever larger and their parameter counts are increasing exponentially, as exemplified by GPT-3, Huawei's Pangu model, and Enlightenment. Datasets are likewise growing in scale, such as ImageNet-1K, Google Open Images, and Tencent ML-Images.
While the development of AI and big data brings opportunities, it also raises challenges: How can distributed high-performance training and inference be conducted? How can data privacy be protected when training AI models? How can AI training data be stored and read efficiently? Distributed training and inference systems and frameworks, data privacy, data processing and storage, and algorithms for AI therefore warrant in-depth research. However, research on next-generation large-scale AI is still in its early stages, and a dedicated venue is needed to communicate its recent advances.
The focus of this special issue is on advances in distributed ML. Researchers from academia and industry worldwide are encouraged to submit high-quality, unpublished original research articles, as well as review articles, in broad areas relevant to the theories, technologies, and emerging applications of distributed ML.
Topics:
• Cloud and edge distributed training frameworks and algorithms
• Security and privacy issues for edge AI
• Federated Learning frameworks
• Federated Learning algorithms
• Data processing algorithms for AI
• Distributed storage systems and algorithms for AI
• Distributed machine learning algorithms and applications
• Distributed ML: programmability, representations of parallelism, performance optimization, and system architectures
• Resource management and scheduling for AI
• Scaling and accelerating machine learning, deep learning, and computer vision applications
• Big data and machine learning techniques for distributed and parallel systems
• Fault tolerance, reliability, and availability
• Datacenter, HPC, cloud, serverless, and edge/IoT computing platforms