Special Issues

New Advances in Parallel and Distributed Computing for Artificial Intelligence, Machine Learning and Deep Learning

Submission Deadline: 20 December 2022 (closed)

Guest Editors

Dr. A. Shanthini, SRM Institute of Science and Technology, Kattankulathur, India.
Dr. Gunasekaran Manogaran, Universidad Distrital Francisco José de Caldas, Colombia.
Dr. Priyan Malarvizhi Kumar, Kyung Hee University, South Korea.

Summary

Recent technological advancements and cost-effective internet services have drastically increased the number of users and the number of computing tasks performed in a parallel or distributed manner. Most products and services now incorporate Artificial Intelligence (AI), Machine Learning (ML), or Deep Learning (DL) in one way or another. To function effectively and serve their intended purpose, these systems must collect real-time data and process and analyse it using powerful and efficient computational techniques. Parallel and distributed computing are two such powerful and emerging computational techniques, and they play a significant role in the development of computer architectures, networks, and operating systems. They are built upon the following concepts: concurrency, memory manipulation, message passing, mutual exclusion, and shared-memory models. Research trends in parallel and distributed computing also focus on multiprocessor machines in order to optimise computational algorithms and other resources involved in data transmission.
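The shared-memory and message-passing models mentioned above can be contrasted in a brief, illustrative sketch (the function names are hypothetical; only Python's standard multiprocessing module is assumed): a Lock-guarded counter demonstrates mutual exclusion over shared memory, while a Queue carries each worker's partial result as a message.

```python
from multiprocessing import Process, Value, Lock, Queue

def shared_memory_worker(counter, lock, n):
    # Mutual exclusion: the lock ensures concurrent increments do not interleave.
    for _ in range(n):
        with lock:
            counter.value += 1

def message_passing_worker(queue, n):
    # Message passing: no shared state; the partial count travels as a message.
    queue.put(n)

def run_demo(workers=4, increments=1000):
    # Shared-memory model: all processes update one shared counter.
    counter, lock = Value("i", 0), Lock()
    procs = [Process(target=shared_memory_worker, args=(counter, lock, increments))
             for _ in range(workers)]
    for p in procs: p.start()
    for p in procs: p.join()

    # Message-passing model: each process sends its result over a queue.
    queue = Queue()
    procs = [Process(target=message_passing_worker, args=(queue, increments))
             for _ in range(workers)]
    for p in procs: p.start()
    for p in procs: p.join()
    total = sum(queue.get() for _ in range(workers))

    return counter.value, total

if __name__ == "__main__":
    print(run_demo())  # both models yield workers * increments
```

Both models compute the same total; the difference is whether coordination happens through shared state protected by a lock or through explicit messages between otherwise isolated processes.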

 

Platform-based development using AI, ML, and DL techniques is also an emerging and interesting area of application for parallel and distributed computing. For instance, devices such as tablets enable users to make the utmost use of them for certain specialised tasks; this differentiates platform-based development from general-purpose programming, with adequate modifications to the hardware and application programming interfaces (APIs). Machine Learning can extract key insights and information from a data model by approximating linear and non-linear relationships. In spite of its growing popularity, certain challenges remain, such as the need to evaluate thousands of candidate models before selecting the best algorithm, owing to the large number of hyperparameters and cross-validation configurations. In such cases, parallel processing can significantly reduce the time involved: the task is divided into small parts, each of which is processed in parallel, thus improving performance and reducing time, cost, and energy.
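As an illustrative sketch of this idea (the `score` function and parameter grid below are hypothetical stand-ins for a real cross-validated training run), candidate hyperparameter settings can be evaluated concurrently with Python's standard ProcessPoolExecutor, since each candidate is an independent task:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def score(params):
    # Placeholder objective: a real version would train a model and return
    # its cross-validated accuracy for these hyperparameters.
    lr, depth = params
    return -(lr - 0.1) ** 2 - (depth - 4) ** 2

def parallel_grid_search(grid):
    # Each hyperparameter combination is independent, so the grid maps
    # cleanly onto a pool of worker processes evaluated in parallel.
    candidates = list(product(*grid.values()))
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(score, candidates))
    best_score, best_params = max(zip(scores, candidates))
    return dict(zip(grid.keys(), best_params))

if __name__ == "__main__":
    grid = {"lr": [0.01, 0.1, 1.0], "depth": [2, 4, 8]}
    print(parallel_grid_search(grid))  # → {'lr': 0.1, 'depth': 4}
```

With an expensive objective, the wall-clock time of the search scales down roughly with the number of workers, which is the speed-up the paragraph above describes.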

 

Distributed computing solutions will also be the best alternative for large-scale futuristic applications, given the number of computations involved and the need to reduce the pressure on standalone workstations. Most institutions have adopted technologies such as high-speed networking, Asynchronous Transfer Mode (ATM), Synchronous Optical Network (SONET), and the High-Performance Parallel Interface (HiPPI). Distributed and parallel computing is used to help institutions cope with these technologies and implement them effectively.

 

Topics include, but are not limited to:

 

• Neurocomputing for smart cities using parallel computing and deep learning techniques.

• Development of knowledge graphs and reasoning capabilities using parallel/distributed computing and deep learning.

• Learning and control of autonomous vehicle systems using parallel computing and neural networks-based reinforcement.

• Location inference modelling based on parallel and distributed computing.

• Optimization of distributed computing applications.

• Deep Learning for Human Activity Recognition using parallel computing algorithms.

• Analysis of heterogeneous data using parallel and distributed computing.

• Application of distributed computing in knowledge-based learning algorithms.

• Parallelism in the Internet of Things (IoT) and the Internet of Medical Things (IoMT) systems.

• Soft computing trends in industrial IoT applications.


Keywords

Artificial Intelligence; Machine Learning; Deep Learning; Asynchronous Transfer Mode; Synchronous Optical Network
