Open Access

ARTICLE

Artificial Potential Field Incorporated Deep-Q-Network Algorithm for Mobile Robot Path Prediction

by A. Sivaranjani 1,*, B. Vinod 2

1 Department of Robotics and Automation Engineering, PSG College of Technology, Coimbatore, 641004, India
2 Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore, 641004, India

* Corresponding Author: A. Sivaranjani. Email: email

Intelligent Automation & Soft Computing 2023, 35(1), 1135-1150. https://doi.org/10.32604/iasc.2023.028126

Abstract

Autonomous navigation of mobile robots is a challenging task that requires them to travel from their initial position to their destination without collision. Reinforcement Learning methods allow a mobile robot to learn a state-action function suited to its environment: through trial-and-error interaction with its surroundings, the robot discovers suitable behavior on its own. The Deep Q Network (DQN) algorithm enables TurtleBot 3 (TB3) to reach the goal while successfully avoiding obstacles, but it requires a large number of training iterations. This research focuses on predicting a mobile robot's best path by combining DQN with the Artificial Potential Field (APF) algorithm. First, a DQN is built and trained for the TB3 Waffle Pi to reach the goal. Then the APF shortest-path algorithm is incorporated into the DQN algorithm. The proposed planning approach is compared with the standard DQN method in a virtual environment based on the Robot Operating System (ROS). Simulation results show that the combination of DQN and APF is effective, yielding a better optimal path in less time than the conventional DQN algorithm. Compared with DQN, the proposed DQN + APF attains an 88% improvement in the number of successful targets, an average time of 0.331 s, and average rewards of 85% for the positive goal and −90% for the negative goal.
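To make the abstract's DQN + APF coupling concrete, the sketch below shows a classic Artificial Potential Field in Python: a quadratic attractive term pulls the robot toward the goal and a repulsive term pushes it away from nearby obstacles, and the drop in total potential between consecutive poses is used as a shaped reward that a DQN agent could learn from. This is only an illustrative assumption of how APF guidance might be injected into the DQN; the gains, influence radius, and the exact integration used in the paper are not specified in the abstract.

```python
# Illustrative sketch (not the authors' exact formulation) of an
# Artificial Potential Field used to shape a DQN reward.
import numpy as np

K_ATT = 1.0        # attractive gain (assumed value)
K_REP = 100.0      # repulsive gain (assumed value)
D_INFLUENCE = 1.0  # obstacle influence radius in metres (assumed value)

def attractive_potential(pos, goal):
    """Quadratic attractive potential: 0.5 * k_att * ||pos - goal||^2."""
    return 0.5 * K_ATT * np.sum((pos - goal) ** 2)

def repulsive_potential(pos, obstacles):
    """Sum of repulsive potentials for obstacles within the influence radius."""
    total = 0.0
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 0.0 < d <= D_INFLUENCE:
            total += 0.5 * K_REP * (1.0 / d - 1.0 / D_INFLUENCE) ** 2
    return total

def apf_shaped_reward(prev_pos, new_pos, goal, obstacles):
    """Reward the decrease in total potential between consecutive robot poses.

    One plausible way to bias DQN training with APF guidance; the paper's
    exact coupling may differ.
    """
    def potential(p):
        return attractive_potential(p, goal) + repulsive_potential(p, obstacles)
    return potential(prev_pos) - potential(new_pos)

# Example usage with hypothetical positions (units: metres)
goal = np.array([2.0, 2.0])
obstacles = [np.array([1.0, 1.0])]
print(apf_shaped_reward(np.array([0.0, 0.0]), np.array([0.2, 0.2]), goal, obstacles))
```

In this formulation, actions that move the robot "downhill" on the combined potential surface receive positive reward, which is one way APF knowledge could reduce the number of DQN training iterations.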

Keywords


Cite This Article

APA Style
Sivaranjani, A., & Vinod, B. (2023). Artificial potential field incorporated deep-Q-network algorithm for mobile robot path prediction. Intelligent Automation & Soft Computing, 35(1), 1135-1150. https://doi.org/10.32604/iasc.2023.028126
Vancouver Style
Sivaranjani A, Vinod B. Artificial potential field incorporated deep-Q-network algorithm for mobile robot path prediction. Intell Automat Soft Comput. 2023;35(1):1135-1150. https://doi.org/10.32604/iasc.2023.028126
IEEE Style
A. Sivaranjani and B. Vinod, “Artificial Potential Field Incorporated Deep-Q-Network Algorithm for Mobile Robot Path Prediction,” Intell. Automat. Soft Comput., vol. 35, no. 1, pp. 1135-1150, 2023. https://doi.org/10.32604/iasc.2023.028126



Copyright © 2023 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.