Open Access

ARTICLE


Deep Reinforcement Learning for Addressing Disruptions in Traffic Light Control

by Faizan Rasheed1, Kok-Lim Alvin Yau2, Rafidah Md Noor3, Yung-Wey Chong4,*

1 School of Engineering and Computer Science, University of Hertfordshire, Hatfield, AL109AB, UK
2 Department of Computing and Information Systems, Sunway University, Subang Jaya, 47500, Malaysia
3 Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, 50603, Malaysia
4 National Advanced IPv6 Centre, Universiti Sains Malaysia, USM, Penang, 11800, Malaysia

* Corresponding Author: Yung-Wey Chong.

(This article belongs to the Special Issue: Artificial Intelligence Enabled Intelligent Transportation Systems)

Computers, Materials & Continua 2022, 71(2), 2225-2247. https://doi.org/10.32604/cmc.2022.022952

Abstract

This paper investigates the use of a multi-agent deep Q-network (MADQN) to address the curse-of-dimensionality issue that arises in the traditional multi-agent reinforcement learning (MARL) approach. The proposed MADQN is applied to traffic light controllers at multiple intersections with busy traffic and traffic disruptions, particularly rainfall. MADQN is based on the deep Q-network (DQN), which integrates the traditional reinforcement learning (RL) approach with the newly emerging deep learning (DL) approach. MADQN enables traffic light controllers to learn, exchange knowledge with neighboring agents, and select optimal joint actions in a collaborative manner. A case study based on a real traffic network is conducted as part of a sustainable urban city project in Sunway City, Kuala Lumpur, Malaysia. An investigation is also performed on a grid traffic network (GTN) to verify that the proposed scheme is effective in a traditional traffic network. The proposed scheme is evaluated using two simulation tools, namely Matlab and Simulation of Urban Mobility (SUMO). In the simulations, the proposed scheme reduces the cumulative delay of vehicles by up to 30%.
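To illustrate the knowledge-exchange idea described in the abstract, the sketch below uses a tabular Q-learning agent per intersection rather than a deep Q-network, purely to keep the example short. All class names, parameters (e.g., the blending weight `beta`), and the neighbor-averaging rule are illustrative assumptions, not the paper's exact MADQN formulation; states and actions are abstract integers standing in for queue-length observations and green-phase choices.

```python
import random


class IntersectionAgent:
    """Illustrative agent for one traffic light controller.

    A Q-table replaces the paper's deep Q-network to keep the
    sketch self-contained. Each agent can blend its own value
    estimate with those of neighboring agents, mimicking the
    collaborative knowledge exchange described in the abstract.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.neighbors = []  # agents at adjacent intersections

    def select_action(self, state):
        # Epsilon-greedy: explore occasionally, otherwise exploit.
        if random.random() < self.eps:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state, beta=0.5):
        # Blend the agent's own bootstrap value with its neighbors'
        # estimates of the next state -- one simple way to "exchange
        # knowledge" in MARL (the paper's coordination rule may differ).
        own = max(self.q[next_state])
        if self.neighbors:
            shared = sum(max(n.q[next_state]) for n in self.neighbors)
            bootstrap = beta * own + (1 - beta) * shared / len(self.neighbors)
        else:
            bootstrap = own
        target = reward + self.gamma * bootstrap
        self.q[state][action] += self.alpha * (target - self.q[state][action])
```

In use, one agent would be created per intersection, its `neighbors` list populated with adjacent agents, and `update` called each control step with a reward such as the negative change in cumulative vehicle delay.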

Keywords


Cite This Article

APA Style
Rasheed, F., Yau, K. A., Noor, R. M., & Chong, Y. (2022). Deep reinforcement learning for addressing disruptions in traffic light control. Computers, Materials & Continua, 71(2), 2225-2247. https://doi.org/10.32604/cmc.2022.022952
Vancouver Style
Rasheed F, Yau KA, Noor RM, Chong Y. Deep reinforcement learning for addressing disruptions in traffic light control. Comput Mater Contin. 2022;71(2):2225-2247. https://doi.org/10.32604/cmc.2022.022952
IEEE Style
F. Rasheed, K. A. Yau, R. M. Noor, and Y. Chong, “Deep Reinforcement Learning for Addressing Disruptions in Traffic Light Control,” Comput. Mater. Contin., vol. 71, no. 2, pp. 2225-2247, 2022. https://doi.org/10.32604/cmc.2022.022952



Copyright © 2022 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.