Open Access

ARTICLE


Implementation of Strangely Behaving Intelligent Agents to Determine Human Intervention During Reinforcement Learning

Christopher C. Rosser, Wilbur L. Walters, Abdulghani M. Abdulghani, Mokhles M. Abdulghani, Khalid H. Abed*

Department of Electrical & Computer Engineering and Computer Science, Jackson State University (JSU), Jackson, 39217, MS, USA

* Corresponding Author: Khalid H. Abed. Email: email

Journal on Artificial Intelligence 2022, 4(4), 261-277. https://doi.org/10.32604/jai.2022.039703

Abstract

Intrinsic motivation helps autonomous exploring agents traverse a larger portion of their environments. However, simulations of different learning environments in previous research show that, after millions of timesteps of successful training, an intrinsically motivated agent may learn to act in ways unintended by the designer. This potential for unintended actions poses a threat to the environment and to humans if such agents are operated in the real world. We investigated this topic using the Unity Machine Learning Agents Toolkit (ML-Agents) implementation of the Proximal Policy Optimization (PPO) algorithm with the Intrinsic Curiosity Module (ICM) to train autonomous exploring agents in three learning environments. We demonstrate that ICM, although designed to assist agent navigation in environments with sparse rewards, can also be used as a tool for purposely training a misbehaving agent in significantly fewer than 1 million timesteps. We present the following achievements: 1) experiments designed to cause agents to act undesirably, 2) a metric for gauging how well an agent achieves its goal without collisions, and 3) validation of PPO best practices. We then used optimized methods to improve the agent's performance and reduce collisions within the same environments. These achievements further our understanding of how monitoring training statistics during reinforcement learning can inform human intervention to improve agent safety and performance.
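The PPO-plus-ICM setup described in the abstract is configured in ML-Agents through a trainer YAML file; a minimal sketch follows. The behavior name `ExplorerAgent` and all hyperparameter values are illustrative assumptions, not the authors' reported settings. In the ML-Agents schema, adding a `curiosity` entry under `reward_signals` alongside `extrinsic` is what attaches the ICM intrinsic reward to PPO training:

```yaml
behaviors:
  ExplorerAgent:            # hypothetical behavior name for illustration
    trainer_type: ppo
    hyperparameters:
      batch_size: 1024
      buffer_size: 10240
      learning_rate: 3.0e-4
      num_epoch: 3
    network_settings:
      normalize: false
      hidden_units: 128
      num_layers: 2
    reward_signals:
      extrinsic:            # environment (task) reward
        gamma: 0.99
        strength: 1.0
      curiosity:            # ICM intrinsic reward for sparse-reward exploration
        gamma: 0.99
        strength: 0.02
    max_steps: 1000000
    time_horizon: 64
    summary_freq: 10000
```

Such a configuration would be launched with `mlagents-learn config.yaml --run-id=explorer`. Consistent with the paper's finding that misbehavior can emerge well under 1 million timesteps, the training statistics (cumulative reward, episode length, curiosity loss) merit monitoring from early in the run rather than only at convergence.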

Keywords


Cite This Article

APA Style
Rosser, C.C., Walters, W.L., Abdulghani, A.M., Abdulghani, M.M., Abed, K.H. (2022). Implementation of strangely behaving intelligent agents to determine human intervention during reinforcement learning. Journal on Artificial Intelligence, 4(4), 261-277. https://doi.org/10.32604/jai.2022.039703
Vancouver Style
Rosser CC, Walters WL, Abdulghani AM, Abdulghani MM, Abed KH. Implementation of strangely behaving intelligent agents to determine human intervention during reinforcement learning. J Artif Intell. 2022;4(4):261-277. https://doi.org/10.32604/jai.2022.039703
IEEE Style
C.C. Rosser, W.L. Walters, A.M. Abdulghani, M.M. Abdulghani, and K.H. Abed, “Implementation of Strangely Behaving Intelligent Agents to Determine Human Intervention During Reinforcement Learning,” J. Artif. Intell., vol. 4, no. 4, pp. 261-277, 2022. https://doi.org/10.32604/jai.2022.039703



Copyright © 2022 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.