Comparing Deep Reinforcement Learning Algorithms’ Ability to Safely Navigate Challenging Waters

Larsen, Thomas Nakken and Teigen, Halvor Ødegård and Laache, Torkel and Varagnolo, Damiano and Rasheed, Adil (2021) Comparing Deep Reinforcement Learning Algorithms’ Ability to Safely Navigate Challenging Waters. Frontiers in Robotics and AI, 8. ISSN 2296-9144

pubmed-zip/versions/1/package-entries/frobt-08-738113/frobt-08-738113.pdf - Published Version (PDF, 4MB)

Abstract

Reinforcement Learning (RL) controllers have proven effective at tackling the dual objectives of path following and collision avoidance. However, finding which RL algorithm setup optimally trades off these two tasks is not necessarily easy. This work proposes a methodology for exploring this trade-off by analyzing the performance and task-specific behavioral characteristics of a range of RL algorithms applied to path following and collision avoidance for underactuated surface vehicles in environments of increasing complexity. The results show that, among the RL algorithms considered, Proximal Policy Optimization (PPO) exhibits superior robustness to changes in environment complexity and in the reward function, and when generalizing to environments with a considerable domain gap from the training environment. Whereas the proposed reward function significantly improves the competing algorithms’ ability to solve the training environment, an unexpected consequence of the dimensionality reduction in the sensor suite, combined with the domain gap, is identified as the source of their impaired generalization performance.
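The dual objective described in the abstract, rewarding adherence to the desired path while penalizing proximity to obstacles, can be sketched as a weighted sum of two reward terms. The function below is an illustrative assumption of how such a trade-off might be expressed, not the paper's actual reward formulation; the weights and decay constant are hypothetical.

```python
import math

def dual_objective_reward(cross_track_error, obstacle_distances,
                          lambda_path=1.0, lambda_coll=1.0, gamma=0.1):
    """Illustrative reward trading off path following vs. collision avoidance.

    cross_track_error: distance (m) from the vessel to the desired path.
    obstacle_distances: sensed distances (m) from the vessel to obstacles.
    lambda_path / lambda_coll: hypothetical weights setting the trade-off.
    """
    # Path-following term: equals 1 on the path, decays with cross-track error.
    r_path = math.exp(-gamma * abs(cross_track_error))
    # Collision-avoidance term: grows sharply negative as obstacles get close.
    r_coll = -sum(1.0 / max(d, 1e-3) for d in obstacle_distances)
    return lambda_path * r_path + lambda_coll * r_coll
```

Tuning the two weights shifts the controller's behavior between aggressive path tracking and conservative obstacle clearance, which is the kind of trade-off the paper's methodology is designed to compare across algorithms.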

Item Type: Article
Subjects: Science Repository > Mathematical Science
Depositing User: Managing Editor
Date Deposited: 28 Jun 2023 04:08
Last Modified: 13 Oct 2023 03:47
URI: http://research.manuscritpub.com/id/eprint/2543
