TY - JOUR
T1 - A Comprehensive Review of Mobile Robot Navigation Using Deep Reinforcement Learning Algorithms in Crowded Environments
AU - Le, Hoangcong
AU - Saeedvand, Saeed
AU - Hsu, Chen Chien
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/12
Y1 - 2024/12
N2 - Navigation is a crucial challenge for mobile robots. Deep reinforcement learning has attracted considerable attention and undergone substantial development owing to its robust performance and learning capability in real-world scenarios. Researchers leverage deep neural network architectures, such as long short-term memory networks, recurrent neural networks, and convolutional neural networks, integrating them into deep reinforcement learning-based mobile robot navigation to enhance motion control performance in both static and dynamic environments. This paper presents a comprehensive survey of deep reinforcement learning methods applied to mobile robot navigation in crowded environments, exploring deep reinforcement learning-based navigation frameworks and their advantages over traditional frameworks based on simultaneous localization and mapping. We then compare and analyze the relationships and differences among three types of navigation: autonomous navigation, navigation based on simultaneous localization and mapping, and planning-based navigation. The crowded environments considered comprise static obstacles, dynamic obstacles, and combinations of both across typical application scenarios. Finally, we offer insights into the evolution of deep reinforcement learning-based navigation, identifying open problems and potential solutions in this emerging field.
AB - Navigation is a crucial challenge for mobile robots. Deep reinforcement learning has attracted considerable attention and undergone substantial development owing to its robust performance and learning capability in real-world scenarios. Researchers leverage deep neural network architectures, such as long short-term memory networks, recurrent neural networks, and convolutional neural networks, integrating them into deep reinforcement learning-based mobile robot navigation to enhance motion control performance in both static and dynamic environments. This paper presents a comprehensive survey of deep reinforcement learning methods applied to mobile robot navigation in crowded environments, exploring deep reinforcement learning-based navigation frameworks and their advantages over traditional frameworks based on simultaneous localization and mapping. We then compare and analyze the relationships and differences among three types of navigation: autonomous navigation, navigation based on simultaneous localization and mapping, and planning-based navigation. The crowded environments considered comprise static obstacles, dynamic obstacles, and combinations of both across typical application scenarios. Finally, we offer insights into the evolution of deep reinforcement learning-based navigation, identifying open problems and potential solutions in this emerging field.
KW - Crowded environment
KW - Deep reinforcement learning
KW - Mobile robot navigation
KW - Types of navigation
UR - http://www.scopus.com/inward/record.url?scp=85209697880&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85209697880&partnerID=8YFLogxK
U2 - 10.1007/s10846-024-02198-w
DO - 10.1007/s10846-024-02198-w
M3 - Article
AN - SCOPUS:85209697880
SN - 0921-0296
VL - 110
JO - Journal of Intelligent and Robotic Systems: Theory and Applications
JF - Journal of Intelligent and Robotic Systems: Theory and Applications
IS - 4
M1 - 158
ER -