TY - GEN
T1 - Push recovery and active balancing for inexpensive humanoid robots using RL and DRL
AU - Hosseinmemar, Amirhossein
AU - Anderson, John
AU - Baltes, Jacky
AU - Lau, Meng Cheng
AU - Wang, Ziang
N1 - Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020
Y1 - 2020
N2 - Push recovery of a humanoid robot is a challenging task because of the many different levels of control and behaviour involved, from walking gait to dynamic balancing. This research focuses on the active balancing and push recovery problems that allow inexpensive humanoid robots to balance while standing and walking, and to compensate for external forces. In this research, we have proposed a push recovery mechanism that employs two machine learning techniques, Reinforcement Learning and Deep Reinforcement Learning, to learn recovery step trajectories during push recovery using closed-loop feedback control. We have implemented a 3D model using the Robot Operating System and Gazebo. To reduce wear and tear on the real robot, we used this model to learn the recovery steps for different impact strengths and directions. We evaluated our approach both in the real world and in simulation. All the real-world experiments were performed by Polaris, a teen-sized humanoid robot.
AB - Push recovery of a humanoid robot is a challenging task because of the many different levels of control and behaviour involved, from walking gait to dynamic balancing. This research focuses on the active balancing and push recovery problems that allow inexpensive humanoid robots to balance while standing and walking, and to compensate for external forces. In this research, we have proposed a push recovery mechanism that employs two machine learning techniques, Reinforcement Learning and Deep Reinforcement Learning, to learn recovery step trajectories during push recovery using closed-loop feedback control. We have implemented a 3D model using the Robot Operating System and Gazebo. To reduce wear and tear on the real robot, we used this model to learn the recovery steps for different impact strengths and directions. We evaluated our approach both in the real world and in simulation. All the real-world experiments were performed by Polaris, a teen-sized humanoid robot.
KW - Active balancing
KW - Deep reinforcement learning
KW - Inexpensive humanoid robots
KW - Push recovery
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85091283816&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091283816&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-55789-8_6
DO - 10.1007/978-3-030-55789-8_6
M3 - Conference contribution
AN - SCOPUS:85091283816
SN - 9783030557881
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 63
EP - 74
BT - Trends in Artificial Intelligence Theory and Applications. Artificial Intelligence Practices - 33rd International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2020, Proceedings
A2 - Fujita, Hamido
A2 - Sasaki, Jun
A2 - Fournier-Viger, Philippe
A2 - Ali, Moonis
PB - Springer Science and Business Media Deutschland GmbH
T2 - 33rd International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2020
Y2 - 22 September 2020 through 25 September 2020
ER -