The goal of this project is to create novel algorithms and technologies to support highly dynamic, high-accuracy mobile manipulation tasks. The proposed research uses driving an e-scooter as a benchmark application. Controlling an e-scooter is an interesting challenge because it requires motions that are both highly dynamic and highly accurate. Another distinguishing feature of this research is that we use an unmodified humanoid robot and an unmodified e-scooter. For example, instead of mounting a balancer or controlling the throttle via the OBD bus, the robot uses its arms and grippers to steer the e-scooter, turn the throttle to control velocity, and pull the brake lever to slow down. Current state-of-the-art approaches in robotics fail at this task. Since many parameters (e.g., backlash in the gears, stiffness of the steering column) are unknown, I propose a reinforcement-learning-based approach. I have previously demonstrated the usefulness of reinforcement learning for learning motion plans to manipulate simple planar objects. However, standard deep reinforcement learning (DRL) approaches are difficult to apply in complex multi-dimensional scenarios. In this research, I propose an extension of the soft actor-critic approach. Soft actor-critic methods include a bonus term that rewards high-entropy policies, encouraging exploration and preventing premature convergence during training. I extend the soft actor-critic model to improve performance on hybrid action spaces, that is, action spaces that include both continuous and discrete actions. We will demonstrate the effectiveness of our research in international robot competitions as well as several field trials targeting Taiwanese sports and industry. The final goal of this project is to have our robot and e-scooter pass the Taiwanese scooter license test. This would be an important achievement in intelligent robotics.
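For reference, the entropy bonus described above corresponds to the standard maximum-entropy objective used in soft actor-critic, which can be sketched as:

$$
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
$$

where $r$ is the reward, $\rho_\pi$ the state-action distribution induced by policy $\pi$, and $\alpha$ a temperature parameter trading off reward against policy entropy $\mathcal{H}$. How the proposed extension evaluates this entropy term over a joint continuous-discrete (hybrid) action distribution is not specified in this abstract.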
Effective start/end date: 2021/08/01 → 2022/07/31