Vision-Based Learning from Demonstration System for Robot Arms

Pin Jui Hwang, Chen Chien Hsu*, Po Yung Chou, Wei Yen Wang, Cheng Hung Lin

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

Abstract

Robotic arms have been widely used in various industries and have the advantages of cost savings, high productivity, and efficiency. Although robotic arms are good at increasing efficiency in repetitive tasks, they still need to be re-programmed and optimized when new tasks are to be deployed, resulting in detrimental downtime and high cost. It is therefore the objective of this paper to present a learning from demonstration (LfD) robotic system to provide a more intuitive way for robots to efficiently perform tasks through learning from human demonstration on the basis of two major components: understanding through human demonstration and reproduction by robot arm. To understand human demonstration, we propose a vision-based spatial-temporal action detection method to detect human actions that focuses on meticulous hand movement in real time to establish an action base. An object trajectory inductive method is then proposed to obtain a key path for objects manipulated by the human through multiple demonstrations. In robot reproduction, we integrate the sequence of actions in the action base and the key path derived by the object trajectory inductive method for motion planning to reproduce the task demonstrated by the human user. Because of the capability of learning from demonstration, the robot can reproduce the tasks that the human demonstrated with the help of vision sensors in unseen contexts.
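The pipeline described above, detecting a sequence of demonstrated actions, inducing a single key path from multiple object trajectories, and combining both for robot motion planning, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Action` structure, the resample-and-average trajectory induction, and the `plan_motion` helper are all hypothetical simplifications of the methods the abstract names.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class Action:
    """One entry in the action base, e.g. a detected hand action."""
    name: str        # e.g. "grasp", "move", "release" (illustrative labels)
    t_start: float   # start time of the action in the demonstration
    t_end: float     # end time of the action in the demonstration

def induce_key_path(demos: List[np.ndarray], n_points: int = 50) -> np.ndarray:
    """Toy stand-in for the object trajectory inductive method:
    resample each demonstrated object trajectory (T_i, 3) to a common
    length, then average across demonstrations to obtain one key path."""
    resampled = []
    for traj in demos:
        t_old = np.linspace(0.0, 1.0, len(traj))
        t_new = np.linspace(0.0, 1.0, n_points)
        resampled.append(np.stack(
            [np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])],
            axis=1))
    return np.mean(resampled, axis=0)          # shape (n_points, 3)

def plan_motion(action_base: List[Action],
                key_path: np.ndarray) -> List[Tuple[str, np.ndarray]]:
    """Pair each action with the key-path segment that falls inside its
    time window, yielding a simple plan for the robot to reproduce."""
    total = max(a.t_end for a in action_base)
    plan = []
    for a in action_base:
        lo = int(a.t_start / total * (len(key_path) - 1))
        hi = int(a.t_end / total * (len(key_path) - 1)) + 1
        plan.append((a.name, key_path[lo:hi]))
    return plan

# Two demonstrations of moving an object along roughly the same path
demo1 = np.linspace([0.0, 0.0, 0.0], [1.0, 1.0, 0.0], 30)
demo2 = np.linspace([0.0, 0.0, 0.0], [1.0, 1.0, 0.0], 40)
key_path = induce_key_path([demo1, demo2])

actions = [Action("grasp", 0.0, 1.0),
           Action("move", 1.0, 4.0),
           Action("release", 4.0, 5.0)]
plan = plan_motion(actions, key_path)
```

The sketch reflects the paper's two-stage structure: understanding (actions plus induced key path) feeds reproduction (a per-action motion plan), while the actual system performs these steps from real-time vision input.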

Original language: English
Article number: 2678
Journal: Sensors
Volume: 22
Issue number: 7
DOIs
Publication status: Published - 1 Apr 2022

ASJC Scopus subject areas

  • Analytical Chemistry
  • Information Systems
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering

