TY - GEN
T1 - Development of a Mimic Robot
T2 - 23rd IEEE International Symposium on Consumer Technologies, ISCT 2019
AU - Hwang, Pin Jui
AU - Hsu, Chen Chien
AU - Wang, Wei Yen
N1 - Funding Information:
This work was financially supported by the “Chinese Language and Technology Center” of National Taiwan Normal University (NTNU) under The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, and by the Ministry of Science and Technology, Taiwan, under Grant Nos. MOST 108-2634-F-003-002 and MOST 108-2634-F-003-003 through Pervasive Artificial Intelligence Research (PAIR) Labs. We are grateful to the National Center for High-performance Computing for computer time and facilities to conduct this research.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - With the trends of DIY movements and the maker economy, great demand for low volume automation (LVA) applications is foreseeable. To this end, a learning from demonstration (LfD) problem is addressed in this paper, where a robot is taught through demonstrated actions to operate a coffee maker. The system uses the YOLO deep learning architecture to recognize objects such as cups and coffee capsules. Employing a Kinect RGB-D camera, the robot obtains the coordinates of the objects and the corresponding moving trajectories. Integrating these two techniques, the robot recognizes the demonstrated actions and establishes an action database comprising several sub-actions such as moving a cup, pouring coffee, and triggering the coffee machine. Finally, the robot performs the manipulation by following the order of the demonstrated actions. As a result, a vision-based LfD system is established, allowing the robot to learn from human demonstrations and act accordingly.
AB - With the trends of DIY movements and the maker economy, great demand for low volume automation (LVA) applications is foreseeable. To this end, a learning from demonstration (LfD) problem is addressed in this paper, where a robot is taught through demonstrated actions to operate a coffee maker. The system uses the YOLO deep learning architecture to recognize objects such as cups and coffee capsules. Employing a Kinect RGB-D camera, the robot obtains the coordinates of the objects and the corresponding moving trajectories. Integrating these two techniques, the robot recognizes the demonstrated actions and establishes an action database comprising several sub-actions such as moving a cup, pouring coffee, and triggering the coffee machine. Finally, the robot performs the manipulation by following the order of the demonstrated actions. As a result, a vision-based LfD system is established, allowing the robot to learn from human demonstrations and act accordingly.
KW - Deep Learning
KW - Learning from Demonstration
KW - Mimic Robot
KW - YOLO
UR - http://www.scopus.com/inward/record.url?scp=85075638723&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075638723&partnerID=8YFLogxK
U2 - 10.1109/ISCE.2019.8901025
DO - 10.1109/ISCE.2019.8901025
M3 - Conference contribution
AN - SCOPUS:85075638723
T3 - 2019 IEEE 23rd International Symposium on Consumer Technologies, ISCT 2019
SP - 124
EP - 127
BT - 2019 IEEE 23rd International Symposium on Consumer Technologies, ISCT 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 June 2019 through 21 June 2019
ER -