TY - GEN
T1 - Development of a Mimic Robot
T2 - 23rd IEEE International Symposium on Consumer Technologies, ISCT 2019
AU - Hwang, Pin Jui
AU - Hsu, Chen Chien
AU - Wang, Wei Yen
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/6
Y1 - 2019/6
N2 - With the rise of the DIY movement and the maker economy, demand for low-volume automation (LVA) applications is expected to grow. To this end, this paper addresses a learning from demonstration (LfD) problem, where a robot is taught through demonstrated actions to operate a coffee maker. The system uses the YOLO deep learning architecture to recognize objects such as cups and coffee capsules. Employing a Kinect RGB-D camera, the robot obtains the coordinates of the objects and the corresponding movement trajectories. Integrating these two techniques, the robot recognizes the demonstrated actions and establishes an action database comprising several sub-actions, such as moving a cup, pouring coffee, and triggering the coffee machine. Finally, the robot performs the manipulation by following the order of the demonstrated actions. As a result, a vision-based LfD system is established, allowing the robot to learn from human demonstrations and act accordingly.
AB - With the rise of the DIY movement and the maker economy, demand for low-volume automation (LVA) applications is expected to grow. To this end, this paper addresses a learning from demonstration (LfD) problem, where a robot is taught through demonstrated actions to operate a coffee maker. The system uses the YOLO deep learning architecture to recognize objects such as cups and coffee capsules. Employing a Kinect RGB-D camera, the robot obtains the coordinates of the objects and the corresponding movement trajectories. Integrating these two techniques, the robot recognizes the demonstrated actions and establishes an action database comprising several sub-actions, such as moving a cup, pouring coffee, and triggering the coffee machine. Finally, the robot performs the manipulation by following the order of the demonstrated actions. As a result, a vision-based LfD system is established, allowing the robot to learn from human demonstrations and act accordingly.
KW - Deep Learning
KW - Learning from Demonstration
KW - Mimic Robot
KW - YOLO
UR - http://www.scopus.com/inward/record.url?scp=85075638723&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85075638723&partnerID=8YFLogxK
U2 - 10.1109/ISCE.2019.8901025
DO - 10.1109/ISCE.2019.8901025
M3 - Conference contribution
AN - SCOPUS:85075638723
T3 - 2019 IEEE 23rd International Symposium on Consumer Technologies, ISCT 2019
SP - 124
EP - 127
BT - 2019 IEEE 23rd International Symposium on Consumer Technologies, ISCT 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 19 June 2019 through 21 June 2019
ER -