Toward the flexible automation for robot learning from human demonstration using multimodal perception approach

Jing Hao Chen, Guan Yi Lu, Yi Hsing Chien, Hsin Han Chiang, Wei Yen Wang, Chen Chien Hsu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This study proposes a multi-modal perception approach that enables a robotic arm to perform flexible automation and simplifies the complicated coding process otherwise required to control the arm. A depth camera is used to detect faces and hand gestures in order to recognize the operator's identity and commands. In addition, the kinematics of the robotic arm, associated with the positions of the manipulated objects, can be derived from information gathered through human demonstrations and object detection. In the experiments, the proposed multi-modal perception system first recognizes the operator. The operator then demonstrates a task, generating learning data with the assistance of gestures. Afterward, the robotic arm performs the same task as demonstrated. While imitating the task, the robotic arm can also be guided by the operator's gesture commands.

Original language: English
Title of host publication: Proceedings of 2019 International Conference on System Science and Engineering, ICSSE 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 148-153
Number of pages: 6
ISBN (Electronic): 9781728105253
DOI: 10.1109/ICSSE.2019.8823444
Publication status: Published - 2019 Jul
Event: 2019 International Conference on System Science and Engineering, ICSSE 2019 - Dong Hoi City, Quang Binh Province, Viet Nam
Duration: 2019 Jul 20 - 2019 Jul 21

Publication series

Name: Proceedings of 2019 International Conference on System Science and Engineering, ICSSE 2019

Conference

Conference: 2019 International Conference on System Science and Engineering, ICSSE 2019
Country: Viet Nam
City: Dong Hoi City, Quang Binh Province
Period: 19/7/20 - 19/7/21

Keywords

  • face recognition
  • gesture recognition
  • human demonstration
  • multi-modal perception
  • object recognition

ASJC Scopus subject areas

  • Energy Engineering and Power Technology
  • Safety, Risk, Reliability and Quality
  • Control and Optimization
  • Computer Networks and Communications
  • Hardware and Architecture
  • Information Systems and Management

Cite this

Chen, J. H., Lu, G. Y., Chien, Y. H., Chiang, H. H., Wang, W. Y., & Hsu, C. C. (2019). Toward the flexible automation for robot learning from human demonstration using multimodal perception approach. In Proceedings of 2019 International Conference on System Science and Engineering, ICSSE 2019 (pp. 148-153). [8823444] (Proceedings of 2019 International Conference on System Science and Engineering, ICSSE 2019). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICSSE.2019.8823444
