TY - GEN
T1 - A vision-based human action recognition system for companion robots and human interaction
AU - Chiang, Meng Lin
AU - Feng, Jian Kai
AU - Zeng, Wen Lin
AU - Fang, Chiung Yao
AU - Chen, Sei Wang
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/12
Y1 - 2018/12
N2 - This paper presents a vision-based human action recognition system to support human interaction for companion robots. The system is divided into three parts: motion map construction, feature extraction, and human action classification. First, the Kinect 2.0 captures depth images and color images simultaneously using its depth sensor and RGB camera. Second, the information in the depth images and the color images is used to construct three depth motion maps and a color motion map, respectively. These maps are then combined into one image, from which the corresponding histogram of oriented gradients (HOG) features are calculated. Finally, a support vector machine (SVM) classifies these HOG features as human actions. The proposed system can recognize eight kinds of human actions: waving the left hand, waving the right hand, holding the left hand, holding the right hand, hugging, bowing, walking, and punching. Three databases were used to test the proposed system: Database1 includes videos of adult actions, Database2 includes videos of child actions, and Database3 includes human action videos taken with a moving camera. The recognition accuracy rates for the three tests were 88.7%, 74.37%, and 51.25%, respectively. The experimental results show that the proposed system is efficient and robust.
AB - This paper presents a vision-based human action recognition system to support human interaction for companion robots. The system is divided into three parts: motion map construction, feature extraction, and human action classification. First, the Kinect 2.0 captures depth images and color images simultaneously using its depth sensor and RGB camera. Second, the information in the depth images and the color images is used to construct three depth motion maps and a color motion map, respectively. These maps are then combined into one image, from which the corresponding histogram of oriented gradients (HOG) features are calculated. Finally, a support vector machine (SVM) classifies these HOG features as human actions. The proposed system can recognize eight kinds of human actions: waving the left hand, waving the right hand, holding the left hand, holding the right hand, hugging, bowing, walking, and punching. Three databases were used to test the proposed system: Database1 includes videos of adult actions, Database2 includes videos of child actions, and Database3 includes human action videos taken with a moving camera. The recognition accuracy rates for the three tests were 88.7%, 74.37%, and 51.25%, respectively. The experimental results show that the proposed system is efficient and robust.
KW - Color motion map
KW - Companion robots
KW - Depth motion map
KW - Histogram of oriented gradients (HOG)
KW - Human action recognition
KW - Support vector machine (SVM)
UR - http://www.scopus.com/inward/record.url?scp=85070823180&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85070823180&partnerID=8YFLogxK
U2 - 10.1109/CompComm.2018.8780777
DO - 10.1109/CompComm.2018.8780777
M3 - Conference contribution
AN - SCOPUS:85070823180
T3 - 2018 IEEE 4th International Conference on Computer and Communications, ICCC 2018
SP - 1445
EP - 1452
BT - 2018 IEEE 4th International Conference on Computer and Communications, ICCC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 4th IEEE International Conference on Computer and Communications, ICCC 2018
Y2 - 7 December 2018 through 10 December 2018
ER -