This paper presents a vision-based human action recognition system to support human-robot interaction for companion robots. The system comprises three stages: motion map construction, feature extraction, and human action classification. First, a Kinect 2.0 captures depth images and color images simultaneously using its depth sensor and RGB camera. Second, the depth images and color images are used to construct three depth motion maps and one color motion map, respectively. These maps are then combined into a single image, from which the corresponding histogram of oriented gradients (HOG) features are computed. Finally, a support vector machine (SVM) classifies the HOG features into human actions. The proposed system recognizes eight human actions: waving the left hand, waving the right hand, holding the left hand, holding the right hand, hugging, bowing, walking, and punching. Three databases were used to evaluate the proposed system: Database1 contains videos of adult actions, Database2 contains videos of child actions, and Database3 contains human action videos captured with a moving camera. The recognition accuracy rates on the three databases were 88.7%, 74.37%, and 51.25%, respectively. The experimental results show that the proposed system is efficient and robust.
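The abstract does not detail how the motion maps are built; a common formulation of a depth motion map accumulates thresholded absolute differences between consecutive depth frames, so that pixels that change over the sequence become bright. The sketch below illustrates that idea only; the function name `depth_motion_map` and the `threshold` parameter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def depth_motion_map(frames, threshold=0.0):
    """Accumulate absolute inter-frame differences of a depth sequence.

    frames: array-like of shape (T, H, W) holding T depth frames
            (hypothetical input format, assumed for this sketch).
    Returns an (H, W) map; larger values mark regions that moved
    more over the sequence.
    """
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W) frame-to-frame change
    diffs[diffs <= threshold] = 0.0          # suppress small sensor noise
    return diffs.sum(axis=0)

# Toy sequence: a bright 2x2 block moving one pixel to the right per frame.
T, H, W = 4, 6, 6
seq = np.zeros((T, H, W))
for t in range(T):
    seq[t, 2:4, t:t + 2] = 100.0

dmm = depth_motion_map(seq)
```

In a full pipeline along the lines the abstract describes, maps like `dmm` (one per projection view, plus a color motion map) would be stacked into one image, HOG features extracted from it, and those features fed to an SVM classifier.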