TY - JOUR
T1 - Vision-Based Mobile Collaborative Robot Incorporating a Multicamera Localization System
AU - Hsu, Chen-Chien James
AU - Hwang, Pin-Jui
AU - Wang, Wei-Yen
AU - Wang, Yin-Tien
AU - Lu, Cheng-Kai
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/9/15
Y1 - 2023/9/15
N2 - As the Industry 4.0 landscape unfolds, collaborative robots (cobots) play an important role in intelligent manufacturing. Compared with conventional industrial robots, cobots are more flexible and more intuitive to program, especially for industrial and home-service applications; however, several issues remain to be solved, including natural understanding of human intention, adaptability in task execution, and robot mobility in the working environment. To address these problems, this article proposes a modularized solution for mobile cobot systems, in which a cobot equipped with a multicamera localization scheme for self-localization understands human intention from voice commands and executes tasks in unseen scenarios within a small working area. For intention understanding, we devise a natural language processing approach that establishes an action base describing human commands. Guided by the action base, the robot executes tasks by planning a trajectory with the help of an object localization module, which fuses the point cloud with objects detected by YOLOv4 to locate each object's position in 3-D space. Depending on where the cobot interacts with an object, it may need to navigate around the working environment. We therefore also build a low-cost, high-efficiency multicamera localization system based on ArUco markers to locate the mobile cobot over a larger sensing area. Experimental results show that the proposed vision-based mobile cobot can successfully interact with a human operator to assemble a wooden chair in a small workshop.
KW - Collaborative robot (cobot)
KW - mobile cobot
KW - multicamera localization
KW - natural language processing (NLP)
UR - http://www.scopus.com/inward/record.url?scp=85167778162&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85167778162&partnerID=8YFLogxK
U2 - 10.1109/JSEN.2023.3300301
DO - 10.1109/JSEN.2023.3300301
M3 - Article
AN - SCOPUS:85167778162
SN - 1530-437X
VL - 23
SP - 21853
EP - 21861
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
IS - 18
ER -