TY - GEN
T1 - Enhanced visual odometry algorithm based on elite selection method and voting system
AU - Shen, Hao
AU - Hsu, Chen Chien
AU - Wang, Wei Yen
AU - Wang, Yin Tien
N1 - Funding Information:
This research is supported by the “Center of Learning Technology for Chinese” and “Aim for the Top University Project” of National Taiwan Normal University (NTNU), sponsored by the Ministry of Education, Taiwan, and the Ministry of Science and Technology, Taiwan, under Grant no. MOST 105-2221-E-003-010.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/14
Y1 - 2017/12/14
N2 - In this paper, we address the problems of camera pose estimation accuracy and runtime efficiency by incorporating an elite selection method and a voting system into a conventional visual odometry (VO) method, called the 'enhanced VO algorithm'. The elite selection method improves the efficiency of the perspective-3-point (P3P) algorithm by employing only an elite subset of landmarks to estimate the camera pose. The proposed voting system, on the other hand, provides a reliable consensus set derived from the random sample consensus (RANSAC) algorithm so that the accuracy of camera pose estimation can be increased. To verify the performance of the proposed approach, we conducted various experiments using a Kinect RGB-D sensor, and the results show that the proposed VO system performs well in terms of not only estimation accuracy but also computational time.
AB - In this paper, we address the problems of camera pose estimation accuracy and runtime efficiency by incorporating an elite selection method and a voting system into a conventional visual odometry (VO) method, called the 'enhanced VO algorithm'. The elite selection method improves the efficiency of the perspective-3-point (P3P) algorithm by employing only an elite subset of landmarks to estimate the camera pose. The proposed voting system, on the other hand, provides a reliable consensus set derived from the random sample consensus (RANSAC) algorithm so that the accuracy of camera pose estimation can be increased. To verify the performance of the proposed approach, we conducted various experiments using a Kinect RGB-D sensor, and the results show that the proposed VO system performs well in terms of not only estimation accuracy but also computational time.
KW - Kinect RGB-D sensor
KW - Perspective-3-point
KW - RANSAC
KW - SURF
KW - Visual odometry
UR - http://www.scopus.com/inward/record.url?scp=85044006460&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85044006460&partnerID=8YFLogxK
U2 - 10.1109/ICCE-Berlin.2017.8210602
DO - 10.1109/ICCE-Berlin.2017.8210602
M3 - Conference contribution
AN - SCOPUS:85044006460
T3 - IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin
SP - 99
EP - 100
BT - 2017 IEEE 7th International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2017
PB - IEEE Computer Society
T2 - 7th IEEE International Conference on Consumer Electronics - Berlin, ICCE-Berlin 2017
Y2 - 3 September 2017 through 6 September 2017
ER -