Abstract
Lidar sensors are commonly equipped on a mobile mapping system (MMS) to establish point clouds for HD map creation. However, the point clouds themselves do not contain object attributes. Therefore, human operators have to manually obtain objects' positions to assign attributes for further HD map conversion, inevitably resulting in time-consuming processes and significant labor costs. To solve these problems, in this paper we present an MMS equipped with a non-survey-grade Lidar, a commercial-grade camera, and an entry-level GNSS/INS, which incorporates ground control points (GCPs) with Normal Distribution Transform Simultaneous Localization and Mapping (NDT SLAM) refinement and fluctuation adjustment to secure both the absolute and relative position accuracy of the reconstructed point cloud. Meanwhile, a deep neural network for image detection is employed to obtain the bounding box of traffic signs in each image frame. By applying the translation and rotation transformation between Lidar points and camera pixels, the intersection of the detected object in the image and the Lidar scan points can be found. By accumulating the extracted Lidar points of a traffic sign over several detection frames, we can then obtain an accurate 3D geodetic coordinate of the traffic sign. Experimental results show that point clouds can be reconstructed with an average 3D RMSE of only 8.6 cm, and the geodetic coordinates of traffic sign centers can be further extracted with sub-meter accuracy, significantly reducing manual labor in HD map creation.
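The abstract describes applying a translation and rotation transformation between Lidar points and camera pixels to find the intersection of a detected traffic sign and the Lidar scan. Below is a minimal sketch of such a Lidar-to-camera projection, assuming a pinhole intrinsic matrix `K` and an extrinsic rotation `R` and translation `t`; all numeric values and the bounding-box format are illustrative placeholders, not the paper's actual calibration or detection pipeline.

```python
import numpy as np

# Assumed pinhole intrinsics and Lidar-to-camera extrinsics (illustrative only).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                       # assumed Lidar-to-camera rotation
t = np.array([0.1, -0.2, 0.0])      # assumed Lidar-to-camera translation (m)

def points_in_bbox(lidar_xyz, bbox):
    """Return the Lidar points whose image projection falls inside
    a detected bounding box bbox = (u_min, v_min, u_max, v_max)."""
    # Transform Lidar points (N x 3) into the camera frame.
    cam = (R @ lidar_xyz.T).T + t
    in_front = cam[:, 2] > 0.0      # keep only points in front of the camera
    cam = cam[in_front]
    # Pinhole projection to pixel coordinates.
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u_min, v_min, u_max, v_max = bbox
    mask = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
            (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return lidar_xyz[in_front][mask]
```

Accumulating the points selected this way over several detection frames (after transforming them into a common geodetic frame) and averaging them would yield a 3D estimate of the sign center, which is the general idea the abstract outlines.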
| Original language | English |
|---|---|
| Pages (from-to) | 117374-117384 |
| Number of pages | 11 |
| Journal | IEEE Access |
| Volume | 10 |
| DOIs | |
| Publication status | Published - 2022 |
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering
- Electrical and Electronic Engineering