3D reconstruction is an important topic for emerging applications such as smart robotics, virtual reality (VR), augmented reality (AR), and autonomous driving. The RGB-D simultaneous localization and mapping (SLAM) technique is widely used in the reconstruction process. However, low-light and low-texture environments often yield insufficient point features and cause reconstruction to fail. To address this problem, we propose a robust RGB-D SLAM system that exploits high dynamic range (HDR) image information, called HDR-based SLAM. A deep learning based HDR generation method is adopted to map a single low dynamic range (LDR) image to a radiance map, which is normalized to exclude the influence of exposure time. In the feature matching step, we retrain the ORB descriptor patch to fit the normalized radiance maps. The proposed method improves quantitative camera trajectory accuracy and the qualitative result of geometry reconstruction. Experimental results show that the proposed method outperforms standard range imaging SLAM in challenging low-light environments, which helps expand the applicability of 3D reconstruction systems.
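The exposure-time normalization described above can be sketched as follows. This is a minimal illustration, not the paper's method: a simple gamma curve stands in for the learned inverse camera response of the HDR network, and the function name and min-max normalization scheme are assumptions for the sketch.

```python
import numpy as np

def ldr_to_normalized_radiance(ldr, exposure_time, gamma=2.2):
    """Map an LDR image (0-255) to a radiance map and normalize away
    the exposure time.  The gamma curve is a hypothetical stand-in
    for a learned inverse camera response function."""
    # Invert the assumed camera response to recover linear values
    # proportional to (scene radiance * exposure time)
    linear = (np.asarray(ldr, dtype=np.float64) / 255.0) ** gamma
    # Divide out the exposure time so frames captured at different
    # shutter speeds map to comparable radiance values
    radiance = linear / exposure_time
    # Rescale to [0, 1] so feature descriptors see a consistent range
    r_min, r_max = radiance.min(), radiance.max()
    return (radiance - r_min) / (r_max - r_min + 1e-12)
```

Under this model, two captures of the same scene taken with different exposure times map to the same normalized radiance map, which is what allows a single retrained descriptor to match features across exposure conditions.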