Design and Implementation of a High-Performance SOPC-Based Autonomous Navigation System

Project: Government, Ministry of Science and Technology

Project Details

Description

This project proposes a visual simultaneous localization and mapping (V-SLAM) system for self-navigating robots. In an unknown environment, the robot builds a three-dimensional map and at the same time estimates its own state on that map. With the aid of an FPGA hardware-acceleration circuit, the system achieves low cost, low power consumption, and high computational efficiency.

In the V-SLAM system, images are compared mainly through a scale-invariant feature transform (SIFT) computation applied to binocular vision. Each captured image first passes through a Gaussian blur, which suppresses fine layers and detail and reduces noise in the image. Two Gaussian-blurred images with different weights are then subtracted to obtain a difference image, from which the extreme points of the image can be located. To distinguish the real feature points among these extrema, we differentiate the difference image to find points with medium-to-high contrast; at the same time, we apply a Harris corner detector to the image. A point that satisfies all three of these conditions is taken as a feature point of the image.

To match the same feature point across different images, we use the Gaussian-blurred image, which contains less noise and detail, to gather statistics of the pixel-value variation in eight directions around every pixel. In this way each feature point gains a more descriptive set of parameters (its descriptor). Since this project uses binocular vision, and excluding the cases where an object is imaged within the focal length or one lens is occluded, the images captured by the two lenses should be similar; that is, the coordinates of corresponding feature points in the two images will not lie far apart.
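The detection steps above (Gaussian blur, difference image, contrast screening) can be sketched in software. The following Python/NumPy sketch implements only the blur, difference-of-Gaussians, and contrast-check stages; the sigma values and contrast threshold are illustrative assumptions, not the project's actual FPGA parameters, and the Harris screening stage is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(img, sigma1=1.0, sigma2=1.6, contrast_thresh=0.03):
    """Locate difference-of-Gaussians extrema as feature-point candidates.

    sigma1/sigma2 and contrast_thresh are illustrative choices only.
    """
    img = img.astype(np.float64) / 255.0
    g1 = gaussian_filter(img, sigma1)   # lighter blur
    g2 = gaussian_filter(img, sigma2)   # heavier blur
    dog = g1 - g2                       # difference image

    # a pixel is a candidate if it is the max or min of its 3x3 neighbourhood
    is_max = dog == maximum_filter(dog, size=3)
    is_min = dog == minimum_filter(dog, size=3)

    # keep only medium-to-high-contrast responses
    strong = np.abs(dog) > contrast_thresh

    ys, xs = np.nonzero((is_max | is_min) & strong)
    return list(zip(ys.tolist(), xs.tolist()))
```

A bright isolated dot, for example, produces a positive DoG peak at its centre and is reported as a candidate feature point.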
Also, since the hardware of this system before the matching stage is SISO (single input, single output), we exploit the above property and propose a simple feature-point matching method. First, through serial access, the coordinates and descriptors of the earlier feature points from the left-lens image are stored in a small block of RAM. When the first feature point of the right-lens image arrives, the RAM is triggered: the coordinates and descriptors of the stored feature points are read out in parallel, and the matching point is found. To make the system more competitive, distance measurement is added. The actual distance between an object and the camera is calculated from the relationship between the matched feature points and the binocular camera geometry. In this way, no additional sensing components are needed for measurement, effectively reducing the robot's hardware and cost. In the future, this function can be extended within the V-SLAM system, achieving not only a low-cost, low-power implementation on a low-end platform but also real-time matching and ranging of feature points.
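The matching heuristic and binocular ranging described above can be sketched as follows. This is a minimal software analogue of the hardware scheme: the row tolerance, descriptor-distance threshold, and camera parameters (`focal_px`, `baseline_m`) are hypothetical, and the standard pinhole-stereo relation Z = f·B/disparity stands in for the project's hardware datapath:

```python
import numpy as np

def match_and_range(left_feats, right_feats, focal_px, baseline_m,
                    max_dy=2, max_desc_dist=0.5):
    """Match left/right feature points and estimate object distance.

    left_feats / right_feats: lists of ((x, y), descriptor) pairs, where
    descriptor is an 8-bin orientation histogram (np.ndarray).
    focal_px (focal length in pixels) and baseline_m (lens separation in
    metres) are hypothetical camera parameters.
    """
    results = []
    for (xl, yl), dl in left_feats:
        best = None
        for (xr, yr), dr in right_feats:
            # for a rectified stereo pair, matching points lie on nearly
            # the same row, and the right image's x coordinate is smaller
            if abs(yl - yr) > max_dy or xr > xl:
                continue
            dist = np.linalg.norm(dl - dr)
            if dist <= max_desc_dist and (best is None or dist < best[0]):
                best = (dist, xl - xr)
        if best is not None and best[1] > 0:
            disparity = best[1]
            # pinhole stereo geometry: Z = f * B / disparity
            results.append(((xl, yl), focal_px * baseline_m / disparity))
    return results
```

For instance, with an assumed 700-pixel focal length and a 0.1 m baseline, a matched pair 10 pixels apart yields an estimated depth of 7 m.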
Status: Finished
Effective start/end date: 2017/08/01 – 2019/10/31

Keywords

  • visual simultaneous localization and mapping
  • map building
  • binocular vision
  • scale invariant feature transform
  • FPGA
  • feature detection
  • feature descriptor
  • feature matching
  • distance measurement
