Automatic road segmentation of traffic images

Chiung Yao Fang, Han Ping Chou, Jung Ming Wang, Sei Wang Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Automatic road segmentation plays an important role in many vision-based traffic applications. It provides a priori information for suppressing interference from irrelevant objects, activities, and events that take place outside road areas. The proposed road segmentation method consists of four major steps: background-shadow model generation and updating, moving object detection and tracking, background pasting, and road location. The full road surface is finally recovered from the preliminary one using a progressive fuzzy-theoretic shadowed-sets technique. A large number of video sequences of traffic scenes captured under various conditions have been employed to demonstrate the feasibility of the proposed road segmentation method.
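For intuition only, below is a minimal Python/OpenCV sketch of the pipeline's flavor, not the authors' implementation. It substitutes OpenCV's MOG2 background subtractor (which flags shadow pixels) for the paper's background-shadow model, approximates "background pasting" by accumulating votes from pixels that detected vehicles traverse, and applies a shadowed-set style three-way decision (after Pedrycz) in place of the paper's progressive fuzzy-theoretic technique. The video path and all thresholds are hypothetical.

```python
import cv2
import numpy as np

# Stand-in for the background-shadow model: MOG2 with shadow detection
# labels shadow pixels 127, moving objects 255, background 0.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input sequence
road_votes = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    moving = (mask == 255).astype(np.float32)
    # Rough analogue of "background pasting": pixels repeatedly traversed
    # by vehicles accumulate votes for belonging to the road surface.
    road_votes = moving if road_votes is None else road_votes + moving

cap.release()
if road_votes is None:
    raise SystemExit("no frames read")

# Shadowed-set style decision (after Pedrycz): high memberships become road,
# low ones become non-road, and the band in between is left as a "shadow"
# zone of uncertainty to be resolved by further evidence.
membership = road_votes / max(float(road_votes.max()), 1.0)
alpha = 0.2  # hypothetical elevation/reduction threshold
road = membership >= 1.0 - alpha
uncertain = (membership > alpha) & (membership < 1.0 - alpha)
```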

Original language: English
Title of host publication: VISAPP 2015 - 10th International Conference on Computer Vision Theory and Applications; VISIGRAPP, Proceedings
Editors: Jose Braz, Sebastiano Battiato, Francisco Imai
Publisher: SciTePress
Pages: 469-477
Number of pages: 9
ISBN (Electronic): 9789897580901
DOIs
Publication status: Published - 2015
Event: 10th International Conference on Computer Vision Theory and Applications, VISAPP 2015 - Berlin, Germany
Duration: 2015 Mar 11 – 2015 Mar 14

Publication series

Name: VISAPP 2015 - 10th International Conference on Computer Vision Theory and Applications; VISIGRAPP, Proceedings
Volume: 2

Other

Other: 10th International Conference on Computer Vision Theory and Applications, VISAPP 2015
Country/Territory: Germany
City: Berlin
Period: 2015/03/11 – 2015/03/14

Keywords

  • Background-shadow model
  • Fuzzy decision
  • Shadowed set

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition

