This paper presents an automatic road sign detection and recognition system based on a computational model of human visual recognition processing. Road signs are typically placed either by the roadside or above roads. They provide important information for guiding, warning, or regulating the behavior of drivers in order to make driving safer and easier. The proposed system consists of three major components: sensory, perceptual, and conceptual analyzers. The sensory analyzer extracts the spatial and temporal information of interest from video sequences. The extracted information then serves as the input stimuli to a spatiotemporal attentional (STA) neural network in the perceptual analyzer. If stimulation continues, foci of attention are established in the neural network. Potential features of road signs are then extracted from the image areas corresponding to these foci of attention and fed into the conceptual analyzer. The conceptual analyzer is composed of two modules: a category module and an object module. The former uses a configurable adaptive resonance theory (CART) neural network to determine the category of the input stimuli, whereas the latter uses a configurable heteroassociative memory (CHAM) neural network to recognize an object within the determined category. The proposed computational model has been used to develop a system for automatically detecting and recognizing road signs from sequences of traffic images. The experimental results demonstrate both the feasibility of the proposed computational model and the robustness of the developed road sign detection system.
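The three-stage architecture described above can be sketched as a simple pipeline. This is a minimal illustrative skeleton, not the paper's implementation: every class, function, threshold, and feature value below is a hypothetical stand-in, and the STA, CART, and CHAM networks are replaced with trivial stubs that only mimic the flow of data between stages.

```python
# Hypothetical sketch of the sensory -> perceptual -> conceptual pipeline.
# All names, thresholds, and feature values are illustrative assumptions;
# the actual STA/CART/CHAM networks are stubbed out.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Stimulus:
    """Spatiotemporal information extracted from a video frame region."""
    region: Tuple[int, int, int, int]  # (x, y, w, h) image area
    features: List[float]              # feature vector for that area


class SensoryAnalyzer:
    """Extracts spatial/temporal information of interest from frames."""

    def extract(self, frames) -> List[Stimulus]:
        # Placeholder: a real system would use color/motion cues here.
        return [Stimulus(region=(0, 0, 32, 32), features=[0.9, 0.1])]


class PerceptualAnalyzer:
    """STA-like stage: sustained stimulation establishes foci of attention."""

    def attend(self, stimuli: List[Stimulus]) -> List[Stimulus]:
        # Keep only stimuli whose peak activation exceeds a threshold,
        # standing in for attention foci emerging from continued stimulation.
        return [s for s in stimuli if max(s.features) > 0.5]


class ConceptualAnalyzer:
    """Category module (CART-like) followed by object module (CHAM-like)."""

    def recognize(self, stimulus: Stimulus) -> str:
        # Category module: coarse classification of the input stimulus.
        category = "warning" if stimulus.features[0] > 0.5 else "regulatory"
        # Object module: would match a specific sign within that category;
        # stubbed with a generic label here.
        return f"{category}:sign"


def pipeline(frames) -> List[str]:
    """Run the three analyzers in sequence over a video clip."""
    stimuli = SensoryAnalyzer().extract(frames)
    foci = PerceptualAnalyzer().attend(stimuli)
    return [ConceptualAnalyzer().recognize(s) for s in foci]


print(pipeline(frames=[]))  # -> ['warning:sign']
```

The point of the sketch is the division of labor: the sensory stage produces stimuli, the perceptual stage filters them down to foci of attention, and the conceptual stage classifies in two steps (category first, then object within the category), mirroring the module structure the abstract describes.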