Stereo vision for mobile robots is challenging, particularly on embedded systems with limited processing power. Objects in the field of view must be extracted and represented in a form useful to the observer, while methods must also be in place to handle the large volume of data that stereo vision produces, so that a practical frame rate can be maintained. We are using stereo vision as the sole form of perception for Urban Search and Rescue (USAR) vehicles. This paper describes our procedure for extracting and matching object data using a stereo vision system. Initial results are provided to demonstrate the potential of this system for USAR and other challenging domains.