Automotive sensors

Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis describes the conceptualization, design, and implementation of a low-cost vision-based autonomous vehicle named LOOMY. A golf cart has been outfitted with a personal computer, a fixed forward-looking camera, and the actuators necessary for driving operations. The steering, braking, and speed-control actuators are driven open-loop, with no local feedback; the only source of feedback to the system is the image sequence obtained from the camera. The images are processed, and the relevant information is extracted and applied to the navigation task. The implemented task is to follow another vehicle, tracing its actions while avoiding collisions using the visual looming cue.
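The abstract's collision-avoidance cue can be illustrated with a minimal sketch. The looming cue is commonly approximated from images as the relative rate of expansion of the tracked object's apparent size; its inverse gives an estimate of time to contact. The function names and the discrete approximation below are illustrative assumptions, not the thesis's actual implementation:

```python
def looming(size_prev, size_curr, dt):
    """Discrete approximation of the visual looming cue.

    L ~ (1/s) * ds/dt, where s is the apparent image size of the
    tracked object (e.g. its width in pixels). Positive L means the
    object is expanding in the image, i.e. the range is closing.
    """
    ds_dt = (size_curr - size_prev) / dt
    return ds_dt / size_curr


def time_to_contact(L):
    """Approximate time to contact, tau = 1/L (meaningful for L > 0)."""
    return 1.0 / L


# Example: the lead vehicle's image width grows from 100 to 105 px
# over one 0.1 s frame interval.
L = looming(100.0, 105.0, 0.1)   # ~0.476 1/s
tau = time_to_contact(L)         # ~2.1 s until contact at current closing rate
```

A follower could brake whenever tau drops below a safety threshold, which is why the cue is usable without any explicit range measurement.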
Model
Digital Document
Publisher
Florida Atlantic University
Description
This research explores existing active-vision-based algorithms employed in today's autonomous navigation systems. Some popular range-finding algorithms are introduced and illustrated with examples. In light of these existing methods, an active-vision-based method that extracts visual cues from a sequence of 2-D images is proposed for autonomous navigation. The proposed algorithm merges the method 'Visual Threat Cues (VTCs) for Autonomous Navigation' developed by Kundur (1) with structured-light-based methods. By combining these methods, a more practical and simpler method for indoor autonomous navigation tasks is developed. A textured pattern, projected onto the object surface by a slide projector, is used as the structured-light source, and the proposed approach is independent of the particular pattern used. Several experiments are performed with the autonomous robot LOOMY to test the proposed algorithm, and the results are very promising.
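For context on the range-finding algorithms the abstract contrasts against, the classic structured-light approach recovers depth by triangulation between a camera and a projector. The sketch below shows that conventional computation; the parameter values are made up for illustration, and the thesis's own cue-based method is precisely an alternative that avoids this explicit range recovery:

```python
def structured_light_depth(baseline_m, focal_px, disparity_px):
    """Triangulation depth for a calibrated camera/projector pair.

    Z = b * f / d, where b is the camera-projector baseline in meters,
    f the camera focal length in pixels, and d the disparity in pixels
    (the offset between where a projected feature is observed and where
    it would appear at infinite range).
    """
    return baseline_m * focal_px / disparity_px


# Example (hypothetical rig): 0.3 m baseline, 800 px focal length,
# a projected spot observed with 60 px disparity.
Z = structured_light_depth(0.3, 800.0, 60.0)  # 4.0 m
```

Triangulation requires careful calibration of the baseline and focal length, which is one reason a cue-based method that depends only on relative image measurements can be more practical indoors.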