Autonomous robots

Model: Digital Document
Publisher: Florida Atlantic University
Description:
This thesis concerns the design, construction, control, and testing of the JenniFish, a novel self-contained, free-swimming jellyfish-like soft robot that could be adapted to a variety of uses, including low-frequency, low-power sensing, swarm robotics, and STEM classroom learning. The final vehicle design contains eight PneuNet-type actuators arranged radially around a 3D-printed electronics canister. Impeller pumps inflate the actuators with water drawn from the surroundings to propel the vehicle; because the actuators are connected in two neighboring groups of four, the JenniFish can move bidirectionally. Embedded resistive flex sensors report actuator position to the vehicle's PD controller; other onboard sensors include an IMU and an external temperature sensor. Quantitative constrained load-cell tests, both in-line and bending, as well as qualitative free-swimming video tests, were conducted to establish baseline vehicle performance. The collected metrics compare well with existing robotic jellyfish.
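As a rough illustration of the control scheme described above, the Python sketch below runs a PD position loop against a toy first-order stand-in for an inflating actuator. The gains, the 50 Hz sample period, and the plant model are illustrative assumptions, not values from the thesis.

KP, KD = 2.0, 0.1   # assumed proportional and derivative gains
DT = 0.02           # assumed 50 Hz control period, in seconds

def pd_step(target, reading, prev_error):
    """One PD update: return (pump command clamped to [0, 1], current error)."""
    error = target - reading
    derivative = (error - prev_error) / DT
    u = KP * error + KD * derivative
    return max(0.0, min(1.0, u)), error

# Toy plant: a first-order lag standing in for actuator inflation.
position, prev_error = 0.0, 0.0
for _ in range(200):
    command, prev_error = pd_step(0.8, position, prev_error)
    position += DT * (command - position)  # not the real vehicle dynamics
# Note: pure PD leaves a steady-state offset on this toy plant.
print(f"target 0.80, settled near {position:.2f}")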
Model: Digital Document
Publisher: Florida Atlantic University
Description:
This thesis describes the conceptualization, design, and implementation of a low-cost vision-based autonomous vehicle named LOOMY. A golf cart has been outfitted with a personal computer, a fixed forward-looking camera, and the actuators necessary for driving operations. The steering, braking, and speed-control actuators are driven open loop, with no local feedback; the only feedback to the system is the image sequence obtained from the camera. The images are processed, and the relevant information is extracted and applied to the navigation task. The implemented task is to follow another vehicle, tracking its actions while avoiding collisions using the visual looming cue.
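The looming cue can be made concrete with a short sketch. Assuming a tracker reports the followed vehicle's image area each frame (an assumption; the thesis's exact formulation may differ), the relative expansion rate (1/A)(dA/dt) grows as the gap closes, and, since image area scales with inverse range squared, 2 divided by that rate approximates the time to contact.

def looming(area_prev, area_curr, dt):
    """Relative expansion rate (1/A)(dA/dt); positive while approaching."""
    return (area_curr - area_prev) / (dt * area_curr)

def time_to_contact(loom):
    """Image area ~ 1/range^2, so tau ~ 2 / looming when looming > 0."""
    return float("inf") if loom <= 0 else 2.0 / loom

# Example: the tracked area grows from 1000 to 1100 px^2 in one 1/30 s frame.
loom = looming(1000.0, 1100.0, 1.0 / 30.0)
print(f"looming {loom:.2f} 1/s, time to contact {time_to_contact(loom):.2f} s")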
Model: Digital Document
Publisher: Florida Atlantic University
Description:
This research explores existing active-vision-based algorithms employed in today's autonomous navigation systems. Some popular range-finding algorithms are introduced and illustrated with examples. In light of these existing methods, an active-vision-based method that extracts visual cues from a sequence of 2D images is proposed for autonomous navigation. The proposed algorithm merges the method 'Visual Threat Cues (VTCs) for Autonomous Navigation' developed by Kundur (1) with structured-light-based methods; combining them yields a simpler, more practical method for indoor autonomous navigation tasks. A textured pattern projected onto the object surface by a slide projector serves as the structured-light source, and the proposed approach is independent of the particular pattern used. Several experiments are performed with the autonomous robot LOOMY to test the proposed algorithm, and the results are very promising.
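As a loose sketch of how a projected texture can yield a navigational cue (an interpretation, not the thesis's exact VTC definitions), the Python below measures the edge density of a striped pattern: the density falls as the stripes appear coarser at closer range, so its relative change between frames can serve as a threat cue. The toy frames and the edge measure are illustrative assumptions.

import numpy as np

def edge_density(gray):
    """Fraction of pixel transitions that are strong intensity edges."""
    gx = np.abs(np.diff(gray.astype(float), axis=1))
    gy = np.abs(np.diff(gray.astype(float), axis=0))
    return ((gx > 20).sum() + (gy > 20).sum()) / gray.size

def relative_change(prev_val, curr_val, dt):
    """Relative rate of change of the texture measure between frames."""
    return (curr_val - prev_val) / (dt * curr_val)

# Toy frames of a projected stripe pattern: fine stripes when the surface is
# far, coarser stripes when it is near.
far = np.tile([0, 255], (64, 32)).astype(np.uint8)
near = np.tile([0, 0, 255, 255], (64, 16)).astype(np.uint8)
cue = relative_change(edge_density(far), edge_density(near), 1.0 / 30.0)
print(f"edge density {edge_density(far):.2f} -> {edge_density(near):.2f}, cue {cue:.1f} 1/s")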