Raviv, Daniel

Person Preferred Name
Raviv, Daniel
Model
Digital Document
Publisher
Florida Atlantic University
Description
Vision is a critical sense for many species, and the perception of motion is one of its most fundamental aspects, often providing richer information about the environment than static images. Motion recognition is a relatively simple computation compared to shape recognition: many creatures can discriminate moving objects quite well while having virtually no capacity for recognizing stationary ones.
Traditional methods for collision-free navigation require the reconstruction of a 3D model of the environment before planning an action. These methods face numerous limitations as they are computationally expensive and struggle to scale in unstructured and dynamic environments with a multitude of moving objects.
This thesis proposes a more scalable and efficient alternative approach without 3D reconstruction. We focus on visual motion cues, specifically ‘visual looming’, the relative expansion of objects on an image sensor. This concept allows for the perception of collision threats and facilitates collision-free navigation in any environment, structured or unstructured, regardless of the vehicle’s movement or the number of moving objects present.
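As a rough numerical illustration of this cue (not the thesis's implementation), looming can be approximated as the relative expansion rate of an object's apparent size on the image sensor; a minimal Python sketch, where the function name and size values are hypothetical:

    def looming(sizes, dt):
        """Approximate visual looming as the relative expansion rate
        (1/s) * ds/dt of an object's apparent image size s.

        sizes: apparent sizes (e.g., bounding-box diagonal in pixels)
        in consecutive frames; dt: time step between frames (s).
        """
        return [
            (s1 - s0) / (s0 * dt)  # relative growth rate per second
            for s0, s1 in zip(sizes, sizes[1:])
        ]

    # An object whose image grows 5% per frame at 30 fps
    print(looming([100.0, 105.0, 110.25], dt=1 / 30))  # ~[1.5, 1.5]

A positive value indicates expansion, i.e., an approaching object; larger values signal a more imminent collision threat.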
Model
Digital Document
Publisher
Florida Atlantic University
Description
Data centers’ mission-critical nature, significant power consumption, and the increasing reliance on them for storing digital information have created a need to monitor and manage these facilities. Metrics are a key part of this effort to raise flags that lead to optimization of resource utilization. While existing metrics have contributed to improvements in data center efficiency, they are very specific and overlook important aspects such as overall performance and the risks to which the data center is exposed. With several variables affecting performance, there is an urgent need for new and improved metrics capable of providing a holistic understanding of data center behavior.
This research proposes a novel framework using a multidimensional approach for a new family of data center metrics. Performance is examined across four sub-dimensions: productivity, efficiency, sustainability, and operations. The risk associated with each of those sub-dimensions is also considered. External risks, namely site risk, are introduced as another dimension of the metrics. Results from metrics across all sub-dimensions can be normalized to the same scale and incorporated in one graph, which simplifies visualization and reporting. This research also explores theoretical modeling of data center components through a cyber-physical-systems lens to estimate and predict different variables, including key performance indicators. Data center simulation models are deployed in MATLAB and Simulink to assess data centers under conditions known a priori. The results of the simulations, with different workloads and IT resources, show quality of service as well as power, airflow, and energy parameters. Ultimately, this research describes how key parameters associated with data center infrastructure and information technology equipment can be monitored in real time across an entire facility using low-power wireless sensors. Real-time data collection can contribute to calibrating and validating the models. The new family of data center metrics gives a more comprehensive and evidence-based view of the issues affecting data centers, highlights areas where mitigating actions can be implemented, and allows their overall behavior to be reexamined. It can help standardize a process that evolves into a best practice for evaluating data centers, comparing them to each other, and improving the grounds for decision-making.
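The normalization step can be illustrated with a short Python sketch. This is generic min-max scaling, assuming hypothetical worst/best bounds per metric; PUE and availability stand in here for the research's actual metrics:

    def normalize(value, worst, best):
        """Min-max normalize a metric to [0, 1], where 1 is best.
        'worst' and 'best' set the direction, so this works whether
        higher or lower raw values are better."""
        return (value - worst) / (best - worst)

    # Hypothetical readings for the four performance sub-dimensions
    scores = {
        "productivity":   normalize(720, worst=0, best=1000),       # work units
        "efficiency":     normalize(1.6, worst=3.0, best=1.0),      # PUE, lower is better
        "sustainability": normalize(0.4, worst=0.0, best=1.0),      # renewable fraction
        "operations":     normalize(99.95, worst=99.0, best=100.0), # availability %
    }
    for name, score in scores.items():
        print(f"{name:>14}: {score:.2f}")

Once all sub-dimension scores share the [0, 1] scale, they can be drawn on a single chart (e.g., a radar plot), as the research describes.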
Model
Digital Document
Publisher
Florida Atlantic University
Description
We often decide whether to go to a place based on how crowded it is. Such decisions depend on information that is only available in real time. A system that tells users or agencies the actual number of people in a scene over time allows them to make decisions about, or stay informed about, a given location. This thesis presents a low-complexity system for human counting and human detection using public cameras, which usually do not have good image quality. Computer vision techniques make it possible to build a system that gives the user an estimate of the number of people present. Videos with different resolutions and camera positions were studied. The best video result shows an error of 0.269%, while the worst shows 8.054%. The results show that relatively inexpensive cameras streaming video at a low bitrate can be used to develop large-scale people-counting applications.
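The abstract does not spell out the pipeline; as one common low-complexity approach to counting people in low-quality footage, here is a sketch using OpenCV background subtraction and blob counting (the file name and area threshold are placeholders):

    import cv2

    # Background subtractor copes with noisy, low-bitrate footage
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=32)

    cap = cv2.VideoCapture("public_camera.mp4")  # hypothetical input video
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Remove speckle noise, then find connected foreground blobs
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Count blobs large enough to plausibly be a person
        people = [c for c in contours if cv2.contourArea(c) > 500]
        print(f"estimated people in frame: {len(people)}")
    cap.release()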
Model
Digital Document
Publisher
Florida Atlantic University
Description
This document reports on a hands-on project aimed at learning and experiencing the concept of system-of-systems. The motivation behind this project is to study and implement the system-of-systems concept in building an RF-based communication and control complex system. The goal of this project is to develop a multi-level, integrated, and complete system in which vehicles that belong to the same network can become aware of their location, communicate with nearby vehicles (sometimes with no visible line of sight), be notified of the presence of objects located in their immediate vicinity (obstacles, such as abandoned vehicles), and generate a two-dimensional representation of the vehicles’ locations for a remote user. In addition, the system can transmit messages back from the remote user to a specific local vehicle or to all of them. The end result is a demonstration of a complex, functional, and robust system built and tested for other projects to use and learn from.
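The report does not define a message format; as a toy illustration of the idea, a Python sketch of a position-report message and the two-dimensional picture assembled for the remote user (all names and fields are hypothetical):

    import json
    import time

    def position_report(vehicle_id, x, y):
        """Encode a vehicle's self-reported location as a small JSON
        message suitable for a low-bandwidth RF link."""
        return json.dumps({"id": vehicle_id, "x": x, "y": y, "t": time.time()})

    def update_map(world, message):
        """Fold an incoming report into the shared two-dimensional
        picture: vehicle id -> last known (x, y)."""
        report = json.loads(message)
        world[report["id"]] = (report["x"], report["y"])
        return world

    # Two vehicles report in; the remote user sees both on one 2-D map
    world = {}
    update_map(world, position_report("car-1", 3.0, 4.5))
    update_map(world, position_report("car-2", 7.2, 1.1))
    print(world)  # {'car-1': (3.0, 4.5), 'car-2': (7.2, 1.1)}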
Model
Digital Document
Publisher
Florida Atlantic University
Description
Multi-agent control is a very promising area of robotics. In applications in which it is difficult or impossible for humans to intervene, multi-agent, autonomous robot groups are indispensable. This thesis presents a novel approach to reactive multi-agent control that is practical and elegant in its simplicity. The basic idea is that a group of robots can cooperate to determine the shortest path through a previously unmapped environment by redundantly sharing simple data among multiple agents. The idea was implemented with two robots; in simulation, it was tested with over sixty agents. The results clearly show that the shortest path through various environments emerges as a result of this redundant sharing of information. In addition, the approach incorporates safeguarding techniques that reduce the risk to robot agents working in unknown and possibly hazardous environments. Further, its simplicity makes implementation very practical and easily expandable to reliably control a group composed of many agents.
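A minimal sketch of how a shortest path can emerge from pooled agent data (an illustrative grid-world stand-in, not the thesis's control scheme): each agent contributes the free cells it has explored, and a breadth-first search runs over the union.

    from collections import deque

    def shortest_path(shared_free, start, goal):
        """Breadth-first search over the union of free cells reported
        by all agents; overlapping reports are the redundancy."""
        frontier, parents = deque([start]), {start: None}
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parents[cell]
                return path[::-1]
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in shared_free and nxt not in parents:
                    parents[nxt] = cell
                    frontier.append(nxt)
        return None  # goal unreachable with current shared knowledge

    agent_a = {(0, 0), (1, 0), (2, 0), (2, 1)}  # cells explored by agent A
    agent_b = {(2, 1), (2, 2), (1, 0), (0, 0)}  # cells explored by agent B
    print(shortest_path(agent_a | agent_b, (0, 0), (2, 2)))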
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis describes the conceptualization, design, and implementation of a low-cost vision-based autonomous vehicle named LOOMY. A golf cart has been outfitted with a personal computer, a fixed forward-looking camera, and the necessary actuators to facilitate driving operations. The steering, braking, and speed-control actuators are driven in open loop, with no local feedback of any kind. The only source of feedback to the system is the image sequence obtained from the camera. The images are processed, and the relevant information is extracted and applied to the navigation task. The implemented task is to follow another vehicle, tracing its actions while avoiding collisions using the visual looming cue.
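The abstract does not give the control law; as a hedged illustration of vehicle following from the looming cue alone, a toy proportional speed controller (the gain, reference, and speed values are made up):

    def speed_command(looming, looming_ref=0.0, gain=2.0, v_current=3.0):
        """Proportional speed adjustment from the looming cue: positive
        looming (the lead vehicle expanding in the image) means we are
        closing in, so slow down; negative means we are falling behind.
        looming is the measured relative expansion rate (1/s)."""
        return max(0.0, v_current - gain * (looming - looming_ref))

    print(speed_command(0.5))   # closing in -> slow down: 2.0 m/s
    print(speed_command(-0.5))  # falling behind -> speed up: 4.0 m/s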
Model
Digital Document
Publisher
Florida Atlantic University
Description
This research explores existing active-vision-based algorithms employed in today's autonomous navigation systems. Some popular range-finding algorithms are introduced and presented with examples. In light of the existing methods, an active-vision-based method that extracts visual cues from a sequence of 2D images is proposed for autonomous navigation. The proposed algorithm merges the method titled 'Visual Threat Cues (VTCs) for Autonomous Navigation' developed by Kundur (1) with structured-light-based methods. By combining these methods, a more practical and simpler method for indoor autonomous navigation tasks is developed. A textured pattern projected onto the object surface by a slide projector is used as the structured-light source, and the proposed approach is independent of the particular pattern used. Several experiments were performed with the autonomous robot LOOMY to test the proposed algorithm, and the results are very promising.
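As a loose illustration of the idea (the exact VTC formulation is in the thesis), the projected pattern expands in the image as the robot approaches a surface, so a proximity cue can be read off the pattern blobs; a sketch assuming OpenCV, Otsu thresholding, and grayscale frames:

    import cv2
    import numpy as np

    def pattern_blob_area(gray):
        """Mean area of the bright projected-pattern blobs in a frame."""
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        areas = [cv2.contourArea(c) for c in contours if cv2.contourArea(c) > 2]
        return float(np.mean(areas)) if areas else 0.0

    def proximity_cue(prev_gray, curr_gray, dt):
        """Relative expansion rate of the projected pattern between two
        frames; it grows as the surface gets closer, with no 3-D
        reconstruction involved."""
        a0, a1 = pattern_blob_area(prev_gray), pattern_blob_area(curr_gray)
        return (a1 - a0) / (a0 * dt) if a0 > 0 else 0.0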
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis studies a 2-D visual invariant that exists during relative motion between a camera and a 3-D object. We show that during fixation there is a measurable nonlinear function of optical flow that produces the same value for all points of a stationary environment, regardless of the 3-D shape of the environment. During fixated camera motion relative to a rigid object, e.g., a stationary environment, the projection of the fixated point remains (by definition) at the same location in the image, and all other points located on the 3-D rigid object can only rotate relative to that 3-D fixation point. This rotation rate is invariant for all points that lie on the rigid environment, and it is measurable from a sequence of images. The new invariant is obtained from a set of monocular images and is expressed explicitly as a closed-form solution.
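The closed-form derivation is in the thesis; a minimal numerical sketch of the quantity involved, assuming the fixation point projects to the image origin and the optical flow is already known:

    import numpy as np

    def rotation_rate(points, flows):
        """Angular rate of image points about the fixation point (the
        origin here): theta_dot = (x*vy - y*vx) / (x^2 + y^2). During
        fixated motion relative to a rigid scene this should come out
        (approximately) the same for every point."""
        p, f = np.asarray(points, float), np.asarray(flows, float)
        x, y = p[:, 0], p[:, 1]
        vx, vy = f[:, 0], f[:, 1]
        return (x * vy - y * vx) / (x**2 + y**2)

    # Synthetic check: pure rotation about the fixation point at 0.1 rad/s
    pts = np.array([[1.0, 0.0], [0.0, 2.0], [-3.0, 1.0]])
    omega = 0.1
    flow = omega * np.column_stack([-pts[:, 1], pts[:, 0]])  # v = omega x r
    print(rotation_rate(pts, flow))  # [0.1 0.1 0.1] for all points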
Model
Digital Document
Publisher
Florida Atlantic University
Description
In this thesis we describe a local-neighborhood-pixel-based adaptive algorithm to track image features, both spatially and temporally, over a sequence of monocular images. The algorithm assumes no a priori knowledge about the image features to be tracked or the relative motion between the camera and the 3-D objects. The features to be tracked are selected by the algorithm itself; they correspond to the peaks of a '2-D intensity correlation surface' constructed from a local neighborhood in the first image of the sequence. Any kind of motion, i.e., 6-DOF translation and rotation, can be tolerated, subject to pixels-per-frame motion limitations. No subpixel computations are necessary. Exploiting temporal continuity, the algorithm uses simple and efficient predictive tracking over multiple frames. Trajectories of features on multiple objects can also be computed. The algorithm tolerates a slow, continuous change in the D.C. brightness level of the feature's pixels. Another important aspect of the algorithm is an adaptive feature-matching threshold that accounts for changes in the relative brightness of neighboring pixels. As applications of the feature-tracking algorithm, and to test its accuracy, we show how it has been used to extract the focus of expansion (FOE) and compute the time-to-contact using real image sequences of unstructured, unknown environments. Both applications use information from multiple frames.
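A simplified sketch of correlation-based tracking with constant-velocity prediction (this omits the thesis's adaptive matching threshold and brightness compensation; grayscale numpy frames, integer coordinates, and a feature away from the image border are assumed):

    import cv2

    def track(frames, feature_xy, patch=15, search=40):
        """Track one feature by template correlation, predicting its
        next position from the previous displacement."""
        h = patch // 2
        x, y = feature_xy
        vx = vy = 0
        template = frames[0][y - h:y + h + 1, x - h:x + h + 1]
        trajectory = [(x, y)]
        for frame in frames[1:]:
            # Predict the feature location, then search a window there
            px, py = x + vx, y + vy
            win = frame[py - search:py + search + 1, px - search:px + search + 1]
            res = cv2.matchTemplate(win, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (mx, my) = cv2.minMaxLoc(res)  # best correlation peak
            nx, ny = px - search + mx + h, py - search + my + h
            vx, vy = nx - x, ny - y                 # update velocity estimate
            x, y = nx, ny
            template = frame[y - h:y + h + 1, x - h:x + h + 1]  # refresh template
            trajectory.append((x, y))
        return trajectory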
Model
Digital Document
Publisher
Florida Atlantic University
Description
In this dissertation, visual cues obtained from an active monocular camera for autonomous vehicle navigation are investigated. A number of visual cues suitable for this objective are proposed, and effective methods to extract them are developed. Unique features of these visual cues are that: (1) there is no need to reconstruct the 3D scene; (2) they utilize short image sequences taken by a monocular camera; and (3) they operate on local image brightness information. Owing to these features, the algorithms developed are computationally efficient, and simulation and experimental studies confirm their efficacy. The major contribution of this dissertation is the extraction of visual information suitable for autonomous navigation from an active monocular camera, without 3D reconstruction, using only local image information. The first visual cue is related to camera focusing parameters. An objective function relating focusing parameters to local image brightness is proposed, and a theoretical development shows that by maximizing this objective function one can successfully focus the camera by choosing the focusing parameters. As a result, the dense distance map between the camera and a front scene can be estimated without using the Gaussian spread function. The second visual cue, namely the clearance invariant (first proposed by Raviv (97)), is extended here to include arbitrary translational motion of the camera. It is shown that the angle between the optical axis and the moving direction of the camera can be estimated by minimizing the relevant estimated error residual. This method needs only one image projection from a 3D surface point at an arbitrary time instant. The third topic concerns extracting the looming and the magnitude of rotation using a new visual cue designated the rotation invariant under camera fixation. An algorithm to extract the looming is proposed using the image information available from only one 3D surface point at an arbitrary time instant. A further algorithm is proposed to estimate the magnitude of the camera's rotational velocity using the image projections of only two 3D surface points measured over two time instants. Finally, a method is presented to extract the focus of expansion robustly without using image brightness derivatives. It decomposes an image projection trajectory into two independent linear models and applies Kalman filters to estimate the focus of expansion.
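The dissertation's method decomposes the trajectory into two linear models tracked with Kalman filters; as a simpler least-squares stand-in that conveys the geometry, the focus of expansion can be estimated as the point closest to all optical-flow lines, since under pure translation each flow vector lies on a line through the FOE:

    import numpy as np

    def estimate_foe(points, flows):
        """Least-squares focus-of-expansion estimate: solve n_i . q =
        n_i . p_i for q, where n_i is the normal to each flow vector."""
        p, v = np.asarray(points, float), np.asarray(flows, float)
        n = np.column_stack([-v[:, 1], v[:, 0]])  # normals to the flow
        b = np.einsum("ij,ij->i", n, p)
        foe, *_ = np.linalg.lstsq(n, b, rcond=None)
        return foe

    # Synthetic check: radial flow expanding from a FOE at (20, -10)
    true_foe = np.array([20.0, -10.0])
    pts = np.array([[0.0, 0.0], [50.0, 30.0], [-40.0, 10.0], [10.0, -60.0]])
    flow = 0.2 * (pts - true_foe)  # flow points away from the FOE
    print(estimate_foe(pts, flow))  # ~[ 20. -10.]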