Imaging systems

Model
Digital Document
Publisher
Florida Atlantic University
Description
In this thesis we report a VLSI design implementation of an application-specific, full-frame architecture CCD image sensor for a handwritten Optical Character Recognition system. The design is targeted to the MOSIS 2 mu m, 2-poly/2-metal n-buried channel CCD/CMOS technology. The front-side illuminated CCD image sensor uses a transparent polysilicon gate structure and is comprised of 84 (H) x 100 (V) pixels arranged in a hexagonal lattice structure. The sensor has unit pixel dimensions of 18 lambda (H) x 16 lambda (V). A second layer of metal is used for shielding certain areas from incident light, and the effective pixel photosite area is 8 lambda x 8 lambda. The imaging pixels use a 3-phase structure (with an innovative addressing scheme for the hexagonal lattice) for image sensing and horizontal charge shift. Columns of charge are shifted into the vertical 2-phase CCD shift registers, which shift the charge out serially at high speed. The chip has been laid out on the 'tinychip' (2250 mu m x 2220 mu m) pad frame and fabrication through MOSIS is planned next.
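The abstract above describes pixels on a hexagonal lattice with an 18 lambda x 16 lambda pitch. As an illustration only (the thesis's actual addressing scheme is not given here), a hexagonal grid is commonly realized as an offset grid in which alternate rows are shifted by half the horizontal pitch; the half-pitch convention below is an assumption:

```python
# Illustrative sketch of offset addressing for a hexagonal pixel lattice.
# Pitch values (18 lambda x 16 lambda) come from the abstract; the
# "odd rows shifted by half a pitch" convention is an assumption, not
# the thesis's documented scheme.
LAMBDA = 1.0            # layout unit (lambda), normalized
PITCH_H = 18 * LAMBDA   # horizontal pixel pitch
PITCH_V = 16 * LAMBDA   # vertical pixel pitch

def pixel_center(row, col):
    """Return the (x, y) center of pixel (row, col) on an offset hex grid."""
    x = col * PITCH_H + (PITCH_H / 2 if row % 2 else 0.0)  # odd rows offset
    y = row * PITCH_V
    return (x, y)
```

Under this convention, pixel (1, 0) sits half a horizontal pitch to the right of pixel (0, 0), which is what gives the lattice its hexagonal packing.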
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis is concerned with adapting a sequential code that calculates the Radar Cross Section (RCS) of an open-ended rectangular waveguide cavity to a massively parallel computational platform. The primary motivation for doing this is to obtain wideband data over a large range of incident angles in order to generate a two-dimensional radar cross section image. Images generated from measured and computed data will be compared to evaluate program performance. The computer used in this implementation is a MasPar MP-1 single instruction, multiple data massively parallel computer consisting of 4,096 processors arranged in a two-dimensional mesh. The algorithm uses the mode matching method of analysis to match fields over the cavity aperture to obtain an expression for the scattered far field.
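The abstract describes distributing a wideband, multi-angle RCS computation across a SIMD processor array. A minimal sketch of that decomposition, mapping one (frequency, angle) grid point per worker, is shown below; `rcs_sample` is a hypothetical placeholder kernel, not the thesis's mode-matching code:

```python
# Illustrative sketch (not the MasPar implementation): distribute an RCS
# sweep over a 2-D grid of frequencies and incident angles, one grid point
# per worker, mirroring a "one processor per sample" SIMD decomposition.
from concurrent.futures import ThreadPoolExecutor
from itertools import product
import math

def rcs_sample(f_ghz, theta_deg):
    # Placeholder kernel. The real computation matches modal fields over
    # the cavity aperture to obtain the scattered far field.
    return math.cos(math.radians(theta_deg)) / f_ghz

def sweep(freqs, angles):
    """Evaluate the kernel at every (frequency, angle) pair in parallel."""
    grid = list(product(freqs, angles))
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(rcs_sample, *zip(*grid)))
    return dict(zip(grid, values))
```

Because every grid point is independent, the same pattern carries over directly to a mesh-connected SIMD machine, where each processing element holds one (frequency, angle) sample.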
Model
Digital Document
Publisher
Florida Atlantic University
Description
The objective of this dissertation is to develop effective algorithms for estimating the 3-D structure of a scene and its relative motion with respect to a camera or a pair of cameras from a sequence of images acquired by the cameras, under the assumption that the relative motion of the camera is small from one frame to another. This dissertation presents an approach to computing depth maps from an image sequence, which combines the direct depth estimation method with the optical flow based method. More specifically, optical flow on and near moving edges is computed using a correlation technique. The optical flow information is then fused with the gradient information to estimate depth not only on moving edges but also in internal regions. Depth estimation is formulated as a discrete Kalman filter problem and is solved in three stages. In the prediction stage, the depth map estimated for the previous frame, together with knowledge of the camera motion, is used to predict the depth variance at each pixel in the current frame. In the estimation stage, a vector version of the Kalman filter formulation is adapted and simplified to refine the predicted depth map. The resulting estimation algorithm takes into account the information from the neighboring pixels, and thus is much more robust than the scalar-version Kalman filter implementation. In the smoothing stage, morphological filtering is applied to reduce the effect of measurement noise and fill in uncertain areas based on the error covariance information. Since the depth at each pixel is estimated locally, the algorithm presented in this dissertation can be implemented on a parallel computer. The performance of the presented method is assessed through simulation and experimental studies. A new approach for motion estimation from stereo image sequences is also proposed in this dissertation. First a stereo motion estimation model is derived using the direct dynamic motion estimation technique.
The problem is then solved by applying a discrete Kalman filter that facilitates the use of a long stereo image sequence. Typically, major issues in such an estimation method are stereo matching, temporal matching, and noise sensitivity. In the proposed approach, owing to the use of temporal derivatives in the motion estimation model, temporal matching is not needed. The effort for stereo matching is kept to a minimum with a parallel binocular configuration. Noise smoothing is achieved by the use of a sufficiently large number of measurement points and a long sequence of stereo images. Both simulation and experimental studies have also been conducted to assess the effectiveness of the proposed approach.
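The prediction and estimation stages described in the abstract follow the standard per-pixel Kalman recursion. The sketch below shows only that scalar-per-pixel baseline (the dissertation's actual method uses a vector formulation that couples neighboring pixels, plus morphological smoothing); the process-noise and measurement-noise parameters `q` and `r` are assumptions for illustration:

```python
# Minimal per-pixel Kalman recursion for a depth map. This is the
# scalar-version baseline the abstract compares against, not the
# dissertation's neighbor-coupled vector formulation.
import numpy as np

def predict(depth, var, q):
    """Prediction stage: propagate the previous depth map; process
    noise q inflates the per-pixel variance."""
    return depth, var + q

def update(depth_pred, var_pred, meas, r):
    """Estimation stage: per-pixel Kalman gain blends the predicted
    depth with the new measurement (measurement noise variance r)."""
    gain = var_pred / (var_pred + r)
    depth = depth_pred + gain * (meas - depth_pred)
    var = (1.0 - gain) * var_pred
    return depth, var
```

With equal prediction and measurement variances, the gain is 0.5 and the refined depth is the midpoint of prediction and measurement, with the variance halved, which is the usual sanity check for such a filter.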
Model
Digital Document
Publisher
Florida Atlantic University
Description
Contemporary computer vision solutions to the problem of object detection aim at incorporating contextual information into the process. This thesis proposes a systematic evaluation of the usefulness of incorporating knowledge about the geometric context of a scene into a baseline object detection algorithm based on local features. This research extends publicly available MATLAB® implementations of leading algorithms in the field and integrates them in a coherent and extensible way. Experiments are presented to compare the performance and accuracy between baseline and context-based detectors, using images from the recently published SUN09 dataset. Experimental results demonstrate that adding contextual information about the geometry of the scene improves the detector performance over the baseline case in 50% of the tested cases.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Digital video is being used widely in a variety of applications such as entertainment, surveillance and security. The large amount of video generated in surveillance and security applications requires systems capable of processing video to automatically detect and recognize events, alleviating the load on human operators and enabling preventive action when events are detected. The main objective of this work is the analysis of computer vision techniques and algorithms used to perform automatic detection of events in video sequences. This thesis presents a surveillance system based on optical flow and background subtraction concepts to detect events through motion analysis, using an event probability zone definition. Advantages, limitations, capabilities and possible solution alternatives are also discussed. The result is a system capable of detecting events in which objects move against a predefined direction or run in the scene, with precision greater than 50% and recall greater than 80%.
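The pipeline described above rests on two building blocks: background subtraction to flag moving pixels, and an event probability zone that turns motion into an event decision. A minimal sketch of those two steps is shown below; the threshold value and the "any motion pixel in the zone" trigger rule are assumptions for illustration, not the thesis's tuned parameters:

```python
# Minimal background-subtraction event detector, assuming grayscale
# frames as NumPy arrays. Threshold and trigger rule are illustrative.
import numpy as np

def detect_motion(frame, background, thresh=25):
    """Flag pixels whose absolute difference from the background
    model exceeds `thresh` as moving."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

def event_in_zone(motion_mask, zone_mask):
    """Fire an event when any motion pixel falls inside the
    predefined event probability zone."""
    return bool(np.any(motion_mask & zone_mask))
```

A real system would maintain the background model adaptively (e.g. a running average) and combine this with optical flow direction to detect motion against a predefined direction, as the abstract describes.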