Sudhakar, Raghavan

Person Preferred Name
Sudhakar, Raghavan
Model
Digital Document
Publisher
Florida Atlantic University
Description
The effects of impulse noise on receiving systems are studied, and impulse noise models commonly used in the analysis of such receiving systems are introduced. Various techniques for identifying the optimum receiving structure are presented, and the concept of a nonlinear receiver for enhancing receiver performance in impulse noise environments is evolved. The effect of finite predetection bandwidth on the performance of such nonlinear receiver structures is studied in a qualitative fashion through computer simulation. The performance of a linear receiver (matched filter) is compared to that of nonlinear receiver structures employing nonlinearities such as the blanker and soft limiter; noncoherent ASK modulation was used for the computer simulation experiment. The performance of the blanker and soft limiter is then compared for different predetection bandwidths. An attempt was made to optimize a particular receiver structure in terms of the predetection bandwidth for a given model of corrupting noise parameters (Gaussian and impulsive).
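The two zero-memory nonlinearities compared in the abstract admit a compact sketch. The threshold value and the toy signal below are illustrative assumptions, not parameters from the thesis:

```python
import numpy as np

def blanker(x, threshold):
    """Blanker: zero out any sample whose magnitude exceeds the
    threshold, on the assumption that it is an impulse hit."""
    y = x.copy()
    y[np.abs(y) > threshold] = 0.0
    return y

def soft_limiter(x, threshold):
    """Soft limiter: pass small samples linearly and clip large
    ones to +/- threshold instead of discarding them."""
    return np.clip(x, -threshold, threshold)

# Gaussian background noise with one injected impulse (illustrative)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 8)
x[3] = 25.0  # injected impulse

print(blanker(x, 5.0)[3])       # 0.0  (impulse blanked)
print(soft_limiter(x, 5.0)[3])  # 5.0  (impulse clipped)
```

The design difference is visible in the two outputs: the blanker discards the corrupted sample entirely, while the soft limiter retains a bounded version of it.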
Model
Digital Document
Publisher
Florida Atlantic University
Description
The design of any communication receiver must address the issue of operating at the lowest possible signal-to-noise ratio. Among the various algorithms that facilitate this objective are those used for iterative decoding of two-dimensional systematic convolutional codes in applications such as spread-spectrum communications and Code Division Multiple Access (CDMA) detection. A main theme of any such decoding scheme is to approach the Shannon limit in signal-to-noise ratio. These decoding algorithms differ in complexity and processing delay; hence, the optimal choice depends on how they are used in the system. The technique common to them is termed iterative decoding, which was first developed as a practical means for decoding turbo codes. Using log-likelihood algebra, it is shown that a decoder can be developed that accepts soft inputs as a priori information and delivers soft outputs consisting of channel information, a posteriori information, and extrinsic information to subsequent stages of iteration. Different algorithms such as the Soft Output Viterbi Algorithm (SOVA), Maximum A Posteriori (MAP), and Log-MAP are compared and their complexities analyzed in this thesis. A turbo decoder is implemented on the Texas Instruments TMS320C30 Digital Signal Processing (DSP) chip using a Modified-Log-MAP algorithm. For the Modified-Log-MAP algorithm, the optimal choice of the lookup table (LUT) is analyzed by experimenting with different LUT approximations. A low-complexity decoder is proposed for a (7,5) code and implemented on the DSP chip. Performance of the decoder is verified in an Additive White Gaussian Noise (AWGN) environment. Hardware issues such as memory requirements and processing time are addressed for the chosen decoding scheme. Test results of bit error rate (BER) performance are presented for a fixed number of frames and iterations.
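The core of the Modified-Log-MAP idea is to replace the exact Jacobian correction term in the max* operation with a small lookup table. A minimal sketch follows; the table step and size are illustrative assumptions, not the LUT approximations evaluated in the thesis:

```python
import math

# Exact Log-MAP uses max*(a, b) = max(a, b) + ln(1 + e^{-|a-b|}).
# Modified-Log-MAP replaces the correction term with a coarse LUT.
STEP = 0.5                 # quantization step for |a - b| (illustrative)
LUT = [math.log1p(math.exp(-i * STEP)) for i in range(16)]

def max_star_exact(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_lut(a, b):
    idx = int(abs(a - b) / STEP)
    corr = LUT[idx] if idx < len(LUT) else 0.0  # term vanishes for large |a-b|
    return max(a, b) + corr

print(round(max_star_exact(1.0, 1.2), 4))  # 1.7981
print(round(max_star_lut(1.0, 1.2), 4))    # 1.8931
```

Coarser tables trade accuracy of the correction term for memory and lookup speed, which is exactly the trade-off a DSP implementation must tune.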
Model
Digital Document
Publisher
Florida Atlantic University
Description
There are various algorithms used for the iterative decoding of two-dimensional systematic convolutional codes in applications such as spread-spectrum communications and CDMA detection. The main objective of these decoding schemes is to approach the Shannon limit in signal-to-noise ratio while keeping system complexity and processing delay to a minimum. One such recently proposed scheme is termed turbo (de)coding. Through the use of log-likelihood algebra, it is shown that a decoder can be developed which accepts soft inputs as a priori information and delivers soft outputs consisting of channel information, a priori information, and extrinsic information to subsequent stages of iteration. The output is then used as the a priori input information for the next iteration. The turbo decoder is realized on the Texas Instruments TMS320C30 digital signal processing chip using a low-complexity soft-input soft-output decoding algorithm. Hardware issues such as memory requirements and processing time, and how they are impacted by the chosen decoding scheme, are addressed. Test results of BER performance are presented for various block sizes and numbers of iterations.
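The log-likelihood bookkeeping described above can be sketched for a BPSK/AWGN channel. This is a minimal sketch of the L-value decomposition only; the numeric values are illustrative assumptions, not results from the thesis:

```python
def channel_llr(y, sigma2):
    """Channel L-value of a BPSK symbol (+1/-1) received in AWGN
    with noise variance sigma2: L_c(y) = 2*y / sigma2."""
    return 2.0 * y / sigma2

def extrinsic(l_posterior, l_channel, l_apriori):
    """The soft output decomposes as L = L_c + L_a + L_e; the
    extrinsic part L_e is what one component decoder passes to
    the other as the next iteration's a priori input."""
    return l_posterior - l_channel - l_apriori

l_c = channel_llr(0.8, 0.5)   # channel information: 3.2
l_a = 0.0                     # no prior knowledge at the first iteration
l_post = 4.1                  # soft output from a decoder pass (illustrative)
l_e = extrinsic(l_post, l_c, l_a)
print(round(l_e, 3))          # 0.9 -> becomes l_a for the next iteration
```

Passing only the extrinsic part, rather than the full soft output, keeps each decoder from being fed back its own channel and prior information.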
Model
Digital Document
Publisher
Florida Atlantic University
Description
The design of a mobile communication receiver requires addressing the stringent issues of low signal-to-noise ratio (SNR) operation and low battery power consumption. Typically, forward error correction using convolutional coding with Viterbi decoding is employed to improve the error performance. However, even for moderate code lengths, the computation and storage requirements of a conventional Viterbi decoder are substantial, consuming an appreciable fraction of DSP computations and hence battery power. The error selective Viterbi decoding (ESVD) scheme developed recently (1) reduces the computational load substantially by taking advantage of noise-free intervals to limit the trellis search. This thesis is concerned with the development of an efficient hardware architecture to implement a hard-decision version of the ESVD scheme for the IS-54 coder. The implementations are optimized to reduce computational complexity, and the performance of the implemented ESVD scheme is verified under different channel conditions.
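The trellis search that ESVD prunes can be illustrated with a plain hard-decision Viterbi decoder. As simplifying assumptions, a short (7,5) octal code stands in for the longer IS-54 code, and the error-selective pruning itself is not shown, only the full add-compare-select recursion it would skip over noise-free intervals:

```python
# Rate-1/2 convolutional code, generators (7,5) octal, constraint length 3.
G = (0b111, 0b101)
K = 3
NSTATES = 1 << (K - 1)

def conv_encode(bits):
    """Encode a bit list (caller appends K-1 zero tail bits)."""
    state, out = 0, []
    for b in bits:
        full = (b << (K - 1)) | state              # newest bit in MSB
        out += [bin(full & g).count("1") & 1 for g in G]
        state = full >> 1
    return out

def viterbi_decode(rx, nbits):
    """Hard-decision Viterbi: add-compare-select over the whole
    trellis (ESVD would limit this search to noisy sections)."""
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)
    paths = [[] for _ in range(NSTATES)]
    for t in range(nbits):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        nm, npth = [INF] * NSTATES, [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = (b << (K - 1)) | s
                o = [bin(full & g).count("1") & 1 for g in G]
                m = metric[s] + (o[0] != r0) + (o[1] != r1)   # branch metric
                ns = full >> 1
                if m < nm[ns]:                                # compare-select
                    nm[ns], npth[ns] = m, paths[s] + [b]
        metric, paths = nm, npth
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0]                    # last two bits are the tail
tx = conv_encode(msg)
rx = tx[:]
rx[4] ^= 1                                  # one channel bit error
print(viterbi_decode(rx, len(msg)) == msg)  # True
```

The inner double loop over states and input bits is the computation ESVD avoids wherever the received symbols are judged error-free.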
Model
Digital Document
Publisher
Florida Atlantic University
Description
In many scientific and signal processing applications, there are increasing demands for large-volume and high-speed computations, which call not only for high-speed, low-power computing hardware, but also for novel approaches in developing new algorithms and architectures. This thesis is concerned with the development of such architectures and algorithms suitable for VLSI implementation of recursive and nonrecursive one-dimensional digital filters using multiple slower processing elements. As background for the development, vectorization techniques such as state-space modeling, block processing, and look-ahead computation are introduced. Concurrent architectures such as systolic arrays and wavefront arrays, and appropriate parallel filter realizations such as lattice, all-pass, and wave filters, are reviewed. A hardware-efficient systolic array architecture, termed the Multiplexed Block-State Filter, is proposed for high-speed implementation of lattice and direct realizations of digital filters. The thesis also proposes a new simplified algorithm, the Alternate Pole Pairing Algorithm, for realizing an odd-order recursive filter as the sum of two all-pass filters. Performance of the proposed schemes is verified through numerical examples and simulation results.
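The block-processing idea behind such block-state architectures can be illustrated on a first-order recursion. The coefficient and block length below are illustrative assumptions; the point is that every output within a block depends only on the state at the block start, so slower processing elements can compute a block concurrently:

```python
a = 0.5   # first-order recursive filter: y[n] = a*y[n-1] + u[n]
L = 4     # block length (samples advanced per block step)

def sample_by_sample(u):
    y, out = 0.0, []
    for v in u:
        y = a * y + v
        out.append(y)
    return out

def block_state(u):
    """Block-state recursion: with powers of `a` precomputed, the
    L outputs of a block are independent given the block-start
    state, exposing parallelism that the scalar loop hides."""
    powers = [a ** k for k in range(L + 1)]
    x, out = 0.0, []
    for start in range(0, len(u), L):
        blk = u[start:start + L]
        for i in range(len(blk)):          # each i could run in parallel
            y = powers[i + 1] * x + sum(powers[i - j] * blk[j]
                                        for j in range(i + 1))
            out.append(y)
        x = out[-1]                         # state handed to the next block
    return out

u = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0, 0.0, 1.5]
print(block_state(u) == sample_by_sample(u))
```

The block form trades extra multiplications (the precomputed powers) for a recursion that only has to be serialized once per block instead of once per sample.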
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis presents simulation results comparing the performance of different realizations and adaptive algorithms for channel equalization. An attempt is made to study and compare the performance of several filter structures used as equalizers for fast data transmission over a baseband channel. To this end, simulation experiments are performed using minimum- and nonminimum-phase channel models; adaptation algorithms such as the least mean square (LMS) and recursive least squares (RLS) algorithms; filter structures such as lattice and transversal filters; and input signals such as binary phase shift keyed (BPSK) and quadrature phase shift keyed (QPSK) signals. Based on the simulation studies, conclusions are drawn regarding the performance of the various adaptation algorithms.
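The LMS/transversal branch of such a comparison can be sketched as follows. The channel model, step size, and tap count are illustrative assumptions, not the experiment configurations of the thesis:

```python
import random

def lms_equalize(x, d, ntaps, mu):
    """Transversal LMS equalizer: y = w . buf, e = d - y,
    then the stochastic-gradient update w <- w + mu * e * buf."""
    w = [0.0] * ntaps
    buf = [0.0] * ntaps
    errs = []
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                       # shift in new sample
        y = sum(wi * bi for wi, bi in zip(w, buf))  # equalizer output
        e = dn - y                                  # training error
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]
        errs.append(e)
    return w, errs

random.seed(1)
d = [random.choice((-1.0, 1.0)) for _ in range(500)]   # BPSK training data
# Mildly dispersive minimum-phase channel: x[n] = d[n] + 0.3*d[n-1]
x = [d[n] + 0.3 * (d[n - 1] if n else 0.0) for n in range(len(d))]
w, errs = lms_equalize(x, d, ntaps=5, mu=0.02)
late = sum(abs(e) for e in errs[-50:]) / 50
print(late)   # residual error is small after convergence
```

Swapping this update rule for an RLS recursion, or the tapped-delay line for a lattice structure, gives the other cells of the comparison grid the abstract describes.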
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis presents simulation results evaluating the performance of blind equalization techniques in the digital cellular environment. A new method employing a simple zero-memory nonlinear detector for complex signals is presented for various forms of Fractionally Spaced Equalizers (FSE). Initial simulations are conducted with Binary Phase Shift Keying (BPSK) to study the characteristics of FSEs. The simulations are then extended to the complex case via $\pi/4$-Differential Quaternary Phase Shift Keying ($\pi/4$-DQPSK) modulation. The primary focus of this thesis is the performance of the complex case when operating in Additive White Gaussian Noise (AWGN) and Rayleigh multipath fading channels.
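A blind equalizer adapts with no training sequence at all. As a stand-in for the thesis's specific zero-memory nonlinear detector, which is not reproduced here, the following sketches the well-known Godard constant-modulus algorithm (CMA) on complex-typed samples; the channel, step size, and symbol count are illustrative assumptions:

```python
import random

def cma_equalize(x, ntaps, mu, R2):
    """Godard constant-modulus blind equalizer: with no training
    data, adapt w to drive |y|^2 toward the constant R2."""
    w = [0j] * ntaps
    w[ntaps // 2] = 1.0 + 0j               # centre-spike initialization
    buf = [0j] * ntaps
    out = []
    for xn in x:
        buf = [xn] + buf[:-1]
        y = sum(wi * bi for wi, bi in zip(w, buf))
        e = y * (abs(y) ** 2 - R2)         # CMA error term
        w = [wi - mu * e * bi.conjugate() for wi, bi in zip(w, buf)]
        out.append(y)
    return out

random.seed(3)
s = [random.choice((1.0, -1.0)) + 0j for _ in range(3000)]  # BPSK: R2 = 1
# Mildly dispersive channel x[n] = s[n] + 0.2*s[n-1]
x = [s[n] + 0.2 * (s[n - 1] if n else 0) for n in range(len(s))]
y = cma_equalize(x, ntaps=5, mu=0.005, R2=1.0)
early = sum(abs(abs(v) - 1.0) for v in y[:200]) / 200
late = sum(abs(abs(v) - 1.0) for v in y[-200:]) / 200
print(early, late)   # modulus dispersion shrinks as the equalizer converges
```

Because the cost depends only on $|y|$, the same loop works unchanged for complex constellations such as $\pi/4$-DQPSK, with $R_2$ set from the constellation statistics.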
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis is concerned with the estimation of the motion parameters of planar object surfaces viewed with a binocular camera configuration. Possible applications of this method include autonomous guidance of a moving platform (AGVS) via imaging, and segmentation of moving objects using information about their motion and structure. The brightness constraint equation is obtained by assuming that the brightness of a moving patch is almost invariant. This equation is solved for the single-camera case as well as the binocular-camera case, either by knowing the value of the surface normal or by determining it iteratively from the estimates of the motion parameters. For this value of the surface normal, rotational and translational motion components are determined over the entire image using a least-squares algorithm. The algorithm is tested on simulated as well as real images for both single-camera and binocular-camera situations. (Abstract shortened with permission of author.)
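The least-squares step can be sketched for the simplest case: a purely translational flow $(u, v)$ over a small patch, which is a deliberate simplification of the thesis's full rotational-plus-translational parameter estimation. The gradient values are synthetic and illustrative:

```python
def patch_flow(Ex, Ey, Et):
    """Least-squares solution of the brightness constraint
    Ex*u + Ey*v + Et = 0 over all pixels of a patch, via the
    normal equations of the resulting 2x2 linear system."""
    a11 = sum(e * e for e in Ex)
    a12 = sum(ex * ey for ex, ey in zip(Ex, Ey))
    a22 = sum(e * e for e in Ey)
    b1 = -sum(ex * et for ex, et in zip(Ex, Et))
    b2 = -sum(ey * et for ey, et in zip(Ey, Et))
    det = a11 * a22 - a12 * a12          # nonzero if gradients vary
    u = (a22 * b1 - a12 * b2) / det
    v = (a11 * b2 - a12 * b1) / det
    return u, v

# Synthetic patch: spatial gradients plus temporal gradients generated
# to be exactly consistent with a known flow (u, v) = (1.5, -0.5)
Ex = [1.0, 0.0, 2.0, 1.0]
Ey = [0.0, 1.0, 1.0, 2.0]
true_u, true_v = 1.5, -0.5
Et = [-(ex * true_u + ey * true_v) for ex, ey in zip(Ex, Ey)]
print(patch_flow(Ex, Ey, Et))  # (1.5, -0.5)
```

Each pixel contributes one brightness-constraint equation in two unknowns, so a patch gives an over-determined system whose least-squares solution is the estimated flow.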
Model
Digital Document
Publisher
Florida Atlantic University
Description
The fundamental goal of a machine vision system for the inspection of assembled printed circuit boards is to locate the integrated circuit (IC) components, check their position and orientation against those of a given model, and detect deviations. To this end, a method based on a modified two-level correlation scheme is presented in this thesis. In the first level, Low-Level Correlation, a modified two-stage template matching method is proposed. It uses random search techniques, better known as the Monte Carlo method, to speed up the matching process on binarized versions of the images. Because of the random search, there is uncertainty about the locations where matches are found. In the second level, High-Level Correlation, an evidence scheme based on the Dempster-Shafer formalism is presented to resolve this uncertainty. Experimental results on a printed circuit board containing mounted IC components are also presented to demonstrate the validity of the techniques.
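The Low-Level Correlation stage can be sketched as a Monte Carlo screen followed by full verification. The image size, template, and sampling parameters below are illustrative assumptions, and the Dempster-Shafer evidence stage is not shown:

```python
import random

def full_match(image, template, r, c):
    """Stage 2: exhaustive pixel comparison at a candidate offset."""
    h, w = len(template), len(template[0])
    return all(image[r + i][c + j] == template[i][j]
               for i in range(h) for j in range(w))

def two_stage_match(image, template, trials=2000, npix=6, accept=0.9, seed=0):
    """Stage 1: at random offsets, compare only a few randomly
    sampled template pixels (the Monte Carlo screen); promising
    offsets are then verified exhaustively."""
    rng = random.Random(seed)
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    for _ in range(trials):
        r = rng.randrange(H - h + 1)
        c = rng.randrange(W - w + 1)
        hits = 0
        for _ in range(npix):
            i, j = rng.randrange(h), rng.randrange(w)
            hits += image[r + i][c + j] == template[i][j]
        if hits / npix >= accept and full_match(image, template, r, c):
            return (r, c)
    return None

# 8x8 binarized board with an X-shaped "component" at offset (2, 3)
template = [[1, 0, 1],
            [0, 1, 0],
            [1, 0, 1]]
image = [[0] * 8 for _ in range(8)]
for i in range(3):
    for j in range(3):
        image[2 + i][3 + j] = template[i][j]

print(two_stage_match(image, template))  # (2, 3)
```

The cheap sampled score rejects most offsets with a handful of pixel reads, so the expensive full comparison runs only at a few candidates; the residual uncertainty from the random sampling is what the evidence stage is designed to resolve.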
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis deals with the recognition of digitized handprinted characters. Digitized character images are thresholded, binarized, and converted into 32 x 32 matrices. The binarized character matrices are preprocessed to remove noise and thinned down to one pixel per linewidth. Four dominant features, namely (1) the number of loops, (2) the number of end-pixels, (3) the number of 3-branch-pixels, and (4) the number of 4-branch-pixels, are used as criteria to pre-classify characters into 14 groups. Characters belonging to the larger groups are encoded into chain code and compiled into a database. Recognition of characters in the larger groups is achieved by database look-up, with decision-tree tests when the database entries are ambiguous. Recognition of characters in the smaller groups is done by decision-tree tests.
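Three of the four pre-classification features (end-pixels, 3-branch-pixels, 4-branch-pixels) can be read directly from the pixel degree of a thinned skeleton. As simplifying assumptions, the loop count is omitted, a plain 8-neighbour count is used (real skeletons often need a crossing-number refinement), and the tiny "Y" skeleton is illustrative:

```python
def neighbour_count(img, r, c):
    """Number of 8-connected foreground neighbours of pixel (r, c)."""
    H, W = len(img), len(img[0])
    return sum(img[r + dr][c + dc]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0)
               and 0 <= r + dr < H and 0 <= c + dc < W)

def skeleton_features(img):
    """Count end-pixels (degree 1), 3-branch-pixels (degree 3) and
    4-branch-pixels (degree 4) on a one-pixel-wide skeleton."""
    ends = b3 = b4 = 0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v:
                n = neighbour_count(img, r, c)
                if n == 1:
                    ends += 1
                elif n == 3:
                    b3 += 1
                elif n == 4:
                    b4 += 1
    return ends, b3, b4

# A tiny thinned 'Y': three strokes meeting at one 3-branch pixel
Y = [[0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 0, 0, 0, 1]]
print(skeleton_features(Y))  # (3, 1, 0): 3 end-pixels, 1 branch point
```

A feature tuple like (3, 1, 0) is what assigns a character to one of the 14 pre-classification groups before the chain-code database look-up.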