Coding theory

Model
Digital Document
Publisher
Florida Atlantic University
Description
With the publication of Shor's quantum algorithm for solving discrete logarithms in
finite cyclic groups, a need for new cryptographic primitives arose; namely, for more secure
primitives that would prevail in the post-quantum era.
The aim of this dissertation is to exploit some hard problems arising from group theory
for use in cryptography. Over the years, there have been many such proposals. We first look
at two recently proposed schemes based on some form of a generalization of the discrete
logarithm problem (DLP), identify their weaknesses, and cryptanalyze them.
By applying the expertise gained from the above cryptanalyses, we define our own
generalization of the DLP to arbitrary finite groups. We show that such a definition leads
to the design of signature schemes and pseudo-random number generators with provable
security under a security assumption based on a group theoretic problem. In particular,
our security assumption is based on the hardness of factorizing elements of the projective
special linear group over a finite field in some representations. We construct a one-way
function based on this group theoretic assumption and provide a security proof.
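To convey the flavor of such an assumption, the following minimal Python sketch (an illustration only, not the construction proved secure in the dissertation) evaluates a word in two generators of SL(2, F_p): computing the product from the word is easy, while recovering the word from the product is the "factorization" direction assumed to be hard. The prime and the word length are arbitrary illustrative choices.

    import random

    # Toy illustration of a factorization-type one-way function in SL(2, F_p).
    # Forward direction: multiply out a word in two generators (easy).
    # Inverse direction: recover the word from the product (assumed hard).
    p = 1000003  # a prime modulus (illustrative choice)

    def mat_mul(A, B):
        """Multiply two 2x2 matrices over F_p."""
        return (
            ((A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p, (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p),
            ((A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p, (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p),
        )

    # Two standard generators of SL(2, Z), reduced mod p.
    S = ((0, p - 1), (1, 0))     # [[0, -1], [1, 0]]
    T = ((1, 1), (0, 1))         # [[1, 1], [0, 1]]

    def evaluate(word):
        """Map a bit string (the 'preimage') to a group element (the 'image')."""
        M = ((1, 0), (0, 1))
        for bit in word:
            M = mat_mul(M, S if bit == 0 else T)
        return M

    secret_word = [random.randint(0, 1) for _ in range(128)]
    public_image = evaluate(secret_word)   # easy to compute
    # Recovering secret_word from public_image is the (assumed) hard problem.
    print(public_image)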
Model
Digital Document
Publisher
Florida Atlantic University
Description
As quantum computers continue to develop, they pose a threat to cryptography, since many popular cryptosystems will be rendered vulnerable. This is because the security of most currently used asymmetric systems rests on the computational hardness of the integer factorization problem, the discrete logarithm problem, or the elliptic curve discrete logarithm problem. However, some cryptosystems still resist quantum computing. We look at code-based cryptography in general and the McEliece cryptosystem specifically. Our goal is to understand the structure behind the McEliece scheme, including the encryption and decryption processes, and the advantages and disadvantages the system has to offer. In addition, using the results from Courtois, Finiasz, and Sendrier's 2001 paper, we discuss a digital signature scheme based on the McEliece cryptosystem. We analyze one classical algebraic attack on the security of the system, based on the distinguishing problem: deciding whether the public key of the McEliece scheme is generated from a generator matrix of a binary Goppa code or from a random binary matrix. The attack involves solving an algebraic system of equations, and we examine the dimension of the solution space of the linearized system of equations. Drawing on a 2010 paper by Faugère, Gauthier-Umaña, Otmani, Perret, and Tillich, we see the parameters needed for the intractability of the distinguishing problem.
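To make the encryption and decryption flow concrete, here is a minimal Python sketch of a McEliece-style system built on a tiny [7,4] Hamming code with brute-force nearest-codeword decoding. The real scheme instead uses a binary Goppa code hidden by a random invertible matrix S and a permutation P, neither of which is modeled here.

    import numpy as np

    # Toy illustration of McEliece-style encryption using a tiny [7,4] Hamming code.
    # Only the encrypt/decrypt flow is sketched; the code hiding (S, P) is omitted.
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])            # systematic generator matrix, d_min = 3

    msgs = np.array([[(i >> j) & 1 for j in range(4)] for i in range(16)])
    codewords = msgs @ G % 2                   # all 16 codewords, for brute-force decoding

    def encrypt(m, t=1):
        """c = m*G + e with a random weight-t error vector e."""
        e = np.zeros(7, dtype=int)
        e[np.random.choice(7, t, replace=False)] = 1
        return (m @ G + e) % 2

    def decrypt(c):
        """Brute-force nearest-codeword decoding (stands in for Goppa decoding)."""
        dists = ((codewords + c) % 2).sum(axis=1)
        best = codewords[dists.argmin()]
        return best[:4]                        # G is systematic, so the message is the prefix

    m = np.array([1, 0, 1, 1])
    c = encrypt(m)
    assert np.array_equal(decrypt(c), m)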
Model
Digital Document
Publisher
Florida Atlantic University
Description
Pairing-friendly curves and elliptic curves with a trapdoor for the discrete
logarithm problem are versatile tools in the design of cryptographic protocols. We
show that curves having both properties enable a deterministic identity-based signing
with “short” signatures in the random oracle model. At PKC 2003, Choon and Cheon
proposed an identity-based signature scheme along with a provable security reduction.
We propose a modification of their scheme with several performance benefits. In
addition to faster signing, for batch signing the signature size can be reduced, and if
multiple signatures for the same identity need to be verified, the verification can be
accelerated. Neither the signing nor the verification algorithm relies on the availability
of a (pseudo)random generator, and we give a provable security reduction in the
random oracle model to the (ℓ-)Strong Diffie-Hellman problem.

Implementing the group arithmetic is a cost-critical task when designing quantum circuits for Shor’s algorithm to solve the discrete logarithm problem. We introduce a tool for the automatic generation of addition circuits for ordinary binary elliptic curves, a prominent platform group for digital signatures. Our Python software generates circuit descriptions that, without increasing the number of qubits or T-depth, involve less than 39% of the number of T-gates in the best previous construction. The software also optimizes the (CNOT) depth for F2-linear operations by means of suitable graph colorings.
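The CNOT-depth optimization can be viewed as a graph-coloring problem: two CNOTs that act on a common qubit cannot be applied in the same time step. The short Python sketch below (our illustration of the general idea, not the dissertation's software) greedily colors the conflict graph of a CNOT list; the number of colors is the depth of the F2-linear layer.

    # Scheduling CNOT gates into parallel time steps by greedy graph coloring.
    # Two CNOTs conflict (cannot share a time step) if they act on a common qubit.
    def cnot_depth(cnots):
        """cnots: list of (control, target) qubit pairs. Returns (depth, schedule)."""
        n = len(cnots)
        # Build the conflict graph.
        adj = [set() for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if set(cnots[i]) & set(cnots[j]):
                    adj[i].add(j)
                    adj[j].add(i)
        # Greedy coloring: each color class is one layer of simultaneously applied gates.
        color = {}
        for v in sorted(range(n), key=lambda u: -len(adj[u])):   # largest-degree first
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        depth = max(color.values()) + 1
        layers = [[cnots[v] for v in range(n) if color[v] == k] for k in range(depth)]
        return depth, layers

    # Example: an F2-linear layer given as a CNOT list on 5 qubits.
    gates = [(0, 1), (2, 3), (1, 2), (3, 4), (0, 4), (1, 3)]
    print(cnot_depth(gates))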
Model
Digital Document
Publisher
Florida Atlantic University
Description
Recently, multimedia applications and their use have grown dramatically in
popularity, in large part due to mobile device adoption by the consumer market.
Applications such as video conferencing have gained popularity. These applications
and others have a strong video component that uses the mobile device’s resources. These
resources include processing time, network bandwidth, memory use, and battery life.
The goal is to reduce the need of these resources by reducing the complexity of the
coding process. Mobile devices offer unique characteristics that can be exploited for
optimizing video codecs. The combination of small display size, video resolution, and
human vision factors, such as acuity, allow encoder optimizations that have little or no impact on subjective quality. The focus of this dissertation is optimizing video services in mobile environments. Industry has begun migrating from H.264 video coding to the more resource-intensive but compression-efficient High Efficiency Video Coding (HEVC). However, HEVC has not yet been properly evaluated and optimized for mobile environments.
Subjective quality evaluations were performed to assess relative quality between H.264
and HEVC. This will allow for better use of device resources and migration to new
codecs where it is most useful. Complexity of HEVC is a significant barrier to adoption
on mobile devices and complexity reduction methods are necessary. Optimal use of
encoding options is needed to maximize quality and compression while minimizing
encoding time. Methods for optimizing coding mode selection for HEVC were
developed. Complexity of HEVC encoding can be further reduced by exploiting the
mismatch between the resolution of the video, resolution of the mobile display, and the
ability of the human eyes to acquire and process video under these conditions. The
perceptual optimizations developed in this dissertation use the properties of spatial
(visual acuity) and temporal information processing (motion perception) to reduce the
complexity of HEVC encoding. A unique feature of the proposed methods is that they
reduce encoding complexity and encoding time.
The proposed HEVC encoder optimization methods reduced encoding time by
21.7% and bitrate by 13.4% with insignificant impact on subjective quality evaluations.
These methods can easily be implemented today within HEVC.
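As an illustration of the kind of perceptual gating described above, the sketch below estimates a display's angular pixel density and caps the coding-unit partition depth when fine detail falls beyond visual acuity. The acuity threshold, the depth cutoffs, and the function names are hypothetical choices for illustration, not the parameters developed in the dissertation.

    import math

    # Hypothetical sketch of a perceptual gate for HEVC mode decisions: when the display
    # and viewing distance mean fine spatial detail exceeds visual acuity, the encoder
    # can skip the smallest coding-unit partitions. Thresholds here are illustrative only.
    def pixels_per_degree(display_px_width, display_cm_width, viewing_distance_cm):
        """Angular pixel density of the display as seen by the viewer."""
        px_per_cm = display_px_width / display_cm_width
        cm_per_degree = 2 * viewing_distance_cm * math.tan(math.radians(0.5))
        return px_per_cm * cm_per_degree

    def max_cu_depth(ppd, acuity_cpd=30.0):
        """Cap the coding-unit partition depth when detail is beyond acuity.
        acuity_cpd: assumed highest visible spatial frequency in cycles/degree."""
        nyquist_ppd = 2 * acuity_cpd          # pixels/degree needed to show that frequency
        if ppd >= 2 * nyquist_ppd:
            return 1    # restrict to large partitions: finest detail is invisible
        elif ppd >= nyquist_ppd:
            return 2    # allow medium partitions
        return 3        # full partition search

    ppd = pixels_per_degree(display_px_width=1080, display_cm_width=7.0, viewing_distance_cm=35.0)
    print(round(ppd, 1), max_cu_depth(ppd))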
Model
Digital Document
Publisher
Florida Atlantic University
Description
The main goal of video coding algorithms is to achieve high compression efficiency while
maintaining the quality of the compressed signal at the highest level. The human visual system
is the ultimate receiver of the compressed signal and the final judge of its quality. This
dissertation presents work toward an optimal video compression algorithm based on the
characteristics of our visual system. By modeling phenomena such as backward temporal
masking and motion masking, we developed algorithms that are implemented in state-of-
the-art video encoders. The result of using our algorithms is visually lossless compression
with improved efficiency, as verified by standard subjective quality and psychophysical
tests. Savings in bitrate compared to the High Efficiency Video Coding / H.265 reference
implementation are up to 45%.
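The following small Python sketch illustrates the general idea of masking-driven bit allocation: frames immediately preceding a scene cut (backward temporal masking) and frames with fast motion (motion masking) can be quantized more coarsely. The spans, offsets, and motion thresholds are illustrative assumptions, not the models developed in this dissertation.

    # Illustrative sketch of masking-driven quantization offsets (not the dissertation's model).
    # Backward temporal masking: frames just before a scene cut are poorly perceived,
    # so they can be quantized more coarsely. Motion masking: fast motion hides detail.
    def qp_offsets(scene_cuts, num_frames, motion=None, mask_span=3, max_offset=6):
        """Return a per-frame QP offset list.
        scene_cuts: frame indices where a new scene starts.
        motion: optional per-frame motion magnitude in pixels/frame."""
        offsets = [0] * num_frames
        for cut in scene_cuts:
            for k in range(1, mask_span + 1):          # frames preceding the cut
                f = cut - k
                if 0 <= f < num_frames:
                    offsets[f] = max(offsets[f], max_offset - 2 * (k - 1))
        if motion is not None:
            for f, mv in enumerate(motion):
                offsets[f] += min(int(mv // 8), 4)     # coarser QP for faster motion
        return offsets

    print(qp_offsets(scene_cuts=[10], num_frames=15, motion=[0] * 5 + [16] * 10))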
Model
Digital Document
Publisher
Florida Atlantic University
Description
The design of any communication receiver needs to address the issue of operating at the lowest possible signal-to-noise ratio. Among the various algorithms that facilitate this objective are those used for iterative decoding of two-dimensional systematic convolutional codes in applications such as spread spectrum communications and Code Division Multiple Access (CDMA) detection. A main theme of any decoding scheme is to approach the Shannon limit in signal-to-noise ratio. These decoding algorithms have various complexity levels and processing delay issues; hence, the optimal choice depends on how they are used in the system. The technique used in these decoding algorithms is termed iterative decoding. Iterative decoding was first developed as a practical means for decoding turbo codes. Using log-likelihood algebra, it is shown that a decoder can be developed that accepts soft inputs as a priori information and delivers soft outputs consisting of channel information, a posteriori information, and extrinsic information to subsequent stages of iteration. Different algorithms, such as the Soft Output Viterbi Algorithm (SOVA), Maximum A Posteriori (MAP), and Log-MAP, are compared and their complexities analyzed in this thesis. A turbo decoder is implemented on the Texas Instruments TMS320C30 Digital Signal Processing (DSP) chip using a Modified-Log-MAP algorithm. For the Modified-Log-MAP algorithm, the optimal choice of the lookup table (LUT) is analyzed by experimenting with different LUT approximations. A low-complexity decoder is proposed for a (7,5) code and implemented on the DSP chip. Performance of the decoder is verified in an Additive White Gaussian Noise (AWGN) environment. Hardware issues such as memory requirements and processing time are addressed for the chosen decoding scheme. Test results of the bit error rate (BER) performance are presented for a fixed number of frames and iterations.
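At the heart of Log-MAP decoding is the max* (Jacobian logarithm) operation, and the Modified-Log-MAP variant replaces its correction term with a small lookup table. The sketch below compares the exact operation with a LUT approximation; the table size and quantization step are illustrative choices, not those used on the TMS320C30.

    import math

    # Sketch of the max* (Jacobian logarithm) operation used in Log-MAP decoding,
    # and a lookup-table approximation of its correction term (Modified-Log-MAP).
    def max_star_exact(a, b):
        """max*(a, b) = log(exp(a) + exp(b)) = max(a, b) + log(1 + exp(-|a - b|))."""
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    STEP, ENTRIES = 0.25, 16                        # LUT covering |a - b| in [0, 4)
    LUT = [math.log1p(math.exp(-i * STEP)) for i in range(ENTRIES)]

    def max_star_lut(a, b):
        """Replace the correction term by a small lookup table."""
        idx = int(abs(a - b) / STEP)
        corr = LUT[idx] if idx < ENTRIES else 0.0   # beyond the table, the term is ~0
        return max(a, b) + corr

    for a, b in [(0.3, -1.2), (2.0, 1.9), (5.0, -5.0)]:
        print(max_star_exact(a, b), max_star_lut(a, b))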
Model
Digital Document
Publisher
Florida Atlantic University
Description
The design of a mobile communication receiver requires addressing the stringent issues of low signal-to-noise ratio (SNR) operation and low battery power consumption. Typically, forward error correction using convolutional coding with Viterbi decoding is employed to improve the error performance. However, even with moderate code lengths, the computation and storage requirements of a conventional Viterbi decoder (VD) are substantial, consuming an appreciable fraction of DSP computations and hence battery power. The new error selective Viterbi decoding (ESVD) scheme developed recently (1) reduces the computational load substantially by taking advantage of noise-free intervals to limit the trellis search. This thesis is concerned with the development of an efficient hardware architecture to implement a hard-decision version of the ESVD scheme for the IS-54 coder. The implementations are optimized to reduce computational complexity. The performance of the implemented ESVD scheme is verified for different channel conditions.
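For context, the sketch below implements a conventional hard-decision Viterbi decoder for a small rate-1/2, constraint-length-3 convolutional code; ESVD additionally limits the trellis search during noise-free intervals, which this illustration does not attempt to reproduce.

    # Minimal hard-decision Viterbi decoder for a rate-1/2, K=3 code with generators
    # (7, 5) octal, shown only to fix ideas about the trellis search that ESVD prunes.
    G = (0b111, 0b101)                 # generator polynomials
    K = 3                              # constraint length
    NSTATES = 1 << (K - 1)

    def encode(bits):
        state, out = 0, []
        for b in bits:
            reg = (b << (K - 1)) | state
            out += [bin(reg & g).count("1") & 1 for g in G]
            state = reg >> 1
        return out

    def viterbi_decode(received, nbits):
        INF = float("inf")
        metric = [0] + [INF] * (NSTATES - 1)       # encoder starts in state 0
        paths = [[] for _ in range(NSTATES)]
        for t in range(nbits):
            new_metric = [INF] * NSTATES
            new_paths = [None] * NSTATES
            r = received[2 * t:2 * t + 2]
            for s in range(NSTATES):
                if metric[s] == INF:
                    continue
                for b in (0, 1):
                    reg = (b << (K - 1)) | s
                    expect = [bin(reg & g).count("1") & 1 for g in G]
                    ns = reg >> 1
                    m = metric[s] + sum(x != y for x, y in zip(r, expect))  # Hamming metric
                    if m < new_metric[ns]:
                        new_metric[ns], new_paths[ns] = m, paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[metric.index(min(metric))]

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    rx = encode(msg)
    rx[3] ^= 1                         # flip one channel bit
    assert viterbi_decode(rx, len(msg)) == msg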
Model
Digital Document
Publisher
Florida Atlantic University
Description
The objective of this work is to verify the feasibility of converting a large FEA code into a massively parallel FEA code in terms of computational speed and cost. Sequential subroutines in the Research EPIC hydro code, a Lagrangian finite element analysis code for high-velocity elastic-plastic impact problems, are individually converted into parallel code using Cray Adaptive Fortran (CRAFT). The massively parallel subroutines running on 32 PEs of the Cray T3D perform faster than their sequential counterparts on the Cray Y-MP. At the next stage of the research, Parallel Virtual Machine (PVM) directives are used to develop a PVM version of the EPIC hydro code by connecting the converted parallel subroutines running on multiple PEs of the T3D to the sequential part of the code running on a single PE. As more subroutines are massively parallelized within the PVM EPIC hydro code, its speedup increases accordingly. The results indicate that significant speedup can be achieved in the EPIC hydro code when most or all of the subroutines are massively parallelized.
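The observation that speedup grows as more subroutines are parallelized is captured by Amdahl's law. The short sketch below evaluates the predicted speedup on 32 PEs as the parallel fraction increases; the fractions are illustrative numbers, not measurements from the EPIC hydro code.

    # Amdahl's-law illustration of how overall speedup grows as more of a code is
    # parallelized. The fractions below are illustrative, not EPIC hydro measurements.
    def amdahl_speedup(parallel_fraction, n_pes):
        """Predicted speedup when a fraction of the runtime scales perfectly on n_pes PEs."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_pes)

    for f in (0.25, 0.50, 0.75, 0.90, 0.99):
        print(f"parallel fraction {f:.2f}: speedup on 32 PEs = {amdahl_speedup(f, 32):.1f}x")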
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis presents an image coding system using binomial-QMF-based subband decomposition and vector quantization. An attempt was made to compress a still image of size 256 x 256, represented at a resolution of 8 bits/pixel, to a bit rate of 0.5 bits/pixel using 16-channel subband decomposition with binomial QMFs and coding the subbands with a full-search LBG vector quantizer (VQ). Simulations were run on a SUN workstation, and the quality of the image was evaluated by computing the signal-to-noise ratio (SNR) between the original image and the reconstructed image.
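The sketch below outlines full-search LBG codebook training (codebook splitting followed by Lloyd iterations) and the SNR measure used to judge reconstruction quality; the vector dimension, codebook size, and training data are illustrative stand-ins rather than the thesis's subband configuration.

    import numpy as np

    # Minimal sketch of full-search LBG vector quantizer training and an SNR measure.
    def lbg(train, codebook_size, iters=20, eps=1e-3):
        codebook = train.mean(axis=0, keepdims=True)
        while len(codebook) < codebook_size:
            codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split
            for _ in range(iters):                                   # Lloyd iterations
                d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                nearest = d.argmin(axis=1)
                for k in range(len(codebook)):
                    members = train[nearest == k]
                    if len(members):
                        codebook[k] = members.mean(axis=0)
        return codebook

    def snr_db(original, reconstructed):
        noise = ((original - reconstructed) ** 2).sum()
        return 10 * np.log10((original ** 2).sum() / noise)

    rng = np.random.default_rng(0)
    blocks = rng.normal(size=(2000, 4))          # stand-in for 2x2 image/subband blocks
    cb = lbg(blocks, codebook_size=16)
    quantized = cb[((blocks[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)]
    print(f"SNR = {snr_db(blocks, quantized):.1f} dB")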
Model
Digital Document
Publisher
Florida Atlantic University
Description
A new approach has been developed for the inverse kinematic analysis of redundant robots. For redundant robots, inverse kinematics is complicated by the non-square nature of the Jacobian. In this method, the Jacobian and the inverse kinematic equation are reduced based on the rank of the Jacobian and the constraints specified. This process automatically locks some joints of the robot at various trajectory points. The reduced inverse kinematic equation is solved by an iterative procedure to find joint variable values for a known task description. Computer simulations of the inverse kinematics applied to a redundant planar robot and a redundant moving-base robot showed the method to be efficient, with results found within a few iterations at excellent accuracy.
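For comparison, the sketch below shows what a standard iterative inverse-kinematics solve looks like for a redundant 3-link planar arm using the Jacobian pseudo-inverse; the thesis's method instead reduces the Jacobian and locks joints, which this illustration does not reproduce. The link lengths and target are arbitrary.

    import numpy as np

    # Iterative inverse kinematics for a redundant 3-link planar arm, using the
    # standard Jacobian pseudo-inverse update (not the thesis's reduced-Jacobian method).
    L = np.array([1.0, 0.8, 0.5])               # link lengths (illustrative)

    def fk(q):
        """End-effector position for joint angles q (3 joints, planar)."""
        angles = np.cumsum(q)
        return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

    def jacobian(q):
        angles = np.cumsum(q)
        J = np.zeros((2, 3))
        for i in range(3):
            J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
            J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
        return J                                  # 2x3: non-square because the arm is redundant

    def solve_ik(target, q0, tol=1e-6, max_iter=100):
        q = q0.astype(float)
        for _ in range(max_iter):
            err = target - fk(q)
            if np.linalg.norm(err) < tol:
                break
            q += np.linalg.pinv(jacobian(q)) @ err   # minimum-norm joint update
        return q

    target = np.array([1.2, 1.0])
    q = solve_ik(target, q0=np.array([0.3, 0.3, 0.3]))
    print(q, fk(q))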