Algorithms

Model
Digital Document
Publisher
Florida Atlantic University
Description
A study was made of the problem of locating M facilities on a connected grid graph so that M is minimum and every demand node on the graph is within a given distance K of one of these M facilities. We call this the G(N,K,M) problem, with N denoting the total number of demand nodes. An algorithm for solving this problem using a backtracking technique is presented in this thesis. A heuristic algorithm is also presented; although the resulting M is not always minimum, it tends to be near minimum. Its advantage over the backtracking algorithm is that it runs very quickly. The algorithms presented in this thesis are programmed in the Pascal language for the Univac 1100 computer at Florida Atlantic University, Boca Raton, Florida.
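As an illustration of the search structure described above, here is a minimal sketch in Python (not the thesis's Pascal, and with brute-force enumeration standing in for its pruned backtracking): it tries M = 1, 2, ... until some set of M facility nodes covers every node of a w-by-h grid within Manhattan distance K.

```python
from itertools import combinations

def covers(node, fac, k):
    # On a full grid graph, Manhattan distance equals shortest-path distance.
    return abs(node[0] - fac[0]) + abs(node[1] - fac[1]) <= k

def min_facilities(w, h, k):
    nodes = [(x, y) for x in range(w) for y in range(h)]
    for m in range(1, len(nodes) + 1):            # try M = 1, 2, ...
        for facilities in combinations(nodes, m):
            if all(any(covers(n, f, k) for f in facilities) for n in nodes):
                return facilities                  # first success gives minimum M
    return None

print(min_facilities(3, 3, 2))   # a 3x3 grid with K = 2 needs one central facility
```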
Model
Digital Document
Publisher
Florida Atlantic University
Description
Finding the shortest or a "short enough" vector in an integral lattice of substantial dimension is a difficult problem. The problem is not known to be NP-hard, but most people believe it is [7]. The security of the newly proposed NTRU cryptosystem depends solely on this fact. However, by definition, NTRU lattices possess a certain symmetry. This suggests that there may be a way of taking advantage of this symmetry, in combination with existing lattice reduction algorithms, to enable a new cryptanalytic approach. The aim of this work is to exploit the symmetry inherent in NTRU lattices to design a non-deterministic algorithm that improves basis reduction techniques for NTRU lattices. We show how the non-trivial cyclic automorphism of an NTRU lattice enables further reduction. Our approach combines recently published versions of the famous LLL algorithm for lattice basis reduction with our automorphism utilization techniques.
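The cyclic symmetry in question can be illustrated with a short sketch (Python; the names and the toy vector are illustrative, not from the thesis): in the NTRU ring Z[X]/(X^N - 1), multiplication by X cyclically rotates coefficient vectors, and applying the rotation to both halves of a 2N-dimensional NTRU lattice vector yields another lattice vector of the same norm.

```python
def rotate(coeffs):
    """Multiply a polynomial in Z[X]/(X^N - 1) by X: rotate right by one."""
    return [coeffs[-1]] + coeffs[:-1]

def rotate_ntru_vector(v):
    """Apply the cyclic automorphism to a 2N-vector (f-half, g-half)."""
    n = len(v) // 2
    return rotate(v[:n]) + rotate(v[n:])

v = [1, -1, 0, 2, 0, 1, 1, -1]      # a toy (f, g) pair with N = 4
orbit, w = [], v
for _ in range(4):
    w = rotate_ntru_vector(w)
    orbit.append(w)
# every vector in `orbit` has the same Euclidean norm as v, so short vectors
# come in orbits of size up to N; feeding such rotations back into an
# LLL-reduced basis is the kind of step the thesis exploits
```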
Model
Digital Document
Publisher
Florida Atlantic University
Description
The research proposed and elaborated in this dissertation is concerned with the development of new decision algorithms for hard handoff strategies in mobile communication systems. Specifically, the research tasks envisaged include the following: (1) use of information-theoretic statistical distance measures as a metric for hard handoff decisions; (2) a study to evaluate the log-likelihood criterion in deciding when to perform the hard handoff; (3) development of a statistical model to evaluate optimum instants of measurement of the metric used for the hard handoff decision. The aforesaid objectives refer to a practical scenario in which a mobile station (MS) traveling away from a serving base station (BS-I) may suffer communication impairment due to interference and shadowing effects, especially in an urban environment. As a result, it will seek to switch over to another base station (BS-II) that offers a stronger signal level. This is called the handoff procedure. (Hard handoff refers to the specific case in which only one base station serves the mobile at the instant of handover.) Classically, the handoff decision is made on the basis of the difference between the received signal strengths (RSS) from BS-I and BS-II. The algorithms developed here, in contrast, base the decision criterion on the statistical divergence and/or log-likelihood ratio that exists between the received signals. The purpose of the present study is to evaluate the relative efficacy of the conventional and proposed algorithms in reference to: (i) minimization of unnecessary handoffs ("ping-pongs"); (ii) minimization of delay in handing over; (iii) ease of implementation; and (iv) minimization of possible call dropouts due to ineffective handover. Simulated results with data commensurate with practical considerations are furnished and discussed. Background literature is presented in the introductory chapter, and the scope for future work is identified via open questions in the concluding chapter.
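The contrast between the classical RSS-difference rule and a likelihood-based rule can be sketched as follows (Python; the function names, Gaussian-shadowing assumption, and thresholds are illustrative, not the dissertation's models):

```python
def classical_handoff(rss_serving_dbm, rss_candidate_dbm, hysteresis_db=4.0):
    """Classical rule: hand off only when the candidate base station's RSS
    exceeds the serving station's by a hysteresis margin (limits ping-pong)."""
    return rss_candidate_dbm - rss_serving_dbm > hysteresis_db

def log_likelihood_handoff(samples, mu_stay, mu_go, sigma, threshold=5.0):
    """Likelihood-ratio rule: accumulate log p(x | hand off) - log p(x | stay)
    over RSS samples, assuming Gaussian shadowing with common variance."""
    llr = sum(((x - mu_stay) ** 2 - (x - mu_go) ** 2) / (2.0 * sigma ** 2)
              for x in samples)
    return llr > threshold

# toy usage: the serving signal fades toward -95 dBm, the candidate holds -85 dBm
print(classical_handoff(-95.0, -85.0))                             # True
print(log_likelihood_handoff([-88, -86, -85], -95.0, -85.0, 4.0))  # True
```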
Model
Digital Document
Publisher
Florida Atlantic University
Description
The development of a parallel data structure and an associated elemental decomposition algorithm for explicit finite element analysis on a massively parallel SIMD computer, the DECmpp 12000 (MasPar MP-1) machine, is presented, and then extended to an implementation on the MIMD computer, the Cray-T3D. The new parallel data structure and elemental decomposition algorithm are discussed in detail and are used to parallelize a sequential Fortran code that deals with the application of isoparametric elements for the nonlinear dynamic analysis of shells of revolution. The parallel algorithm required the development of a new procedure, called an 'exchange', which consists of an exchange of nodal forces at each time step to replace the standard gather-assembly operations in the sequential code. In addition, the data was reconfigured so that all nodal variables associated with an element are stored in a processor along with the other element data. The architectural and Fortran programming language features of the MasPar MP-1 and Cray-T3D computers which are pertinent to finite element computations are also summarized, and sample code segments are provided to illustrate programming in a data parallel environment. The governing equations, the finite element discretization, and a comparison between their implementation on von Neumann and SIMD-MIMD parallel computers are discussed to demonstrate their applicability and the important differences in the new algorithm. Various large-scale transient problems are solved using the parallel data structure and elemental decomposition algorithm, and measured performances are presented and analyzed in detail. Results show that the Cray-T3D is a very promising parallel computer for finite element computation. The 32 processors of this machine show an overall speedup of 27-28, i.e., an efficiency of 85% or more, and 128 processors show a speedup of 70-77, i.e., an efficiency of 55% or more. The Cray-T3D results demonstrate that this machine is capable of outperforming the Cray-YMP by a factor of about 10 for finite element problems with 4K elements; therefore, the method of developing the parallel data structure and its associated elemental decomposition algorithm is recommended for implementation in other finite element codes on this machine. However, the results from the MasPar MP-1 show that this new algorithm for explicit finite element computations does not produce very efficient parallel code on that computer, and therefore the new data structure is not recommended for further use on the MasPar machine.
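The 'exchange' idea can be sketched schematically (Python here rather than the dissertation's Fortran; the data layout is illustrative): elements keep private copies of their nodal variables, and at each time step elements sharing a global node sum their force contributions instead of performing a global gather-assembly.

```python
from collections import defaultdict

def exchange(elements):
    """elements: list of dicts mapping global node id -> local nodal force.
    Replaces each element's local force with the sum over all elements
    sharing that node, mimicking a gather-assembly via pairwise exchange."""
    total = defaultdict(float)
    for elem in elements:                 # phase 1: post contributions
        for node, force in elem.items():
            total[node] += force
    for elem in elements:                 # phase 2: read back the sums
        for node in elem:
            elem[node] = total[node]
    return elements

# two 1D elements sharing global node 1: the shared node's forces are summed
elems = [{0: 1.0, 1: -1.0}, {1: 2.0, 2: -2.0}]
print(exchange(elems))    # node 1 holds 1.0 in both elements afterwards
```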
Model
Digital Document
Publisher
Florida Atlantic University
Description
The dual issues of extracting and tracking eye features from video images are addressed in this dissertation. The proposed scheme differs from conventional intrusive eye movement measuring systems and can be implemented using an inexpensive personal computer. The desirable features of such a measurement system are low cost, accuracy, automated operation, and non-intrusiveness. An overall scheme is presented in which a new algorithm is put forward for each of the function blocks in the processing system. A new corner detection algorithm is presented in which the problem of detecting corners is solved by minimizing a cost function. Each cost factor captures a desirable characteristic of the corner using both the gray level information and the geometrical structure of a corner. This approach additionally provides corner orientations and angles along with corner locations. The advantage of the new approach over existing corner detectors is that it improves the reliability of detection and localization by imposing criteria related to both the gray level data and the corner structure. The extraction of eye features is performed using an improved method of deformable templates which are geometrically arranged to resemble the expected shape of the eye. The overall energy function is redefined to simplify the minimization process. The weights for the energy terms are selected based on the normalized value of each energy term, so the weighting schedule of the modified method does not demand any expert knowledge from the user. Rather than using a sequential procedure, all parameters of the template are changed simultaneously during the minimization process. This reduces not only the processing time but also the probability of the template being trapped in local minima. An efficient algorithm for real-time eye feature tracking from a sequence of eye images is developed in the dissertation. Based on a geometrical model which describes the characteristics of the eye, measurement equations are formulated to relate suitably selected measurements to the tracking parameters. A discrete Kalman filter is then constructed for the recursive estimation of the eye features, while taking into account the measurement noise. The small processing time allows this tracking algorithm to be used in real-time applications. The tracking algorithm is suitable for an automated, non-intrusive, and inexpensive system, as it is capable of measuring the time profiles of the eye movements. The issue of compensating for head movements during the tracking of eye movements is also discussed. An appropriate measurement model is established to describe the effects of head movements, and based on this model a Kalman filter structure is formulated to carry out the compensation. The whole tracking scheme, which cascades two Kalman filters, is constructed to track the iris movement while compensating for the head movement. The presence of eye blinks is also taken into account, and their detection is incorporated into the cascaded tracking scheme. The above algorithms have been integrated to design an automated, non-intrusive, and inexpensive system which provides accurate time profiles of eye movements tracked from video image frames.
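A generic discrete Kalman filter predict-update cycle, of the kind the tracking scheme builds on, can be sketched as follows (Python/NumPy; the constant-velocity model and noise values are illustrative, not the dissertation's eye-geometry measurement model):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict-update cycle of the discrete Kalman filter."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0 / 30.0                          # one video frame
F = np.eye(4); F[0, 2] = F[1, 3] = dt    # constant-velocity state [x, y, vx, vy]
H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # we observe position only
Q = 1e-4 * np.eye(4)                     # process noise
R = 1e-2 * np.eye(2)                     # measurement noise
x, P = np.zeros(4), np.eye(4)
for z in ([0.10, 0.20], [0.12, 0.21], [0.15, 0.23]):
    x, P = kalman_step(x, P, np.asarray(z), F, H, Q, R)
```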
Model
Digital Document
Publisher
Florida Atlantic University
Description
A pressure-based computer program for the general Navier-Stokes equations has been developed. A body-fitted coordinate system is employed to handle flows with complex geometry. A non-staggered grid is used, while pressure oscillation is eliminated by a special pressure interpolation scheme. The hybrid algorithm is adopted to discretize the equations, and the finite-difference equations are solved by TDMA, while the whole solution is obtained through an under-relaxed iterative process. The pressure field is evaluated using the compressible form of the SIMPLE algorithm.
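TDMA here is the standard tridiagonal matrix (Thomas) algorithm; a textbook sketch follows (Python; variable names are generic, not taken from the program described):

```python
def tdma(a, b, c, d):
    """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
    a[0] and c[-1] are unused; the input lists are not modified."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = f discretized: 2 on the diagonal, -1 off-diagonal
print(tdma([0, -1, -1], [2, 2, 2], [-1, -1, 0], [1, 1, 1]))  # [1.5, 2.0, 1.5]
```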
Model
Digital Document
Publisher
Florida Atlantic University
Description
The placement problem is an important part of the design process of VLSI chips. It is necessary to have a proper placement so that all connections between modules in a chip can be routed in a minimum area without violating any physical or electrical constraints. Current algorithms either do not give optimum solutions, are computationally slow, or are difficult to parallelize. PIREN© is a parallel implementation of a force-directed algorithm which seeks to overcome the large amount of computer time associated with solving the placement problem. Each active processor in the massively parallel SIMD machine, the MasPar MP-2.2, can perform in parallel the computation necessary to place cells in an optimum location relative to one another based upon the connectivity between cells. This is due to a salient feature of the serial algorithm which allows multiple permutations to be made simultaneously on all modules in order to minimize the objective function. The serial implementation of PIREN© compares favorably in both run time and layout quality to the simulated-annealing-based algorithm TimberWolf3.2©. The parallel implementation on the MP-2.2 achieves a speedup of 4.5 to 58.0 over the serial version of PIREN© running on the VAX 6320, while producing layouts for several MCNC benchmarks which are of the same quality as those produced by the serial implementation.
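The force-directed principle behind such placers can be sketched as follows (Python; this is a generic illustration, not PIREN©'s actual objective or parallel code): each movable cell is pulled toward the connectivity-weighted centroid of the cells it connects to, and every cell's new position can be computed independently, which is what makes the approach amenable to SIMD parallelism.

```python
def force_directed_step(pos, nets):
    """pos: {cell: (x, y)}; nets: {movable cell: [(neighbor, weight), ...]}.
    Moves each movable cell to the weighted centroid of its neighbors."""
    new_pos = {}
    for cell, nbrs in nets.items():
        w_total = sum(w for _, w in nbrs)
        x = sum(pos[n][0] * w for n, w in nbrs) / w_total
        y = sum(pos[n][1] * w for n, w in nbrs) / w_total
        new_pos[cell] = (x, y)
    return new_pos

# two fixed pads a and b; movable cell c is tied twice as strongly to a
pos = {"a": (0.0, 0.0), "b": (4.0, 0.0), "c": (2.0, 4.0)}
nets = {"c": [("a", 2.0), ("b", 1.0)]}
pos.update(force_directed_step(pos, nets))
print(pos["c"])   # (1.333..., 0.0): c settles nearer its stronger connection
```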
Model
Digital Document
Publisher
Florida Atlantic University
Description
The research work described in this dissertation is primarily aimed at developing efficient algorithms for the rate allocation problem in redundant serial chain manipulators. While the problem of redundancy resolution in the context of robot manipulators has been well researched, the search for optimality in computational efficiency has caught attention only recently. Further, the idea of modifying already developed performance criteria to improve computational efficiency has rarely been treated with the importance it deserves. The present work, in fact, provides many alternative formulations to the existing performance criteria. As a result of the present investigation, we developed a mathematical tool for the minimum norm solution of underdetermined systems of linear equations, using the orthogonal null space. Closed-form equations were provided for cases with two or three degrees of redundancy. A detailed study of computational efficiency showed a substantial reduction in the arithmetic operations necessary for such a solution. The above concept was later generalized to utilize the self-motion characteristics of redundant manipulators to provide alternate solutions. The duality concept between the Jacobian and the null space, established in this work, enabled us to develop a highly efficient formulation as an alternative to the commonly used pseudoinverse-based solution. In addition, through the example of a 7R anthropomorphic arm, the feasibility of obtaining an analytical formulation of the null space coefficient matrix and the transformed end effector velocity vector for any geometry has been demonstrated. By utilizing the duality between the Jacobian and its null space, different performance criteria commonly used in the redundancy resolution problem have been modified, increasing the computational efficiency. Various simulations performed as part of the present work, utilizing the analytical null space coefficient matrix and the transformed end effector velocity vector for the 3R planar case and the 7R spatial anthropomorphic arm, corroborate the theory. Another practical application has been demonstrated with the example of a Titan 7F arm mounted on a mobile base. The work is consolidated by reiterating the insight obtained into the physical aspects of the redundancy resolution problem and providing a direction for future work. Suggestions are given for extending the work to high-d.o.r. systems, with the relevant mathematical foundations. Future work in the area of dynamic modelling is delineated, which also includes an example of a modified dynamic manipulability measure.
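The commonly used pseudoinverse-based solution that this work improves upon can be sketched generically (Python/NumPy; the toy Jacobian is illustrative, and this is not the dissertation's closed-form null-space formulation): the minimum-norm joint rates for J q_dot = x_dot, optionally shifted along the null space to exploit the arm's self-motion.

```python
import numpy as np

def redundant_rates(J, x_dot, z=None):
    """Minimum-norm joint rates, optionally shifted along the null space."""
    J_pinv = np.linalg.pinv(J)
    q_dot = J_pinv @ x_dot                      # minimum-norm particular solution
    if z is not None:                           # self-motion term: (I - J+ J) z
        n = J.shape[1]
        q_dot = q_dot + (np.eye(n) - J_pinv @ J) @ z
    return q_dot

J = np.array([[1.0, 0.5, 0.2],                  # a toy 2x3 Jacobian (1 d.o.r.)
              [0.0, 1.0, 0.7]])
x_dot = np.array([0.3, -0.1])
print(redundant_rates(J, x_dot))                       # least-norm rates
print(J @ redundant_rates(J, x_dot, z=np.ones(3)))     # still equals x_dot
```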
Model
Digital Document
Publisher
Florida Atlantic University
Description
The dissertation focuses on robot manipulator dynamic modeling and the inertial and kinematic parameter identification problem. An automatic symbolic algorithm for deriving the dynamic parameters is presented. This algorithm provides the linearly independent dynamic parameter set. It is shown that all the dynamic parameters are identifiable when the trajectory is persistently exciting. The parameter set satisfies the necessary condition for finding a persistently exciting trajectory.

Since in practice the system data matrix is corrupted with noise, conventional estimation methods do not converge to the true values. An error bound is given for Kalman filters. The total least squares method is introduced to obtain unbiased estimates.

Simulation studies are presented for five particular identification methods. The simulations are performed under different noise levels.

Observability problems for the inertial and kinematic parameters are investigated. Under certain conditions, all of the linearly independent parameters derived above are observable.

The inertial and kinematic parameters can be categorized into three parts according to their influence on the system dynamics. The dissertation gives an algorithm to classify these parameters.
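A textbook total least squares solution via the SVD of the augmented matrix, of the kind introduced above for noisy data matrices, can be sketched as follows (Python/NumPy; the synthetic data is illustrative, not the dissertation's manipulator model):

```python
import numpy as np

def total_least_squares(A, b):
    """Solve A x ~ b in the TLS sense, minimizing perturbations to [A | b]."""
    n = A.shape[1]
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                      # right singular vector of smallest sigma
    return -v[:n] / v[n]            # solution from the last column of V

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
A_noisy = A + 0.01 * rng.normal(size=A.shape)   # noise in the data matrix too
b_noisy = b + 0.01 * rng.normal(size=b.shape)
print(total_least_squares(A_noisy, b_noisy))    # close to x_true
```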
Model
Digital Document
Publisher
Florida Atlantic University
Description
The broadcast operation plays a fundamental role in mobile ad hoc networks because of the broadcasting nature of radio transmission: when a sender transmits a packet, all nodes within the sender's transmission range are affected by this transmission. The benefit of this property is that one packet can be received by all neighbors; the negative effect is that it interferes with other transmissions. Flooding ensures that the entire network receives the packet but generates many redundant transmissions, which may trigger a serious broadcast storm problem that can collapse the entire network. The broadcast storm problem can be avoided by providing efficient broadcast algorithms that aim to reduce the number of nodes that retransmit the broadcast packet while still guaranteeing that all nodes receive the packet. This dissertation focuses on providing several efficient localized broadcast algorithms that reduce broadcast redundancy in mobile ad hoc networks. In my dissertation, the efficiency of a broadcast algorithm is measured by the number of forward nodes used to relay a broadcast packet. A classification of broadcast algorithms for mobile ad hoc networks is provided at the beginning. Two neighbor-designating broadcast algorithms, called total dominant pruning and partial dominant pruning, are proposed to reduce the number of forward nodes. Several extensions based on the neighbor-designating approach are also investigated. The cluster-based broadcast algorithm shows good performance in dense networks, and it also provides a constant upper-bound approximation ratio to the optimum solution for the number of forward nodes in the worst case. A generic broadcast framework with K-hop neighbor information trades off the number of forward nodes against the size of the K-hop zone. A reliable broadcast algorithm, called double-covered broadcast, is proposed to improve the delivery ratio of a broadcast packet when the transmission error rate of the network is high. The effectiveness of all these algorithms has been confirmed by simulations.
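The flavor of neighbor-designating algorithms such as total and partial dominant pruning can be conveyed by a greedy sketch (Python; this is a generic set-cover heuristic over 2-hop neighborhoods, not the dissertation's exact selection rules): the sender picks forward nodes from its 1-hop neighbors until every 2-hop neighbor is covered.

```python
def select_forward_nodes(sender, adj):
    """adj: {node: set of neighbors}. Greedily cover the sender's 2-hop
    neighbors with as few 1-hop forward nodes as the heuristic finds."""
    one_hop = adj[sender]
    two_hop = set().union(*(adj[v] for v in one_hop)) - one_hop - {sender}
    uncovered, forwards = set(two_hop), []
    while uncovered:
        # pick the neighbor that newly covers the most uncovered 2-hop nodes
        best = max(one_hop, key=lambda v: len(adj[v] & uncovered))
        gained = adj[best] & uncovered
        if not gained:
            break                     # remaining nodes cannot be covered
        forwards.append(best)
        uncovered -= gained
    return forwards

adj = {
    "s": {"a", "b"},
    "a": {"s", "b", "c", "d"},
    "b": {"s", "a", "d"},
    "c": {"a"},
    "d": {"a", "b"},
}
print(select_forward_nodes("s", adj))   # ['a']: one forward node covers c and d
```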