Computer algorithms

Model
Digital Document
Publisher
Florida Atlantic University
Description
In ad hoc wireless networks, routing protocols are challenged with establishing and maintaining multihop routes in the face of mobility, bandwidth limitations, and power constraints. Routing based on a connected dominating set is a promising approach, in which the search space for a route is reduced to the nodes in the set. A set is dominating if every node in the system is either in the dominating set or adjacent to some node in the dominating set. In this thesis, we propose a method of calculating a power-aware connected dominating set. Our simulation results show that the proposed approach outperforms several existing approaches in terms of the life span of the network. We also discuss mobility management in dominating-set-based networks, considering three operations: mobile host switch-on, mobile host switch-off, and mobile host movement. Finally, we discuss the use of dynamic source routing as an application of the connected dominating set.
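To make the domination definition concrete, here is a minimal, centralized sketch that checks the dominating-set property and builds a connected dominating set greedily, preferring high-energy nodes. It is not the thesis's distributed power-aware algorithm; the function names, the adjacency-dict representation, and the `energy` map are illustrative assumptions, and a connected input graph is assumed.

```python
def is_dominating(graph, dom_set):
    """A set is dominating if every node is in it or adjacent to a member."""
    covered = set(dom_set)
    for v in dom_set:
        covered.update(graph[v])
    return covered == set(graph)

def greedy_connected_dominating_set(graph, energy):
    """Greedy illustration: start from the highest-degree (then highest-energy)
    node and repeatedly add the frontier node that covers the most uncovered
    nodes, breaking ties by remaining energy, until the set dominates the graph.
    Picking only neighbors of the current set keeps it connected."""
    start = max(graph, key=lambda v: (len(graph[v]), energy[v]))
    dom_set = {start}
    while not is_dominating(graph, dom_set):
        frontier = {u for v in dom_set for u in graph[v]} - dom_set
        covered = set(dom_set) | {x for v in dom_set for x in graph[v]}
        def gain(u):
            return (len(set(graph[u]) - covered), energy[u])
        dom_set.add(max(frontier, key=gain))
    return dom_set

# Tiny example: a 4-node path a-b-c-d with per-node residual energy.
g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
e = {"a": 5, "b": 9, "c": 7, "d": 4}
print(greedy_connected_dominating_set(g, e))  # {'b', 'c'}
```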
Model
Digital Document
Publisher
Florida Atlantic University
Description
In this thesis, two different types of computer algorithms, deterministic and Monte Carlo, are illustrated. Implementations of the Berlekamp-Massey algorithm and the Parallelized Pollard Rho Search are described. What these two algorithms contribute to the field of cryptography, and why they have proven important to it, is briefly discussed. It is also shown that, with a little extra knowledge, the Parallelized Pollard Rho Search can easily be modified to improve its performance.
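As background for the deterministic side, here is a textbook sketch of the Berlekamp-Massey algorithm over GF(2), which finds the shortest LFSR generating a binary sequence. It is a standard formulation, not the thesis's implementation, and the function name is an assumption.

```python
def berlekamp_massey_gf2(bits):
    """Return (L, C): L is the length of the shortest LFSR generating `bits`,
    and C its connection polynomial coefficients over GF(2), with C[0] = 1."""
    n = len(bits)
    C = [1] + [0] * n          # current connection polynomial
    B = [1] + [0] * n          # copy of C before the last length change
    L, m = 0, -1
    for i in range(n):
        # discrepancy: does the current LFSR predict bit i correctly?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= C[j] & bits[i - j]
        if d:
            T = C[:]
            shift = i - m
            for j in range(n + 1 - shift):
                C[j + shift] ^= B[j]     # C(x) += x^(i-m) * B(x)
            if 2 * L <= i:
                L, m, B = i + 1 - L, i, T
    return L, C[:L + 1]

# The period-2 sequence 0,1,0,1 is generated by s[n] = s[n-2]:
print(berlekamp_massey_gf2([0, 1, 0, 1]))  # (2, [1, 0, 1])
```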
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis describes routing in mobile ad hoc wireless networks. Ad hoc networks lack a wired backbone to maintain routes as mobile hosts move and power up or down. Therefore, the hosts in ad hoc networks must cooperate with one another to determine routes in a distributed manner. Location-based routing is a frequently used approach, in which the search space for a route is reduced to a smaller zone by defining a request zone and an expected zone. We propose a mobility-pattern-based algorithm to reduce this overhead, then evaluate the proposed algorithm through simulation. We have implemented two mobility patterns in Location Aided Routing, namely a leading-movement pattern and a random-walk pattern, and have developed a simulation model for each mobility pattern using SES/Workbench. Performance is measured in terms of network overhead. We also discuss other routing algorithms such as dynamic source routing, the zone routing protocol, associativity-based routing, and ad hoc on-demand distance vector routing.
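For readers unfamiliar with the request-zone/expected-zone idea, here is a minimal sketch in the style of Location Aided Routing scheme 1: the expected zone is a circle around the destination's last known position, and a node forwards a route request only if it lies in the rectangle enclosing the source and that circle. Function names and coordinates are illustrative assumptions.

```python
def expected_zone(last_pos, speed, elapsed):
    """Expected zone: a circle of radius speed * elapsed centered on the
    destination's last known position."""
    return last_pos, speed * elapsed

def in_request_zone(node, source, last_pos, radius):
    """Request zone: the smallest axis-aligned rectangle containing the source
    and the expected zone.  Only nodes inside it forward the route request."""
    (sx, sy), (dx, dy) = source, last_pos
    xmin, xmax = min(sx, dx - radius), max(sx, dx + radius)
    ymin, ymax = min(sy, dy - radius), max(sy, dy + radius)
    x, y = node
    return xmin <= x <= xmax and ymin <= y <= ymax

center, r = expected_zone((100.0, 40.0), speed=2.0, elapsed=30.0)
print(in_request_zone((70.0, 40.0), source=(0.0, 0.0), last_pos=center, radius=r))  # True
```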
Model
Digital Document
Publisher
Florida Atlantic University
Description
A top-down design methodology using hardware description languages (HDLs) and powerful design, analysis, synthesis, and layout software tools for electronic circuit design is described and applied to the design of a single-layer artificial neural network that incorporates on-chip learning. Using the perceptron learning algorithm, these simple neurons learn a classification problem in 10.55 microseconds in one application. The objective is to describe a methodology by following the design of a simple network. This methodology is later applied in the design of a novel architecture, a stochastic neural network. All issues related to algorithmic design for VLSI implementability are discussed, and results of layout and timing analysis are given along with software simulations. A top-down design methodology is presented, including a brief introduction to HDLs and an overview of the software tools used throughout the design process. These tools now make it possible for a designer to complete a design in a relatively short period of time. In-depth knowledge of computer architecture, VLSI fabrication, electronic circuits, and integrated circuit design is no longer necessary to accomplish a task that a few years ago would have required a large team of specialized experts in many fields. This may appeal to researchers from a wide range of backgrounds, including computer scientists, mathematicians, and psychologists experimenting with learning algorithms. It is only in a hardware implementation of artificial neural network learning algorithms that the true parallel nature of these architectures can be fully tested. Most applications of neural networks are essentially software simulations of the algorithms run on a single CPU, sequentially simulating a parallel, richly interconnected architecture. This dissertation describes a methodology whereby a researcher experimenting with a known or new learning algorithm can test it as intended, on a parallel hardware architecture.
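For reference, the perceptron learning rule mentioned above is the classic one sketched below in software; the on-chip version in the dissertation is a hardware realization of the same idea. The function name and the AND-function example are illustrative assumptions.

```python
def train_perceptron(samples, epochs=100, lr=1.0):
    """Classic perceptron learning rule for two-class data.
    `samples` is a list of (feature_vector, label) pairs with labels in {-1, +1}."""
    dim = len(samples[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:          # misclassified: nudge toward the sample
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:                      # converged on a separating hyperplane
            break
    return w, b

# Learn the logical AND function (linearly separable, so the rule converges).
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), +1)]
weights, bias = train_perceptron(data)
```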
Model
Digital Document
Publisher
Florida Atlantic University
Description
The Alopex algorithm is presented as a universal learning algorithm for connectionist models. It is shown that the Alopex procedure can be used efficiently as a supervised learning algorithm for such models. The algorithm is demonstrated successfully on a variety of network architectures, including multilayer perceptrons, time-delay models, asymmetric fully recurrent networks, and memory neuron networks. The learning performance as well as the generalization capability of the Alopex algorithm are compared with those of the backpropagation procedure on a number of benchmark problems, and it is shown that Alopex has specific advantages over backpropagation. Two new architectures (gain layer schemes) are proposed for the on-line, direct adaptive control of dynamical systems using neural networks. The proposed schemes are shown to provide better dynamic response and tracking characteristics than other existing direct control schemes. A velocity reference scheme is introduced to improve the dynamic response of on-line learning controllers. The proposed learning algorithm and architectures are studied on three practical problems: (i) classification of handwritten digits using Fourier descriptors; (ii) recognition of underwater targets from sonar returns, considering temporal dependencies of consecutive returns; and (iii) on-line learning control of autonomous underwater vehicles, starting with random initial conditions. Detailed studies are conducted on the learning control applications. The effect of the network learning rate on the tracking performance and dynamic response of the system is investigated. The ability of the neural network controllers to adapt to slow and sudden parameter disturbances and measurement noise is also studied in detail.
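The abstract does not reproduce the update rule itself; as background, here is a minimal sketch of the Alopex-style correlation update reported in the connectionist literature. Each weight moves by a fixed step whose sign is chosen stochastically from the correlation between the previous weight change and the change in error. The temperature schedule and network specifics are omitted, and the function name and fixed `temp` are illustrative assumptions rather than the dissertation's formulation.

```python
import math
import random

def alopex_step(w, w_prev, err, err_prev, step=0.01, temp=0.1):
    """One Alopex-style weight update (simplified sketch).  A negative step is
    taken with probability sigmoid(C_i / T), where C_i is the product of the
    previous weight change and the error change, so moves that correlated with
    rising error tend to be undone."""
    new_w = []
    for wi, wi_prev in zip(w, w_prev):
        corr = (wi - wi_prev) * (err - err_prev)          # correlation term C_i
        p_negative = 1.0 / (1.0 + math.exp(-corr / temp))
        delta = -step if random.random() < p_negative else step
        new_w.append(wi + delta)
    return new_w
```

Because the rule only needs the scalar error and the previous weight change, it applies unchanged to feedforward, recurrent, or time-delay architectures, which is why the abstract can treat it as a universal procedure.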
Model
Digital Document
Publisher
Florida Atlantic University
Description
Accompanying the potential increase in power offered by parallel computers is an increase in the complexity of program design, implementation, testing and maintenance. It is important to understand the logical complexity of parallel programs in order to support the development of concurrent software. Measures are needed to quantify the components of parallel software complexity and to establish a basis for comparison and analysis of parallel algorithms at various stages of development and implementation. A set of primitive complexity measures is proposed that collectively describe the total complexity of parallel programs. The total complexity is separated into four dimensions or components: requirements, sequential, parallel and communication. Each proposed primitive measure is classified under one of these four areas. Two additional possible dimensions, fault-tolerance and real-time, are discussed. The total complexity measure is expressed as a vector of dimensions; each component is defined as a vector of primitive metrics. The method of quantifying each primitive metric is explained in detail. Those primitive metrics that contribute to the parallel and communication complexity are exercised against ten published summation algorithms and programs, illustrating that architecture has a significant effect on the complexity of parallel programs, even when the same programming language is used. The memory organization and the processor interconnection scheme had no effect on the parallel component, but did affect the communication component. Programming style and language did not have a noticeable effect on either component. The proposed metrics are quantifiable, consistent, and useful in comparing parallel algorithms. Unlike existing parallel metrics, they are general and applicable to different languages, architectures, algorithms, paradigms, programming styles and stages of software development.
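The "vector of dimensions, each a vector of primitive metrics" structure can be illustrated with a small data sketch; the metric names below are placeholders, not the thesis's actual primitive measures.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ParallelComplexity:
    """Total complexity as a vector of four dimensions, each dimension itself
    a vector (here a name -> value map) of primitive metrics."""
    requirements: Dict[str, float] = field(default_factory=dict)
    sequential: Dict[str, float] = field(default_factory=dict)
    parallel: Dict[str, float] = field(default_factory=dict)
    communication: Dict[str, float] = field(default_factory=dict)

# Hypothetical measurement of one summation program on one architecture.
summation = ParallelComplexity(
    parallel={"process_count": 8, "synchronization_points": 3},
    communication={"messages": 14, "channels": 7},
)
```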
Model
Digital Document
Publisher
Florida Atlantic University
Description
An unsupervised learning algorithm for application-level intrusion detection, named the Graph Sequence Learning Algorithm (GSLA), is proposed in this dissertation, and experiments demonstrate its effectiveness. As in most intrusion detection algorithms, the normal profile in GSLA must be learned first. The normal profile is built using a session-learning method, which is combined with one-way analysis of variance (ANOVA) to determine the value of an anomaly threshold. In the proposed approach, a hash table is used to store the sparse data matrix, collected from a web transition log, in triple format rather than as an n-by-n matrix. Furthermore, in GSLA the sequence-learning matrix can be changed dynamically according to the volume of the data set. Therefore, the approach is more efficient, easier to manipulate, and saves memory space. To validate the effectiveness of the algorithm, extensive simulations have been conducted by applying the GSLA algorithm to the homework submission system of our computer science and engineering department. The performance of GSLA is evaluated and compared with the traditional Markov Model (MM) and K-means algorithms. Specifically, three major experiments have been done: (1) a small data set is collected as sample data and applied to the GSLA, MM, and K-means algorithms to illustrate the operation of the proposed algorithm and demonstrate the detection of abnormal behaviors; (2) the Random Walk-Through sampling method is used to generate a larger sample data set, and the resulting anomaly scores are classified into several clusters in order to visualize and demonstrate the normal and abnormal behaviors with the K-means and GSLA algorithms; and (3) multiple professors' data sets are collected and used to build the normal profiles, and the ANOVA method is used to test for significant differences among the professors' normal profiles. The GSLA algorithm can be packaged as a module and plugged into an IDS as its anomaly detection component.
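The sparse-storage idea can be sketched independently of GSLA itself: keep page-transition counts in a hash table keyed by (from, to) pairs and flatten them to (row, column, value) triples, instead of allocating an n-by-n matrix. The function names and the toy log below are illustrative assumptions, not the dissertation's code.

```python
from collections import defaultdict

def build_transition_counts(sessions):
    """Count page transitions sparsely: a hash table keyed by
    (from_page, to_page), so unseen transitions cost no storage."""
    counts = defaultdict(int)
    for pages in sessions:
        for src, dst in zip(pages, pages[1:]):
            counts[(src, dst)] += 1
    return counts

def as_triples(counts):
    """Flatten the hash table into (row, column, value) triples."""
    return [(src, dst, c) for (src, dst), c in counts.items()]

log = [["login", "assignments", "submit"], ["login", "grades"]]
print(as_triples(build_transition_counts(log)))
```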
Model
Digital Document
Publisher
Florida Atlantic University
Description
With the continuing advances in computing and wireless technologies, mobile ad hoc networks (MANETs) are expected to become an indispensable part of the computing environment in the near future. Wireless devices are constantly growing in computing speed, memory, communication capabilities and features, while shrinking in weight and size. With this growth and the proliferation of these devices in every aspect of society, the need for such devices to communicate in a seamless manner is becoming increasingly essential. Multiple routing protocols have been developed for MANETs [51]. As MANETs gain popularity, their need to support real-time and multimedia applications is growing as well. Such applications have stringent quality of service (QoS) requirements such as bandwidth, delay, and delay jitter. Design and development of routing algorithms with QoS support is attracting increased research interest, and several approaches proposing routing algorithms with QoS support for MANETs have been presented in the literature. This dissertation addresses the issues and challenges of QoS routing in MANETs and presents three new protocols which provide QoS support for this environment. First, a brief classification of existing QoS routing algorithms is provided. Then, the three new protocols for QoS routing support in MANETs are presented. These protocols focus on resource reservation for QoS provisioning in TDMA-based MANETs. The first protocol improves QoS support by eliminating race conditions during multiple reservations of QoS paths. The second protocol is a new protocol for resource reservation for QoS support in TDMA-based MANETs using directional antennas. The last protocol provides dynamic-range resource reservation for QoS support in MANETs and represents an extension of the previous protocols.
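To illustrate what "resource reservation in TDMA-based MANETs" involves, here is a simplified, single-channel sketch of the usual slot-conflict check: a slot can be reserved for a sender-receiver link only if neither endpoint already uses it and no hidden-terminal collision would result. This is a generic textbook constraint, not any of the dissertation's three protocols, and the data structures are assumptions.

```python
def free_slots(sender, receiver, num_slots, tx_slots, rx_slots, neighbors):
    """Return the slots on which `sender` may transmit to `receiver` without
    conflict.  tx_slots/rx_slots map each node to the set of slots it already
    transmits/receives in; neighbors maps each node to its one-hop neighbors."""
    usable = []
    for s in range(num_slots):
        if s in tx_slots[sender] or s in rx_slots[sender]:
            continue                                   # sender already busy
        if s in tx_slots[receiver] or s in rx_slots[receiver]:
            continue                                   # receiver already busy
        if any(s in rx_slots[n] for n in neighbors[sender] if n != receiver):
            continue                                   # would jam another receiver
        if any(s in tx_slots[n] for n in neighbors[receiver] if n != sender):
            continue                                   # hidden terminal at receiver
        usable.append(s)
    return usable
```

Race conditions of the kind the first protocol eliminates arise when two such reservations are computed concurrently from stale neighbor slot tables and both pick the same "free" slot.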
Model
Digital Document
Publisher
Florida Atlantic University
Description
Multi-hop wireless networks are infrastructure-less networks consisting of mobile or stationary wireless devices; they include multi-hop wireless mesh networks and multi-hop wireless sensor networks. These networks are characterized by limited bandwidth and energy resources, unreliable communication, and a lack of central control, characteristics that give rise to the main research challenges of multi-hop wireless networks. Building routing schemes with a good balance among routing QoS metrics (such as reliability, cost, and delay) is a paramount concern in achieving high-performance wireless networks. These QoS metrics are mutually correlated, and most existing work does not fully exploit this correlation. We design a metric that balances the trade-off between reliability and cost, and build a framework of utility-based routing models for multi-hop wireless networks. This dissertation focuses on variations and applications of utility-based routing models, designing new concepts and developing new algorithms for them. A review of existing routing algorithms and of the basic utility-based routing model for multi-hop wireless networks is provided first. An efficient algorithm, called MaxUtility, is then proposed for the basic utility-based routing model; MaxUtility is an optimal algorithm that finds the routing path with the maximum expected utility.
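To show what "expected utility of a path" can mean, here is one simplified formulation and a brute-force stand-in for an optimal search: the packet is worth a fixed benefit if delivered, each link has a cost and a success probability, and a link's cost is only paid if the packet reaches it. This is not the dissertation's exact model or the MaxUtility algorithm (which is efficient rather than enumerative); names and numbers are illustrative assumptions.

```python
def expected_utility(path_links, benefit):
    """Expected utility of a path: `path_links` is a list of (cost, reliability)
    pairs; delivery succeeds only if every link succeeds, and each hop's cost
    is incurred only if the packet actually reaches that hop."""
    utility, reach_prob = 0.0, 1.0
    for cost, reliability in path_links:
        utility -= reach_prob * cost          # pay for attempting this hop
        reach_prob *= reliability             # probability of surviving it
    return utility + reach_prob * benefit

def max_utility_path(paths, benefit):
    """Brute-force stand-in: among candidate paths, pick the one with the
    highest expected utility."""
    return max(paths, key=lambda p: expected_utility(p, benefit))

candidates = [
    [(1.0, 0.9), (1.0, 0.9)],                 # two fairly reliable hops
    [(0.5, 0.7), (0.5, 0.7), (0.5, 0.7)],     # three cheap but lossy hops
]
best = max_utility_path(candidates, benefit=10.0)
```

The metric makes the reliability/cost trade-off explicit: a cheaper but lossier path wins only when the saved cost outweighs the lost delivery probability times the benefit.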
Model
Digital Document
Publisher
Florida Atlantic University
Description
Disruption-Tolerant Networks (DTNs) are networks composed of wireless nodes that experience unstable connectivity and frequent connection disruptions because of limitations in radio range, power, and network density, as well as device failures and noise. DTNs are characterized by their lack of infrastructure, device limitations, and intermittent connectivity. Such characteristics cause conventional wireless routing protocols to fail, as those protocols are designed under the assumption that the network stays connected. Routing in DTNs therefore becomes a challenging problem, due to the temporal scheduling element in a dynamic topology. One class of solutions is prediction-based: node mobility is estimated from a history of observations, and forwarding decisions during data delivery are then made using the predicted information. Current prediction-based routing protocols can be divided into two sub-categories according to whether they are probability-based: probabilistic and non-probabilistic. This dissertation focuses on probabilistic prediction-based (PPB) routing schemes in DTNs. We find that most of these protocols are designed for a specific topology or scenario, so almost every protocol has drawbacks when applied to a different one; because every scenario has its own particular features, a universal protocol that suits all DTN scenarios is unlikely to exist. Motivated by this observation, we investigate and divide current DTN scenarios into three categories: Voronoi-based, landmark-based, and random-movement DTNs. For each category, we design and implement a corresponding PPB routing protocol for either basic routing or a specific application, taking the category's unique features into account.
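As an example of the probabilistic prediction-based category (not one of the dissertation's own protocols), here is a sketch in the spirit of PROPHET, a well-known PPB scheme: each node keeps delivery predictabilities that rise on encounters, age over time, and propagate transitively, and a bundle is handed to a peer only if the peer is more likely to deliver it. The parameter values and function names are illustrative assumptions.

```python
GAMMA, BETA, P_INIT = 0.98, 0.25, 0.75   # aging, transitivity, encounter constants

def on_encounter(pred, a, b):
    """Update node a's delivery predictabilities after meeting node b."""
    pab = pred.setdefault((a, b), 0.0)
    pred[(a, b)] = pab + (1.0 - pab) * P_INIT            # direct encounter boost
    for (x, d), pbd in list(pred.items()):
        if x == b and d != a:                            # transitivity through b
            pad = pred.get((a, d), 0.0)
            pred[(a, d)] = pad + (1.0 - pad) * pred[(a, b)] * pbd * BETA

def age(pred, node, time_units):
    """Decay a node's predictabilities as time passes without encounters."""
    for key in [k for k in pred if k[0] == node]:
        pred[key] *= GAMMA ** time_units

def should_forward(pred, carrier, peer, dest):
    """Hand the bundle over only if the peer is more likely to deliver it."""
    return pred.get((peer, dest), 0.0) > pred.get((carrier, dest), 0.0)
```

Scenario-specific PPB protocols, such as those developed here for Voronoi-based, landmark-based, and random-movement DTNs, differ mainly in how these predictabilities are estimated and combined with topology knowledge.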