Computer networks

Model
Digital Document
Publisher
Florida Atlantic University
Description
This research proposes a new switching node architecture for cell-switched Asynchronous Transfer Mode (ATM) networks. The proposed architecture has several distinguishing features compared with existing Banyan-based switching nodes. It has a cylindrical structure, as opposed to the flat structure found in Banyans. The wrap-around property yields better link utilization than existing Banyans and also reduces the average route length, while the simplified digit-controlled routing of Banyans is retained. The cylindrical nature of the architecture enables pipelined operation, and the architecture tends to sort traffic toward higher addresses, eliminating the need for a front-end preprocessing node. An approximate Markov chain analysis of the performance of the switching node with single input buffers is presented and used to compute the time-delay distribution of a cell leaving the node. A simulation tool is used to validate the analytical model; the simulation model is free from the critical assumptions that are necessary to develop the analytical model. The analytical results closely match the simulation results, confirming the validity of the simulation model. We then study the performance of the switching node for various input buffer sizes. Throughput is low with a single-input-buffered switching node; however, as the buffer size is increased from two to three, throughput increases by more than 100%, with no appreciable increase in node delay. We conclude that the optimum buffer size for high throughput is three, and that the maximum throughput with an offered load of 0.9 and buffer size three is 0.75, a limit imposed by the head-of-line blocking phenomenon. A technique to overcome this inherent problem is presented. The several delays a cell faces are analyzed and summarized below.
The wait delay with buffer sizes of one and two is high; however, it becomes negligible when the buffer size is increased beyond two, because a larger buffer reduces head-of-line blocking and allows more cells to move forward. Node delay and switched delay are comparable when the buffer size is greater than two. The delay offered remains within the threshold range required for real-time traffic; it is clock-rate dependent and can be reduced by running the switching node at a higher clock speed. The worst delay noted for a switched cell, for a node operating at a clock rate of 200 MHz, is 0.5 μs.
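The head-of-line blocking effect described above can be reproduced with a small Monte Carlo sketch. This is not the dissertation's Markov model; the port count, load, slot count, and random contention order are illustrative assumptions:

```python
import random

def simulate_hol(num_ports=8, buf_size=3, load=0.9, slots=20000, seed=1):
    """Monte Carlo sketch of an input-buffered switch with FIFO buffers.

    Illustrates head-of-line (HOL) blocking: only the cell at the head of
    each input queue may contend for an output, so cells behind it are
    blocked even when their own destination outputs are idle.
    """
    rng = random.Random(seed)
    queues = [[] for _ in range(num_ports)]  # one FIFO per input port
    delivered = 0
    for _ in range(slots):
        # Arrivals: each input receives a cell with probability `load`,
        # destined to a uniformly random output; drop if the buffer is full.
        for q in queues:
            if rng.random() < load and len(q) < buf_size:
                q.append(rng.randrange(num_ports))
        # Service: each output accepts at most one head-of-line cell.
        claimed = set()
        for q in rng.sample(queues, len(queues)):  # random contention order
            if q and q[0] not in claimed:
                claimed.add(q[0])
                q.pop(0)
                delivered += 1
    return delivered / (num_ports * slots)  # throughput per port per slot
```

Running the sketch for buffer sizes one through three shows the qualitative trend the abstract reports: throughput rises markedly with buffer size before flattening out against the HOL-blocking ceiling.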
Model
Digital Document
Publisher
Florida Atlantic University
Description
This dissertation describes an architecture for a special-purpose communications protocol processor (CPP) developed for open systems interconnection (OSI) layered protocol processing. A performance problem exists in the implementation and processing of communication protocols, one that can limit the throughput of future network interfaces. The problem revolves around two issues: (i) communications-processing bottlenecks that prevent full utilization of high-speed transmission media; and (ii) the mechanisms used to implement communications functions. The objective of this work is to address this problem and develop a first-of-its-kind processor dedicated to protocol processing. First, trends in computer communications technology are discussed, along with issues that influence throughput in front-end controllers for network interfaces that support OSI. Network interface requirements and a survey of existing technology are presented, the state of the art of layered communication is evaluated, and specific parameters that contribute to the performance of communications processors are identified. Based on this evaluation, a new set of instructions is developed to support the necessary functions. Each component of the new architecture is explained with respect to its implementation mechanism. The CPP contains special-purpose circuits dedicated to fast execution (e.g., single-machine-cycle execution) of functions needed to process header and frame information, functions that are repeatedly encountered in all protocol layers, together with instructions designed to take advantage of these circuits. The header-processing functions include priority branch determination, register bit reshaping (rearranging), and instruction address processing. Frame-processing functions include CRC (cyclic redundancy check) computation, bit insertion/deletion operations, and special-character detection operations.
Justifications for the new techniques are provided and their advantages over existing technology are discussed. A hardware register-transfer-level model is developed to simulate the new architecture for path-length computations, and a performance queueing model is developed to analyze the processor's characteristics under various load parameters. Finally, a brief discussion indicates how such a processor would apply to future network interfaces, along with possible trends.
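The frame-processing functions above are the kind of bit-level work the CPP executes in dedicated circuits. As a software rendering of one of them, a bitwise CRC computation looks like the following; the polynomial and initial value are the common CRC-16/CCITT-FALSE parameters, an assumption rather than a value taken from the dissertation:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE over a frame's bytes.

    A protocol processor would compute this in hardware, one bit per
    clock; this loop mirrors that shift-and-conditional-XOR structure.
    """
    crc = init
    for byte in data:
        crc ^= byte << 8               # fold the next byte into the register
        for _ in range(8):
            if crc & 0x8000:           # MSB set: shift and apply polynomial
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The standard check value for these parameters, `crc16_ccitt(b"123456789")`, is 0x29B1, which makes the routine easy to verify against published CRC catalogs.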
Model
Digital Document
Publisher
Florida Atlantic University
Description
Multi-hop wireless networks are infrastructure-less networks consisting of mobile or stationary wireless devices; they include multi-hop wireless mesh networks and multi-hop wireless sensor networks. These networks are characterized by limited bandwidth and energy resources, unreliable communication, and a lack of central control, characteristics that drive the research challenges of multi-hop wireless networks. Designing routing schemes that strike a good balance among routing QoS metrics (such as reliability, cost, and delay) is paramount to achieving high-performance wireless networks. These QoS metrics are internally correlated, yet most existing work does not fully exploit this correlation. We design a metric that balances the trade-off between reliability and cost, and build a framework of utility-based routing models for multi-hop wireless networks. This dissertation focuses on variations and applications of utility-based routing models, designing new concepts and developing new algorithms for them. It begins with a review of existing routing algorithms and the basic utility-based routing model for multi-hop wireless networks. An efficient algorithm, called MaxUtility, is proposed for the basic utility-based routing model; MaxUtility is an optimal algorithm that finds the routing path with the maximum expected utility.
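To make "maximum expected utility" concrete, here is a brute-force stand-in for a MaxUtility-style search (the real algorithm is far more efficient). The utility model is a simplification assumed for illustration: every link's cost is always paid, and the delivery benefit is earned only if every link succeeds:

```python
def expected_utility(path_links, benefit):
    """Expected utility of a path of (reliability, cost) links under the
    simplified model: benefit * prod(reliabilities) - sum(costs)."""
    reliability, cost = 1.0, 0.0
    for r, c in path_links:
        reliability *= r
        cost += c
    return benefit * reliability - cost

def max_utility_path(graph, src, dst, benefit):
    """Exhaustive search over simple paths for the one with the highest
    expected utility.  graph: {node: {neighbor: (reliability, cost)}}."""
    best = (float("-inf"), None)

    def dfs(node, visited, links):
        nonlocal best
        if node == dst:
            u = expected_utility(links, benefit)
            if u > best[0]:
                best = (u, list(visited))
            return
        for nxt, (r, c) in graph.get(node, {}).items():
            if nxt not in visited:
                visited.append(nxt)
                links.append((r, c))
                dfs(nxt, visited, links)
                links.pop()
                visited.pop()

    dfs(src, [src], [])
    return best
```

Note how the reliability/cost trade-off plays out: with a high delivery benefit, a more expensive but more reliable path can beat a cheap, lossy one, which is exactly the balance the utility metric is designed to capture.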
Model
Digital Document
Publisher
Florida Atlantic University
Description
In the last few years there has been significant growth in the area of wireless communication. Quality of Service (QoS) has become an important consideration for supporting the variety of applications that utilize network resources, including voice over IP and multimedia services such as video streaming and video conferencing. IEEE 802.16/WiMAX is a new network standard designed with quality of service in mind. This thesis focuses on the analysis of quality of service as implemented by WiMAX networks. First, it presents the details of the quality-of-service architecture in WiMAX networks. The analysis uses a WiMAX module built on the popular network simulator ns-2. Various real-life scenarios, such as voice calls and video streaming, are set up in the simulation environment, and parameters that indicate quality of service, such as throughput, packet loss, average jitter, and average delay, are analyzed for the different types of service flows defined in WiMAX. The results indicate that better quality of service is achieved by using the service flows designed for specific applications.
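The delay and jitter figures such a study reports can be computed from matched per-packet timestamps. A minimal sketch follows, using the RFC 3550 smoothed interarrival-jitter estimator as an assumed (but widely used) definition; the thesis does not specify which estimator its tooling applies:

```python
def qos_metrics(send_times, recv_times):
    """Average one-way delay and RFC 3550-style smoothed jitter from
    matched send/receive timestamps (lost packets assumed already
    filtered out, clocks assumed synchronized for the delay figure)."""
    delays = [r - s for s, r in zip(send_times, recv_times)]
    avg_delay = sum(delays) / len(delays)
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        # RFC 3550: J += (|D| - J) / 16, where D is the change in delay.
        jitter += (abs(cur - prev) - jitter) / 16.0
    return avg_delay, jitter
```

The 1/16 gain smooths the estimate so a single late packet does not dominate the reported jitter, which is why simulators and RTP stacks alike favor this form.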
Model
Digital Document
Publisher
Florida Atlantic University
Description
Disruption-Tolerant Networks (DTNs) are networks composed of a set of wireless nodes; they experience unstable connectivity and frequent connection disruption because of radio-range and power limitations, network density, device failure, and noise. DTNs are characterized by their lack of infrastructure, device limitations, and intermittent connectivity. These characteristics cause conventional wireless routing protocols to fail, since those protocols are designed with the assumption that the network stays connected. Routing in DTNs thus becomes a challenging problem, owing to the temporal scheduling element in a dynamic topology. One class of solutions is prediction-based: node mobility is estimated from a history of observations, and forwarding decisions during data delivery are made using the predicted information. Current prediction-based routing protocols can be divided into two sub-categories according to whether they are probability-based: probabilistic and non-probabilistic. This dissertation focuses on probabilistic prediction-based (PPB) routing schemes in DTNs. We find that most of these protocols are designed for a specific topology or scenario, so almost every protocol has drawbacks when applied to a different scenario; because every scenario has its own particular features, there can hardly exist a universal protocol that suits all DTN scenarios. Motivated by this observation, we investigate and divide current DTN scenarios into three categories: Voronoi-based, landmark-based, and random-moving DTNs. For each category, we design and implement a corresponding PPB routing protocol, for either basic routing or a specific application, taking its unique features into account.
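As a concrete instance of the probabilistic prediction-based idea (a well-known scheme from the literature, not one of the dissertation's own protocols), PRoPHET maintains a "delivery predictability" per destination that grows on encounters, decays with time, and propagates transitively:

```python
P_INIT, BETA, GAMMA = 0.75, 0.25, 0.98  # typical PRoPHET parameters

def on_encounter(p_ab):
    """When node a meets node b, raise a's predictability for b."""
    return p_ab + (1.0 - p_ab) * P_INIT

def age(p_ab, elapsed_units):
    """Decay the predictability over time units with no encounter."""
    return p_ab * (GAMMA ** elapsed_units)

def transitive(p_ac, p_ab, p_bc):
    """Transitive update: a can reach c through b."""
    return p_ac + (1.0 - p_ac) * p_ab * p_bc * BETA

def should_forward(p_self_dst, p_peer_dst):
    """Forward a message for dst to a peer with higher predictability."""
    return p_peer_dst > p_self_dst
```

All three updates keep the predictability in [0, 1], and the forwarding rule is exactly the kind of history-driven decision the text describes: messages drift toward nodes that have been observed to meet the destination often.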
Model
Digital Document
Publisher
Florida Atlantic University
Description
A fundamental concern in Cloud Computing is security. In complex systems such as clouds, individual parts are secured with specific products, but there is rarely a global security analysis of the complete system. We describe how to add security to cloud systems and how to evaluate their security level using a reference architecture. A reference architecture provides a framework for relating threats to the structure of the system and makes their enumeration more systematic and complete. Since it is not possible to prove that all threats have been covered, we enumerate cloud threats by combining several methods: we first identify threats in the literature, and then analyze the activities in each use case to find further possible threats. These threats are expressed as misuse cases in order to understand how an attack happens from the point of view of an attacker. The reference architecture is used as a framework to determine where to add security in order to stop or mitigate these threats. This approach also entails developing security patterns, which are added to the reference architecture to design a secure framework for clouds. We finally evaluate the security level of the framework using misuse patterns and by considering the threat coverage of the models.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The demand for virtual education is rapidly increasing due to the proliferation of legislation mandating class-size limitations, funding cuts, and school choice across the United States. Virtual education leaders are discovering new ways to develop teachers, helping them become more efficient and increase the quality of online learning. Learning teams are one tool implemented by professional development departments to build a community of shared best practices and increase professional learning for teachers. ... The purpose of this exploratory case study was to investigate teachers' perceptions of the contribution of virtual learning teams to their professional development in a completely online K-12 environment. ... Five major themes emerged from the interviews: teacher professional development as it relates to student success, collaboration, balance, knowledge gained from being part of a virtual learning team, and teachers' perception of student success.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Consider a scenario where a server S shares a symmetric key kU with each user U. Building on a two-party solution of Bohli et al., we describe an authenticated three-party key establishment which remains secure if the computational Bilinear Diffie-Hellman (BDH) problem is hard or the server is uncorrupted. If the BDH assumption holds during a protocol execution but is invalidated later, entity authentication and integrity of the protocol are still guaranteed. Key establishment protocols based on hardness assumptions such as the discrete logarithm problem (DLP) and the integer factorization problem (IFP) are vulnerable to quantum-computer attacks, whereas protocols based on other hardness assumptions, such as the conjugacy search problem and the decomposition search problem, can resist such attacks. However, the existing protocols based on quantum-resistant hardness assumptions are only passively secure. Compilers are used to convert a passively secure protocol into an actively secure one; they rely on tools such as a signature scheme and a collision-resistant hash function. If only passively secure protocols, but no signature scheme, exist under a given assumption, then applying existing compilers requires tools based on different assumptions, and introducing such tools makes the resulting actively secure protocol rely on more than one hardness assumption. We offer an approach to derive an actively secure two-party protocol from a passively secure two-party protocol without introducing further hardness assumptions. This serves as a useful formal tool for transforming a basic algebraic method of public-key cryptography into a cryptographic scheme applicable in the real world. In a recent preprint, Vivek et al. propose a compiler to transform a passively secure three-party key establishment into a passively secure group key establishment. To achieve active security, they apply this compiler to Joux's
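To make the passive/active distinction concrete, here is a toy unauthenticated Diffie-Hellman exchange: at best passively secure (an eavesdropper faces a DLP-type problem), yet trivially broken by an active man-in-the-middle who substitutes public values, which is exactly the gap a compiler closes. The group parameters are illustrative choices, not taken from the dissertation:

```python
import secrets

# Toy group for illustration only; real deployments use standardized
# groups or elliptic curves, plus authentication for active security.
P = 2**255 - 19  # a well-known prime modulus (illustrative choice)
G = 2

def keygen():
    """Pick an ephemeral secret exponent x and its public value g^x mod p."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def shared_secret(own_secret, peer_public):
    """Both sides arrive at the same value g^(ab) mod p."""
    return pow(peer_public, own_secret, P)
```

Authenticating the exchanged public values, for instance with a signature scheme as the compilers discussed above do, is what lifts such a protocol from passive to active security; doing so without importing a second hardness assumption is the contribution described in the text.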
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis presents the development of a web-based wireless sensor network (WSN) monitoring system using smartphones. Typical WSNs consist of networks of wireless sensor nodes dispersed over predetermined areas to acquire, process, and transmit data from those locations; often the WSNs are located in areas too hazardous or inaccessible for humans. We focus on the need for remote access to this sensed data and present a reference architecture that addresses it. We developed this architecture for web-based wireless sensor network monitoring and implemented a prototype that uses Crossbow Mica sensors and Android smartphones to bridge the wireless sensor network with web services for data storage and retrieval. Our application can retrieve sensed data both directly from a wireless sensor network composed of Mica sensors and from a smartphone's onboard sensors. The data is displayed on the phone's screen and then forwarded, via an Internet connection, to a remote database for manipulation and storage. The attributes sensed and stored by our application are temperature, light, acceleration, GPS position, and geographical direction. Authorized personnel can retrieve and observe this data, both textually and graphically, from any browser with Internet connectivity or through a native Android application. Web-based wireless sensor network architectures using smartphones provide a scalable and expandable solution with applicability in many areas, such as healthcare, environmental monitoring, infrastructure health monitoring, and border security.
Model
Digital Document
Publisher
Florida Atlantic University
Description
IEEE 802.11 networks successfully satisfy high data demands and are cheaper than cellular networks, and modern mobile computers and phones are 802.11-equipped and VoIP-capable. Current network designs, however, do not dynamically accommodate changes in usage. We propose a dynamic power control algorithm that provides greater capacity within a limited geographic region. Most other power-control algorithms necessitate changes to 802.11, and therefore hardware changes; the proposed algorithm requires only firmware updates to enable dynamic control of AP transmit power. We draw on earlier studies to determine the user-count limit at which to adjust power. By lowering the transmit power of APs serving a large number of users, we can effectively decrease the cell size; the resulting coverage gap is then filled by dynamically activating additional APs. This approach also provides greater flexibility and reduces network planning costs.
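A firmware-level policy of the kind described might look like the following sketch; the thresholds and power levels are illustrative assumptions, not values from the thesis. A hysteresis band keeps the cell size from oscillating when the user count hovers near the limit:

```python
def next_tx_power(shrunk, users, high_mark=20, low_mark=12,
                  tx_high=20.0, tx_low=10.0):
    """Return (transmit power in dBm, new shrunk-cell state).

    When an AP exceeds `high_mark` associated users, its power drops to
    shrink the cell (at which point a standby neighbor AP would be
    activated to cover the gap); full power is restored only once the
    load falls below `low_mark`, giving hysteresis against flapping.
    """
    if not shrunk and users > high_mark:
        return tx_low, True      # shrink the congested cell
    if shrunk and users < low_mark:
        return tx_high, False    # load has eased; restore full coverage
    return (tx_low if shrunk else tx_high), shrunk  # hold current state
```

Because the policy only chooses a transmit power, it could plausibly be shipped as a firmware update, consistent with the claim that no 802.11 protocol or hardware changes are needed.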