Computer software--Reliability

Model
Digital Document
Publisher
Florida Atlantic University
Description
In this dissertation we address two significant issues of concern. These are software
quality modeling and data quality assessment. Software quality can be measured by software
reliability. Reliability is often measured in terms of the time between system failures. A
failure is caused by a fault, which is a defect in the executable software product. The time between system failures depends both on the presence of faults and on the usage pattern of the software.
Finding faulty components in the development cycle of a software system can lead to a
more reliable final system and will reduce development and maintenance costs. The issue of
software quality is investigated by proposing a new approach, rule-based classification model
(RBCM) that uses rough set theory to generate decision rules to predict software quality.
The new model minimizes over-fitting by balancing the Type I and Type II misclassification error rates. We also propose a model selection technique for rule-based models called rule-based model selection (RBMS). The proposed technique utilizes
the complete and partial matching rule sets of candidate RBCMs to determine the model
with the least amount of over-fitting. In the experiments that were performed, the RBCMs
were effective at identifying faulty software modules, and the RBMS technique was able to
identify RBCMs that minimized over-fitting.

Good data quality is a critical component for building effective software quality models.
We address the significance of the quality of data on the classification performance of learners
by conducting a comprehensive comparative study. Several trends were observed in the
experiments. Class and attribute noise had the greatest impact on the performance of learners when both occurred simultaneously in the data. Class noise alone had a significant impact on the performance of learners, while attribute noise had no impact when it occurred in less than
40% of the most significant independent attributes. Random Forest (RF100), a group of 100
decision trees, was the most accurate and robust learner in all the experiments with noisy
data.
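The RBCM described above balances the Type I (false positive) and Type II (false negative) misclassification rates. A minimal sketch of that balancing idea, using an invented threshold sweep over hypothetical fault-proneness scores rather than the dissertation's rough-set rules:

```python
# Hypothetical sketch: pick a decision threshold so the Type I and
# Type II error rates are as close as possible, in the spirit of the
# RBCM's balancing strategy. Scores and labels below are invented.

def error_rates(scores, labels, threshold):
    """Call a module fault-prone when its score >= threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp / labels.count(0), fn / labels.count(1)  # Type I, Type II

def balanced_threshold(scores, labels, candidates):
    """Choose the candidate threshold where the two rates are closest."""
    return min(candidates,
               key=lambda t: abs(error_rates(scores, labels, t)[0]
                                 - error_rates(scores, labels, t)[1]))

# Toy fault-proneness scores and true labels (1 = faulty module).
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
t = balanced_threshold(scores, labels, scores)  # -> 0.6 here
```

With this toy data, the balanced threshold yields equal Type I and Type II rates of 0.25 each; the real RBCM performs the analogous trade-off over rule sets rather than a scalar score.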
Model
Digital Document
Publisher
Florida Atlantic University
Description
Fault-tolerant programming methods improve software reliability using the principles of design diversity and redundancy. Design diversity and redundancy, however, increase the cost of software design and development. In this thesis, we study the reliability of hybrid fault-tolerant systems. Probability models based on fault trees are developed for recovery block (RB), N-version programming (NVP), and hybrid schemes that combine RB and NVP. Two heuristic methods are developed to construct hybrid fault-tolerant systems under total cost constraints. The algorithms provide a systematic approach to the design of hybrid fault-tolerant systems.
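Under the usual textbook simplifications (independent versions, a perfect voter or acceptance test), the fault-tree models for NVP and RB reduce to closed-form reliability expressions. A sketch of those standard formulas, not the thesis's exact models:

```python
from math import comb

# Standard simplified reliability formulas for N-version programming
# and recovery block, assuming each version/alternate succeeds
# independently with probability p and the voter/acceptance test is
# perfect. These are textbook idealizations, not the thesis's models.

def nvp_reliability(p, n=3):
    """NVP with majority voting: at least ceil((n+1)/2) of n versions
    must produce a correct result."""
    k = n // 2 + 1  # votes needed for a majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def rb_reliability(p, n=2):
    """Recovery block with n alternates and a perfect acceptance test:
    the system fails only if every alternate fails."""
    return 1 - (1 - p)**n

# With p = 0.9: 3-version NVP gives 0.972; a 2-alternate RB gives 0.99.
```

The hybrid schemes studied in the thesis nest these structures, so their fault-tree models compose terms like the ones above, with cost constraints deciding how many versions or alternates each level may use.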
Model
Digital Document
Publisher
Florida Atlantic University
Description
With the growth of computer technology applications, demand is increasing for computer systems in real-time and critical applications. Reliability and performance are fundamental design requirements for these applications. In this dissertation, we develop specific aspects of a fault-tolerant decentralized system architecture. The system executes concurrent processes and is composed of processing elements that have only local memories and communicate point-to-point. The system is described by a model of hierarchical layers. Fault tolerance techniques are discussed for the application, software, operating system, and hardware layers of the model. Scheduling of communicating tasks to increase performance is also addressed. Special problems such as the Byzantine Generals problem are considered. We show that, by combining reliability techniques at different layers while accounting for system performance, one can provide a system with a very high level of both reliability and performance.
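The Byzantine Generals problem mentioned above has a well-known resource bound: with oral (unauthenticated) messages, n processors can reach agreement despite m arbitrarily faulty ones only when n >= 3m + 1. A small helper illustrating that classical bound (a known result, not code from the dissertation):

```python
# Classical Byzantine agreement bound for oral messages: tolerating m
# traitors requires at least 3m + 1 processors (Lamport, Shostak, Pease).

def max_byzantine_faults(n):
    """Largest number of arbitrarily faulty processors tolerable by n."""
    return (n - 1) // 3

def processors_needed(m):
    """Smallest system size that tolerates m Byzantine faults."""
    return 3 * m + 1

# E.g., 4 processors tolerate 1 traitor; 3 processors tolerate none.
```

This is why point-to-point architectures like the one modeled here must budget extra processing elements when Byzantine-resilient agreement is required at the hardware or operating-system layer.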
Model
Digital Document
Publisher
Florida Atlantic University
Description
Software systems that control military radar systems must be highly reliable. A fault can compromise safety and security, and even cause the death of military personnel. In this experiment we identify fault-prone software modules in a subsystem of a military radar system, the Joint Surveillance Target Attack Radar System (JSTARS). An earlier version was used in Operation Desert Storm to monitor ground movement. Product metrics were collected over a period of approximately three years for different iterations of an operational prototype of the subsystem. We used these metrics to train a decision tree model and to fit a discriminant model, each classifying a module as fault-prone or not fault-prone. The decision tree model was generated with the TREEDISC algorithm, developed by the SAS Institute, and is compared to the discriminant model.
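To make the classification setup concrete, here is a minimal sketch of the simplest possible tree-based classifier over product metrics: a one-level decision "stump" that splits on a single metric threshold. TREEDISC itself is a SAS procedure building full multi-way trees; the metric and data below are invented for illustration:

```python
# Hypothetical sketch: a one-split decision tree ("stump") over a single
# software product metric. Modules with metric value >= threshold are
# labeled fault-prone (1). Data below are invented, not JSTARS metrics.

def best_stump(values, labels):
    """Return the (threshold, error_count) minimizing misclassifications
    when modules with value >= threshold are called fault-prone."""
    best = None
    for t in sorted(set(values)):
        errors = sum((v >= t) != bool(y) for v, y in zip(values, labels))
        if best is None or errors < best[1]:
            best = (t, errors)
    return best

# Toy data: lines of code per module; 1 = fault-prone.
loc    = [120, 45, 300, 80, 510, 60, 240, 400]
faulty = [0,   0,  1,   0,  1,   0,  1,   1]
# best_stump(loc, faulty) finds the split at 240 with zero errors here.
```

A real tree repeats such splits recursively over many metrics; the discriminant model instead fits a linear boundary in the metric space, which is what the comparison in this study contrasts.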
Model
Digital Document
Publisher
Florida Atlantic University
Description
One of the important problems software engineers face is determining which software reliability model to use for a particular system. Some recent attempts to compare different models used complementary graphical and analytical techniques. These techniques require an excessive amount of time for plotting the data and running the analyses, and they remain rather subjective as to which model is best. A simpler technique yielding a less subjective measure of goodness of fit is therefore needed. The Akaike Information Criterion (AIC) is proposed as a new approach for selecting the best model. The performance of AIC is measured by Monte Carlo simulation and by comparison to published data sets. The AIC chooses the correct model 95% of the time.
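The selection rule itself is simple: AIC = 2k - 2 ln L, where k is the number of model parameters and L the maximized likelihood, and the candidate with the smallest AIC wins. A sketch with invented fit values (the model names are common software reliability growth models, but the numbers are illustrative only):

```python
# AIC = 2k - 2 ln(L): penalize parameters, reward likelihood; smallest wins.
# The log-likelihood values below are invented for illustration.

def aic(k, log_likelihood):
    """Akaike Information Criterion for a fitted model."""
    return 2 * k - 2 * log_likelihood

def select_model(candidates):
    """candidates: dict of name -> (k, maximized log-likelihood).
    Returns the name with the smallest AIC."""
    return min(candidates, key=lambda name: aic(*candidates[name]))

candidates = {
    "Goel-Okumoto": (2, -104.2),  # hypothetical fitted values
    "Musa-Okumoto": (2, -101.7),
    "S-shaped":     (3, -101.5),
}
```

Here `select_model` prefers the second candidate: it matches the first in parameter count with a higher likelihood, and the third model's extra parameter costs more than its tiny likelihood gain, which is exactly the over-fitting penalty that makes AIC less subjective than visual comparison.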
Model
Digital Document
Publisher
Florida Atlantic University
Description
We have developed reliability models for a variety of fault-tolerant software constructs, including those based on two well-known methodologies, recovery block and N-version programming, and their variations. We also developed models for the conversation scheme, which provides fault tolerance for concurrent software, and for a newly proposed system architecture, the recovery metaprogram, which attempts to unify most existing fault-tolerant strategies. Each model is evaluated using either GSPN, a software package based on Generalized Stochastic Petri Nets, or Sharpe, an evaluation tool for Markov models. The numerical results are then analyzed and compared. Major results of this process include the identification of critical parameters for each model, comparisons of relative performance among different software constructs, the justification of a preliminary approach to modeling complex conversations, and the justification of the recovery metaprogram's reliability improvement.
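Tools like Sharpe automate transient analysis of such Markov models. A toy version of that computation for a system with one spare, using an invented discrete-time chain (states: both units good, running on the spare, failed):

```python
# Hypothetical sketch of Markov transient analysis, the kind of evaluation
# Sharpe automates. States: 0 = both units good, 1 = one failed (spare
# active), 2 = system failed (absorbing). Probabilities are invented.

def transient(P, start, steps):
    """Distribution over states after `steps` transitions of chain P."""
    dist = start[:]
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

P = [
    [0.95, 0.05, 0.00],  # both good: the primary may fail
    [0.00, 0.90, 0.10],  # on spare: the spare may also fail
    [0.00, 0.00, 1.00],  # system failed: absorbing state
]
dist = transient(P, [1.0, 0.0, 0.0], 10)
reliability = 1.0 - dist[2]  # probability the system has not yet failed
```

The models in this dissertation are far richer (stochastic Petri nets add concurrency and synchronization), but they bottom out in the same kind of state-probability computation, which is why critical parameters can be identified by sweeping the transition rates and re-evaluating.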