Software engineering

Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis presents the results of an empirical investigation of the applicability of genetic algorithms to a real-world problem in software reliability: the fault-prone module identification problem. The solution developed is an effective hybrid of genetic algorithms and neural networks (ENNs). This approach was found to be superior, in terms of time, effort, and confidence in the optimality of results, to the common practice of searching manually for the best-performing net. Comparisons were made to discriminant analysis. On fault-prone, not-fault-prone, and overall classification, the lower error proportions for ENNs were found to be statistically significant. The robustness of ENNs follows from their superior performance over many data configurations. Given these encouraging results, it is suggested that ENNs have potential value in other software reliability problem domains, where genetic algorithms have been largely ignored. For future research, several plans are outlined for enhancing ENNs with respect to accuracy and applicability.
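The hybrid described above can be illustrated with a toy sketch (not taken from the thesis): a genetic algorithm evolves the weight vector of a minimal linear threshold unit that flags fault-prone modules. All data, the fitness function, and the GA settings below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic module metrics: two features; higher values => fault-prone.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def accuracy(w):
    """Fitness: classification accuracy of a linear threshold unit."""
    pred = (X @ w[:2] + w[2] > 0).astype(int)
    return (pred == y).mean()

# Genetic algorithm: truncation selection plus Gaussian mutation.
pop = rng.normal(size=(30, 3))                   # 30 candidate weight vectors
for generation in range(40):
    fitness = np.array([accuracy(w) for w in pop])
    parents = pop[np.argsort(fitness)[-10:]]     # keep the 10 fittest
    children = np.repeat(parents, 3, axis=0)     # three offspring per parent
    children += rng.normal(scale=0.1, size=children.shape)  # mutate
    pop = children

best = max(pop, key=accuracy)
print(f"best evolved accuracy: {accuracy(best):.2f}")
```

The same loop generalizes to evolving full multilayer network weights, which is closer to what an ENN does; the evolutionary search replaces the manual hunt for a well-performing net.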
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis involves original research in the area of semantic analysis of textual databases (content analysis). The main intention of this study is to examine how software engineering practices can benefit from the best manufacturing practices, with a deliberate emphasis on worldwide competitive effectiveness. The ultimate goal of the U.S. Navy's Best Manufacturing Practices Program is to strengthen the U.S. industrial base and reduce the cost of defense systems by solving manufacturing problems and improving quality and reliability. When software companies adopt best manufacturing practices, they can: (1) improve both software quality and staff productivity; (2) determine the current status of the organization's software process; (3) set goals for process improvement; (4) create effective plans for reaching those goals; and (5) implement the major elements of those plans.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Software systems that control military radar systems must be highly reliable. A fault can compromise safety and security, and even cause the death of military personnel. In this experiment we identify fault-prone software modules in a subsystem of a military radar system called the Joint Surveillance Target Attack Radar System, JSTARS. An earlier version was used in Operation Desert Storm to monitor ground movement. Product metrics were collected for different iterations of an operational prototype of the subsystem over a period of approximately three years. We used these metrics to train a decision tree model and to fit a discriminant model to classify each module as fault-prone or not fault-prone. The algorithm used to generate the decision tree model was TREEDISC, developed by the SAS Institute. The decision tree model is compared to the discriminant model.
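As a hedged illustration of this kind of comparison (TREEDISC is a SAS procedure and is not reproduced here), a CART-style decision tree and a linear discriminant model from scikit-learn can be cross-validated on synthetic module metrics; all data and settings below are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))                    # synthetic product metrics
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # fault-prone label

# Cross-validated accuracy of both model families on the same data.
tree_acc = cross_val_score(
    DecisionTreeClassifier(max_depth=3, random_state=0), X, y, cv=5).mean()
lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"decision tree: {tree_acc:.2f}  discriminant: {lda_acc:.2f}")
```

On a linear class boundary like this one the discriminant model has the structural advantage; trees tend to win when the metric/fault relationship has thresholds or interactions.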
Model
Digital Document
Publisher
Florida Atlantic University
Description
Accurately classifying the quality of software is a major problem in any software development project. Software engineers develop models that provide early estimates of quality metrics, which allow them to take action against emerging quality problems. The use of a neural network as a tool to classify programs as low, medium, or high risk for errors or change is explored, using multiple software metrics as input. It is demonstrated that a neural network, trained using the back-propagation supervised learning strategy, produced the desired mapping between the static software metrics and the software quality classes. The neural network classification methodology is compared to the discriminant analysis classification methodology in this experiment. The comparison is based on two- and three-class predictive models developed using variables resulting from principal component analysis of software metrics.
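A minimal sketch of this methodology, assuming scikit-learn and synthetic correlated metrics in place of the thesis data: principal components of the metrics feed a back-propagation network that assigns a three-level risk class.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
f = rng.normal(size=(450, 1))                  # latent "complexity" factor
X = f @ np.ones((1, 6)) + 0.5 * rng.normal(size=(450, 6))  # correlated metrics
y = np.digitize(f[:, 0], [-0.8, 0.8])          # 0 = low, 1 = medium, 2 = high

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=3),                       # principal components as inputs
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)                                # back-propagation training
train_acc = model.score(X, y)
print(f"training accuracy: {train_acc:.2f}")
```

PCA is doing real work here: because the six metrics share one latent factor, a few components carry nearly all of the class signal, which is the usual motivation for feeding components rather than raw metrics to the network.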
Model
Digital Document
Publisher
Florida Atlantic University
Description
Accurately predicting the quality of software is a major problem in any software development project. Software engineers develop models that provide early estimates of quality metrics which allow them to take action against emerging quality problems. Most often the predictive models are based upon multiple regression analysis which become unstable when certain data assumptions are not met. Since neural networks require no data assumptions, they are more appropriate for predicting software quality. This study proposes an improved neural network architecture that significantly outperforms multiple regression and other neural network attempts at modeling software quality. This is demonstrated by applying this approach to several large commercial software systems. After developing neural network models, we develop regression models on the same data. We find that the neural network models surpass the regression models in terms of predictive quality on the data sets considered.
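A hedged sketch of why a network can surpass multiple regression when the metric/quality relationship is nonlinear; the data are synthetic stand-ins for the commercial systems studied, and the architecture below is generic rather than the thesis's improved one.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(400, 2))          # two synthetic software metrics
y = X[:, 0] ** 2 + X[:, 1] + 0.1 * rng.normal(size=400)  # nonlinear quality

# Multiple regression: forced to fit a plane through a curved surface.
lin = LinearRegression().fit(X, y)

# Back-propagation network: free to learn the curvature.
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
).fit(X, y)

print(f"linear R^2: {lin.score(X, y):.2f}  network R^2: {net.score(X, y):.2f}")
```

The quadratic term is uncorrelated with the raw metric over a symmetric range, so the linear model simply cannot pick it up; the network needs no such distributional assumption, which mirrors the argument made in the abstract.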
Model
Digital Document
Publisher
Florida Atlantic University
Description
Current multicore processors attempt to optimize the consumer experience via task partitioning and concurrent execution of these (sub)tasks on the cores. Conversion of sequential code to parallel and concurrent code is neither easy nor feasible with current methodologies. We have developed a mapping process that synergistically uses top-down and bottom-up methodologies. This process is amenable to automation. We use bottom-up analysis to determine decomposability and estimate computation and communication metrics. The outcome is a set of proposals for software decomposition. We then build abstract concurrent models that map these decomposed (abstract) software modules onto candidate multicore architectures; this resolves concurrency issues. We then perform a system-level simulation to estimate concurrency gain and/or cost, and Quality-of-Service (QoS) metrics. Different architectural combinations yield different QoS metrics; the requisite system architecture may then be chosen. We applied this 'middle-out' methodology to optimally map a digital camera application onto a processor with four cores.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Reliability and quality are desired features in industrial software applications. In some cases, they are absolutely essential. When faced with limited resources, software project managers need to allocate those resources to the most fault-prone areas. The ability to accurately classify a software module as fault-prone or not fault-prone enables the manager to make an informed resource allocation decision. An accurate quality classification avoids wasting resources on modules that are not fault-prone. It also avoids missing the opportunity to correct faults relatively early in the development cycle, when they are less costly. This thesis introduces the classification algorithms (classifiers) that are implemented in the WEKA software tool. WEKA (Waikato Environment for Knowledge Analysis) was developed at the University of Waikato in New Zealand. An empirical investigation is performed using a case study of a real-world system.
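WEKA itself is a Java workbench and is not shown here; as an illustrative analogue only, scikit-learn provides comparable learners (its decision tree, naive Bayes, and k-nearest-neighbour classifiers loosely correspond in spirit to WEKA's tree, Bayesian, and instance-based families). The benchmarking loop below, on invented data, mirrors the kind of cross-validated comparison such a case study performs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))                   # synthetic module metrics
y = (X[:, 0] - X[:, 2] > 0).astype(int)         # fault-prone label

# 10-fold cross-validated accuracy for each candidate classifier.
results = {}
for name, clf in [("tree", DecisionTreeClassifier(random_state=0)),
                  ("naive-bayes", GaussianNB()),
                  ("k-nn", KNeighborsClassifier(n_neighbors=5))]:
    results[name] = cross_val_score(clf, X, y, cv=10).mean()

for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {acc:.3f}")
```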
Model
Digital Document
Publisher
Florida Atlantic University
Description
To achieve high reliability in software-based systems, software metrics-based quality classification models have been explored in the literature. However, the collection of software metrics may be a long and difficult process, and some metrics may be unhelpful or even harmful to the classification models, degrading the models' accuracy. Hence, methodologies have been developed to select the most significant metrics in order to build accurate and efficient classification models. Case-Based Reasoning (CBR) is the classification technique used in this thesis. Since it does not provide any metric selection mechanism, several metric selection techniques were studied. In the context of CBR, this thesis presents a comparative evaluation of metric selection methodologies for raw and discretized data. Three attribute selection techniques have been studied: the Kolmogorov-Smirnov Two-Sample Test, the Kruskal-Wallis Test, and Information Gain. These techniques resulted in classification models that are useful for software quality improvement.
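The three attribute selection techniques can be sketched on synthetic data, assuming SciPy and scikit-learn (with mutual information standing in for information gain). Feature 0 is built to be informative and feature 1 is pure noise, so all three scores should separate them.

```python
import numpy as np
from scipy.stats import kruskal, ks_2samp
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=400)                 # fault-prone labels
informative = y + 0.5 * rng.normal(size=400)     # shifts with the class
noise = rng.normal(size=400)                     # carries no class signal
X = np.column_stack([informative, noise])

# Kolmogorov-Smirnov two-sample test: distance between class distributions.
ks_stat = ks_2samp(informative[y == 0], informative[y == 1]).statistic
# Kruskal-Wallis test: rank-based difference between the class groups.
kw_stat = kruskal(informative[y == 0], informative[y == 1]).statistic
# Mutual information (an information-gain-style score) for both features.
mi = mutual_info_classif(X, y, random_state=0)

print(f"KS statistic:     {ks_stat:.2f}")
print(f"Kruskal-Wallis H: {kw_stat:.1f}")
print(f"mutual information, informative vs noise: {mi[0]:.2f} vs {mi[1]:.2f}")
```

In a real study each metric would be scored this way and only the top-ranked metrics retained as inputs to the CBR classifier.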
Model
Digital Document
Publisher
Florida Atlantic University
Description
Managers of software development need to know which components of a system are fault-prone. If this can be determined early in the development cycle then resources can be more effectively allocated and significant costs can be reduced. Case-Based Reasoning (CBR) is a simple and efficient methodology for building software quality models that can provide early information to managers. Our research focuses on two case studies. The first study analyzes source files and classifies them as fault-prone or not fault-prone. It also predicts the number of faults in each file. The second study analyzes the fault removal process, and creates models that predict the outcome of software inspections.
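CBR retrieves the most similar past cases from a case library, so nearest-neighbour models are a close algorithmic analogue. This hypothetical sketch, on invented file metrics, mirrors the first study: it classifies files as fault-prone and also predicts their fault counts.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

rng = np.random.default_rng(6)
X = rng.normal(size=(250, 3))                    # synthetic file-level metrics
raw = 2 * X[:, 0] + X[:, 1] + rng.normal(size=250)
faults = np.maximum(0, raw).round()              # fault count per file
fault_prone = (faults > 1).astype(int)

# "Case library" = first 200 files; the remaining 50 play the new cases.
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:200], fault_prone[:200])
reg = KNeighborsRegressor(n_neighbors=5).fit(X[:200], faults[:200])

cls_acc = clf.score(X[200:], fault_prone[200:])  # fault-prone / not
pred_faults = reg.predict(X[200:])               # predicted fault counts
print(f"classification accuracy on new cases: {cls_acc:.2f}")
```

The appeal claimed in the abstract shows through even here: the "model" is just the library of past cases plus a similarity function, so it can be built and updated early and cheaply.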
Model
Digital Document
Publisher
Florida Atlantic University
Description
In software engineering, software quality has become a topic of major concern. It has also been recognized that the role of a maintenance organization is to understand and estimate the cost of maintenance releases of software systems. Planning the next release so as to maximize the increase in functionality and the improvement in quality is essential to successful maintenance management. With the growing collection of software in organizations, this cost is becoming substantial. In this research we compared two software quality models: a model built on the entire system and used to predict a subsystem, and a model built on that subsystem and used to predict the same subsystem, to see which yields similar, better, or worse classification results. We used the Classification And Regression Tree (CART) algorithm to build the classification models. The case study is based on a very large telecommunication system.
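The comparison can be mimicked with a small hypothetical sketch, assuming scikit-learn's CART implementation and synthetic data: one tree is trained on the whole "system", another only on the target "subsystem", and both are scored on the same held-out subsystem modules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(600, 4))                   # synthetic module metrics
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # fault-prone label
subsystem = np.arange(600) < 200                # first 200 modules

test = subsystem & (np.arange(600) % 2 == 0)    # held-out subsystem half
train_sub = subsystem & ~test                   # subsystem-only training set
train_sys = ~test                               # entire system minus test set

cart = DecisionTreeClassifier(max_depth=4, random_state=0)
acc_sys = cart.fit(X[train_sys], y[train_sys]).score(X[test], y[test])
acc_sub = cart.fit(X[train_sub], y[train_sub]).score(X[test], y[test])
print(f"system-trained: {acc_sys:.2f}  subsystem-trained: {acc_sub:.2f}")
```

Here the two populations share one fault mechanism, so the larger system-wide training set tends to help; in a real system, subsystem-specific fault patterns could tip the comparison the other way, which is exactly the question the study poses.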