Computer software--Development

Model
Digital Document
Publisher
Florida Atlantic University
Description
In globalized software development environments, where development activities are
distributed geographically and temporally, it is increasingly important for
Computer Aided Software Engineering (CASE) tools to maintain the information
(both syntactic and semantic) captured in design models. The Unified Modeling
Language (UML) is the de facto standard for modeling software applications, and
UML diagrams serve as graphical documentation of the software system. The
interoperability of UML modeling tools is important in supporting model exchange
and, in turn, design reuse. Tool interoperability is often implemented using XML
Metadata Interchange (XMI). Unfortunately, design documentation loses fidelity
when transformed between UML and XMI, owing to incompatibilities among versions
of UML and XMI and to add-on proprietary information; this loss hinders reuse.
This thesis evaluates the interoperability of UML modeling tools by assessing
the quality of the XMI documents representing the design. Case studies in this
thesis demonstrate a framework for preserving the fidelity of UML model data
when importing and exporting UML models in a distributed, heterogeneous
environment.
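The fidelity assessment described above can be illustrated with a minimal sketch: compare the set of named model elements in an exported document against a re-imported copy. The element and attribute names below are invented for illustration and are not tied to any particular UML or XMI version.

```python
# Minimal sketch of an XMI round-trip fidelity check: compare the named
# model elements in the original export against a re-imported copy.
# The tags and 'name' attribute below are illustrative, not a real XMI schema.
import xml.etree.ElementTree as ET

def element_names(xmi_text):
    """Collect (tag, name) pairs for every element carrying a 'name' attribute."""
    root = ET.fromstring(xmi_text)
    return {(el.tag, el.get("name")) for el in root.iter() if el.get("name")}

def fidelity_report(original_xmi, roundtripped_xmi):
    """Report model elements lost or added by an export/import round trip."""
    before = element_names(original_xmi)
    after = element_names(roundtripped_xmi)
    return {"lost": before - after, "added": after - before}

original = '<Model name="Shop"><Class name="Order"/><Class name="Item"/></Model>'
reimported = '<Model name="Shop"><Class name="Order"/></Model>'
report = fidelity_report(original, reimported)
# The 'Item' class was dropped by the (simulated) round trip and is reported as lost.
```

A real framework would also have to compare attributes, associations, and diagram layout information, but the set-difference idea carries over.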
Model
Digital Document
Publisher
Florida Atlantic University
Description
Though software development has been evolving for over 50 years, the development of computer software systems has largely remained an art. Through the application of measurable and repeatable processes, efforts have been made to slowly transform the software development art into a rigorous engineering discipline. The potential gains are tremendous. Computer software pervades modern society in many forms. For example, the automobile, radio, television, telephone, refrigerator, and still camera have all been transformed by the introduction of computer-based controls. The quality of these everyday products is in part determined by the quality of the computer software running inside them. Therefore, the timely delivery of low-cost, high-quality software to enable these mass-market products becomes very important to the long-term success of the companies building them. It is not surprising that managing the number of faults in computer software to competitive levels is a prime focus of the software engineering activity. In support of this activity, many models of software quality have been developed to help control the software development process and ensure that cost and quality goals are met on time. In this study, we focus on the software quality modeling activity. We improve existing static and dynamic methodologies and demonstrate new ones in a coordinated attempt to provide engineering methods applicable to the development of computer software. We show how the power of separate predictive and classification models of software quality may be combined into one model; introduce a three-group fault classification model in the object-oriented paradigm; demonstrate a dynamic modeling methodology for the testing process, showing how software product measures and software process measures may be incorporated as input to such a model; and demonstrate a relationship between software product measures and the testability of software.
The following methodologies were considered: principal components analysis, multiple regression analysis, Poisson regression analysis, discriminant analysis, time series analysis, and neural networks. Commercial-grade software systems are used throughout this dissertation to demonstrate concepts and validate new ideas. As a result, we hope to incrementally advance the state of the software engineering "art".
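Principal components analysis, the first methodology listed, reduces a set of correlated software measures to a few orthogonal components. A minimal sketch on a synthetic metrics matrix (invented for illustration, not the dissertation's data):

```python
# Sketch of principal components analysis applied to software metrics:
# reduce correlated measures to a few orthogonal components.
# The metrics matrix below is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
# 50 modules x 5 metrics: two independent sources plus three noisy combinations,
# mimicking the strong correlations typical of size/complexity measures.
base = rng.normal(size=(50, 2))
metrics = np.hstack([base,
                     base @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(50, 3))])

# Center the data, then eigendecompose the covariance matrix.
centered = metrics - metrics.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()        # variance explained per component
scores = centered @ eigvecs                # module scores on each component
# With only two independent sources, the first two components dominate.
```

The component scores, rather than the raw correlated metrics, would then feed the regression and classification models.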
Model
Digital Document
Publisher
Florida Atlantic University
Description
This thesis describes the development of a hardware-in-the-loop simulation for the FAU Autonomous Underwater Vehicles. The development was based on an existing simulation platform, which was ported to Linux for greater efficiency and flexibility. Hardware-in-the-loop simulation enables developers to connect the vehicle directly to a remote simulator, and is used to test the actual software components embedded in the vehicle system. The simulation package was also enhanced with a 3D viewer. The viewer is platform-independent, is designed to connect to the simulator, and renders the AUV moving in a virtual environment; it can be used during all development steps, from tuning phases to post-mission analysis. The thesis covers the whole development process, from feasibility study and implementation to the qualification phases.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Content analysis is used to investigate the essence of the Software Engineering Institute's Capability Maturity Model (CMM) through its associated software process evaluation instruments. The study yields lexical maps of key terms from each questionnaire. The content analysis is carried out in three ways for each questionnaire: by question, by key process area, and by maturity level, and the resulting maps are named accordingly. Super-network and distribution maps are used to find relations among the maps. Analyses of the key terms from the maps are compared to extract the essence of the CMM and to gauge the questionnaires' ability to adequately assess an organization's process maturity.
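The kind of key-term co-occurrence counting that underlies a lexical map can be sketched as follows; the key terms and questionnaire items are invented for illustration, not taken from the CMM instruments.

```python
# Sketch of key-term co-occurrence counting behind a lexical map: terms that
# appear together in the same questionnaire item are linked, and link weights
# accumulate across items. Terms and questions are invented for illustration.
from collections import Counter
from itertools import combinations

key_terms = {"process", "documented", "training", "review"}

questions = [
    "Is the software process documented and followed?",
    "Is training provided for the documented process?",
    "Are review results used to improve the process?",
]

cooccur = Counter()
for q in questions:
    present = sorted(t for t in key_terms if t in q.lower())
    cooccur.update(combinations(present, 2))

# ('documented', 'process') co-occur in two of the three questions above.
```

The weighted term pairs form the edges of the lexical map; grouping questions by key process area or maturity level instead of taking them singly yields the other two map families.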
Model
Digital Document
Publisher
Florida Atlantic University
Description
Collecting software metrics manually can be a tedious, inaccurate, and subjective task. Two new tools were developed to automate this process in a rapid, accurate, and objective way. The first tool, the Metrics Analyzer, evaluates 19 metrics at the function level from complete or partial systems written in C. The second tool, the Call Graph Generator, does not assess a metric directly but generates a call graph based on a complete or partial system written in C. The call graph is used as input to another tool (not considered here) that measures the coupling of a module, such as a function or a file. A case study analyzed the relationships among the metrics, including the coupling metric, using principal component analysis, which transformed the 19 metrics into eight principal components.
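The idea behind a call graph generator can be sketched for a heavily simplified subset of C. A production tool such as the one described would need a real C parser; the regular expressions below only handle single-level bodies and are for illustration.

```python
# Sketch of a call-graph generator for a simplified subset of C: function
# definitions are found with a regular expression, and calls are any known
# function name followed by '(' inside a body. Illustrative only; a real
# tool would use a proper C parser (preprocessor, nested braces, pointers).
import re

C_SOURCE = """
int helper(int x) { return x + 1; }
int work(int x) { return helper(x) * helper(x); }
int main(void) { return work(2); }
"""

DEF_RE = re.compile(r"\b(\w+)\s*\([^)]*\)\s*\{([^}]*)\}")
CALL_RE = re.compile(r"\b(\w+)\s*\(")

def call_graph(source):
    """Map each defined function to the defined functions it calls."""
    defs = DEF_RE.findall(source)
    names = {name for name, _ in defs}
    return {name: sorted(set(CALL_RE.findall(body)) & names)
            for name, body in defs}

graph = call_graph(C_SOURCE)
# graph: {'helper': [], 'work': ['helper'], 'main': ['work']}
```

A coupling tool downstream could then count, per module, the fan-in and fan-out edges of this graph.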
Model
Digital Document
Publisher
Florida Atlantic University
Description
Software reuse has been looked upon in recent years as a promising mechanism for achieving increased levels of software quality and productivity within an organization. A form of software reuse which has been gaining in popularity is the use of design patterns. Design patterns are a higher level of abstraction than source code and are proving to be a valuable resource for both software developers and new hires within a company. This thesis develops the idea of applying design patterns to the Computer Aided Design (CAD) software development environment. The benefits and costs associated with implementing a software reuse strategy are explained and the reasoning for developing design patterns is given. Design patterns are then described in detail and a potential method for applying design patterns within the CAD environment is demonstrated through the development of a CAD design pattern catalog.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The project created for this thesis is a case-based reasoning application to be used in high-level software design for Siemens' Telecommunications software. Currently, design engineers search for existing subtasks in the software that are similar to subtasks in their new designs by reading documentation and consulting other engineers. The prototype for Software Design Using Case Based Reasoning (SDUCBR) stores these subtasks in a case library and enables the design engineer to locate relevant subtasks via three different indexing techniques. This thesis addresses knowledge representation and indexing mechanisms appropriate for this application. SDUCBR is domain-dependent. Cases are stored in a relational hierarchy to facilitate analyzing the existing implementation from various perspectives. The indexing mechanisms were designed to give the software design engineer the flexibility to describe a problem differently based on the objective, level of granularity, and special characteristics of the subtask.
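The retrieval step of such a system can be sketched with a similarity-ranked lookup over indexed cases. The case names, index terms, and similarity measure below are invented for illustration; SDUCBR's actual indexing techniques are richer.

```python
# Sketch of case retrieval in a case-based reasoning library: each stored
# subtask carries index terms, and retrieval ranks cases by term overlap
# with the query description. Case data is invented for illustration.

cases = {
    "parse_config": {"input", "file", "validation"},
    "route_call":   {"signal", "routing", "timer"},
    "log_event":    {"file", "timestamp", "output"},
}

def retrieve(query_terms, library, top=2):
    """Rank cases by Jaccard similarity between query and index terms."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    ranked = sorted(library, key=lambda c: jaccard(query_terms, library[c]),
                    reverse=True)
    return ranked[:top]

best = retrieve({"file", "output"}, cases)
# 'log_event' shares the most index terms with the query and ranks first.
```

Indexing by objective, granularity, or special characteristics amounts to maintaining several such term sets per case and choosing which one to match against.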
Model
Digital Document
Publisher
Florida Atlantic University
Description
The use of formal methods has become increasingly important in software development. In this thesis, we present formal specifications for a method-based authorization model for object-oriented databases. We also formalize a proposed user-group structuring. We start from an existing OMT (Object Modeling Technique) description and use Z as the formal specification language. This specification gives a precise definition of the policies and functions of the authorization system, and it can be used as a basis for implementation and for possible verification in cases where a high level of security is required.
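The access decision at the heart of a method-based authorization model can be sketched as a lookup over explicit grants. The groups, classes, methods, and closed-policy rule below are invented for illustration and do not reproduce the thesis's Z specification.

```python
# Sketch of a method-based authorization check for an object-oriented
# database: rights are granted per (user group, class, method) triple.
# Group names, classes, methods, and the policy are invented for illustration.

# Each rule grants a user group the right to invoke a method on a class.
rules = {
    ("clerks",   "Account", "read_balance"),
    ("managers", "Account", "read_balance"),
    ("managers", "Account", "set_limit"),
}

# Flat group membership; a fuller model would support a group hierarchy.
membership = {"alice": "managers", "bob": "clerks"}

def authorized(user, cls, method, rules=rules, membership=membership):
    """Closed policy: an invocation is allowed only if explicitly granted."""
    group = membership.get(user)
    return (group, cls, method) in rules

# Under this policy, clerks may read balances but may not set limits.
```

A Z specification of the same model would state the grant relation and the closed-policy predicate as schemas, which is what makes later verification possible.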
Model
Digital Document
Publisher
Florida Atlantic University
Description
Accurately predicting the quality of software is a major problem in any software development project. Software engineers develop models that provide early estimates of quality metrics, allowing them to take action against emerging quality problems. Most often, these predictive models are based on multiple regression analysis, which becomes unstable when certain data assumptions are not met. Since neural networks require no such data assumptions, they are more appropriate for predicting software quality. This study proposes an improved neural network architecture that significantly outperforms multiple regression and previous neural network attempts at modeling software quality, as demonstrated by applying the approach to several large commercial software systems. After developing the neural network models, we develop regression models on the same data and find that the neural network models surpass the regression models in predictive quality on the data sets considered.
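The contrast between the two model families can be sketched on a deliberately nonlinear toy relationship. The data (y = x²) and the tiny network are invented for illustration; the study's models were trained on real software measures.

```python
# Sketch contrasting a linear regression fit with a small neural network on
# a nonlinear relationship (y = x^2). Illustrative toy data only.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = x ** 2

# Linear least-squares fit y ~ a*x + b: symmetry forces a ~ 0, so a straight
# line cannot track the curvature and leaves substantial residual error.
A = np.hstack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mse_linear = float(np.mean((A @ coef - y) ** 2))

# One hidden layer of tanh units, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
losses = []
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error gradient.
    g_pred = 2 * err / len(x)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)
    g_W1 = x.T @ g_z; g_b1 = g_z.sum(axis=0)
    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= 0.1 * g
mse_net = losses[-1]
# Compare mse_net with mse_linear to gauge the gain from nonlinearity.
```

The regression model's error here is a floor imposed by its functional form; the network, having no such assumption, can keep reducing its training loss.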
Model
Digital Document
Publisher
Florida Atlantic University
Description
Since maintenance is the most expensive phase of the software life cycle, detecting most of the errors as early as possible in the software development effort can provide substantial savings. This study investigates the behavior of complexity metrics during testing and maintenance, and their relationship to modifications made to the software. Interface complexity causes most of the change activities during integration testing and maintenance, while size causes most of the changes during unit testing. Principal component analysis groups 16 complexity metrics into four domains. Changes in domain pattern are observed throughout the software life cycle. Using those domains as input, regression analysis shows that software complexity measures collected as early as the unit testing phase can identify and predict change prone modules. With a low rate of misclassification, discriminant analysis further confirms that complexity metrics provide a strong indication of the changes made to a module during testing and maintenance.
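The two-group discriminant analysis described above can be sketched with Fisher's linear discriminant on synthetic metric data. The metric values and group separation are invented for illustration and are much cleaner than real change data.

```python
# Sketch of two-group discriminant analysis for classifying change-prone
# modules from complexity measures, using Fisher's linear discriminant.
# The metric data is synthetic and well separated, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
# Two metric columns (e.g. size, interface complexity) per module.
stable = rng.normal(loc=[10.0, 3.0], scale=1.0, size=(40, 2))
change_prone = rng.normal(loc=[16.0, 7.0], scale=1.0, size=(40, 2))

m0, m1 = stable.mean(axis=0), change_prone.mean(axis=0)
# Pooled within-group scatter.
Sw = np.cov(stable, rowvar=False) + np.cov(change_prone, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)           # discriminant direction
threshold = w @ (m0 + m1) / 2.0            # midpoint between projected means

def is_change_prone(module_metrics):
    """Classify a module (or array of modules) by its projection onto w."""
    return module_metrics @ w > threshold

X = np.vstack([stable, change_prone])
labels = np.array([False] * 40 + [True] * 40)
accuracy = float(np.mean(is_change_prone(X) == labels))
# Widely separated group means yield a very low misclassification rate here.
```

In the study's setting, the inputs would be the principal-component domains of the 16 complexity metrics rather than raw synthetic columns, and the misclassification rate would be estimated on held-out modules.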