Software engineering

Model
Digital Document
Publisher
Florida Atlantic University
Description
Agile methodologies have attracted the software development industry's attention due to their capability to overcome the limitations of traditional software development approaches and to cope with increasing complexity in system development. Scrum is one of the Agile software development processes broadly adopted by industry. Scrum promotes frequent customer involvement and incremental short releases. Despite its popular use,
Scrum’s requirements engineering stage is inadequately defined, which can lead to increased development time and cost, along with low quality or failure of the end products. This research shows the importance of activity planning in requirements engineering for improving product quality, cost, and scheduling, and it points out some drawbacks of Agile practices and available solutions. To improve Scrum requirements engineering, both by overcoming some of its challenges (such as providing a comprehensive understanding of the customer’s needs) and by mitigating the effects of others (such as frequent changes of requirements), the Design Thinking model is integrated into the Scrum framework in the context of requirements engineering management. The use of the Design Thinking model in this context is validated through an in-depth scientific study of the IBM Design Thinking framework. In addition, this research presents an Items Prioritization dEcision Support System (IPESS), a tool to assist Product Owners with requirements prioritization. IPESS is built on information collected in the Design Thinking model. The IPESS tool adopts the Analytic Hierarchy Process (AHP) technique and the PageRank algorithm to deal with the specified factors and to achieve an optimal order for requirements items based on a prioritization score. IPESS is a flexible and comprehensive tool that focuses on different important aspects, including customer satisfaction and product quality. The IPESS tool is validated through an experiment conducted in a real-world project.
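As a concrete illustration of the prioritization machinery the abstract names, the sketch below derives criteria weights with AHP (via the principal eigenvector of a pairwise-comparison matrix) and ranks items with a plain power-iteration PageRank over an item-dependency graph, then combines the two signals into one score. The criteria, item scores, dependency graph, and the way the two signals are combined are all invented for illustration; IPESS's actual factors and scoring formula are not reproduced here.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Derive criteria weights from an AHP pairwise-comparison matrix
    via its principal eigenvector (standard AHP practice)."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
    return principal / principal.sum()

def pagerank(adj: np.ndarray, damping: float = 0.85, iters: int = 100) -> np.ndarray:
    """Plain power-iteration PageRank; adj[i, j] = 1 encodes an edge j -> i."""
    n = adj.shape[0]
    out = adj.sum(axis=0)                         # out-degree of each node j
    col = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * col @ rank
    return rank

# Hypothetical inputs: three criteria compared pairwise (say, customer value,
# risk, effort), four backlog items scored per criterion, and a dependency
# graph in which deps[i, j] = 1 means item j depends on item i.
criteria_pairwise = np.array([[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]])
item_scores = np.array([[0.9, 0.2, 0.4],
                        [0.6, 0.7, 0.3],
                        [0.3, 0.9, 0.8],
                        [0.5, 0.4, 0.6]])
deps = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]])

weights = ahp_weights(criteria_pairwise)
score = (item_scores @ weights) * pagerank(deps)  # one plausible combination
print(np.argsort(-score))                         # items in descending priority
```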
Model
Digital Document
Publisher
Florida Atlantic University
Description
Voice over Internet Protocol (VoIP) networks are becoming the most popular
telephony systems in the world. However, studies of the security of VoIP networks are
still in their infancy. VoIP devices and networks are commonly attacked, and it is
therefore necessary to analyze the threats against the converged network and the
techniques that exist today to stop or mitigate these attacks. We also need to
understand what evidence can be obtained from the VoIP system after an attack has
occurred.
Many of these attacks occur in similar ways in different contexts or environments.
Generic solutions to these issues can be expressed as patterns. A pattern can be used
to guide the design or simulation of VoIP systems as an abstract solution to a problem
in this environment. Patterns have shown their value in developing good-quality
software, and we expect that their application to VoIP will also prove valuable for
building secure systems.
This dissertation presents a variety of patterns (architectural, attack, forensic and
security patterns). These patterns will help forensic analysts as well as secure-systems
developers because they provide a systematic approach to structuring the required
information and help in understanding system weaknesses. The patterns will also allow us
to specify, analyze and implement network security investigations for different
architectures. The pattern system uses object-oriented modeling (Unified Modeling
Language) as a way to formalize the information and dynamics of attacks and
systems.
Model
Digital Document
Publisher
Florida Atlantic University
Description
In this dissertation we address two significant issues of concern. These are software
quality modeling and data quality assessment. Software quality can be measured by software
reliability. Reliability is often measured in terms of the time between system failures. A
failure is caused by a fault, which is a defect in the executable software product. The time
between system failures depends on both the presence of faults and the usage pattern of the software.
Finding faulty components in the development cycle of a software system can lead to a
more reliable final system and will reduce development and maintenance costs. The issue of
software quality is investigated by proposing a new approach, the rule-based classification model
(RBCM), which uses rough set theory to generate decision rules to predict software quality.
The new model minimizes over-fitting by balancing the Type I and Type II misclassification
error rates. We also propose a model selection technique for rule-based models called
rule-based model selection (RBMS). The proposed rule-based model selection technique utilizes
the complete and partial matching rule sets of candidate RBCMs to determine the model
with the least amount of over-fitting. In the experiments that were performed, the RBCMs
were effective at identifying faulty software modules, and the RBMS technique was able to
identify RBCMs that minimized over-fitting.

Good data quality is a critical component for building effective software quality models.
We address the significance of data quality for the classification performance of learners
by conducting a comprehensive comparative study. Several trends were observed in the
experiments. Class and attribute noise had the greatest impact on the performance of learners
when they occurred simultaneously in the data. Class noise had a significant impact on the
performance of learners, while attribute noise had no impact when it occurred in less than
40% of the most significant independent attributes. Random Forest (RF100), a group of 100
decision trees, was the most accurate and robust learner in all the experiments with noisy
data.
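To make the rule-based classification idea concrete, here is a minimal sketch of how decision rules over discretized module metrics might classify modules and how the Type I and Type II error rates being balanced are measured. The rules, metric names, and matching procedure are invented for illustration; this is not the dissertation's rough-set-derived rule set.

```python
# Minimal sketch: rule-based module classification plus Type I/II error rates.

def matches(rule: dict, module: dict) -> bool:
    """A rule 'completely matches' a module when every condition holds."""
    return all(module.get(m) == v for m, v in rule["conditions"].items())

def classify(rules: list, module: dict, default: str = "not-fault-prone") -> str:
    for rule in rules:                 # first completely matching rule wins
        if matches(rule, module):
            return rule["class"]
    return default                     # unmatched modules fall back to a default

def error_rates(rules, modules, labels):
    """Type I: fault-free module flagged fault-prone (false alarm).
       Type II: fault-prone module missed (the costlier mistake)."""
    type1 = type2 = 0
    for module, label in zip(modules, labels):
        predicted = classify(rules, module)
        if predicted == "fault-prone" and label == "not-fault-prone":
            type1 += 1
        elif predicted == "not-fault-prone" and label == "fault-prone":
            type2 += 1
    n_clean = sum(1 for l in labels if l == "not-fault-prone")
    n_faulty = sum(1 for l in labels if l == "fault-prone")
    return type1 / max(n_clean, 1), type2 / max(n_faulty, 1)

rules = [{"conditions": {"churn": "high", "complexity": "high"}, "class": "fault-prone"},
         {"conditions": {"churn": "low"}, "class": "not-fault-prone"}]
modules = [{"churn": "high", "complexity": "high"},
           {"churn": "low", "complexity": "low"}]
print(error_rates(rules, modules, ["fault-prone", "not-fault-prone"]))
```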
Model
Digital Document
Publisher
Florida Atlantic University
Description
Recently, most of the research pertaining to Service-Oriented Architecture (SOA) has
been based on web services and how secure they are in terms of efficiency and
effectiveness. This requires validation, verification, and evaluation of web services.
Verification and validation should be collaborative when web services from different
vendors are integrated together to carry out a coherent task. For this purpose, novel
model checking technologies have been devised and applied to web services. "Model
Checking" is a promising technique for verification and validation of software
systems. WS-BPEL (Business Process Execution Language for Web Services) is an
emerging standard language to describe web service composition behavior. The
advanced features of BPEL such as concurrency and hierarchy make it challenging to
verify BPEL models. Based on these factors, this thesis surveys a few important model
checking technologies (tools) and compares each of them based on their
"functional" and "non-functional" properties. The comparison is based on three case
studies (the first a small case, the second medium, and the third a large case), for
each of which we construct synthetic web service compositions (as there are not
many publicly available compositions [1]). The first case study is the "Enhanced
Loan-Approval Process" and is considered a small case. The second is the "Enhanced
Purchase Order Process", which is of medium size, and the third, and largest, is based on a
scientific workflow pattern, called the "Service Oriented Architecture Implementing
BOINC Workflow" based on BOINC (Berkeley Open Infrastructure Network
Computing) architecture.
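For readers unfamiliar with the technique being surveyed, the toy sketch below shows what a model checker does at its core: exhaustively explore the reachable states of a model and test a property in every one. Real BPEL checkers first translate the composition, including its concurrency, into such a transition system; the order-processing states and the property here are hypothetical and much simpler than any of the case studies.

```python
from collections import deque

# Toy explicit-state model checking: breadth-first search over all states
# reachable from the initial state, testing a safety property at each step.
transitions = {
    "received": ["checking"],
    "checking": ["approved", "rejected"],
    "approved": ["invoiced"],
    "rejected": [],
    "invoiced": [],
}

def violates(state: str) -> bool:
    # Hypothetical safety property, encoded as a bad state: an order must
    # never be invoiced without prior approval.
    return state == "invoiced-without-approval"

def check(initial: str) -> bool:
    """Return True when no reachable state violates the property."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if violates(state):
            return False               # counterexample found
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

print(check("received"))  # True: the bad state is unreachable in this model
```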
Model
Digital Document
Publisher
Florida Atlantic University
Description
As a companion and complement to the work being done to build a secure systems
methodology, this thesis evaluates the use of Model-Driven Architecture (MDA) in
support of the methodology's lifecycle. The development lifecycle illustrated follows the
recommendations of this secure systems methodology, while using MDA models to
represent requirements, analysis, design, and implementation information. In order to
evaluate MDA, we analyze a well-understood distributed systems security problem,
remote access, as illustrated by the internet "secure shell" protocol, ssh. By observing the
ability of MDA models and transformations to specify remote access in each lifecycle
phase, MDA's strengths and weaknesses can be evaluated in this context. A further aim
of this work is to extract concepts that can be contained in an MDA security metamodel
for use in future projects.
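The central MDA mechanism the thesis exercises is the model-to-model transformation, mapping a platform-independent model (PIM) onto a platform-specific model (PSM). The toy sketch below transforms an abstract secure-channel element into an ssh-specific element; the classes, attributes, and mapping rule are invented for illustration and are not the thesis's actual models.

```python
from dataclasses import dataclass

@dataclass
class PimChannel:            # PIM: an abstract secure-channel requirement
    name: str
    confidential: bool
    authenticated: bool

@dataclass
class PsmSshConnection:      # PSM: a concrete realization on the ssh platform
    host_key_check: bool
    cipher: str
    auth_method: str

def transform(channel: PimChannel) -> PsmSshConnection:
    """One transformation rule: map abstract security properties
    onto concrete ssh configuration choices."""
    return PsmSshConnection(
        host_key_check=channel.authenticated,
        cipher="aes256-ctr" if channel.confidential else "none",
        auth_method="publickey" if channel.authenticated else "none",
    )

print(transform(PimChannel("remote-admin", confidential=True, authenticated=True)))
```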
Model
Digital Document
Publisher
Florida Atlantic University
Description
In globalized software development environments, where development
activities are distributed geographically and temporally, it is increasingly important
for Computer-Aided Software Engineering (CASE) tools to maintain the
information (both syntactic and semantic) captured in the design models. The Unified
Modeling Language (UML) is the de facto standard for modeling software
applications and UML diagrams serve as graphical documentations of the software
system. The interoperability of UML modeling tools is important for supporting
model exchange and, in turn, design reuse. Tool interoperability is often
implemented using XML Metadata Interchange (XMI). Unfortunately, there is a loss
of fidelity in the design documentation when transforming between UML and XMI,
due to incompatibilities among different versions of UML and XMI and to add-on
proprietary information, which hinders reuse. This thesis evaluates the interoperability of UML
modeling tools by assessing the quality of XMI documents representing the design.
Case studies in this thesis demonstrate a framework for preserving the fidelity of UML
models' data when importing and exporting different UML models in a distributed
heterogeneous environment.
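One way to make such a fidelity assessment concrete is a round-trip check: compare the model elements in the original XMI document against those in the document a second tool re-exported. The sketch below uses a deliberately simplified stand-in for real XMI (whose tags and namespaces vary across versions); the model and element names are hypothetical.

```python
import xml.etree.ElementTree as ET

def element_signatures(xmi_text: str) -> set:
    """Collect (tag, name) pairs for every element that carries a name."""
    root = ET.fromstring(xmi_text)
    return {(el.tag, el.get("name")) for el in root.iter() if el.get("name")}

original = """<XMI><Model name="Shop">
  <Class name="Order"/><Class name="Customer"/>
  <Association name="places"/></Model></XMI>"""

reimported = """<XMI><Model name="Shop">
  <Class name="Order"/><Class name="Customer"/></Model></XMI>"""

# Elements present before the round trip but missing afterward indicate
# a loss of fidelity in the export/import chain.
lost = element_signatures(original) - element_signatures(reimported)
print(lost)  # {('Association', 'places')}
```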
Model
Digital Document
Publisher
Florida Atlantic University
Description
Providing high-quality software products is the common goal of all software engineers. Finding faults early can produce large savings over the software life cycle. Therefore, software quality has become a main subject in our research field. This thesis presents a series of studies on a very large legacy telecommunication system. The system has significantly more than ten million lines of code written in a high-level language similar to Pascal. Software quality models were developed to predict the class of each module as either fault-prone or not fault-prone. We used the SPRINT/SLIQ algorithm to build the classification tree models. We found that SPRINT/SLIQ, as an improved CART algorithm, can give us tree models with more accuracy, more balance, and less overfitting. We also found that software process metrics can significantly improve the predictive accuracy of software quality models.
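Since SPRINT/SLIQ implementations are not widely packaged today, the sketch below uses scikit-learn's CART-style decision tree as a stand-in simply to show the shape of the modeling task: module-level metrics in, a fault-prone/not-fault-prone class out. The metrics and data are synthetic, not the telecommunication system's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))     # e.g. lines changed, complexity, fan-out
# Synthetic ground truth: fault-proneness driven by the first two metrics.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)  # depth limit curbs overfitting
tree.fit(X_tr, y_tr)
print(f"holdout accuracy: {tree.score(X_te, y_te):.2f}")
```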
Model
Digital Document
Publisher
Florida Atlantic University
Description
Software quality is crucial both to software makers and to customers. However, in reality, improvement of quality and reduction of costs are often at odds. Software modeling can help us detect fault-prone software modules based on software metrics, so that we can focus our limited resources on fewer modules and lower the cost while still achieving high quality. In the present study, a tree-based classification modeling technique, TREEDISC, was applied to three case studies. Several major contributions have been made. First, preprocessing of raw data was adopted to solve the computer memory problem and improve the models. Secondly, TREEDISC was thoroughly explored by examining the roles of important parameters in modeling. Thirdly, a generalized classification rule was introduced to balance misclassification rates and decrease Type II error, which is considered more costly than Type I error. Fourthly, certainty of classification was addressed. Fifthly, TREEDISC modeling was validated over multiple releases of the software product.
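The effect of a generalized classification rule can be illustrated by sweeping the probability cutoff at which a module is declared fault-prone: lowering the cutoff below the default 0.5 trades additional Type I errors (false alarms) for fewer of the costlier Type II errors (missed fault-prone modules). The probabilities and labels below are synthetic; this shows the general mechanism, not TREEDISC's specific rule.

```python
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)        # 1 = truly fault-prone
# Synthetic predicted probabilities: higher, on average, for faulty modules.
probs = np.clip(labels * 0.6 + rng.normal(0.2, 0.2, 1000), 0, 1)

for t in (0.5, 0.35, 0.2):
    pred = (probs >= t).astype(int)
    type1 = np.mean(pred[labels == 0] == 1)   # false alarms
    type2 = np.mean(pred[labels == 1] == 0)   # missed fault-prone modules
    print(f"t={t:.2f}  Type I={type1:.2f}  Type II={type2:.2f}")
```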
Model
Digital Document
Publisher
Florida Atlantic University
Description
Software quality models often have raw software metrics as the input data for predicting quality. Raw metrics are usually highly correlated with one another and thus may result in unstable models. Principal components analysis is a statistical method for improving model stability. This thesis presents a series of studies on a very large legacy telecommunication system. The system has significantly more than ten million lines of code written in a high-level language similar to Pascal. Software quality models were developed to predict the class of each module as either fault-prone or not fault-prone. We found that the models based on principal components analysis were more robust than those based on raw metrics. We also found that software process metrics can significantly improve the predictive accuracy of software quality models.
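A minimal sketch of the preprocessing step described here: replace highly correlated raw metrics with a small number of principal components before fitting the quality model. The three synthetic "metrics" are constructed to be strongly correlated, as raw size and complexity metrics typically are; the study's actual metrics and models are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
size = rng.lognormal(mean=5, sigma=1, size=400)
metrics = np.column_stack([
    size,                                   # lines of code
    size * 0.1 + rng.normal(0, 5, 400),     # decision count (correlated)
    size * 0.02 + rng.normal(0, 1, 400),    # fan-out (correlated)
])

# Standardize, then project onto the leading principal components.
standardized = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
pca = PCA(n_components=2)
components = pca.fit_transform(standardized)
print(pca.explained_variance_ratio_)  # most variance lands in one component
# `components` now feeds the classifier in place of the correlated raw metrics.
```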
Model
Digital Document
Publisher
Florida Atlantic University
Description
Current computer technologies and demands bring new challenges to software engineering tools. This thesis includes a survey of software engineering environments, standards, and technologies. It also examines the features needed to support rigorous object-oriented software development. The main contributions of the thesis are descriptions of innovative concepts and a high-level framework for a next-generation object-oriented software system development, management, and maintenance environment called IconSEE++, an Icon-based Software Engineering Environment.