Expert systems (Computer science)

Model
Digital Document
Publisher
Florida Atlantic University
Description
Recently, most of the research pertaining to Service-Oriented Architecture (SOA) has
been based on web services and how secure they are in terms of efficiency and
effectiveness. This requires validation, verification, and evaluation of web services.
Verification and validation should be collaborative when web services from different
vendors are integrated together to carry out a coherent task. For this purpose, novel
model checking technologies have been devised and applied to web services. "Model
Checking" is a promising technique for verification and validation of software
systems. WS-BPEL (Business Process Execution Language for Web Services) is an
emerging standard language to describe web service composition behavior. The
advanced features of BPEL such as concurrency and hierarchy make it challenging to
verify BPEL models. Based on these factors, this thesis surveys several important model checking technologies (tools) and compares them based on their
"functional" and "non-functional" properties. The comparison is based on three case
studies (the first a small case, the second a medium case, and the third a large case),
for which we construct synthetic web service compositions (as there are not
many publicly available compositions [1]). The first case study is the "Enhanced Loan-Approval
Process" and is considered a small case. The second is the "Enhanced Purchase
Order Process," which is of medium size. The third, and largest, is based on a
scientific workflow pattern, called the "Service Oriented Architecture Implementing
BOINC Workflow" based on BOINC (Berkeley Open Infrastructure Network
Computing) architecture.
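As a rough, hypothetical illustration of the kind of verification these tools perform, the sketch below explores the reachable states of a toy loan-approval process and checks a simple safety property; the transition system and the property are invented for illustration and are not the BPEL compositions used in the case studies.

```python
from collections import deque

# A toy, hand-built abstraction of a loan-approval process as a transition
# system (assumed for illustration; not the thesis's actual BPEL model).
TRANSITIONS = {
    "received":      ["assess_risk"],
    "assess_risk":   ["low_risk", "high_risk"],
    "low_risk":      ["approved"],
    "high_risk":     ["manual_review"],
    "manual_review": ["approved", "rejected"],
    "approved":      [],
    "rejected":      [],
}

def violates(state):
    # Example safety property: the process never reaches an "error" state.
    return state == "error"

def check_safety(initial="received"):
    """Explicit-state reachability: explore all states breadth-first and
    report the first one that violates the property, if any."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if violates(state):
            return state                      # counterexample found
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                               # property holds everywhere

print(check_safety() or "safety property holds on all reachable states")
```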
Model
Digital Document
Publisher
Florida Atlantic University
Description
As a companion and complement to the work being done to build a secure systems
methodology, this thesis evaluates the use of Model-Driven Architecture (MDA) in
support of the methodology's lifecycle. The development lifecycle illustrated follows the
recommendations of this secure systems methodology, while using MDA models to
represent requirements, analysis, design, and implementation information. In order to
evaluate MDA, we analyze a well-understood distributed systems security problem,
remote access, as illustrated by the internet "secure shell" protocol, ssh. By observing the
ability of MDA models and transformations to specify remote access in each lifecycle
phase, MDA's strengths and weaknesses can be evaluated in this context. A further aim
of this work is to extract concepts that can be contained in an MDA security metamodel
for use in future projects.
Model
Digital Document
Publisher
Florida Atlantic University
Description
This research proposes a cluster-based target tracking strategy for one
moving object using wireless sensor networks. The sensor field is organized into three
hierarchical levels. A 1-bit message is sent when a node detects the target;
otherwise the node stays silent. Since nodes in a wireless sensor network have
limited computational resources, limited storage, and limited battery power,
the code for predicting the target position should be simple and fast to execute.
The algorithm proposed in this research is simple, fast, and utilizes all available
detection data for estimating the location of the target while conserving energy.
This has the potential of increasing the network lifetime.
A simulation program is developed to study the impact of the field size
and density on the overall performance of the strategy. Simulation results show
that the strategy saves energy while estimating the location of the target with an
acceptable error margin.
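As a minimal sketch of how a cluster head might turn the 1-bit detection reports into a position estimate, the snippet below averages the coordinates of the reporting nodes; the centroid estimator and the node layout are illustrative assumptions, not necessarily the exact algorithm proposed in the thesis.

```python
import numpy as np

def estimate_target(node_positions, detections):
    """node_positions: (N, 2) array of node coordinates known to the cluster head.
    detections: length-N array of 0/1 flags (the 1-bit messages)."""
    reporting = node_positions[np.asarray(detections, dtype=bool)]
    if reporting.size == 0:
        return None                      # no node detected the target this round
    return reporting.mean(axis=0)        # centroid of the detecting nodes

# Example: five nodes, three of which report a detection.
nodes = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], dtype=float)
flags = [0, 1, 0, 1, 1]
print(estimate_target(nodes, flags))     # -> [8.33..., 5.0]
```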
Model
Digital Document
Publisher
Florida Atlantic University
Description
Scene understanding attempts to produce a textual description of visible and
latent concepts in an image to describe the real meaning of the scene. Concepts are
either objects, events or relations depicted in an image. To recognize concepts, the
decision of the object detection algorithm must be further enhanced from visual
similarity to semantical compatibility. Semantically relevant concepts convey the
most consistent meaning of the scene.
Object detectors analyze visual properties (e.g., pixel intensities, texture, color
gradient) of sub-regions of an image to identify objects. The initially assigned
object names must be further examined to ensure they are compatible with each
other and the scene. By enforcing inter-object dependencies (e.g., co-occurrence,
spatial and semantical priors) and object to scene constraints as background
information, a concept classifier predicts the most semantically consistent set of
names for discovered objects. The additional background information that describes
concepts is called context.
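A toy sketch of this idea is shown below: each candidate label's visual confidence is blended with support from co-occurring labels, so semantically consistent names are preferred. The labels, scores, and co-occurrence prior are fabricated for illustration and are not the dissertation's data or model.

```python
import numpy as np

labels = ["boat", "water", "car"]
visual_score = np.array([0.55, 0.60, 0.50])      # detector confidences (made up)
cooccurrence = np.array([[1.0, 0.9, 0.1],        # illustrative prior: how often
                         [0.9, 1.0, 0.3],        # label j appears with label i
                         [0.1, 0.3, 1.0]])

def contextual_score(i, alpha=0.5):
    """Blend visual evidence with support from the other candidate labels."""
    others = [j for j in range(len(labels)) if j != i]
    context = np.mean([cooccurrence[i, j] * visual_score[j] for j in others])
    return alpha * visual_score[i] + (1 - alpha) * context

for i, name in enumerate(labels):
    print(f"{name}: visual={visual_score[i]:.2f}  contextual={contextual_score(i):.2f}")
# "car" drops relative to "boat"/"water" because it rarely co-occurs with them.
```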
In this dissertation, a framework for building context-based concept detection is
presented that uses a combination of multiple contextual relationships to refine the
result of underlying feature-based object detectors to produce the most semantically compatible concepts.
In addition to their inability to capture semantical dependencies, object
detectors suffer from the high dimensionality of the feature space, which impairs their performance.
Variances in the image (i.e., quality, pose, articulation, illumination, and occlusion)
can also result in low-quality visual features that impact the accuracy of detected
concepts.
The object detectors used to build the context-based framework experiments in this
study are based on state-of-the-art generative and discriminative graphical
models. The relationships between model variables can be easily described using
graphical models, and the dependencies can be precisely characterized using these
representations. The generative context-based implementations are extensions of
Latent Dirichlet Allocation, a leading topic modeling approach that is very
effective in reducing the dimensionality of the data. The discriminative context-based
approach extends Conditional Random Fields, which allows efficient and
precise construction of the model by specifying and including only cases that are
related to and influence it.
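For a concrete sense of LDA as a dimensionality reducer, the off-the-shelf sketch below uses scikit-learn's LatentDirichletAllocation on a random count matrix standing in for bag-of-visual-words features, mapping each image from hundreds of word counts to a short topic-proportion vector; the dissertation's generative models extend LDA well beyond this basic usage.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Random counts stand in for bag-of-visual-words features (documents = images,
# words = quantized local features); purely illustrative data.
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(200, 500))       # 200 images x 500 visual words

lda = LatentDirichletAllocation(n_components=20, random_state=0)
topic_mix = lda.fit_transform(counts)            # 200 x 20 topic proportions
print(topic_mix.shape)                           # (200, 20): far fewer dimensions
```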
The dataset used for training and evaluation is MIT SUN397. The result of the
experiments shows an overall 15% increase in annotation accuracy and a 31%
improvement in the semantical saliency of the annotated concepts.
Model
Digital Document
Publisher
Florida Atlantic University
Description
For an 8-bit grayscale image patch of size n x n, the number of distinguishable
signals is 256^(n^2). Natural images (e.g., photographs of a natural scene) comprise a
very small subset of these possible signals. Traditional image and video processing
relies on band-limited or low-pass signal models. In contrast, we will explore the
observation that most signals of interest are sparse, i.e. in a particular basis most
of the expansion coefficients will be zero. Recent developments in sparse modeling
and L1 optimization have allowed for extraordinary applications such as the single
pixel camera, as well as computer vision systems that can exceed human performance.
Here we present a novel neural network architecture combining a sparse filter model
and locally competitive algorithms (LCAs), and demonstrate the network's ability to
classify human actions from video. Sparse filtering is an unsupervised feature learning
algorithm designed to optimize the sparsity of the feature distribution directly without
having the need to model the data distribution. LCAs are defined by a system of
differential equations where the initial conditions define an optimization problem and the dynamics converge to a sparse decomposition of the input vector. We applied
this architecture to train a classifier on categories of motion in human action videos.
Inputs to the network were small 3D patches taken from frame differences in the
videos. Dictionaries were derived for each action class and then activation levels for
each dictionary were assessed during reconstruction of a novel test patch. We discuss
how this sparse modeling approach provides a natural framework for multi-sensory
and multimodal data processing including RGB video, RGBD video, hyper-spectral
video, and stereo audio/video streams.
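A compact sketch of the LCA dynamics is given below: a membrane-potential vector is driven toward a sparse code of the input under a fixed, randomly generated dictionary. The dictionary, thresholding function, and parameters are illustrative assumptions; the thesis pairs LCA with dictionaries learned by sparse filtering on 3D video patches.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 64, 128                    # e.g., a flattened 8x8 patch
Phi = rng.standard_normal((n_inputs, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)               # unit-norm dictionary elements
x = rng.standard_normal(n_inputs)                # input vector to encode

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, Phi, lam=0.1, tau=10.0, steps=200):
    b = Phi.T @ x                                # driving input
    G = Phi.T @ Phi - np.eye(Phi.shape[1])       # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                   # membrane potentials
    for _ in range(steps):
        a = soft_threshold(u, lam)               # thresholded activations
        u += (b - u - G @ a) / tau               # Euler step of the LCA dynamics
    return soft_threshold(u, lam)

a = lca(x, Phi)
print("active coefficients:", np.count_nonzero(a), "of", n_neurons)
```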
Model
Digital Document
Publisher
Florida Atlantic University
Description
This work presents an implementation of the IEEE 1609.2 WAVE Security
Services Standard. This implementation provides the ability to generate a message
signature, along with the capability to verify that signature, for WAVE short messages
transmitted over an unsecured medium. Only the original sender of the message can sign
it, which allows the authenticity of a message to be checked. As hashing is used during
the generation and verification of signatures, message integrity can also be verified: a
failed signature verification indicates a compromised message. Also provided is the
ability to encrypt and decrypt messages using AES-CCM to ensure that sensitive
information remains safe and secure from unwanted recipients. Additionally, this
implementation provides a way for the 1609.2 specific data types to be encoded and
decoded for ease of message transmittance. This implementation was built to support the
Smart Drive initiative’s VANET testbed, supported by the National Science Foundation
and is intended to run on the Vehicular Multi-technology Communication Device
(VMCD) that is being developed. The VMCD runs on the embedded Linux operating
system and this implementation will reside inside of the Linux kernel.
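As a rough analogy to the signing, verification, and AES-CCM operations described above, the sketch below uses the Python cryptography package with an ECDSA P-256 key and a random CCM nonce; it only approximates the standard's primitives in user space and is not the kernel-resident implementation this thesis describes.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

# Sign and verify a short payload with ECDSA over P-256 (illustrative analogy
# to 1609.2 signing, not the thesis's kernel code).
private_key = ec.generate_private_key(ec.SECP256R1())
message = b"wave short message payload"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
# verify() raises InvalidSignature if the message or signature was altered.

# Encrypt and decrypt the same payload with AES-CCM for confidentiality.
key = AESCCM.generate_key(bit_length=128)
nonce = os.urandom(13)                           # CCM nonce, 7-13 bytes
aesccm = AESCCM(key)
ciphertext = aesccm.encrypt(nonce, message, None)
assert aesccm.decrypt(nonce, ciphertext, None) == message
```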
Model
Digital Document
Publisher
Florida Atlantic University
Description
We present an implementation of the IEEE WAVE (Wireless Access in Vehicular Environments) 1609.4 standard, Multichannel Operation. This implementation provides concurrent access to a control channel and one or more service channels, enabling vehicles to communicate among each other on multiple service channels while
still being able to receive urgent and control information on the control channel. Also
included is functionality that provides over-the-air timing synchronization, allowing
participation in alternating channel access in the absence of a reliable time source.
Our implementation runs on embedded Linux and is built on top of IEEE 802.11p, as
well as a customized device driver. This implementation will serve as a key component in our IEEE 1609-compliant Vehicular Multi-technology Communication Device
(VMCD) that is being developed for a VANET testbed under the Smart Drive initiative, supported by the National Science Foundation.
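A simplified sketch of the alternating channel-access schedule is shown below, assuming the standard's default 100 ms sync interval split into 50 ms control-channel and 50 ms service-channel intervals aligned to UTC; guard intervals and switching latency are ignored, and the real logic lives in the driver rather than in Python.

```python
import time

SYNC_INTERVAL_MS = 100   # sync interval aligned to UTC
CCH_INTERVAL_MS = 50     # default control-channel interval

def current_channel(utc_ms=None):
    """Return which channel the radio should be tuned to right now."""
    if utc_ms is None:
        utc_ms = int(time.time() * 1000)   # assumes the clock is UTC-synced
    offset = utc_ms % SYNC_INTERVAL_MS
    return "CCH" if offset < CCH_INTERVAL_MS else "SCH"

print(current_channel())        # depends on the current time
print(current_channel(12_345))  # offset 45 ms -> "CCH"
print(current_channel(12_375))  # offset 75 ms -> "SCH"
```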
Model
Digital Document
Publisher
Florida Atlantic University
Description
Wellness and a healthy life are common concerns for individuals who want to lead a happy life. A web-based approach known as Wellness Scoring is being developed that takes into account people’s concerns about their health issues. In this thesis, we investigated four different classifiers (a probabilistic graphical model, a simple probabilistic classifier, a probabilistic statistical classifier, and an artificial neural network) to predict the wellness outcome. An approach to calculate the wellness score is also addressed. All of these classifiers are trained on real data, which yields more accurate results. With this solution, there is a better way of keeping track of an individual’s health. We present the design and development of such a system, evaluate the performance of the classifiers, and discuss design considerations to maximize the end user experience with the application. A user experience model capable of predicting the wellness score for a given set of risk factors is developed.
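To illustrate the kind of classifier comparison described above, the sketch below cross-validates three off-the-shelf scikit-learn models on synthetic "risk factor" data; these models and the generated data are stand-ins for illustration and are not the thesis's classifiers or its real wellness dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for wellness data: 10 hypothetical risk factors per person.
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           random_state=0)

# GaussianNB, LogisticRegression, and MLPClassifier approximate the simple
# probabilistic, statistical, and neural-network classifier families.
for model in (GaussianNB(),
              LogisticRegression(max_iter=1000),
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                            random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, f"accuracy = {scores.mean():.2f}")
```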
Model
Digital Document
Publisher
Florida Atlantic University
Description
Network architectures are described by the OSI reference model of the International
Organization for Standardization (ISO), which contains seven layers. The Internet uses four of these layers,
of which three are of interest to us. These layers are Internet Protocol (IP) or Network
Layer, Transport Layer and Application Layer. We need to protect against attacks that
may come through any of these layers. In the world of network security, systems are plagued by various attacks, internal and external, which can result in Denial of Service (DoS) and/or other damaging effects. Such attacks and loss of service can be devastating for the users of the system. The implementation of security devices such as Firewalls and Intrusion Detection Systems
(IDS), the protection of network traffic with Virtual Private Networks (VPNs), and the
use of secure protocols for the layers are important to enhance the security at each of
these layers. We have done a survey of the existing network security patterns and have written the missing patterns. We have developed security patterns for an abstract IDS, a Behavior-based IDS, and a Rule-based IDS, as well as for the Internet Protocol Security (IPSec) and Transport Layer Security (TLS) protocols. We have also identified the need for a VPN pattern and have developed security patterns for an abstract VPN, an IPSec VPN, and a TLS VPN. We also evaluated these patterns with respect to several aspects in order to simplify their application by system designers. We have tried to unify the security of the network layers using security patterns by tying together security patterns for network transmission, network protocols, and network boundary devices.
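As a concrete example of applying a secure protocol at the transport layer, the sketch below wraps a TCP connection with TLS using Python's standard ssl module; the host name is an arbitrary placeholder, and the snippet illustrates transport-layer protection rather than any specific pattern from the catalog.

```python
import socket
import ssl

context = ssl.create_default_context()           # verifies server certificates

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        print("negotiated protocol:", tls_sock.version())   # e.g. "TLSv1.3"
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(100))                # response arrives over TLS
```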
Model
Digital Document
Publisher
Florida Atlantic University
Description
Cloud Computing is a new computing model consisting of a large pool of hardware
and software resources on remote datacenters that are accessed through the Internet.
Cloud Computing faces significant obstacles to its acceptance, such as security,
virtualization, and lack of standardization. For Cloud standards, there is a long debate
about their role, and more demands for Cloud standards are being put on the table. The Cloud
standardization landscape remains ambiguous. To model and analyze security standards for
Cloud Computing and web services, we have surveyed Cloud standards focusing more on
the standards for security, and we classified them by groups of interests. Cloud
Computing leverages a number of technologies such as Web 2.0, virtualization, and
Service Oriented Architecture (SOA). SOA uses web services to facilitate the creation of
SOA systems by adopting different technologies despite their differences in formats and
protocols. Several organizations such as the W3C and OASIS are developing standards for web services; their standards are rather complex and verbose. We have expressed web services security standards as patterns to make it easy for designers and users to understand their key points. We have written patterns for two web services standards: WS-SecureConversation and WS-Federation. This completes earlier work we have done on web services standards. We showed relationships between web services security standards and used them to solve major Cloud security issues, such as authorization and access control, trust, and identity management. Close to web services, we investigated the Business Process Execution Language (BPEL), and we addressed security considerations in BPEL and how to enforce them. To see how Cloud vendors look at web services standards, we took Amazon Web Services (AWS) as a case study. A review of the AWS documentation shows that web services security standards are barely mentioned. We highlighted some areas where web services security standards could address AWS limitations and improve the AWS security process. Finally, we studied the security guidance of two major Cloud-developing organizations, CSA and NIST. Both miss the quality attributes offered by web services security standards. We expanded their work and added the benefits of adopting web services security standards in securing the Cloud.