Furht, Borko

Person Preferred Name
Furht, Borko
Model
Digital Document
Publisher
Florida Atlantic University
Description
A modern urban infrastructure no longer operates in isolation; it leverages the latest technologies to collect, process, and distribute aggregated knowledge in order to improve the quality of the services it provides and to promote efficient resource consumption. However, the ambiguity of ever-evolving cyber threats and their debilitating consequences introduce new barriers for decision-makers. Numerous techniques have been proposed to address cyber offenses against such critical realms and to increase the accuracy of attack inference; however, they remain limited to detection algorithms and omit attack attribution and impact interpretation. The lack of the latter makes the transition of these methods to operational use difficult, if not impossible.
In this dissertation, we first investigate the threat landscape of smart cities, survey the progress in data-driven methods for situational awareness, and evaluate their effectiveness in addressing various cyber threats. We then propose an approach that integrates machine learning, the theory of belief functions, and dynamic visualization to complement available attack inference for industrial control systems (ICS) deployed in the realm of smart cities. Our framework offers an extensive scope of knowledge as opposed to solely evident indicators of malicious activity. It gives cyber operators and digital investigators an effective tool to interact with, explore, and analyze heterogeneous, complex data dynamically and visually, and it provides rich context information. Such an approach is envisioned to facilitate cyber incident interpretation and to support a timely, evidence-based decision-making process.
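As an illustration of how belief-function reasoning can complement machine-learning detectors, the sketch below applies Dempster's rule of combination to two hypothetical evidence sources about an ICS alert (an intrusion detector and a process sensor). The frame of discernment, the mass values, and the function names are assumptions chosen for clarity; they do not reproduce the dissertation's actual framework.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments with Dempster's rule.
    Masses are dicts mapping frozenset hypotheses to mass values."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two hypothetical evidence sources over the frame {attack, fault}
m_ids = {frozenset({"attack"}): 0.6, frozenset({"attack", "fault"}): 0.4}
m_sensor = {frozenset({"fault"}): 0.3, frozenset({"attack", "fault"}): 0.7}
print(dempster_combine(m_ids, m_sensor))
```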
Model
Digital Document
Publisher
Florida Atlantic University
Description
Gliomas are an aggressive class of brain tumors that are associated with a better prognosis when detected at a lower grade level. Effective differentiation and classification are imperative for early treatment. MRI is a popular medical imaging modality for detecting and diagnosing brain tumors due to its capability to non-invasively highlight the tumor region. With the rise of deep learning, researchers have used convolutional neural networks for classification in this domain, specifically pre-trained networks to reduce computational costs. However, variations in MRI modality, MRI machine, and scan quality cause different network structures to achieve different performance. Each pre-trained network has a different structure that yields robust results under specific problem conditions. This thesis aims to fill a gap in the literature by comparing the performance of popular pre-trained networks on a controlled dataset that differs from the domain on which the networks were trained.
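For context, a minimal transfer-learning sketch in PyTorch is shown below: an ImageNet-pretrained backbone is frozen and its classification head replaced for a hypothetical two-class (low-grade vs. high-grade glioma) problem. The choice of ResNet-18, the random tensors standing in for MRI slices, and all hyperparameters are illustrative assumptions, not the networks or settings evaluated in this thesis.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace its classifier head
# for a hypothetical two-class glioma-grade problem.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                         # freeze backbone to cut training cost
model.fc = nn.Linear(model.fc.in_features, 2)       # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 3x224x224 slices
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```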
Model
Digital Document
Publisher
Florida Atlantic University
Description
Skin cancer is a major medical problem. If not detected early enough, skin cancers such as melanoma can turn fatal. As a result, early detection of skin cancer, as with other types of cancer, is key for survival. In recent times, deep learning methods have been explored to create improved skin lesion diagnosis tools. In some cases, these methods have reached dermatologist-level accuracy. For this thesis, a full-fledged cloud-based diagnosis system powered by convolutional neural networks (CNNs) with near-dermatologist-level accuracy has been designed and implemented, in part to increase early detection of skin cancer. A wide range of client devices can connect to the system to upload digital lesion images and request diagnosis results from the diagnosis pipeline. The diagnosis is handled by a two-stage CNN pipeline hosted on a server, in which a preliminary CNN performs a quality check on user requests and a diagnosis CNN outputs lesion predictions.
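A minimal sketch of how such a two-stage pipeline could be wired is given below, assuming the quality-check network outputs a single "usable image" logit for a batch of one preprocessed image and the diagnosis network outputs class scores. The class name, threshold, and return format are hypothetical; the server-side implementation in the thesis may differ.

```python
import torch
import torch.nn as nn

class TwoStagePipeline(nn.Module):
    """Illustrative two-stage pipeline: a small quality-check network gates
    whether an uploaded lesion image is forwarded to the diagnosis network."""

    def __init__(self, quality_net: nn.Module, diagnosis_net: nn.Module,
                 quality_threshold: float = 0.5):
        super().__init__()
        self.quality_net = quality_net      # assumed to output one logit per image
        self.diagnosis_net = diagnosis_net  # assumed to output lesion class scores
        self.quality_threshold = quality_threshold

    @torch.no_grad()
    def forward(self, image: torch.Tensor):
        # image: a single preprocessed lesion image, shape (1, C, H, W)
        quality = torch.sigmoid(self.quality_net(image)).item()
        if quality < self.quality_threshold:
            return {"status": "rejected", "quality": quality}
        scores = torch.softmax(self.diagnosis_net(image), dim=1)
        return {"status": "ok", "quality": quality,
                "prediction": scores.argmax(dim=1).item()}
```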
Model
Digital Document
Publisher
Florida Atlantic University
Description
Three major problems make Genetic Programming unfeasible or impractical for real-world problems.
The first is excessive time complexity. In nature, the evolutionary process can take millions of years, a time frame that is clearly not acceptable for the solution of problems on a computer. In order to apply Genetic Programming to real-world problems, it is essential that its efficiency be improved.
The second is called overfitting (where results are inaccurate outside the training data). In a paper[36] for the Federal Reserve Bank, authors Neely and Weller state: “a perennial problem with using flexible, powerful search procedures like Genetic Programming is overfitting, the finding of spurious patterns in the data. Given the well-documented tendency for the genetic program to overfit the data it is necessary to design procedures to mitigate this.”
The third is the difficulty of determining optimal control parameters for the Genetic Programming process. Control parameters govern the evolutionary process; they include settings such as the size of the population and the number of generations to be run. In his book[45], Banzhaf describes this problem: “The bad news is that Genetic Programming is a young field and the effect of using various combinations of parameters is just beginning to be explored.”
We address these problems by implementing and testing a number of novel techniques and improvements to the Genetic Programming process. We conduct experiments using data sets of various degrees of difficulty to demonstrate success with a high degree of statistical confidence.
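To make the control-parameter and overfitting issues concrete, the sketch below shows a hypothetical set of GP control parameters together with a fitness routine that reports both training and validation error, so that a growing train/validation gap (overfitting) can be monitored. The parameter values and function names are placeholders, not the settings or techniques used in this dissertation.

```python
# Illustrative control-parameter settings for a GP run; the values are
# placeholders, not the ones used in this work.
CONTROL_PARAMS = {
    "population_size": 500,
    "generations": 50,
    "crossover_rate": 0.9,
    "mutation_rate": 0.05,
    "tournament_size": 7,
    "max_tree_depth": 8,
}

def fitness(individual, dataset):
    """Mean squared error of a candidate program over (x, y) pairs."""
    return sum((individual(x) - y) ** 2 for x, y in dataset) / len(dataset)

def best_with_validation(population, train_set, valid_set):
    """Pick the best individual by training fitness, but also report its
    validation fitness so the train/validation gap can be tracked."""
    best = min(population, key=lambda ind: fitness(ind, train_set))
    return best, fitness(best, train_set), fitness(best, valid_set)
```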
Model
Digital Document
Publisher
Florida Atlantic University
Description
Identifying and tracking individuals affected by the Ebola virus in densely populated areas is a unique and urgent challenge in the public health sector. Currently, mapping the spread of the Ebola virus is done manually; however, with the help of social contact networks we can build dynamic graphs and predictive diffusion models of the Ebola virus based on its impact on either a specific person or a specific community.
With the help of this model, we can make more precise forward predictions of disease propagation and identify possibly infected individuals, which will help perform trace-back analysis to locate the possible source of infection for a social group. This model will visualize and identify the families and tightly connected social groups who have had contact with an Ebola patient, and it is a proactive approach to reducing the risk of exposure to Ebola spread within a community or geographic location.
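As an illustration of the kind of diffusion model described above, the sketch below runs a simple independent-cascade process over a synthetic contact graph built with networkx, starting from one known patient. The graph generator, infection probability, and seed choice are assumptions for demonstration, not the parameters of the model developed in this work.

```python
import random
import networkx as nx

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """Independent-cascade diffusion: each newly infected node gets one chance
    to infect each susceptible neighbor with probability p."""
    rng = rng or random.Random(0)
    infected = set(seeds)
    frontier = list(seeds)
    while frontier:
        new_frontier = []
        for node in frontier:
            for neighbor in graph.neighbors(node):
                if neighbor not in infected and rng.random() < p:
                    infected.add(neighbor)
                    new_frontier.append(neighbor)
        frontier = new_frontier
    return infected

# Hypothetical contact network: a small-world graph standing in for
# household/community contacts, seeded with one known patient.
contacts = nx.watts_strogatz_graph(n=200, k=6, p=0.05, seed=1)
at_risk = independent_cascade(contacts, seeds=[0], p=0.08)
print(f"{len(at_risk)} individuals reachable under this diffusion model")
```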
Model
Digital Document
Publisher
Florida Atlantic University
Description
Scene understanding attempts to produce a textual description of the visible and latent concepts in an image to describe the real meaning of the scene. Concepts are objects, events, or relations depicted in an image. To recognize concepts, the decisions of an object detection algorithm must be further enhanced from visual similarity to semantic compatibility. Semantically relevant concepts convey the most consistent meaning of the scene.
Object detectors analyze visual properties (e.g., pixel intensities, texture, color gradient) of sub-regions of an image to identify objects. The initially assigned object names must be further examined to ensure they are compatible with each other and with the scene. By enforcing inter-object dependencies (e.g., co-occurrence, spatial and semantic priors) and object-to-scene constraints as background information, a concept classifier predicts the most semantically consistent set of names for the discovered objects. The additional background information that describes concepts is called context.
In this dissertation, a framework for context-based concept detection is presented that uses a combination of multiple contextual relationships to refine the results of the underlying feature-based object detectors and produce the most semantically compatible concepts.
In addition to their inability to capture semantic dependencies, object detectors suffer from the high dimensionality of the feature space, which impairs them. Variances in the image (i.e., quality, pose, articulation, illumination, and occlusion) can also result in low-quality visual features that affect the accuracy of the detected concepts.
The object detectors used in the context-based framework experiments in this study are based on state-of-the-art generative and discriminative graphical models. The relationships between model variables can be easily described, and the dependencies precisely characterized, using these representations. The generative context-based implementations are extensions of Latent Dirichlet Allocation, a leading topic modeling approach that is very effective in reducing the dimensionality of the data. The discriminative context-based approach extends Conditional Random Fields, which allows efficient and precise construction of the model by specifying and including only the cases that are related to and influence it.
The dataset used for training and evaluation is MIT SUN397. The results of the experiments show an overall 15% increase in annotation accuracy and a 31% improvement in the semantic saliency of the annotated concepts.
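A simplified sketch of context-based refinement is shown below: each region's candidate labels are re-scored by mixing detector confidence with a co-occurrence prior over the other regions' current best labels. This is only a toy approximation of the idea; the dissertation's actual framework extends Latent Dirichlet Allocation and Conditional Random Fields, and the function, weighting scheme, and data layout here are assumptions.

```python
import numpy as np

def rescore_with_context(candidates, cooccurrence, labels, alpha=0.5):
    """Re-rank each region's candidate labels using co-occurrence context.

    candidates:   list of dicts {label: detector_score}, one per region
    cooccurrence: matrix of P(label_i | label_j) estimated from training scenes
    labels:       list of all label names, indexing the matrix rows/columns
    alpha:        weight of the contextual prior vs. the detector confidence
    """
    best = [max(c, key=c.get) for c in candidates]        # initial per-region guesses
    refined = []
    for i, cand in enumerate(candidates):
        context = [labels.index(b) for j, b in enumerate(best) if j != i]
        scores = {}
        for label, conf in cand.items():
            li = labels.index(label)
            prior = np.mean([cooccurrence[li, cj] for cj in context]) if context else 1.0
            scores[label] = (1 - alpha) * conf + alpha * prior
        refined.append(max(scores, key=scores.get))
    return refined
```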
Model
Digital Document
Publisher
Florida Atlantic University
Description
Road traffic, along with other key infrastructure sectors such as telecommunications and power, plays an important role in the economic and technological growth of a country. Traffic engineers and analysts are responsible for solving a diversity of traffic problems, such as traffic data acquisition and evaluation. In response to the need to improve traffic operations, researchers implement advanced technologies, integrate systems and data, and develop state-of-the-art applications. This thesis introduces three novel web applications that aim to offer traffic operators, managers, and analysts the ability to monitor congestion and analyze incidents and signal performance measures. They offer more detailed analysis, providing users with insights from different levels and perspectives. The benefit of providing these visualization tools is more efficient estimation of the performance of local networks, thus facilitating the decision-making process in the case of emergency events.
Model
Digital Document
Publisher
Florida Atlantic University
Description
In the current mobile system environment there is a large gap between the personal and enterprise use of smartphones, due to required enterprise security policies, privacy concerns, and freedom of use. Data plans on mobile systems have become so widespread that the rate of their adoption by everyday customers has far outpaced the ability of enterprises to keep up with existing secure enterprise infrastructures. Most enterprises require or provide access to email and other official information on smart platforms, which presents a big challenge for the enterprise in securing its systems. Therefore, due to the security issues and policies imposed by the enterprise when the same device is used for a dual purpose (personal and enterprise), consumers often lose individual freedom and convenience at the cost of security. Few solutions have been successful in addressing this challenge. One effective way is to partition the mobile device such that enterprise system access and its information are completely separated from personal information. Several approaches to mobile virtualization are described and presented that create a secure and secluded environment for enterprise information while allowing the user to access their personal information. A reference architecture is then presented that allows for integration with existing enterprise mobile device management systems while providing a lightweight solution for containerizing mobile applications. This solution is then benchmarked against several of the existing mobile virtualization solutions.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The classic methods for indexing image and video databases use either keywords or analysis of color distribution. In recent years, new image and video compression standards, called JPEG and MPEG respectively, have been introduced. One of the basic operations of JPEG and MPEG is the Discrete Cosine Transform (DCT). The human visual system is known to be very dependent on spatial frequency, and the DCT provides a good approximation of the spatial frequencies of an image to which the human eye is sensitive. We take advantage of this property of the DCT for indexing image and video databases. However, the two-dimensional DCT yields 64 coefficients per block of 8 x 8 pixels, which is too many to compute for fast indexing results. We therefore use only the first DCT coefficient, called the DC coefficient, to represent an 8 x 8 block of transformed data. This representation yields satisfactory indexing results.
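A minimal sketch of extracting the DC coefficients described above is given below, using SciPy's DCT on non-overlapping 8 x 8 blocks of a grayscale image. The function name and the choice of an orthonormal DCT are assumptions for illustration; the thesis may use a different normalization or block handling.

```python
import numpy as np
from scipy.fftpack import dct

def dc_coefficients(image, block=8):
    """Return the DC coefficient of the 2-D DCT of each 8x8 block.
    The DC term is proportional to the block's average intensity, giving a
    compact spatial-frequency summary usable as an index feature."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    dcs = np.empty((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = image[i:i + block, j:j + block].astype(float)
            coeffs = dct(dct(tile, axis=0, norm="ortho"), axis=1, norm="ortho")
            dcs[i // block, j // block] = coeffs[0, 0]
    return dcs.ravel()   # feature vector for indexing / similarity comparison
```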
Model
Digital Document
Publisher
Florida Atlantic University
Description
Over the past ten years, Client/Server computing has had a powerful impact on the way businesses deal with information technology. Client/Server computing has enhanced users' productivity, revolutionized computer networking, and restructured the computer industry. Today, another new technology is poised to impact business computing in an equally dramatic way. Networked multimedia computer applications will significantly affect users and network managers and have a tremendous impact on computing and network infrastructures. This thesis explores the areas of high-speed networking for multimedia applications. Focusing primarily on FDDI technology, we build a high-speed FDDI multimedia LAN model and develop typical multimedia traffic models to aid in a case study of FDDI HSMM-LAN networks. FFOL, the Follow-On Standards currently under consideration in the ANSI standards committee, describes network architectures that include a gigabit backbone network for FDDI and FDDI-II networks, making them an attractive and cost-effective option for the customer.