High performance computing

Model
Digital Document
Publisher
Florida Atlantic University
Description
Hospital readmission rates are considered an important indicator of quality of care because they may result from actions of commission or omission during the patient's initial hospitalization, or from a poorly managed transition of the patient back into the community. The negative impact on patient quality of life and the heavy burden on the healthcare system have made reducing hospital readmissions a central goal of healthcare delivery and payment reform efforts.
In this study, we propose a framework for deploying readmission analysis and other healthcare models in the real world, together with a machine learning solution that uses patient discharge summaries as the dataset for training and testing the model. Current systems neglect a very important aspect of the readmission problem: Big Data. This study therefore also considers the Big Data aspects of solutions that can be deployed in the field for real-world use. We used the HPCC platform, which provides a distributed parallel programming environment for creating, running, and managing applications that involve large amounts of data. We also propose feature engineering and data balancing techniques that have been shown to greatly enhance machine learning model performance, achieved by reducing the dimensionality of the data and correcting the imbalance in the dataset.
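As a minimal illustration of the data-balancing step, the sketch below undersamples the majority (non-readmitted) class so that both classes are equally represented. The thesis deploys such steps on the HPCC platform, so this standalone Python version, with its hypothetical record layout, only sketches the idea.

    # Hypothetical sketch: random undersampling of the majority class.
    # The record layout ("readmitted" flag) is an assumption for illustration.
    import random

    def undersample(records, label_key="readmitted", seed=42):
        """Balance a binary dataset by discarding excess majority-class records."""
        pos = [r for r in records if r[label_key]]
        neg = [r for r in records if not r[label_key]]
        minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
        rng = random.Random(seed)
        sampled = rng.sample(majority, len(minority))  # match class sizes
        balanced = minority + sampled
        rng.shuffle(balanced)
        return balanced

    # Example: three negatives and one positive yield two balanced records.
    data = [{"id": i, "readmitted": i == 0} for i in range(4)]
    print(len(undersample(data)))  # 2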
The system presented in this study provides real-world, machine learning based predictive modeling for reducing readmissions and could serve as a template for other diseases.
Model
Digital Document
Publisher
Florida Atlantic University
Description
System modeling has the potential to enhance system design productivity by providing a platform for system performance evaluation. Such a model must be designed at an abstract level, hiding system details, yet it must be able to represent any subsystem or component at any level of specification detail. Modeling such a system requires combining various models of computation (MOCs). MOCs provide a framework for modeling various algorithms and activities while accounting for and exploiting concurrency and synchronization. Along with supporting various MOCs, a modeling environment should also provide a well-developed library. In this thesis, we explore various modeling environments. MLDesigner (MLD) is one such environment that offers a well-developed library and integrates various MOCs. We present an overview of MLD and discuss the process of system modeling with it. We further present an abstract model of a Network-on-Chip in MLD and show latency results for various customizable parameters of this model.
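As a rough illustration of latency evaluation under a discrete-event model of computation (one of the MOCs an environment like MLD integrates), the Python sketch below measures per-packet latency at a single FIFO router. The arrival and service times are illustrative assumptions, not MLD output.

    # Minimal discrete-event sketch (not MLDesigner itself): packets arrive
    # at one NoC router modeled as a FIFO server; we record their latency.
    def simulate(arrivals, service_time=2.0):
        """Return per-packet latency for a FIFO router with fixed service time."""
        free_at = 0.0          # time the router next becomes idle
        latencies = []
        for t in sorted(arrivals):
            start = max(t, free_at)        # wait if the router is busy
            free_at = start + service_time
            latencies.append(free_at - t)  # departure minus arrival
        return latencies

    print(simulate([0.0, 1.0, 1.5, 6.0]))  # [2.0, 3.0, 4.5, 2.0]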
Model
Digital Document
Publisher
Florida Atlantic University
Description
This research proposes a cluster-based strategy for tracking one moving object using wireless sensor networks. The sensor field is organized into three hierarchical levels. A 1-bit message is sent when a node detects the target; otherwise the node stays silent. Since wireless sensor nodes have limited computation, storage, and battery resources, the code for predicting the target position should be simple and fast to execute. The algorithm proposed in this research is simple, fast, and utilizes all available detection data to estimate the location of the target while conserving energy, which has the potential to increase the network lifetime.
A simulation program was developed to study the impact of field size and node density on the overall performance of the strategy. Simulation results show that the strategy saves energy while estimating the location of the target within an acceptable error margin.
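A minimal sketch of the estimation idea, assuming the cluster head simply averages the positions of the nodes that reported a detection bit; the node coordinates and function name below are hypothetical, not taken from the thesis.

    # Hedged sketch: estimate the target position as the centroid of the
    # nodes whose 1-bit detection messages arrived this sampling round.
    def estimate_position(reporting_nodes):
        """Return the target (x, y) estimate, or None if no node reported."""
        if not reporting_nodes:
            return None  # silent network: no estimate this round
        xs = [x for x, _ in reporting_nodes]
        ys = [y for _, y in reporting_nodes]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    # Three nodes detected the target this round.
    print(estimate_position([(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]))  # (1.0, 1.0)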
Model
Digital Document
Publisher
Florida Atlantic University
Description
The increasing complexity of system design is negatively impacting overall design productivity by increasing the cost and time of product development. One key to overcoming these challenges is exploiting component-based engineering practices. However, it is a challenge to select from a component library an optimal component that satisfies all system functional and non-functional requirements, owing to varying performance parameters and quality-of-service requirements. In this thesis we propose an integrated framework for component selection. The framework is a two-phase approach comprising a system modeling and analysis phase and a component selection phase. Three component selection algorithms were implemented for selecting components for a Network-on-Chip architecture. Two are based on a standard greedy method, one of which is enhanced to produce more intelligent behavior; the third is based on simulated annealing. Further, a prototype was developed to evaluate the proposed framework and compare the performance of all the algorithms.
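As an illustration of the third algorithm's approach, the sketch below applies generic simulated annealing to a toy component-selection instance; the cost model, cooling schedule, and parameter values are assumptions for illustration, not those of the thesis.

    # Simulated-annealing sketch: each slot in the design must be filled by
    # one candidate component; we minimize a toy additive cost.
    import math, random

    def anneal(candidates, cost, steps=5000, t0=1.0, alpha=0.999, seed=1):
        """candidates: per-slot option lists; cost: selection -> float."""
        rng = random.Random(seed)
        cur = [rng.choice(opts) for opts in candidates]  # random start
        cur_cost = cost(cur)
        best, best_cost, t = cur[:], cur_cost, t0
        for _ in range(steps):
            i = rng.randrange(len(cur))
            old = cur[i]
            cur[i] = rng.choice(candidates[i])           # neighbor: swap one slot
            new_cost = cost(cur)
            if new_cost <= cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
                cur_cost = new_cost                      # accept the move
                if new_cost < best_cost:
                    best, best_cost = cur[:], new_cost
            else:
                cur[i] = old                             # reject the move
            t *= alpha                                   # cool down
        return best, best_cost

    # Toy library: two slots, cost = summed (latency + power) of choices.
    lib = [[(3, 2), (1, 5)], [(2, 2), (4, 1)]]
    print(anneal(lib, lambda s: sum(l + p for l, p in s)))  # cost 9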
Model
Digital Document
Publisher
Florida Atlantic University
Description
Video identification, or copy detection, is a challenging problem that is becoming increasingly important with the popularity of online video services. The problem addressed in this thesis is the identification of a given video clip within a given set of videos: for a query video, the system returns all instances of that video in the data set. The identification system uses video signatures based on video tomography. A robust, low-complexity video signature is designed and implemented; its nature makes it invariant to the most common video transformations. Signatures are generated per video shot rather than per frame, resulting in a compact signature of 64 bytes per shot. Signatures are matched using a simple Euclidean distance metric. The results show that videos can be identified with 100% recall and over 93% precision, and the experiments included several transformations of the videos.
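A minimal sketch of the matching stage only, assuming each shot's 64-byte signature is stored as a vector and compared with Euclidean distance; signature extraction by video tomography is not reproduced here, and the shot identifiers, threshold, and values are illustrative assumptions.

    # Sketch: match a query shot signature against a signature database.
    import math

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def find_matches(query_sig, database, threshold=50.0):
        """Return (shot_id, distance) for every stored shot near the query."""
        hits = [(sid, euclidean(query_sig, sig)) for sid, sig in database.items()]
        return sorted((h for h in hits if h[1] <= threshold), key=lambda h: h[1])

    db = {"clip_a/shot_0": [10] * 64, "clip_b/shot_3": [200] * 64}
    print(find_matches([12] * 64, db))  # only clip_a/shot_0 matches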
Model
Digital Document
Publisher
Florida Atlantic University
Description
This research is concerned with the technoeconomic aspects of modern and next-generation telecommunications, including Internet service. The goals of this study are as follows: (i) reviewing the technoeconomic considerations prevailing in telecommunication (telco) systems and their implications for the future; (ii) studying relevant considerations by depicting modern/next-generation telecommunications as a digital ecosystem viewed in terms of underlying complex-system evolution (akin to biological systems); (iii) pursuant to the digital ecosystem concept, co-evolution modeling of competitive business structures in the technoeconomics of telco services using dichotomous (flip-flop) states as seen in prey-predator evolution; (iv) specific to Internet pricing economics, deducing the profile of consumer surplus versus pricing model under the DiffServ QoS architecture for dynamic, smart, and static markets; (v) developing and exemplifying decision-making pursuits in the telco business under non-competitive and competitive markets (via a game-theoretic approach); and (vi) modeling forecasting issues in telco services in terms of a simplified ARIMA-based time-series approach (including seasonal and non-seasonal data plus goodness-of-fit estimations in the time and frequency domains). Commensurate with the topics indicated above, the necessary analytical derivations/models are proposed and computational exercises are performed (with MATLAB R2006b and other software as needed). Extensive data gathered from the open literature are used, and ad hoc model verifications are performed. Lastly, results are discussed, inferences are made, and open questions for further research are identified.
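As a hedged illustration of item (iii), the sketch below iterates a discrete-time Lotka-Volterra style prey-predator system, read here as two coupled telco business structures; all coefficients and initial values are arbitrary assumptions rather than values from the study.

    # Prey-predator co-evolution sketch: x grows but is suppressed by y,
    # while y decays unless sustained by x. Coefficients are illustrative.
    def coevolve(x, y, steps=5, a=1.1, b=0.4, c=0.1, d=0.95):
        """x: 'prey' business share, y: 'predator' business share."""
        history = [(round(x, 3), round(y, 3))]
        for _ in range(steps):
            x, y = x * (a - b * y), y * (d + c * x)  # coupled growth/decline
            history.append((round(x, 3), round(y, 3)))
        return history

    for x, y in coevolve(1.0, 0.5):
        print(x, y)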
Model
Digital Document
Publisher
Florida Atlantic University
Description
Natural matte extraction is a difficult and generally unsolved problem. Generating a matte from a nonuniform background traditionally requires a tediously hand-drawn matte. This thesis studies recent methods that require the user to place only modest scribbles identifying the foreground and the background. This research demonstrates a new GPU-based implementation of the recently introduced FuzzyMatte algorithm. Interactive matte extraction was achieved on a CUDA-enabled G80 graphics processor, and experimental results demonstrate improved performance over the previous CPU-based version. An in-depth analysis of experimental data from the GPU and CPU implementations is provided, and the design challenges of porting a variant of Dijkstra's shortest-distance algorithm to a parallel processor are considered.
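As a CPU-side reference for the kind of algorithm being ported, the sketch below propagates Dijkstra-style shortest distances from user scribbles across a 4-connected pixel grid; the intensity-difference edge weight is an illustrative assumption, not the FuzzyMatte cost function.

    # Dijkstra-style sketch: shortest "scribble distance" to every pixel.
    import heapq

    def scribble_distances(image, seeds):
        """image: 2-D list of intensities; seeds: [(row, col)] scribbled pixels."""
        rows, cols = len(image), len(image[0])
        dist = [[float("inf")] * cols for _ in range(rows)]
        heap = [(0.0, r, c) for r, c in seeds]
        for _, r, c in heap:
            dist[r][c] = 0.0
        heapq.heapify(heap)
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > dist[r][c]:
                continue  # stale queue entry
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + abs(image[nr][nc] - image[r][c])  # edge weight
                    if nd < dist[nr][nc]:
                        dist[nr][nc] = nd
                        heapq.heappush(heap, (nd, nr, nc))
        return dist

    img = [[0, 0, 9], [0, 5, 9]]
    print(scribble_distances(img, [(0, 0)]))  # [[0, 0, 9], [0, 5, 9]]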