Pattern recognition systems.

Model: Digital Document
Publisher: Florida Atlantic University
Description:
Studies exploring facial emotion recognition (FER) abilities in autism spectrum
disorder (ASD) samples have yielded inconsistent results despite the widely accepted finding that an impairment in emotion recognition is a core component of ASD. The
current study aimed to determine if an FER task featuring both unfamiliar and familiar
faces would highlight additional group differences between ASD children and typically
developing (TD) children. We tested the two groups of 4- to 8-year-olds on this revised
task, and also compared their resting-state brain activity using electroencephalogram
(EEG) measurements. As hypothesized, the TD group had significantly higher overall
emotion recognition percent scores. In addition, there was a significant interaction effect
of group by familiarity, with the ASD group recognizing emotional expressions
significantly better in familiar faces than in unfamiliar ones. This finding may be related
to the preference of children with autism for people and situations to which they are accustomed. TD children did not demonstrate this pattern, as their recognition scores
were approximately the same for familiar faces and unfamiliar ones. No significant group
differences existed for EEG alpha power or EEG alpha asymmetry in frontal, central,
temporal, parietal, or occipital brain regions. Also, neither of these EEG measurements was strongly correlated with FER performance in either group. Further evidence is needed to
assess the association between neurophysiological measurements and behavioral
symptoms of ASD. The behavioral results of this study provide preliminary evidence that
an FER task featuring both familiar and unfamiliar expressions provides a more sensitive
assessment of emotion recognition ability.
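The abstract above refers to resting-state EEG alpha power and alpha asymmetry. Purely as a minimal illustration of those measures, and not the study's actual pipeline, the sketch below computes alpha-band power and a frontal asymmetry index from two channels using NumPy and SciPy; the channel names (F3, F4), sampling rate, and log-ratio asymmetry formula are assumptions.

```python
# Minimal sketch: alpha power and frontal alpha asymmetry from resting-state EEG.
# Assumptions: two frontal channels (F3 left, F4 right), a 256 Hz sampling rate,
# and the common log-ratio asymmetry index ln(right) - ln(left). This illustrates
# the measures named in the abstract; it is not the study's analysis pipeline.
import numpy as np
from scipy.signal import welch

FS = 256             # sampling rate in Hz (assumed)
ALPHA = (8.0, 13.0)  # alpha band in Hz

def alpha_power(signal: np.ndarray, fs: int = FS) -> float:
    """Integrate the power spectral density over the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= ALPHA[0]) & (freqs <= ALPHA[1])
    return float(np.trapz(psd[band], freqs[band]))

def frontal_alpha_asymmetry(left: np.ndarray, right: np.ndarray) -> float:
    """ln(right alpha power) - ln(left alpha power); positive values indicate
    relatively greater right-hemisphere alpha."""
    return float(np.log(alpha_power(right)) - np.log(alpha_power(left)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f3 = rng.standard_normal(FS * 60)   # 60 s of synthetic "EEG" for channel F3
    f4 = rng.standard_normal(FS * 60)   # 60 s for channel F4
    print("F3 alpha power:", alpha_power(f3))
    print("Frontal asymmetry (F4 vs F3):", frontal_alpha_asymmetry(f3, f4))
```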
Model: Digital Document
Publisher: Florida Atlantic University
Description:
Object recognition is imperfect; incomplete processing or impoverished information often yields misperceptions (i.e., misidentifications) of objects. While quickly
rectified and typically benign, instances of such errors can produce dangerous
consequences (e.g., police shootings). Through a series of experiments, this study
examined the competitive process of multiple object interpretations (candidates) during the earlier stages of the object recognition process using a lexical decision task paradigm.
Participants encountered low-pass filtered objects that were previously demonstrated to
evoke multiple responses: a more frequent interpretation (“primary candidates”) and a less frequent interpretation (“secondary candidates”). When objects were presented
without context, no facilitative effects were observed for primary candidates. However,
secondary candidates showed evidence of being actively suppressed.
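The stimuli described above are low-pass filtered object images. As an illustration of that kind of degradation only, and not the study's stimulus-generation code, the sketch below applies a Gaussian low-pass filter with OpenCV in Python; the filename, kernel size, and sigma are placeholders.

```python
# Minimal sketch: producing a low-pass filtered ("blurred") object image of the
# kind used as degraded stimuli. The input path, kernel size, and sigma are
# placeholders; this is an illustration, not the study's stimulus pipeline.
import cv2

def low_pass_filter(path: str, kernel_size: int = 31, sigma: float = 8.0):
    """Read an image and suppress its high spatial frequencies with a Gaussian blur."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(path)
    return cv2.GaussianBlur(image, (kernel_size, kernel_size), sigma)

if __name__ == "__main__":
    blurred = low_pass_filter("object_stimulus.png")  # placeholder filename
    cv2.imwrite("object_stimulus_lowpass.png", blurred)
```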
Model: Digital Document
Publisher: Florida Atlantic University
Description:
In the past few years, violence detection has become an increasingly relevant topic in computer vision, with many solutions proposed by researchers. This
thesis proposes a solution called Criminal Aggression Recognition Engine (CARE),
an OpenCV-based Java implementation of a violence detection system that can be
trained with video datasets to classify action in a live feed as non-violent or violent.
The algorithm extends existing work on fast fight detection by implementing violence detection on live video in addition to prerecorded video. The results for violence
detection in prerecorded videos are comparable to other popular detection systems
and the results for live video are also very encouraging, making the work proposed in
this thesis a solid foundation for improved live violence detection systems.
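CARE itself is an OpenCV-based Java system trained on video datasets. The sketch below, written in Python only for brevity, illustrates the general shape of a live-feed classification loop (grab frames from a camera, compute a feature, apply a trained classifier); the crude motion-magnitude feature and the scikit-learn-style classifier interface are assumptions, not CARE's algorithm.

```python
# Minimal sketch of a live-feed violence-classification loop. CARE is an
# OpenCV-based Java implementation; this Python sketch only illustrates the
# pipeline shape (capture frames, compute a feature, classify). The motion
# feature and the `classifier` interface are placeholders, not CARE's method.
import cv2
import numpy as np

def frame_feature(prev_gray: np.ndarray, gray: np.ndarray) -> np.ndarray:
    """A crude per-frame feature: mean and max absolute frame difference."""
    diff = cv2.absdiff(prev_gray, gray)
    return np.array([diff.mean(), diff.max()], dtype=np.float32)

def run_live(classifier, camera_index: int = 0) -> None:
    """Classify each frame of a live feed as 'violent' or 'non-violent'."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        feature = frame_feature(prev_gray, gray).reshape(1, -1)
        label = classifier.predict(feature)[0]   # assumed scikit-learn-style API
        print("violent" if label == 1 else "non-violent")
        prev_gray = gray
    cap.release()
```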
Model: Digital Document
Publisher: Florida Atlantic University
Description:
A self-adaptive software system is developed to predict the stock market. Its Stock
Prediction Engine functions autonomously when its skill-set suffices to achieve its goal,
and it includes human-in-the-loop when it recognizes conditions benefiting from more
complex, expert human intervention. Key to the system is a module that decides on human participation. It works by monitoring three mental states unobtrusively and in real time with electroencephalography (EEG). The mental states are drawn from the
Opportunity-Willingness-Capability (OWC) model. This research demonstrates that the
three mental states are predictive of whether the Human Computer Interaction System
functions better autonomously (human with low scores on opportunity, willingness, and/or capability) or with the human-in-the-loop, with willingness carrying the
largest predictive power. This transdisciplinary software engineering research
exemplifies the next step of self-adaptive systems in which human and computer benefit from optimized autonomous and cooperative interactions, and in which neural inputs
allow for unobtrusive pre-interactions.
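The decision module described above gates human participation on three EEG-derived mental states from the OWC model. As a hedged illustration only, the sketch below scores opportunity, willingness, and capability and chooses between autonomous operation and human-in-the-loop; the thresholds and the extra weight on willingness are assumptions, motivated by the abstract's statement that willingness carries the largest predictive power.

```python
# Minimal sketch of an Opportunity-Willingness-Capability (OWC) gating rule for
# human-in-the-loop participation. In the actual system the scores would come
# from real-time EEG features; here they are plain numbers in [0, 1]. The
# threshold and the larger weight on willingness are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OWCState:
    opportunity: float   # 0..1, EEG-derived in the real system
    willingness: float   # 0..1, reported as the strongest predictor
    capability: float    # 0..1

def involve_human(state: OWCState, threshold: float = 0.5) -> bool:
    """Return True to bring the human into the loop, False to stay autonomous."""
    # Weighted score; willingness is weighted highest per the abstract's finding.
    score = (0.25 * state.opportunity
             + 0.50 * state.willingness
             + 0.25 * state.capability)
    return score >= threshold

if __name__ == "__main__":
    print(involve_human(OWCState(opportunity=0.3, willingness=0.8, capability=0.7)))  # True
    print(involve_human(OWCState(opportunity=0.4, willingness=0.2, capability=0.9)))  # False
```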
Model: Digital Document
Publisher: Florida Atlantic University
Description:
Scene understanding attempts to produce a textual description of visible and
latent concepts in an image to describe the real meaning of the scene. Concepts are
either objects, events, or relations depicted in an image. To recognize concepts, the decisions of the object detection algorithm must be further enhanced from visual similarity to semantic compatibility. Semantically relevant concepts convey the
most consistent meaning of the scene.
Object detectors analyze visual properties (e.g., pixel intensities, texture, color
gradient) of sub-regions of an image to identify objects. The initially assigned object names must be further examined to ensure they are compatible with each other and with the scene. By enforcing inter-object dependencies (e.g., co-occurrence, spatial, and semantic priors) and object-to-scene constraints as background
information, a concept classifier predicts the most semantically consistent set of
names for discovered objects. The additional background information that describes
concepts is called context.
In this dissertation, a framework for building context-based concept detection is presented that uses a combination of multiple contextual relationships to refine the results of underlying feature-based object detectors and produce the most semantically compatible concepts.
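As a hedged sketch of what refining detector output with context can look like, and not the dissertation's actual model, the snippet below re-scores candidate labels for a detected object by blending the detector's confidence with a co-occurrence prior over the other labels already recognized in the scene; the labels, scores, and co-occurrence table are invented for illustration.

```python
# Minimal sketch of context-based refinement: combine a detector's confidence
# for each candidate label with a co-occurrence prior given the other objects
# recognized in the scene. The labels, scores, and co-occurrence table are
# invented; the dissertation's framework uses graphical models (LDA / CRF
# extensions), not this simple re-weighting.
from collections import defaultdict

# P(label | context_label): hypothetical co-occurrence priors.
COOCCURRENCE = defaultdict(lambda: 0.1, {
    ("mouse", "keyboard"): 0.8,
    ("mouse", "cat"): 0.2,
    ("remote", "keyboard"): 0.3,
})

def rescore(candidates: dict, scene_labels: list, alpha: float = 0.5) -> dict:
    """Blend detector confidence with average co-occurrence support from the scene."""
    refined = {}
    for label, confidence in candidates.items():
        if scene_labels:
            support = sum(COOCCURRENCE[(label, ctx)] for ctx in scene_labels) / len(scene_labels)
        else:
            support = 1.0
        refined[label] = alpha * confidence + (1.0 - alpha) * support
    return refined

if __name__ == "__main__":
    # The detector is unsure whether a small object is a computer mouse or a TV remote.
    candidates = {"mouse": 0.55, "remote": 0.45}
    print(rescore(candidates, scene_labels=["keyboard"]))
    # With a keyboard in the scene, "mouse" is favored: 0.675 vs 0.375.
```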
In addition to their limited ability to capture semantic dependencies, object detectors suffer from the high dimensionality of the feature space, which impairs them.
Variations in the image (i.e., quality, pose, articulation, illumination, and occlusion)
can also result in low-quality visual features that impact the accuracy of detected
concepts.
The object detectors used to build the context-based framework experiments in this study are based on state-of-the-art generative and discriminative graphical models. The relationships between model variables can be easily described, and the dependencies precisely characterized, using these graphical representations. The generative context-based implementations are extensions of Latent Dirichlet Allocation, a leading topic modeling approach that is very effective in reducing the dimensionality of the data. The discriminative context-based approach extends Conditional Random Fields, which allow efficient and precise construction of the model by specifying and including only the cases that are related to and influence it.
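The generative side of the framework builds on LDA for dimensionality reduction. As a concrete but generic illustration of that idea, unrelated to the dissertation's own extensions, the sketch below reduces bag-of-visual-words count vectors to a low-dimensional topic representation with scikit-learn's LDA; the vocabulary size and counts are synthetic placeholders.

```python
# Minimal sketch of LDA as dimensionality reduction: bag-of-visual-words count
# vectors (one row per image region) are mapped to a small number of topic
# proportions. The synthetic counts and 200-word vocabulary are placeholders;
# the dissertation extends LDA itself rather than using it off the shelf.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
counts = rng.poisson(lam=1.0, size=(50, 200))   # 50 regions x 200 visual words

lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_mix = lda.fit_transform(counts)           # shape (50, 5): reduced representation

print(counts.shape, "->", topic_mix.shape)      # (50, 200) -> (50, 5)
print(topic_mix[0].round(3))                    # topic proportions for the first region
```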
The dataset used for training and evaluation is MIT SUN397. The results of the experiments show an overall 15% increase in annotation accuracy and a 31% improvement in the semantic saliency of the annotated concepts.
Model: Digital Document
Publisher: Florida Atlantic University
Description:
Cardiac auscultation, an important part of the physical examination, is difficult for
many primary care providers. As a result, diagnoses are missed or auscultatory signs
misinterpreted. A reliable, automated means of interpreting cardiac auscultation should
be of benefit both to the primary care provider and to patients. This paper explores a
novel approach to this problem and develops an algorithm that can be expanded, with the necessary electronics and programming, into such a device. The algorithm is explained and its shortcomings exposed. The potential for further development is also discussed.