LaCombe, Daniel C. Jr.

Relationships
Member of: Graduate College
Person Preferred Name
LaCombe, Daniel C. Jr.
Model
Digital Document
Publisher
Florida Atlantic University
Description
A hypothesis for the self-organization of receptive fields throughout the hierarchy
of biological vision is empirically tested using simulations of deep artificial neural
networks. Findings from many fields on the topographic organization of receptive fields
throughout the visual hierarchy remain disconnected. Although extensive simulation
research has modeled topographic organization in early visual areas, little
to no research has investigated such organization in higher visual areas. We propose
that parsimonious structured-sparsity principles, which permit the learning of topographic
receptive fields in simulated visual areas, are sufficient for the emergence of
a semantic topology in the object-level representations of a deep neural network. These
findings suggest wide-reaching implications for the functional organization of the biological
visual system, and we conjecture that such organization observed in nature could
serve as the foundation for unsupervised learning of taxonomic and semantic relations
between entities in the world.
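
To make the proposed mechanism concrete, below is a minimal sketch of a structured-sparsity
penalty of the kind the abstract describes, assuming units arranged on a 2D grid with sparsity
imposed on locally pooled activations (in the spirit of topographic ICA-style models). The
function name, grid size, and pooling width are illustrative assumptions, not the
dissertation's actual code.

    import numpy as np

    def topographic_sparsity(activations, grid=(16, 16), pool=3, eps=1e-8):
        """L1-style penalty on local neighborhood energies of a unit grid."""
        a = activations.reshape(grid)     # map units onto a 2D sheet
        energy = a ** 2
        # Sum squared activations over each pool x pool neighborhood
        # (wrap-around padding keeps the sheet topology uniform).
        padded = np.pad(energy, pool // 2, mode="wrap")
        pooled = sum(
            padded[i:i + grid[0], j:j + grid[1]]
            for i in range(pool) for j in range(pool)
        )
        # Square root of pooled energy: units in a neighborhood share a
        # sparsity budget, encouraging nearby units to learn similar
        # (topographically organized) receptive fields.
        return np.sqrt(pooled + eps).sum()

    rng = np.random.default_rng(0)
    penalty = topographic_sparsity(rng.standard_normal(256))

Adding a term like this to a layer's training loss penalizes dispersed activity while tolerating
spatially clustered activity, which is one simple way a sparsity principle can yield topography.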
Model
Digital Document
Publisher
Florida Atlantic University
Description
In recent years, there has been a surge of interest in the possibility of using machine-learning
techniques to decode the generating properties of eye-movement data. Here we explore a relatively new
approach to eye-movement quantification, Recurrence Quantification Analysis (RQA), which allows
analysis of spatio-temporal fixation patterns, and assess its diagnostic power with respect to task
decoding. Fifty participants completed both aesthetic-judgment and visual-search tasks over natural
images of indoor scenes. Six different sets of features were extracted from the eye movement data,
including aggregate, fixation-map, and RQA measures. These feature vectors were then used to train
six separate support vector machines using an n-fold cross-validation procedure in order to classify a
scanpath as being generated under either an aesthetic-judgment or visual-search task. Analyses
indicated that all classifiers decoded task significantly better than chance. Pairwise comparisons
revealed that all RQA feature sets afforded significantly greater decoding accuracy than the aggregate
features. The superior performance of RQA features relative to the others may reflect their being
relatively invariant to changes in observer or stimulus: although RQA features significantly decoded
observer- and stimulus-identity, analyses indicated that the spatial distribution of fixations was most
informative about stimulus-identity, whereas aggregate measures were most informative about
observer-identity. Therefore, changes in RQA values could be more confidently attributed to changes in
task, rather than observer or stimulus, relative to the other feature sets. The findings of this research
have significant implications for the application of RQA in studying eye-movement dynamics in
top-down attention.
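
As a concrete illustration of the decoding pipeline the abstract describes, below is a minimal
sketch assuming fixations are (x, y) coordinates, two fixations recur when they fall within a
fixed spatial radius, and a support vector machine is scored with n-fold cross-validation. The
radius, the simplified recurrence measures, and the synthetic data are illustrative assumptions,
not the study's actual features or results.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def rqa_features(fixations, radius=64.0):
        """Recurrence rate, determinism, and laminarity of a scanpath."""
        d = np.linalg.norm(fixations[:, None] - fixations[None, :], axis=-1)
        rec = (d < radius) & ~np.eye(len(fixations), dtype=bool)
        n_rec = rec.sum()
        # Determinism: fraction of recurrent points on diagonal lines
        # (repeated fixation sequences); laminarity uses vertical lines
        # (refixation clusters). Minimum line length of 2 for brevity.
        diag = sum((rec[i, j] and rec[i + 1, j + 1])
                   for i in range(len(rec) - 1) for j in range(len(rec) - 1))
        vert = sum((rec[i, j] and rec[i + 1, j])
                   for i in range(len(rec) - 1) for j in range(len(rec)))
        rr = n_rec / max(rec.size - len(rec), 1)
        return [rr, diag / max(n_rec, 1), vert / max(n_rec, 1)]

    # Synthetic stand-in data: one scanpath per trial, with a task label
    # (0 = aesthetic judgment, 1 = visual search).
    rng = np.random.default_rng(0)
    X = np.array([rqa_features(rng.uniform(0, 1024, size=(40, 2)))
                  for _ in range(100)])
    y = rng.integers(0, 2, size=100)

    # n-fold cross-validated SVM decoding, as in the study's analysis.
    scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
    print("decoding accuracy:", scores.mean())

With real scanpaths in place of the synthetic data, cross-validated accuracy above chance would
correspond to the task-decoding result the abstract reports.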