Hahn, William

Person Preferred Name
Hahn, William
Model
Digital Document
Publisher
Florida Atlantic University
Description
Working memory (WM) enables the flexible representation of information over short intervals. It is established that WM performance can be enhanced by a retrospective cue presented during storage, yet the neural mechanisms responsible for this benefit are unclear. Here, we tested several explanations for retrospective cue benefits by quantifying changes in spatial WM representations reconstructed from alpha-band (8-12 Hz) EEG activity recorded from human participants before and after the presentation of a retrospective cue. This allowed us to track cue-related changes in WM representations with high temporal resolution. Our findings suggest that retrospective cues engage several different mechanisms, including the recovery of information whose representation had previously dropped to baseline once that item is cued as relevant, and the protection of the cued item from further temporal decay, both of which mitigate information loss during WM storage. Our EEG findings suggest that participants can supplement active memory traces with information from other memory stores. We next sought to better understand these additional store(s) by asking whether they are subject to the same temporal degradation seen in active memory representations during storage. We observed a significant increase in the quality of location representations following a retrocue, but the magnitude of this benefit was linearly and inversely related to the timing of the retrocue, such that later cues yielded smaller increases.
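The abstract does not name the reconstruction method, but a common way to recover spatial WM representations from alpha-band power is an inverted encoding model (IEM). The sketch below illustrates that approach under that assumption; the variable names, array shapes, and basis set are illustrative only, not the analysis code used in the work.

```python
# Minimal sketch: reconstruct spatial channel response profiles from alpha-band
# EEG power with an inverted encoding model (an assumed method; illustrative only).
import numpy as np

def make_basis(n_channels=8):
    """Half-cosine spatial channel basis raised to the 7th power (illustrative)."""
    centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    locs = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    # Response of each channel to each possible stimulus location bin.
    return np.maximum(0, np.cos((locs[:, None] - centers[None, :]) / 2)) ** 7

def iem_reconstruct(train_power, train_locs, test_power, n_channels=8):
    """Train encoding weights on one set of trials, invert them on held-out trials.

    train_power: (n_train_trials, n_electrodes) alpha-band power
    train_locs:  (n_train_trials,) integer location bins
    test_power:  (n_test_trials, n_electrodes)
    Returns estimated channel response profiles for the test trials.
    """
    basis = make_basis(n_channels)
    C_train = basis[train_locs]                       # predicted channel responses
    # Estimate electrode weights W such that power ~= C_train @ W
    W, *_ = np.linalg.lstsq(C_train, train_power, rcond=None)
    # Invert the model on test data to estimate channel responses
    C_test, *_ = np.linalg.lstsq(W.T, test_power.T, rcond=None)
    return C_test.T                                   # (n_test_trials, n_channels)
```

Tracking how sharply these reconstructed profiles peak at the remembered location, time point by time point around the retrocue, is one way to quantify the cue-related changes the abstract describes.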
Model
Digital Document
Publisher
Florida Atlantic University
Description
Although state-of-the-art Convolutional Neural Networks (CNNs) are often viewed as a model of biological object recognition, they lack many computational and architectural motifs that are postulated to contribute to robust perception in biological neural systems. For example, modern CNNs lack lateral connections, which greatly outnumber feed-forward excitatory connections in primary sensory cortical areas and mediate feature-specific competition between neighboring neurons to form robust, sparse representations of sensory stimuli for downstream tasks. In this thesis, I hypothesize that CNN layers equipped with lateral competition better approximate the response characteristics and dynamics of neurons in the mammalian primary visual cortex, leading to increased robustness under noise and/or adversarial attacks relative to current robust CNN layers. To test this hypothesis, I develop a new class of CNNs called LCANets, which simulate recurrent, feature-specific lateral competition between neighboring neurons via a sparse coding model termed the Locally Competitive Algorithm (LCA). I first perform an analysis of the response properties of LCA and show that sparse representations formed by lateral competition more accurately mirror response characteristics of primary visual cortical populations and are more useful for downstream tasks like object recognition than previous sparse CNNs, which approximate competition with winner-take-all mechanisms implemented via thresholding.
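As a rough illustration of the lateral competition described here, the following is a minimal sketch of LCA inference for a single input vector. The dictionary, step size, threshold, and number of iterations are illustrative assumptions, not the LCANet implementation.

```python
# Minimal sketch of the Locally Competitive Algorithm (LCA) for sparse coding.
# Neurons compete through feature-similarity inhibition rather than winner-take-all.
import numpy as np

def lca_infer(x, D, lam=0.1, step=0.01, n_steps=200):
    """Infer sparse coefficients a for input x under dictionary D via LCA dynamics.

    x: (n_pixels,) input vector
    D: (n_pixels, n_features) dictionary with unit-norm columns
    Returns sparse activations a of shape (n_features,).
    """
    b = D.T @ x                        # feed-forward drive
    G = D.T @ D - np.eye(D.shape[1])   # lateral inhibition (feature similarity)
    u = np.zeros(D.shape[1])           # membrane potentials
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += step * (b - u - G @ a)    # Euler step of the competitive dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```

The key contrast with thresholding-based sparse CNNs is the `G @ a` term: active features suppress overlapping features during inference instead of each unit being thresholded independently.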
Model
Digital Document
Publisher
Florida Atlantic University
Description
One basic goal of artificial learning systems is the ability to continually learn throughout the system's lifetime. Transitioning between tasks and redeploying prior knowledge is thus a desired feature of artificial learning. However, in deep-learning approaches, the problem of catastrophic forgetting of prior knowledge persists. As a field, we want to solve the catastrophic forgetting problem without requiring exponential computation or time, while demonstrating real-world relevance. This work proposes a novel model that uses an evolutionary algorithm, analogous to a meta-learning objective, fitted with resource-constraint metrics. Four reinforcement learning environments that share the concept of depth are considered, although the collection of environments is multi-modal. The system shows preservation of some knowledge during sequential task learning and protection against catastrophic forgetting in deep neural networks.
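To make the approach concrete, here is a minimal sketch of an evolutionary loop with a resource-constraint penalty applied across sequentially presented tasks. The fitness function, the penalty on non-zero parameters, and the `eval_fn` hook are assumptions for illustration and may differ from the thesis' actual objective.

```python
# Minimal sketch: evolve one parameter vector across tasks presented in sequence,
# penalizing resource use (here, the number of non-zero parameters) in the fitness.
import numpy as np

def evolve_sequential(tasks, eval_fn, n_params, pop_size=64, generations=50,
                      sigma=0.1, resource_weight=0.01):
    """tasks:   list of task identifiers presented one after another
    eval_fn: eval_fn(params, task) -> episodic return (assumed to exist)
    Returns the final parameter vector after all tasks.
    """
    rng = np.random.default_rng(0)
    parent = np.zeros(n_params)
    for task in tasks:
        for _ in range(generations):
            # Mutate the current parent to form a population of candidates.
            offspring = parent + sigma * rng.standard_normal((pop_size, n_params))
            # Fitness = task return minus a simple resource-constraint penalty.
            fitness = np.array([
                eval_fn(p, task)
                - resource_weight * np.count_nonzero(np.abs(p) > 1e-3)
                for p in offspring
            ])
            parent = offspring[np.argmax(fitness)]   # simple (1, lambda) selection
    return parent
```

Because the same parameter vector is carried from one task to the next, any retention measured on earlier tasks after later training reflects how well the resource-constrained search preserved prior knowledge.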
Model
Digital Document
Publisher
Florida Atlantic University
Description
Machine learning has been applied to bio-imaging in recent years; however, because the field is relatively new and still evolving, some researchers who wish to use machine learning tools are limited by a lack of programming knowledge. In electron microscopy (EM), immunogold labeling is commonly used to identify target proteins; however, manual annotation of the gold particles in the images is a time-consuming and laborious process. Conventional image processing tools can provide semi-automated annotation, but they require users to make manual adjustments at every step of the analysis. To create a new high-throughput image analysis tool for immuno-EM, I developed a deep learning pipeline designed to deliver completely automated annotation of immunogold particles in EM images. The program was made accessible to users without prior programming experience and was also expanded for use on different types of immuno-EM images.
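As an illustration of the kind of post-processing step such a pipeline might end with, the sketch below converts a trained network's per-pixel probability map for an EM image into counted gold-particle detections. The threshold, size filter, and function names are assumptions for illustration, not the pipeline's actual settings.

```python
# Minimal sketch: turn a per-pixel probability map into counted immunogold
# particle detections (threshold, connected components, size filter).
import numpy as np
from scipy import ndimage

def count_gold_particles(prob_map, threshold=0.5, min_area=5):
    """prob_map: (H, W) per-pixel particle probabilities from a trained network.
    Returns the particle count and the (row, col) centroid of each detection.
    """
    mask = prob_map > threshold                 # binarize the network output
    labels, n = ndimage.label(mask)             # connected components = candidates
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]  # drop tiny specks
    centroids = ndimage.center_of_mass(mask, labels, keep) if keep else []
    return len(keep), centroids
```

Wrapping a step like this behind a single function call, together with the trained detection model, is what allows users without programming experience to run the annotation end to end.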