Visual perception

Model
Digital Document
Publisher
Florida Atlantic University
Description
It is well established that anticipation of the arrival of an expected stimulus is accompanied by rich ongoing oscillatory neurodynamics that span and link large areas of cortex. An intriguing possibility is that these dynamic interactions convey knowledge embodied by large-scale neurocognitive networks from higher-level regions of multimodal cortex to lower-level primary sensory areas. In the current study, using autoregressive spectral analysis, we established that during the anticipatory phase of a visual discrimination task there are rich patterns of coherent interaction between various levels of the ventral visual hierarchy across the 8-90 Hz frequency range. Using spectral Granger causality, we determined that a subset of these interactions carries beta-frequency (14-30 Hz) top-down influences from the higher-level visual regions V4 and TEO to primary visual cortex. We investigated the functional significance of these top-down interactions by correlating the magnitude of the anticipatory signals with the amplitude of the visual evoked potential elicited by stimulus processing. We found that in one third of the extrastriate-striate pairs, tested in three monkeys, the amplitude of the visual evoked response was well predicted by the magnitude of pre-stimulus coherent top-down anticipatory influences. To investigate the dynamics of the coherent and top-down Granger causal interactions, we analyzed how coherence and top-down Granger causality varied with stimulus onset asynchrony. This analysis revealed that in many cases the magnitudes of the coherent interactions and top-down directional influences scaled with the length of time that had elapsed before stimulus onset.
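For readers unfamiliar with the analysis, the sketch below shows how spectral (Geweke) Granger causality can be computed for a single bivariate site pair from a fitted autoregressive model. It is a minimal illustration, not the study's code, and assumes two aligned local field potential traces; the names spectral_granger, v1_lfp, v4_lfp and the sampling rate are hypothetical.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def spectral_granger(target, source, fs, order=10, nfreq=256):
    """Geweke's frequency-domain Granger causality, source -> target.

    target, source: aligned 1-D arrays (e.g., LFP traces from two sites).
    Returns (frequencies in Hz, causality spectrum)."""
    data = np.column_stack([target, source])   # column 0 = target, column 1 = source
    res = VAR(data).fit(order)                 # bivariate autoregressive model
    A = res.coefs                              # (order, 2, 2) coefficient matrices
    sigma = res.sigma_u                        # residual (noise) covariance
    freqs = np.linspace(0.0, fs / 2.0, nfreq)
    gc = np.empty(nfreq)
    for i, f in enumerate(freqs):
        # Transfer function H(f) = (I - sum_k A_k exp(-2*pi*i*f*k/fs))^(-1)
        Af = np.eye(2, dtype=complex)
        for k in range(A.shape[0]):
            Af -= A[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        H = np.linalg.inv(Af)
        S = H @ sigma @ H.conj().T             # spectral density matrix
        Sxx = S[0, 0].real
        # Source noise variance conditioned on the target noise
        sig2_cond = sigma[1, 1] - sigma[0, 1] ** 2 / sigma[0, 0]
        gc[i] = np.log(Sxx / (Sxx - sig2_cond * np.abs(H[0, 1]) ** 2))
    return freqs, gc

# Example: average top-down causality in the beta band (14-30 Hz)
# freqs, gc = spectral_granger(v1_lfp, v4_lfp, fs=200.0)
# beta_gc = gc[(freqs >= 14) & (freqs <= 30)].mean()
```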
Model
Digital Document
Publisher
Florida Atlantic University
Description
It has been argued that the perception of apparent motion is based on the detection of counterchange (oppositely signed changes in luminance contrast at pairs of spatial locations) rather than motion energy (spatiotemporal changes in luminance). A difficulty in testing this distinction is that both counterchange and motion energy are present in most motion stimuli. Three experiments used illusory-contour and luminance-based stimuli to segregate (Experiments 1 and 2) and combine (Experiment 3) counterchange and motion-energy information. Motion specified by counterchange was perceived for translating illusory squares over a wide range of frame durations, and preferentially for short motion paths. Motion specified by motion energy was diminished by relatively long frame durations but was not affected by the length of the motion path. Results for the combined stimulus were consistent with counterchange as the basis for apparent motion perception, despite the presence of motion energy.
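As a toy illustration of the counterchange idea defined above (not the stimulus or model code from these experiments), the sketch below signals motion from location A to location B only when contrast decreases at A while it increases at B; the function name and example values are assumptions.

```python
import numpy as np

def counterchange(contrast_a, contrast_b):
    """Per-transition counterchange signal for motion from location A to B.

    contrast_a, contrast_b: 1-D arrays of luminance contrast over successive
    frames. The signal is nonzero only when the change at A is negative and
    the change at B is positive (oppositely signed changes)."""
    da = np.diff(contrast_a)   # frame-to-frame change at location A
    db = np.diff(contrast_b)   # frame-to-frame change at location B
    return np.where((da < 0) & (db > 0), -da * db, 0.0)

# Example: contrast drops at A while it rises at B on the second frame transition
a = np.array([1.0, 1.0, 0.2])
b = np.array([0.2, 0.2, 1.0])
print(counterchange(a, b))   # -> [0.   0.64]
```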
Model
Digital Document
Publisher
Florida Atlantic University
Description
Perceptual video coding has been a promising area in recent years. Increases in compression ratio have been reported for foveated video coding techniques in which the region of interest (ROI) is selected by a computational attention model. However, most approaches to perceptual video coding use only visual features, ignoring the auditory component. Recent physiological studies have demonstrated that auditory stimuli affect visual perception. In this work, we validated some of those physiological findings using complex video sequences. We designed and developed a web-based tool for video quality measurement. After conducting different experiments, we observed that, in general, the reaction time to detect video artifacts was higher when the video was presented with audio. We also observed that emotional information in the audio guides human attention to particular ROIs, and that sound frequency changes the perception of spatial frequency in still images.
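As a toy illustration of the foveated-coding idea mentioned above (not the codec or attention model used in this work), the sketch below assigns a coarser quantization parameter to macroblocks farther from an attention-selected ROI center; the function name, parameter values, and ROI location are all assumptions.

```python
import numpy as np

def qp_map(height_mb, width_mb, roi_center, qp_min=22, qp_max=40, sigma=6.0):
    """Per-macroblock quantization-parameter map for foveated coding.

    roi_center: (row, col) of the ROI in macroblock units, assumed to come
    from a computational attention model."""
    rows, cols = np.indices((height_mb, width_mb))
    dist = np.hypot(rows - roi_center[0], cols - roi_center[1])
    # Quality falls off smoothly with distance (eccentricity) from the ROI
    falloff = 1.0 - np.exp(-(dist ** 2) / (2 * sigma ** 2))
    return np.round(qp_min + (qp_max - qp_min) * falloff).astype(int)

# Example: a 45 x 80 macroblock frame with the ROI near the upper left
print(qp_map(45, 80, roi_center=(10, 20))[:3, :5])
```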
Model
Digital Document
Publisher
Florida Atlantic University
Description
Across three experiments, we assessed how location and color information contributes to the identification of an object whose image has been degraded, making its identity ambiguous. In Experiment 1, some of the target objects had fixed locations within the scene. We found that subjects used this location information during search and later to identify the blurred target objects. In Experiment 2, we tested whether location and color information can be combined to identify degraded objects; the results were inconclusive. In Experiment 3, both the location and the color of each object were variable but statistically predictive of the object's identity. We found that subjects used both sources of information, color and location, equally when identifying the blurred image of the object. Overall, these findings suggest that location information may be as decisive as intrinsic feature information for identifying objects when the objects' intrinsic features are degraded.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The literature on biological motion suggests that people can accurately identify others and recognize their gender from movement cues in the absence of typical identifiers. This study compared identification and gender judgments of traditional point-light stimuli with those of skeleton stimuli. Controlling for previous experience with and execution of the actions, the frequency and familiarity of the movements were also considered. While watching action clips, participants learned to identify four male and four female actors. Participants then identified the corresponding point-light or skeleton displays. Although the results indicate above-chance performance, no difference was observed between stimulus conditions. Analyses did show better gender recognition for common as well as previously viewed actions. This suggests that visual experience influences the extraction and use of biological motion information. Thus, insufficient practice in relying on movement cues for identification could explain the significant yet poor performance in biological motion point-light research.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Little is known about the visual capabilities of marine turtles. The ability to discriminate between colors has not been adequately demonstrated on the basis of behavioral criteria. I used a three-part methodology to determine whether color discrimination occurred. First, I exposed naïve, light-adapted hatchlings to either blue, green, or yellow light. I manipulated light intensity to obtain a behavioral phototaxis threshold for each color, which provided a range of intensities the turtles were known to detect. Second, I used food to train older turtles to swim toward one light color, and then to discriminate between the rewarded light and another light color; lights were presented at intensities equally above the phototaxis threshold. Lastly, I varied light intensity so that brightness could not be used as a discrimination cue. Six turtles completed this task and showed a clear ability to select a rewarded over a non-rewarded color, regardless of stimulus intensity. Turtles most rapidly learned to associate shorter wavelengths (blue) with food. My results clearly show that loggerheads have color vision. Further investigation is required to determine how marine turtles exploit this capability.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Background: Light adaptation is a multifaceted process in the retina that helps adjust the visual system to changing illumination levels. Many studies have focused on the photochemical mechanisms of light adaptation, but neural network adaptation mechanisms at the photoreceptor synapse are largely unknown. We find that large, spontaneous excitatory amino acid transporter (EAAT) activity in cone terminals may contribute to cone synaptic adaptation, specifically with respect to how these signals change under different lighting conditions. EAATs in neurons quickly transport glutamate from the synaptic cleft and also elicit large, thermodynamically uncoupled Cl- currents when activated. We recorded synaptic EAAT currents from cones to study glutamate-uptake events elicited by glutamate release from the local cone and from adjacent photoreceptors. We find that cones are synaptically connected via EAATs in the dark; this synaptic connection is diminished in light-adapted cones. Methods: Whole-cell patch-clamp recordings were made from dark-adapted and transiently light-adapted tiger salamander cones. Endogenous EAAT currents were recorded in cones with a short depolarization to -10 mV for 2 ms, while spontaneous transporter currents from network cones were observed with the local cone held constantly at -70 mV. DHKA, a specific inhibitor of the EAAT2 subtype, was used to identify EAAT2 currents in the cone terminals, while TBOA identified other EAAT subtypes. GABAergic and glycinergic network inputs were always blocked with picrotoxin and strychnine. Results: Spontaneous EAAT currents were observed in cones held constantly at -70 mV in the dark, indicating that the cones received glutamate inputs from adjacent photoreceptors. These spontaneous EAAT currents disappeared in the presence of strong light, possibly because the light suppressed glutamate release from the adjacent photoreceptors. The spontaneous EAAT currents were blocked by TBOA but not by DHKA, the EAAT2-specific inhibitor, suggesting that an EAAT subtype other than EAAT2 mediates these network glutamate inputs.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Contemporary understanding of human visual spatial attention rests on the hypothesis of top-down control signals sent from cortical regions serving higher-level functions to sensory regions. Evidence has been gathered through functional magnetic resonance imaging (fMRI) experiments. The frontal eye field (FEF) and intraparietal sulcus (IPS) are the candidate regions proposed to form the frontoparietal attention network for top-down control. In this work we examined the patterns of influence between the frontoparietal network and visual occipital cortex (VOC) using a statistical measure, Granger causality (GC), applied to fMRI data acquired from subjects who participated in a covert attention task. We found a directional asymmetry in GC between FEF/IPS and VOC, and further identified retinotopically specific control patterns in top-down GC. This work may lead to a deeper understanding of goal-directed attention, as well as to the application of GC to analyzing higher-level cognitive functions in the healthy functioning human brain.
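A minimal sketch of the kind of pairwise, time-domain Granger causality test that can be run on two fMRI time series appears below; it is illustrative only (the study's actual GC analysis may differ), and the names fef_bold and voc_bold are hypothetical seed and target BOLD series.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

def granger_p(source, target, maxlag=2):
    """F-test p-value for the hypothesis that `source` Granger-causes `target`."""
    # Column order matters: the test asks whether column 1 helps predict column 0
    data = np.column_stack([target, source])
    result = grangercausalitytests(data, maxlag=maxlag)
    # ssr-based F test at the largest lag considered
    return result[maxlag][0]['ssr_ftest'][1]

# Directional asymmetry: compare top-down and bottom-up influences
# p_topdown  = granger_p(fef_bold, voc_bold)   # FEF -> VOC
# p_bottomup = granger_p(voc_bold, fef_bold)   # VOC -> FEF
```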
Model
Digital Document
Publisher
Florida Atlantic University
Description
Recent research in visual object recognition has shown that context can facilitate object recognition. This study assessed the effect of self-relevant familiarity with the context on object recognition. Participants performed a task in which they had to recognize degraded objects shown under varying levels of contextual information. The level of degradation at which they could successfully recognize the target object was used as the measure of performance. There were five contextual conditions: (1) no context, (2) context, (3) context and size, (4) context and location, and (5) context, size, and location. Within each contextual condition, we compared the performance of "Expert" participants, who viewed objects in the context of their own house, with that of "Novice" participants, who viewed those particular settings for the first time. Ratings were collected to assess each object's consistency, frequency, position consistency, typicality, and shape distinctiveness. Object size was the only contextual information that did not affect performance. Contextual information significantly reduced the amount of bottom-up visual information needed for object identification for both experts and novices. An interaction (Contextual Information x Level of Familiarity) was observed: expert participants' performance improved significantly more than novice participants' performance with the addition of contextual information. Location information affected the performance of expert participants only when objects that occupied stable positions were considered. Both expert and novice participants performed better with objects rated high in typicality and shape distinctiveness. Object consistency, frequency, and position consistency did not seem to affect expert participants' performance but did affect novice participants' performance.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Number perception, its neural basis, and its relationship to how numerical stimuli are presented have been challenging research topics in cognitive neuroscience for many years. A primary question is whether the perception of the quantity of a visually presented number stimulus is dissociable from its early visual perception. The present study examined the possible influence of visual quality on quantity judgments of numbers. To address this issue, volunteer adult subjects performed a mental number comparison task in which two-digit stimulus numbers (in Arabic numeral format), drawn from the numbers between 31 and 99, were mentally compared to a memorized reference number, 65. Reaction times (RTs) and neurophysiological (i.e., electroencephalographic, EEG) responses were acquired simultaneously during performance of the two-digit number comparison task. In this quantity comparison task, the number stimuli were classified by three levels of numerical distance; that is, numbers were a close, medium, or far distance from the reference number (65). To evaluate the relationship between numerical stimulus quantity and visual quality, the number stimuli were embedded in varying degrees of a typical form of visual noise known as "salt and pepper" noise (e.g., the noise one perceives when viewing a photograph taken with a dusty camera lens). In this manner, visual quality was manipulated across three levels: no noise, medium noise (approximately 60% degraded visual quality relative to the no-noise level), and dense noise (75% degraded visual quality relative to the no-noise level).
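As an illustration of the noise manipulation described above (not the study's stimulus-generation code), the sketch below corrupts a given fraction of pixels in a grayscale digit image with randomly placed black and white dots; the function name and density values are assumptions.

```python
import numpy as np

def salt_and_pepper(image, density, rng=None):
    """Replace a `density` fraction of pixels with black (0) or white (1) at random.

    image: 2-D grayscale array with values in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    corrupt = rng.random(image.shape) < density   # which pixels to corrupt
    salt = rng.random(image.shape) < 0.5          # half white ("salt"), half black ("pepper")
    noisy[corrupt & salt] = 1.0
    noisy[corrupt & ~salt] = 0.0
    return noisy

# Example: medium-noise (~60%) and dense-noise (75%) versions of a stimulus image
# stim_medium = salt_and_pepper(stimulus, 0.60)
# stim_dense  = salt_and_pepper(stimulus, 0.75)
```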