Recognition (Psychology)

Model
Digital Document
Publisher
Florida Atlantic University
Description
This study investigated electroencephalographic differences related to cue direction (central left- or right-directed arrows) in a covert endogenous visual spatial attention task patterned after that of Hopf and Mangun (2000), with the aim of defining the timing of components in relation to cognitive processes within the cue-target interval. Multiple techniques were employed. Event-related potentials (ERPs) were examined using Independent Component Analysis, which revealed a significant N1, 100–200 ms post-cue, that was greater contralateral to the cue. Difference-wave ERPs (left-cue minus right-cue data) revealed a significant early directing attention negativity (EDAN) at 200–400 ms post-cue over the right posterior scalp, which reversed polarity over the left posterior scalp. Temporal spectral evolution (TSE) analysis of the alpha band revealed three stages: (1) high bilateral alpha from pre-cue to 120 ms post-cue, (2) an event-related desynchronization (ERD) from approximately 120 to 500 ms post-cue, and (3) an event-related synchronization (ERS) rebound from 500 to 900 ms post-cue, during which alpha amplitude, a measure of activity, was highest contralateral to the ignored hemifield and lower contralateral to the attended hemifield. Combining these components with the scientific literature in this field, it is possible to chart the time course of the cognitive events and their neural correlates.
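For illustration only (not taken from the study itself), the following minimal Python sketch shows how a cue-locked difference wave and an alpha-band amplitude envelope of the kind described above could be computed, assuming cue-locked epochs are already available as NumPy arrays; the array shapes, sampling rate, and variable names are hypothetical.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    # Hypothetical cue-locked epochs: shape (n_trials, n_channels, n_samples)
    fs = 500.0  # assumed sampling rate in Hz
    left_cue_epochs = np.random.randn(120, 64, int(1.2 * fs))   # placeholder data
    right_cue_epochs = np.random.randn(120, 64, int(1.2 * fs))  # placeholder data

    # Difference-wave ERP: average of left-cue trials minus average of right-cue trials
    erp_left = left_cue_epochs.mean(axis=0)
    erp_right = right_cue_epochs.mean(axis=0)
    difference_wave = erp_left - erp_right  # an EDAN-like effect would appear over posterior channels

    # Alpha-band (8-12 Hz) amplitude envelope averaged across trials (TSE-style measure)
    b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
    alpha_filtered = filtfilt(b, a, left_cue_epochs, axis=-1)
    alpha_envelope = np.abs(hilbert(alpha_filtered, axis=-1)).mean(axis=0)
    # ERD/ERS can then be read as decreases/increases in this envelope relative to a pre-cue baseline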
Model
Digital Document
Publisher
Florida Atlantic University
Description
Across three experiments, we assessed how location and color information contribute to the identification of an object whose image has been degraded, making its identity ambiguous. In Experiment 1, some of the target objects had fixed locations within the scene. We found that subjects used this location information during search and later to identify the blurred target objects. In Experiment 2, we tested whether location and color information can be combined to identify degraded objects, and the results were inconclusive. In Experiment 3, both the location and color of each object were variable but statistically predictive of the object's identity. We found that subjects used both sources of information, color and location, equally when identifying the blurred image of the object. Overall, these findings suggest that location information may be as informative as intrinsic feature information for identifying objects when the objects' intrinsic features are degraded.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Two experiments were conducted to investigate the impact of poser and perceiver gender on the Happiness/Anger Superiority effect and the Female Advantage in facial expression recognition. Happy, neutral, and angry facial expressions were presented on male and female faces under Continuous Flash Suppression (CFS). Participants of both genders indicated when the presented faces broke through the suppression. In the second experiment, angry and happy expressions were reduced to 50% intensity. At full intensity, there was no difference in reaction time between female neutral and angry faces, but male faces showed differences in detection time across all expressions. Across experiments, male faces were detected later than female faces for all facial expressions. Happiness was generally detected faster than anger, except on female faces at 50% intensity. No main effect of perceiver gender emerged. It was concluded that happiness is superior to anger in CFS and that poser gender affects facial expression recognition.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Recent research in visual object recognition has shown that context can facilitate object recognition. This study assessed the effect of self-relevant familiarity with the context on object recognition. Participants performed a task in which they had to recognize degraded objects shown under varying levels of contextual information. The level of degradation at which they could successfully recognize the target object was used as the measure of performance. There were five contextual conditions: (1) no context, (2) context, (3) context and size, (4) context and location, and (5) context, size, and location. Within each contextual condition, we compared the performance of "Expert" participants, who viewed objects in the context of their own house, with that of "Novice" participants, who viewed those particular settings for the first time. Ratings were collected to assess each object's consistency, frequency, position consistency, typicality, and shape distinctiveness. Object size was the only contextual information that did not affect performance. Contextual information significantly reduced the amount of bottom-up visual information needed for object identification for both experts and novices. An interaction (Contextual Information x Level of Familiarity) was observed: expert participants' performance improved significantly more than novice participants' performance with the addition of contextual information. Location information affected expert participants' performance only when objects that occupied stable positions were considered. Both expert and novice participants performed better with objects rated high in typicality and shape distinctiveness. Object consistency, frequency, and position consistency did not seem to affect expert participants' performance but did affect novice participants' performance.