Facial expression

Model
Digital Document
Publisher
Florida Atlantic University
Description
The perception and interpretation of faces provides individuals with a wealth of knowledge that enables them to navigate their social environments more successfully. Prior research has hypothesized that the decreased facial expression recognition (FER) abilities observed in autism spectrum disorder (ASD) may be better explained by comorbid alexithymia, a proposal known as the alexithymia hypothesis. The present study sought to further examine the alexithymia hypothesis by collecting data from 59 participants and examining FER performance and eye movement patterns for ASD and neurotypical (NT) individuals while controlling for alexithymia severity. Eye movement-related differences and similarities were examined via eye tracking in conjunction with statistical and machine-learning-based pattern classification analysis. In multiple classification conditions, where the classifier was fed 1,718 scanpath images (at the spatial, spatial-temporal, or spatial-temporal-ordinal level) for high-alexithymic ASD, high-alexithymic NT, low-alexithymic ASD, and low-alexithymic NT groups, group membership could be decoded significantly above chance level. Additionally, in the cross-decoding analysis, where the classifier was trained on 1,718 scanpath images from high- and low-alexithymic ASD individuals and tested on high- and low-alexithymic NT individuals, classification accuracy was significantly above chance level when using spatial images of eye movement patterns. Regarding FER performance, the ASD and NT groups performed similarly overall, but at lower intensities of expression, ASD individuals performed significantly worse than NT individuals. Together, these findings suggest that there may be eye movement-related differences between ASD and NT individuals, which may interact with alexithymia traits.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The perception and interpretation of faces provides individuals with a wealth of knowledge that enables them to navigate their social environments more successfully. The present study examined the temporal dynamics of valence information from emotional facial expressions using electroencephalography (EEG) in conjunction with multi-variate pattern analysis (MVPA). Across multiple classification conditions, it was demonstrated that when decoding a positively- vs. a negatively- vs. a neutrally-valenced expression, above-chance decoding accuracy emerged sooner than when decoding a negatively- vs. a negatively- vs. a neutrally-valenced expression. Additionally, results showed that classification accuracy, as measured by the percentage of correct responses, was higher in the classification condition containing the positively-valenced expression than in the one containing two negatively-valenced expressions. Together, these findings suggest that neural processing of facial expressions may occur in a hierarchical manner, in that categorization between valences (positive vs. negative) precedes categorization within a valence.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The present study aimed to gain a better understanding of the emotion processing abilities of children between the ages of 4 and 8 with ASD by examining their ability to correctly recognize dynamic displays of emotion. Additionally, we examined whether children with ASD showed emotion-specific differences in their ability to accurately identify anger, happiness, sadness, and fear. Participants viewed a continuous display of neutral faces morphing into expressions of emotion. We aimed to measure observed power and asymmetry using EEG data in order to understand the neural activity that underlies the social aspects of ASD. Participants with ASD showed slower processing speed and decreased emotion sensitivity. On tasks that involved the recognition of expressions on the participants’ mothers’ faces, differences were less apparent. These results suggest that children with ASD are capable of recognizing facial displays of emotion after repeated exposure; this should be explored further in future research.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Two experiments were conducted to investigate the impact of poser and perceiver gender on the Happiness/Anger Superiority effect and the Female Advantage in facial expression recognition. Happy, neutral, and angry facial expressions were presented on male and female faces under Continuous Flash Suppression (CFS). Participants of both genders indicated when the presented faces broke through the suppression. In the second experiment, angry and happy expressions were reduced to 50% intensity. At full intensity, there was no difference in the reaction time for female neutral and angry faces, but male faces showed a difference in detection between all expressions. Across experiments, male faces were detected later than female faces for all facial expressions. Happiness was generally detected faster than anger, except when on female faces at 50% intensity. No main effect for perceiver gender emerged. It was concluded that happiness is superior to anger in CFS, and that poser gender affects facial expression recognition.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Many errors in recognition are made because various features of a stimulus are attended to inefficiently. When those features are not bound together, they can be confused with other information. One of the most common types of these errors is the conjunction error, which occurs when mismatched features of memories are combined to form a composite memory. This study tested how likely conjunction errors, along with other recognition errors, were to occur when participants watched videos of people both with and without unusual facial features performing actions, followed by a one-week time lag. It was hypothesized that participants would falsely recognize actresses in the conjunction-item condition more often than in the other conditions. The likelihood of falsely recognizing a new person increased when she was presented with a feature, but the conjunction items overall were most often falsely recognized.
Model
Digital Document
Publisher
Florida Atlantic University
Description
This study aimed to understand the differences in strength or coordination of brain regions involved in processing faces in the presence of aging and/or progressing neuropathology (Alzheimer's disease). To this end, Experiment 1 evaluated age-related differences in basic face processing and the effects of familiarity in face processing. Overall, face processing in younger (22-35 yrs) and older participants (63-83 yrs) recruited a broadly distributed network of brain activity, but the distribution of activity varied depending on the age of the individual. The younger population utilized regions of the occipitotemporal, medial frontal and posterior parietal cortices while the older population recruited a concentrated occipitotemporal network. The younger participants were also sensitive to the type of face presented, as Novel faces were associated with greater mean BOLD activity than either the Famous or Relatives faces. Interestingly, Relatives faces were associated with greater mean BOLD activity in more regions of the brain than found in any other analysis in Exp. 1, spanning the inferior frontal, medial temporal and inferior parietal cortices. In contrast, the older adults were not sensitive to the type of face presented, which could reflect a difference in cognitive strategies used by the older population when presented with this type of face stimuli. Experiment 2 evaluated face processing, familiarity in face processing and also emphasized the interactive roles autobiographical processing and memory recency play in processing familiar faces in mature adults (MA; 45-55 yrs), older adults (OA; 70-92 yrs) and patients suffering from Alzheimer's disease (AD; 70-92 yrs).