Eye tracking

Model
Digital Document
Publisher
Florida Atlantic University
Description
Due to the increasing integration of robots into industrial, service, and educational settings, it is important to understand how and why individuals interact with robots. The current study aimed to explore the extent to which individuals are receptive to nonverbal communication from a robot compared to a human, and the individual differences and stimulus attributes that are related to trust ratings. A combination of eye-tracking and survey measures was used to collect data, and a robot and a human both performed the same gesture to allow for direct comparison of gaze patterns. Individuals utilized the information offered by the two agents equivalently. Survey measures indicated that trust ratings differed significantly between agents, and the perceived likability and intelligence of the agent were the strongest predictors of increased trust.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The perception and interpretation of faces provides individuals with a wealth of knowledge that enables them to navigate their social environments more successfully. Prior research has hypothesized that the decreased facial expression recognition (FER) abilities observed in autism spectrum disorder (ASD) may be better explained by comorbid alexithymia, the alexithymia hypothesis. The present study sought to further examine the alexithymia hypothesis by collecting data from 59 participants and examining FER performance and eye movement patterns for ASD and neurotypical (NT) individuals while controlling for alexithymia severity. Eye movement-related differences and similarities were examined via eye tracking in conjunction with statistical and machine-learning-based pattern classification analysis. In multiple classification conditions, in which the classifier was fed 1,718 scanpath images (at spatial, spatial-temporal, or spatial-temporal-ordinal levels) for high-alexithymic ASD, high-alexithymic NT, low-alexithymic ASD, and low-alexithymic NT individuals, group membership could be decoded significantly above chance level. Additionally, in the cross-decoding analysis, in which the classifier was trained on 1,718 scanpath images from high- and low-alexithymic ASD individuals and tested on high- and low-alexithymic NT individuals, classification accuracy was significantly above chance level when using spatial images of eye movement patterns. Regarding FER performance, the ASD and NT groups performed similarly overall, but at lower intensities of expressions, ASD individuals performed significantly worse than NT individuals. Together, these findings suggest that there may be eye-movement-related differences between ASD and NT individuals, which may interact with alexithymia traits.
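The cross-decoding logic described above (train a classifier on one group's scanpath representations, test it on the other's, and compare accuracy to chance) can be sketched as follows. This is a minimal, hypothetical illustration using a nearest-centroid classifier on toy feature vectors; the abstract does not specify the study's actual classifier, features, or data, so every name, dimension, and value here is invented for illustration only:

```python
import math
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid_predict(x, centroids):
    """Return the label whose centroid is closest (Euclidean distance) to x."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Toy scanpath features: pretend each vector summarizes one spatial
# scanpath image (e.g., fixation density over a few face regions).
random.seed(0)
def make_samples(offset, n=20, dim=4):
    return [[random.gauss(offset, 0.5) for _ in range(dim)] for _ in range(n)]

# "Train on one group, test on the other" (groups and offsets are invented).
train = {"high_alex": make_samples(1.0), "low_alex": make_samples(-1.0)}
test = {"high_alex": make_samples(1.0), "low_alex": make_samples(-1.0)}

centroids = {label: vs for label, vs in
             ((lbl, centroid(vecs)) for lbl, vecs in train.items())}
correct = sum(
    nearest_centroid_predict(x, centroids) == label
    for label, vs in test.items() for x in vs
)
accuracy = correct / sum(len(vs) for vs in test.values())
print(f"cross-decoding accuracy: {accuracy:.2f}")  # above the 0.5 chance level
```

Comparing `accuracy` against the 0.5 chance level (for two balanced classes) mirrors the above-chance comparison reported in the abstract.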
Model
Digital Document
Publisher
Florida Atlantic University
Description
Even during fixation, the eye is rarely still: miniature eye movements continue to occur within fixational periods. These miniature movements are referred to as fixational eye movements, and microsaccades are one of the three types that have been identified. Microsaccades have been attributed to different visual processes and phenomena, such as fixation stability, perceptual fading, and multistable perception. Still, debates surrounding the functional role of microsaccades in vision have ensued, as many of the findings from earlier microsaccade reports contradict one another, and the polarization in the field caused by these debates led many to believe that microsaccades do not hold a necessary or specialized role in vision. To gain a deeper understanding of microsaccades and their relevance to vision, we set out to assess the role of microsaccades in bistable motion perception in a behavioral/eye-tracking study. Observers participated in an eye-tracking experiment in which they completed a motion discrimination task while viewing a bistable apparent motion stimulus. The collected eye-tracking data were then used to train a classification model to predict the directions of illusory motion perceived by observers. We found that small changes in gaze position during fixation, occurring within or outside microsaccadic events, predicted the direction of the motion pattern imposed by the stimulus. Our findings suggest that microsaccades and fixational eye movements are correlated with motion perception and that miniature eye movements occurring during fixation may be relevant to vision.
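The core idea (that small changes in gaze position during fixation carry information about perceived motion direction) can be illustrated with a deliberately simplified sketch. The abstract does not describe the actual classification model or features, so this hypothetical example just classifies by the sign of the net horizontal gaze drift within a trial; all values are invented:

```python
# Hypothetical illustration: predict perceived motion ("left"/"right") from
# the sign of the net horizontal gaze drift within a fixation period.
def predict_direction(gaze_x):
    """gaze_x: horizontal gaze positions (deg) sampled during one trial."""
    drift = gaze_x[-1] - gaze_x[0]  # net positional change over the trial
    return "right" if drift > 0 else "left"

# Toy trials with tiny drifts on the scale of fixational eye movements.
trials = {
    "right": [0.00, 0.02, 0.05, 0.09, 0.12],
    "left":  [0.00, -0.03, -0.04, -0.08, -0.11],
}
for true_direction, xs in trials.items():
    print(true_direction, "->", predict_direction(xs))
```

A real pipeline would of course use a trained classifier over richer gaze features (within and outside microsaccadic events, as the abstract notes), but the sign-of-drift rule shows the minimal form of the gaze-to-percept mapping.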
Model
Digital Document
Publisher
Florida Atlantic University
Description
Robotics has advanced to include highly anthropomorphic (human-like) entities. A novel eye-tracking paradigm was developed to assess infants’ sensitivity to communicative gestures by human and robotic informants. Infants from two age groups (5-9 months, n = 25; 10-15 months, n = 9) viewed a robotic or human informant pointing to locations where events would occur during experimental trials. Trials consisted of three phases: gesture, prediction, and event. Duration of looking (ms) to two areas of interest, the target location and the non-target location, was extracted. A series of paired t-tests revealed that only older infants in the human condition looked significantly longer at the target location during the prediction phase (p = .036). Future research is needed to tease apart which components of the robotic hand infants respond to differentially, and whether a robotic hand can be manipulated to increase infants’ sensitivity to social communication gestures executed by it.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Face perception and recognition abilities develop throughout childhood, and differences in viewing own-race and other-race faces have been found in both children (Hu et al., 2014) and adults (Blais et al., 2008). In addition, implicit biases have been found in children as young as six (Baron & Banaji, 2006) and have been found to influence face recognition (Bernstein, Young, & Hugenberg, 2007). The current study aimed to understand how gaze behaviors, implicit biases, and other-race experience contribute to the other-race effect, and how these factors change over development. Caucasian children’s (5-10 years of age) and young adults’ scanning behaviors were recorded during an old/new recognition task using Asian and Caucasian faces. Participants also completed an Implicit Association Test (IAT) and a race experience questionnaire. Results revealed an own-race bias in both children and adults. Only adults’ IAT scores were significantly different from zero, indicating an implicit bias. Participants made a greater number of eye-to-eye fixations for Caucasian faces than for Asian faces, and eye-to-eye fixations were greater in adults during encoding phases. Additionally, increased nose looking times were observed with age. Central attention to the nose may be indicative of a more holistic viewing strategy implemented by adults and older children. Older children and adults spent longer looking at the mouths of Asian faces during encoding and test, but younger children spent longer looking at own-race mouths during recognition.
Correlations were also observed between scanning patterns, implicit biases, and experience difference scores. Both social and perceptual factors seem to influence looking behaviors for own- and other-race faces, and these behaviors undergo change during childhood.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Eye fixations on the face are normally directed towards either the eyes or the mouth; however, the proportions of gaze to each of these regions depend on context. Previous studies of gaze behavior demonstrate a tendency to stare into a target’s eyes; however, no studies have investigated the differences between when participants believe they are engaging in a live interaction and when they knowingly watch a pre-recorded video, a distinction that may contribute to studies of memory encoding. This study examined differences in fixation behavior when participants falsely believed they were engaging in a real-time interaction over the internet (“Real-time stimulus”) compared to when they knew they were watching a pre-recorded video (“Pre-recorded stimulus”). Results indicated that participants fixated significantly longer on the eyes for the pre-recorded stimulus than for the real-time stimulus, suggesting that previous studies utilizing pre-recorded videos may lack ecological validity.