Eye -- Movements

Model
Digital Document
Publisher
Florida Atlantic University
Description
Eye fixations on the face are normally directed towards either the eyes or the
mouth; however, the proportion of gaze directed to each of these regions depends on
context. Previous studies of gaze behavior demonstrate a tendency to stare into a target’s
eyes, but no studies have investigated the differences between when participants believe
they are engaging in a live interaction and when they knowingly watch a pre-recorded
video, a distinction that may contribute to studies of memory encoding. This study
examined differences in fixation behavior when participants falsely believed they
were engaging in a real-time interaction over the internet (“Real-time stimulus”)
compared to when they knew they were watching a pre-recorded video (“Pre-recorded
stimulus”). Results indicated that participants fixated significantly longer on the
eyes for the pre-recorded stimulus than for the real-time stimulus, suggesting that
previous studies that utilize pre-recorded videos may lack ecological validity.
Model
Digital Document
Publisher
Florida Atlantic University
Description
In this study, we investigated what informational aspects of faces could account
for the ability to match an individual’s face to their voice using only static images. In
each of the first six experiments, we simultaneously presented one voice recording along
with two manipulated images of faces (e.g., top half of the face, bottom half of the face,
etc.): a target face and a distractor face. The participants’ task was to choose which of the
images they thought belonged to the same individual as the voice recording. The voices
remained unmanipulated. In Experiment 7 we used eye tracking to determine
which informational aspects of the models’ faces people fixate while performing
the matching task, as compared to where they fixate when there are no immediate task
demands. We presented a voice recording followed by two static images, a target and a
distractor face. The participants’ task was to choose which of the images they thought
belonged to the same individual as the voice recording, while we tracked their total
fixation duration. In the no-task, passive-viewing condition, we presented a male’s voice
recording followed sequentially by two static images of female models, or vice versa,
counterbalanced across participants. Results revealed performance significantly better
than chance in the matching task when the images presented were the bottom half of
the face, the top half of the face, images inverted upside down, low-pass-filtered
images of the face, and images with the inner face completely blurred out. In
Experiment 7 we found that when participants completed the matching task, the time
spent looking at the outer area of the face increased compared to when the images and
voice recordings were passively viewed; when the images were passively viewed, the
time spent looking at the inner area of the face increased. We concluded that the inner
facial features (i.e., eyes, nose, and mouth) are not necessary informational aspects of
the face for the matching ability, which likely relies on global features such as face
shape and size.