Minar, Nicholas J.

Relationships
Member of: Graduate College
Person Preferred Name
Minar, Nicholas J.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Lewkowicz & Hansen-Tift (2012) found that when 4-month-old infants see and hear a person
talking, they look more at her eyes, whereas 8- and 10-month-old infants look more at her mouth.
This developmental attentional shift to the mouth reflects infants' growing interest in speech.
Attention to the mouth gives infants access to redundant and maximally salient
audiovisual cues, which, in turn, facilitate speech and language acquisition.
We investigated the separate roles of mouth-movement and vocalization cues in the attentional
shift from a talker's eyes to the talker's mouth. In three experiments, we used an eye tracker to
measure the proportion of attention that 4-, 8-, and 10-month-old infants allocate to the eyes and
mouth of a static/silent face, a static/talking face, and a silently talking face. We found that when
infants see a static person, they attend to the eyes. Lewkowicz & Hansen-Tift (2012) found that
when infants see and hear a person talking, 4-month-olds look at the eyes whereas 8- and
10-month-olds look at the mouth. When infants see a silently talking person, only 10-month-olds
look at the mouth. These findings demonstrate that the shift of attention from the eyes to the
mouth is mediated by three factors: dynamic visual speech cues, an emerging interest in speech,
and the redundancy of audiovisual speech. Thus, younger infants are not yet interested in speech
and so focus on the eyes, whereas older infants become interested in speech and shift their focus
to the mouth; initially, at 8 months, this shift requires that speech be multisensory.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Our everyday world consists of people and objects that are usually specified by dynamic and concurrent auditory and visual attributes; this multisensory redundancy is known to increase perceptual salience and, therefore, to facilitate learning and discrimination in infancy. Interestingly, early experience with faces and vocalizations has two seemingly opposite effects during the first year of life: (1) it enables infants to gradually acquire perceptual expertise for the faces and vocalizations of their own race and (2) it narrows their ability to discriminate other-race faces (Kelly et al., 2007). It is not known whether multisensory redundancy might help older infants overcome the other-race effect reported in previous studies. The current project investigated infant discrimination of dynamic and vocalizing other-race faces in younger and older infants using habituation and eye-tracking methodologies.

Experiment 1 examined 4-6- and 10-12-month-old infants' ability to discriminate either a native or non-native face articulating the syllable /a/. Results showed that both the 4-6- and the 10-12-month-olds successfully discriminated the faces, regardless of whether they were same- or other-race faces. Experiment 2 investigated the contribution of auditory speech cues by repeating Experiment 1 in silence. Results showed that only the 10-12-month-olds tested with native-race faces successfully discriminated them. Experiment 3 investigated whether it was speech per se or sound in general that facilitated discrimination of the other-race faces in Experiment 1 by presenting a synchronous, computer-generated "boing" sound instead of audible speech cues. Results indicated that the 4-6-month-olds discriminated both types of faces but that the 10-12-month-olds only discriminated own-race faces.

These results indicate that auditory cues, along with dynamic visual cues, can help infants overcome the effects of previously reported narrowing and facilitate discrimination of other-race static, silent faces. Critically, our results show that older infants can overcome the other-race effect when dynamic faces are accompanied by speech but not when they are accompanied by non-speech cues. Overall, a generalized auditory facilitation effect was found as a result of multisensory speech. Moreover, our findings suggest that infants' ability to process other-race faces following perceptual narrowing is more plastic than previously thought.
Model
Digital Document
Publisher
Florida Atlantic University Digital Library
Description
Non-adjacent statistical relations are an important class of sequential structure because they aid in the acquisition of syntax and, thus, language. Previous work has demonstrated that 15-month-old infants are sensitive to distant sequential relations but that these types of relations are difficult to learn. Importantly, it is not known whether the ability to learn non-adjacent statistical relations is based on a domain-specific or domain-general pattern-learning mechanism. We examined the domain-generality of this ability in separate groups of 10- and 12-month-old infants in two experiments utilizing the habituation/test procedure.

Experiment 1 habituated infants to sequences of five arbitrary audiovisual elements (moving shapes paired with sounds). The sequences contained two target elements that were always separated by a non-target element. Results indicated that neither age group displayed response recovery when the target elements were switched. Experiment 2 simplified the task by using sequences that were three elements in length (e.g., ABC and DBE). During the test trials, the last element from the two unique pairings was again switched (e.g., ABE and DBC). Results indicated that only the 12-month-olds detected a change in the sequence, t(48) = 1.76, p < 0.05.

These results indicate that infants' sensitivity to multisensory non-adjacent statistical dependencies is limited to simple three-element sequences rather than more complex five-element sequences. Our findings also indicate that infants as young as 12 months of age can learn non-adjacent sequential relations embedded within arbitrary audiovisual sequences, suggesting that this critical ability is domain-general in nature.
Model
Digital Document
Publisher
Florida Atlantic University
Description
Studies have shown that human infants can integrate the multisensory attributes of their world and, thus, have coherent perceptual experiences. Multisensory attributes can specify either non-arbitrary properties (e.g., amodal stimulus/event properties and typical relations) or arbitrary ones (e.g., visuospatial height and pitch). The goal of the current study was to expand on Walker et al.'s (2010) finding that 4-month-old infants looked longer at rising/falling objects when they were accompanied by rising/falling pitch than when they were accompanied by falling/rising pitch. We did so by conducting two experiments. In Experiment 1, our procedure matched Walker et al.'s (2010) single-screen presentation, while in Experiment 2 we used a multisensory paired-preference procedure. Additionally, we examined infants' responsiveness to these synesthetic-like events at multiple ages throughout development (4, 6, and 12 months of age). ... In sum, our findings indicate that the ability to match changing visuospatial height with rising/falling pitch does not emerge until the end of the first year of life and cast doubt on Walker et al.'s (2010) claim that 4-month-old infants perceive audiovisual synesthetic relations in a manner similar to adults.