Showing all 3 results
Peer reviewed
Méary, David; Jaggie, Carole; Pascalis, Olivier – Language Learning, 2018
Visual and auditory information jointly contribute to face categorization processes in humans, and gender is a socially relevant multisensory category, specified by faces and voices, that is detected early in infancy. We used an eye tracker to study how gender coherence in audio and visual modalities influences face scanning in 9- to 12-month-old…
Descriptors: Infants, Eye Movements, Gender Differences, Adults
Peer reviewed
Godfroid, Aline; Lin, Chin-Hsi; Ryu, Catherine – Language Learning, 2017
Multimodal approaches have been shown to be effective for many learning tasks. In this study, we compared the effectiveness of five multimodal methods for second language (L2) Mandarin tone perception training: three single-cue methods (number, pitch contour, color) and two dual-cue methods (color and number, color and pitch contour). A total of…
Descriptors: Color, Intonation, Linguistic Input, Pretests Posttests
Peer reviewed
Heikkilä, Jenni; Tiippana, Kaisa; Loberg, Otto; Leppänen, Paavo H. T. – Language Learning, 2018
Seeing articulatory gestures enhances speech perception. Perception of auditory speech can even be changed by incongruent visual gestures, a phenomenon known as the McGurk effect (e.g., when a voice saying /mi/ is dubbed onto a face articulating /ni/, observers often hear /ni/). In children, the McGurk effect is weaker than in adults, but no previous…
Descriptors: Articulation (Speech), Audiovisual Aids, Brain Hemisphere Functions, Diagnostic Tests