Showing all 4 results
Peer reviewed
Taylor M. Mezaraups; David L. Gilden – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2024
The basic timescales governing animal life are generally determined by body size. Pauses in naturally occurring human speech were investigated to determine if pause timescales are also sensitive to body size. Reported is an analysis of pause duration allometry in recorded interviews of 61 athletes. Pauses were divided into three classes based on…
Descriptors: Speech, Auditory Perception, Body Composition, Time
Peer reviewed
Viebahn, Malte C. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2021
Studies have demonstrated that listeners can retain detailed voice-specific acoustic information about spoken words in memory. A central question is when such information influences lexical processing. According to episodic models of the mental lexicon, voice-specific details influence word recognition immediately during online speech perception.…
Descriptors: Reaction Time, Priming, Acoustics, Word Recognition
Peer reviewed
Katya Petrova; Kyle Jasmin; Kazuya Saito; Adam T. Tierney – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
Languages differ in the importance of acoustic dimensions for speech categorization. This poses a potential challenge for second language (L2) learners, and the extent to which adult L2 learners can acquire new perceptual strategies for speech categorization remains unclear. This study investigated the effects of extensive English L2 immersion on…
Descriptors: Second Language Learning, Second Language Instruction, Suprasegmentals, Mandarin Chinese
Peer reviewed
Strand, Julia F.; Brown, Violet A.; Brown, Hunter E.; Berg, Jeffrey J. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2018
To understand spoken language, listeners combine acoustic-phonetic input with expectations derived from context (Dahan & Magnuson, 2006). Eye-tracking studies on semantic context have demonstrated that the activation levels of competing lexical candidates depend on the relative strengths of the bottom-up input and top-down expectations (cf.…
Descriptors: Grammar, Listening Comprehension, Oral Language, Eye Movements