Elizabeth Pierotti – ProQuest LLC, 2024
The process of spoken word recognition is influenced by both bottom-up sensory information and top-down cognitive information. These cues are used to process the phonological and semantic representations of speech. Several studies have used EEG/ERPs to study the neural mechanisms of children's spoken word recognition, but less is known about the…
Descriptors: Word Recognition, Cognitive Processes, Cues, Oral Language
Hadeer Derawi; Eva Reinisch; Yafit Gabay – Journal of Speech, Language, and Hearing Research, 2023
Background: To overcome variability in spoken language, listeners utilize various types of context information for disambiguating speech sounds. Context effects have been shown to be affected by cognitive load. However, previous results are mixed regarding the influence of cognitive load on the use of context information in speech perception.…
Descriptors: Cognitive Processes, Acoustics, Auditory Perception, Attention Deficit Hyperactivity Disorder
Viebahn, Malte C. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2021
Studies have demonstrated that listeners can retain detailed voice-specific acoustic information about spoken words in memory. A central question is when such information influences lexical processing. According to episodic models of the mental lexicon, voice-specific details influence word recognition immediately during online speech perception.…
Descriptors: Reaction Time, Priming, Acoustics, Word Recognition
Suzuki, Shungo; Kormos, Judit; Uchihara, Takumi – Modern Language Journal, 2021
Listener-based judgements of fluency play an important role in second language (L2) communication contexts and in L2 assessment. Accordingly, our meta-analysis examined the relationship between different aspects of utterance fluency and listener-based judgements of perceived fluency by analyzing primary studies reporting correlation coefficients…
Descriptors: Second Language Learning, Language Fluency, Speech Communication, Oral Language
Gold, Rinat; Segal, Osnat – Language Learning and Development, 2020
The "bouba-kiki effect" refers to the correspondence between arbitrary visual and auditory stimuli. Previous studies have demonstrated that neurodevelopmental conditions and sensory impairment affect subjects' performance on the bouba-kiki task. This study examined the bouba-kiki effect in participants with severe-to-profound hearing…
Descriptors: Visual Stimuli, Auditory Stimuli, Correlation, Neurological Organization
de Boer, Gillian; Bressmann, Tim – Journal of Speech, Language, and Hearing Research, 2017
Purpose: This study explored the role of auditory feedback in the regulation of oral-nasal balance in speech. Method: Twenty typical female speakers wore a Nasometer 6450 (KayPentax) headset and headphones while continuously repeating a sentence with oral and nasal sounds. Oral-nasal balance was quantified with nasalance scores. The signals from 2…
Descriptors: Auditory Perception, Feedback (Response), Measurement Equipment, Auditory Stimuli
Icht, Michal; Mama, Yaniv; Taitelbaum-Swead, Riki – Journal of Speech, Language, and Hearing Research, 2020
Purpose: The aim of this study was to test whether a group of older postlingually deafened cochlear implant users (OCIs) use similar verbal memory strategies to those used by older normal-hearing adults (ONHs). Verbal memory functioning was assessed in the visual and auditory modalities separately, enabling us to eliminate possible modality-based…
Descriptors: Deafness, Assistive Technology, Verbal Communication, Older Adults
Henny Yeung, H.; Bhatara, Anjali; Nazzi, Thierry – Cognitive Science, 2018
Perceptual grouping is fundamental to many auditory processes. The Iambic-Trochaic Law (ITL) is a default grouping strategy, where rhythmic alternations of duration are perceived iambically (weak-strong), while alternations of intensity are perceived trochaically (strong-weak). Some argue that the ITL is experience dependent. For instance, French…
Descriptors: Language Rhythm, Phonology, Acoustics, French
Strand, Julia F.; Brown, Violet A.; Brown, Hunter E.; Berg, Jeffrey J. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2018
To understand spoken language, listeners combine acoustic-phonetic input with expectations derived from context (Dahan & Magnuson, 2006). Eye-tracking studies on semantic context have demonstrated that the activation levels of competing lexical candidates depend on the relative strengths of the bottom-up input and top-down expectations (cf.…
Descriptors: Grammar, Listening Comprehension, Oral Language, Eye Movements
Dube, Sithembinkosi; Kung, Carmen; Brock, Jon; Demuth, Katherine – Language Acquisition: A Journal of Developmental Linguistics, 2019
Recent ERP research with adults has shown that the online processing of subject-verb (S-V) agreement violations is mediated by the relative perceptual salience of the violation (Dube et al. 2016). These findings corroborate infant perception research, which has also shown that perceptual salience influences infants' sensitivity to grammatical…
Descriptors: Language Processing, Language Acquisition, Brain Hemisphere Functions, Grammar
Hwang, So-One K. – ProQuest LLC, 2011
This dissertation explores the hypothesis that language processing proceeds in "windows" that correspond to representational units, where sensory signals are integrated according to time-scales that correspond to the rate of the input. To investigate universal mechanisms, a comparison of signed and spoken languages is necessary. Underlying the…
Descriptors: Comprehension, Language Processing, Testing, Morphemes
Morton, J. Bruce; Trehub, Sandra E. – Psychology of Music, 2007
Songs convey emotion by means of expressive performance cues (e.g. pitch level, tempo, vocal tone) and lyrics. Although children can interpret both types of cues, it is unclear whether they would focus on performance cues or salient verbal cues when judging the feelings of a singer. To investigate this question, we had 5- to 10-year-old children…
Descriptors: Cues, Singing, Emotional Response, Children
Mullennix, John W.; Bihon, Tressa; Bricklemyer, Jodie; Gaston, Jeremy; Keener, Jessica M. – Language and Speech, 2002
The effects of stimulus-to-stimulus variation in emotional tone of voice on speech perception were examined through a series of perceptual experiments. Stimuli were recorded from human speakers who produced utterances in tones of voice designed to convey affective information. Stimuli varying in talker voice and emotional tone were then presented…
Descriptors: Affective Behavior, Auditory Perception, Auditory Stimuli, Oral Language
Dupoux, Emmanuel; Pallier, Christophe; Kakehi, Kazuhiko; Mehler, Jacques – Language and Cognitive Processes, 2001
When presented with stimuli that contain illegal consonant clusters, Japanese listeners tend to hear an illusory vowel that makes their perception conform to the phonotactics of the language. Assesses an alternate hypothesis that this illusion is due to a top-down lexical effect. (Author/VWL)
Descriptors: Auditory Perception, Auditory Stimuli, Cognitive Processes, Consonants
Chen, Xin; Striano, Tricia; Rakoczy, Hannes – Developmental Science, 2004
Twenty-five newborn infants were tested for auditory-oral matching behavior when presented with the consonant sound /m/ and the vowel sound /a/--a precursor behavior to vocal imitation. Auditory-oral matching behavior by the infant was operationally defined as showing the mouth movement appropriate for producing the model sound just heard (mouth…
Descriptors: Vowels, Imitation, Neonates, Young Children