Showing all 14 results
Peer reviewed
Yu, Luodi; Zeng, Jiajing; Wang, Suiping; Zhang, Yang – Journal of Speech, Language, and Hearing Research, 2021
Purpose: This study aimed to examine whether abstract knowledge of word-level linguistic prosody is independent of or integrated with phonetic knowledge. Method: Event-related potential (ERP) responses were measured from 18 adult listeners while they listened to native and nonnative word-level prosody in speech and in nonspeech. The prosodic…
Descriptors: Brain Hemisphere Functions, Suprasegmentals, Phonetics, Intonation
Peer reviewed
Stringer, Louise; Iverson, Paul – Journal of Speech, Language, and Hearing Research, 2019
Purpose: The intelligibility of an accent strongly depends on the specific talker-listener pairing. To explore the causes of this phenomenon, we investigated the relationship between acoustic-phonetic similarity and accent intelligibility across native (1st language) and nonnative (2nd language) talker-listener pairings. We also used online…
Descriptors: Pronunciation, Native Language, Auditory Perception, Acoustics
Peer reviewed
Yang, Seung-yun; Van Lancker Sidtis, Diana – Journal of Speech, Language, and Hearing Research, 2016
Purpose: This study investigates the effects of left- and right-hemisphere damage (LHD and RHD) on the production of idiomatic or literal expressions utilizing acoustic analyses. Method: Twenty-one native speakers of Korean with LHD or RHD and in a healthy control (HC) group produced 6 ditropically ambiguous (idiomatic or literal) sentences in 2…
Descriptors: Korean, Figurative Language, Brain Hemisphere Functions, Acoustics
Peer reviewed
Yu, Luodi; Fan, Yuebo; Deng, Zhizhou; Huang, Dan; Wang, Suiping; Zhang, Yang – Journal of Autism and Developmental Disorders, 2015
The present study investigated pitch processing in Mandarin-speaking children with autism using event-related potential measures. Two experiments were designed to test how acoustic, phonetic and semantic properties of the stimuli contributed to the neural responses for pitch change detection and involuntary attentional orienting. In comparison…
Descriptors: Intonation, Phonology, Autism, Pervasive Developmental Disorders
Peer reviewed
Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K. – Journal of Cognitive Neuroscience, 2011
Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…
Descriptors: Word Recognition, Language Processing, Semantics, Brain Hemisphere Functions
Peer reviewed
Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten – Brain and Language, 2011
A common assumption is that phonetic sounds initiate unique processing in the superior temporal gyri and sulci (STG/STS). The anatomical areas subserving these processes are also implicated in the processing of non-phonetic stimuli such as music instrument sounds. The differential processing of phonetic and non-phonetic sounds was investigated in…
Descriptors: Stimuli, Phonetics, Brain Hemisphere Functions, Cognitive Processes
DeWitt, Iain D. J. – ProQuest LLC, 2013
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
Descriptors: Word Recognition, Brain Hemisphere Functions, Neurosciences, Meta Analysis
Peer reviewed
Yoo, Sejin; Chung, Jun-Young; Jeon, Hyeon-Ae; Lee, Kyoung-Min; Kim, Young-Bo; Cho, Zang-Hee – Brain and Language, 2012
Speech production is inextricably linked to speech perception, yet they are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with two ends active simultaneously using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel…
Descriptors: Auditory Stimuli, Articulation (Speech), Phonetics, Vowels
Peer reviewed
Garcia-Sierra, Adrian; Ramirez-Esparza, Nairan; Silva-Pereyra, Juan; Siard, Jennifer; Champlin, Craig A. – Brain and Language, 2012
Event Related Potentials (ERPs) were recorded from Spanish-English bilinguals (N = 10) to test pre-attentive speech discrimination in two language contexts. ERPs were recorded while participants silently read magazines in English or Spanish. Two speech contrast conditions were recorded in each language context. In the "phonemic in English"…
Descriptors: Phonetics, Phonemics, Bilingualism, Spanish
Peer reviewed
Boulenger, Veronique; Hoen, Michel; Jacquier, Caroline; Meunier, Fanny – Brain and Language, 2011
When listening to speech in everyday-life situations, our cognitive system must often cope with signal instabilities such as sudden breaks, mispronunciations, interfering noises or reverberations potentially causing disruptions at the acoustic/phonetic interface and preventing efficient lexical access and semantic integration. The physiological…
Descriptors: Sentences, Phonetics, Semantics, Language Processing
Peer reviewed
Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann – Journal of Cognitive Neuroscience, 2011
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich,…
Descriptors: Brain Hemisphere Functions, Diagnostic Tests, Speech Communication, Phonetics
Peer reviewed
Gow, David W., Jr.; Segawa, Jennifer A. – Cognition, 2009
The inherent confound between the organization of articulation and the acoustic-phonetic structure of the speech signal makes it exceptionally difficult to evaluate the competing claims of motor and acoustic-phonetic accounts of how listeners recognize coarticulated speech. Here we use Granger causation analysis of high spatiotemporal resolution…
Descriptors: Speech Communication, Articulation (Speech), Phonetics, Medicine
Peer reviewed
Francis, Alexander L.; Driscoll, Courtney – Brain and Language, 2006
We examined the effect of perceptual training on a well-established hemispheric asymmetry in speech processing. Eighteen listeners were trained to use a within-category difference in voice onset time (VOT) to cue talker identity. Successful learners (n = 8) showed faster response times for stimuli presented only to the left ear than for those…
Descriptors: Auditory Perception, Time, Cues, Auditory Training
Peer reviewed
Hickok, Gregory; Poeppel, David – Cognition, 2004
Despite intensive work on language-brain relations, and a fairly impressive accumulation of knowledge over the last several decades, there has been little progress in developing large-scale models of the functional anatomy of language that integrate neuropsychological, neuroimaging, and psycholinguistic data. Drawing on relatively recent…
Descriptors: Language Processing, Psycholinguistics, Neuropsychology, Speech Communication