Nygaard, Lynne C.; Queen, Jennifer S. – Journal of Experimental Psychology: Human Perception and Performance, 2008
The present study investigated the role of emotional tone of voice in the perception of spoken words. Listeners were presented with words that had either a happy, sad, or neutral meaning. Each word was spoken in a tone of voice (happy, sad, or neutral) that was congruent, incongruent, or neutral with respect to affective meaning, and naming…
Descriptors: Semantics, Psychological Patterns, Auditory Perception, Suprasegmentals
Lander, Karen; Hill, Harold; Kamachi, Miyuki; Vatikiotis-Bateson, Eric – Journal of Experimental Psychology: Human Perception and Performance, 2007
Recent studies have shown that the face and voice of an unfamiliar person can be matched for identity. Here the authors compare the relative effects of changing sentence content (what is said) and sentence manner (how it is said) on matching identity between faces and voices. A change between speaking a sentence as a statement and as a question…
Descriptors: Identification, Speech Communication, Cues, Sentences
Mattys, Sven L. – Journal of Experimental Psychology: Human Perception and Performance, 2004
Although word stress has been hailed as a powerful speech-segmentation cue, the results of 5 cross-modal fragment priming experiments revealed limitations to stress-based segmentation. Specifically, the stress pattern of auditory primes failed to have any effect on the lexical decision latencies to related visual targets. A determining factor was…
Descriptors: Cues, Phonology, Articulation (Speech), Suprasegmentals