Schmid, Samuel; Saddy, Douglas; Franck, Julie – Cognitive Science, 2023
In this article, we explore the extraction of recursive nested structure in the processing of binary sequences. Our aim was to determine whether humans learn the higher-order regularities of a highly simplified input where only sequential-order information marks the hierarchical structure. To this end, we implemented a sequence generated by the…
Descriptors: Learning Processes, Sequential Learning, Grammar, Language Processing
Hinano Iida; Kimi Akita – Cognitive Science, 2024
Iconicity is a relationship of resemblance between the form and meaning of a sign. Compelling evidence from diverse areas of the cognitive sciences suggests that iconicity plays a pivotal role in the processing, memory, learning, and evolution of both spoken and signed language, indicating that iconicity is a general property of language. However,…
Descriptors: Japanese, Cognitive Science, Language Processing, Memory
Trott, Sean; Jones, Cameron; Chang, Tyler; Michaelov, James; Bergen, Benjamin – Cognitive Science, 2023
Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models…
Descriptors: Models, Language Processing, Beliefs, Child Development
Messenger, Katherine – Cognitive Science, 2021
The implicit learning account of syntactic priming proposes that the same mechanism underlies syntactic priming and language development, providing a link between child and adult language processing. The present experiment tested predictions of this account by comparing the persistence of syntactic priming effects in children and adults.…
Descriptors: Priming, Adults, Syntax, Preschool Children
Johns, Brendan T.; Mewhort, Douglas J. K.; Jones, Michael N. – Cognitive Science, 2019
Distributional models of semantics learn word meanings from contextual co-occurrence patterns across a large sample of natural language. Early models, such as LSA and HAL (Landauer & Dumais, 1997; Lund & Burgess, 1996), counted co-occurrence events; later models, such as BEAGLE (Jones & Mewhort, 2007), replaced counting co-occurrences…
Descriptors: Semantics, Learning Processes, Models, Prediction
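The abstract above contrasts early count-based distributional models (LSA, HAL) with later prediction-based ones. A minimal sketch of the counting idea, using a hypothetical `cooccurrence_counts` helper (the window size and token handling are illustrative assumptions, not the published models' exact parameters):

```python
from collections import defaultdict

def cooccurrence_counts(tokens, window=2):
    """Count how often each ordered word pair co-occurs within +/- `window`
    positions, in the spirit of count-based models such as HAL."""
    counts = defaultdict(int)
    for i, w in enumerate(tokens):
        # Look at neighbors on both sides of position i, within the window.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[(w, tokens[j])] += 1
    return counts

tokens = "the cat sat on the mat".split()
counts = cooccurrence_counts(tokens, window=1)
```

Rows of such a count table serve as word vectors; later models like BEAGLE replaced raw counting with learned holographic representations.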
Vong, Wai Keen; Lake, Brenden M. – Cognitive Science, 2022
In order to learn the mappings from words to referents, children must integrate co-occurrence information across individually ambiguous pairs of scenes and utterances, a challenge known as cross-situational word learning. In machine learning, recent multimodal neural networks have been shown to learn meaningful visual-linguistic mappings from…
Descriptors: Vocabulary Development, Cognitive Mapping, Problem Solving, Visual Aids
de Varda, Andrea Gregor; Strapparava, Carlo – Cognitive Science, 2022
The present paper addresses the study of non-arbitrariness in language within a deep learning framework. We present a set of experiments aimed at assessing the pervasiveness of different forms of non-arbitrary phonological patterns across a set of typologically distant languages. Different sequence-processing neural networks are trained in a set…
Descriptors: Learning Processes, Phonology, Language Patterns, Language Classification
Shoaib, Amber; Wang, Tianlin; Hay, Jessica F.; Lany, Jill – Cognitive Science, 2018
Infants are sensitive to statistical regularities (i.e., transitional probabilities, or TPs) relevant to segmenting words in fluent speech. However, there is debate about whether tracking TPs results in representations of possible words. Infants show preferential learning of sequences with high TPs (HTPs) as object labels relative to those with…
Descriptors: Infants, Italian, English, Native Language
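The transitional probabilities (TPs) this entry refers to are straightforward to compute: P(B | A) = count(AB) / count(A) over adjacent syllables, with high TPs inside words and lower TPs at word boundaries. A minimal sketch with an invented syllable stream (the words "bida", "gola", "tupi" are illustrative, not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TPs P(B | A) = count(A followed by B) / count(A non-final)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# "bi" is always followed by "da" (word-internal, TP = 1.0), while "da" is
# followed by whichever word comes next (boundary, TP < 1.0).
stream = "bi da go la bi da tu pi bi da go la tu pi bi da".split()
tps = transitional_probabilities(stream)
```

In the segmentation literature, sequences whose internal TPs are high (HTPs) are the candidate word-like units infants are argued to prefer as object labels.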
Malone, Stephanie A.; Kalashnikova, Marina; Davis, Erin M. – Cognitive Science, 2016
Adults reason by exclusivity to identify the meanings of novel words. However, it is debated whether, like children, they extend this strategy to disambiguate other referential expressions (e.g., facts about objects). To further inform this debate, this study tested 41 adults on four conditions of a disambiguation task: label/label, fact/fact,…
Descriptors: Vocabulary Development, Task Analysis, Ambiguity (Semantics), Adults
Fedzechkina, Maryia; Newport, Elissa L.; Jaeger, T. Florian – Cognitive Science, 2017
Across languages of the world, some grammatical patterns have been argued to be more common than expected by chance. These are sometimes referred to as (statistical) "language universals." One such universal is the correlation between constituent order freedom and the presence of a case system in a language. Here, we explore whether this…
Descriptors: Grammar, Diachronic Linguistics, English, Old English
Fine, Alex B.; Jaeger, T. Florian – Cognitive Science, 2013
This study provides evidence for implicit learning in syntactic comprehension. By reanalyzing data from a syntactic priming experiment (Thothathiri & Snedeker, 2008), we find that the error signal associated with a syntactic prime influences comprehenders' subsequent syntactic expectations. This follows directly from error-based implicit learning…
Descriptors: Syntax, Priming, Language Processing, Error Analysis (Language)