| Publication Date | Results |
| --- | --- |
| In 2025 | 0 |
| Since 2024 | 1 |
| Since 2021 (last 5 years) | 1 |
| Since 2016 (last 10 years) | 4 |
| Since 2006 (last 20 years) | 6 |
| Descriptor | Results |
| --- | --- |
| Artificial Languages | 6 |
| Learning Processes | 6 |
| Models | 6 |
| Computational Linguistics | 4 |
| Grammar | 3 |
| Language Acquisition | 3 |
| Linguistic Input | 3 |
| Generalization | 2 |
| Inferences | 2 |
| Language Patterns | 2 |
| Language Research | 2 |
| Publication Type | Results |
| --- | --- |
| Journal Articles | 5 |
| Reports - Research | 4 |
| Dissertations/Theses -… | 1 |
| Reports - Evaluative | 1 |
Aislinn Keogh; Simon Kirby; Jennifer Culbertson – Cognitive Science, 2024
General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory--the component of short-term memory used for temporary…
Descriptors: Language Variation, Learning Processes, Short Term Memory, Schemata (Cognition)
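The claim that hard-to-process structures tend to be lost over time is commonly studied with iterated-learning simulations, a paradigm associated with Kirby's lab. The abstract does not give the paper's actual model, so the following is only a minimal illustrative sketch; the capacity value, survival probability, and toy forms are all invented for the example:

```python
import random

def transmit(language, capacity, n_generations=20, seed=0):
    """Iterated-learning toy: each generation, a learner with limited
    working memory reproduces forms within its capacity faithfully but
    tends to simplify longer ones, so costly structures erode over time."""
    rng = random.Random(seed)
    for _ in range(n_generations):
        learned = []
        for form in language:
            if len(form) <= capacity or rng.random() < 0.5:
                learned.append(form)              # faithful reproduction
            else:
                learned.append(form[:capacity])   # simplified reproduction
        language = learned
    return language

start = ["ba", "bada", "badaku", "badakupi"]  # forms of increasing length
print(transmit(start, capacity=4))  # long forms erode toward the capacity limit
```

After enough generations, every form fits within the learner's capacity, mirroring the selection pressure over transmission that the abstract describes.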
Carter, William Thomas Jeffrey – ProQuest LLC, 2019
Despite the success of Optimality Theory (OT), opaque alternations prove difficult to capture with constraints, and some violate the theory's formal restrictions. Here, I propose a novel account of opacity drawing upon developments in psychology. Rather than one grammar, I propose a dual-system model with implicit and explicit mechanisms, a domain-specific OT-like system…
Descriptors: Grammar, Language Patterns, Models, Psychology
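The snippet presupposes classical OT evaluation: ranked constraints assign violation counts to candidate output forms, and the winner is the candidate that best satisfies the highest-ranked constraint on which the candidates differ. A minimal sketch of that evaluation step, using a textbook final-devoicing example rather than anything from the dissertation's dual-system model:

```python
def ot_winner(candidates, constraints):
    """Classical Optimality Theory evaluation: `constraints` are ordered
    from highest- to lowest-ranked, each mapping a candidate string to a
    violation count; the winner is the lexicographic minimum of profiles."""
    return min(candidates, key=lambda cand: tuple(c(cand) for c in constraints))

# Toy final-devoicing example for input /bad/: markedness outranks faithfulness.
no_final_voiced_obstruent = lambda cand: int(cand.endswith(("b", "d", "g")))
faithfulness_to_bad = lambda cand: sum(a != b for a, b in zip(cand, "bad"))

print(ot_winner(["bad", "bat", "pat"],
                [no_final_voiced_obstruent, faithfulness_to_bad]))  # -> "bat"
```

Because the markedness constraint is ranked first, faithful "bad" loses to "bat", while "pat" is ruled out by its extra faithfulness violation.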
Radulescu, Silvia; Wijnen, Frank; Avrutin, Sergey – Language Learning and Development, 2020
From limited evidence, children track the regularities of their language impressively fast and they infer generalized rules that apply to novel instances. This study investigated what drives the inductive leap from memorizing specific items and statistical regularities to extracting abstract rules. We propose an innovative entropy model that…
Descriptors: Linguistic Input, Language Acquisition, Grammar, Learning Processes
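The excerpt names an entropy model but is cut off before its details. The quantity such accounts typically track is the Shannon entropy of the input distribution; the sketch below computes it for two toy input sets (the example items are illustrative, not from the paper):

```python
from collections import Counter
from math import log2

def shannon_entropy(items):
    """Shannon entropy (bits) of the empirical distribution over input
    items; entropy-based accounts link higher input variability to a
    shift from memorizing specific items toward abstract rule formation."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

low_variability = ["ba", "ba", "ba", "po"]   # few distinct items
high_variability = ["ba", "po", "ki", "du"]  # many distinct items
print(shannon_entropy(low_variability))   # ~0.81 bits
print(shannon_entropy(high_variability))  # 2.0 bits
```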
Ouyang, Long; Boroditsky, Lera; Frank, Michael C. – Cognitive Science, 2017
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of…
Descriptors: Semiotics, Computational Linguistics, Syntax, Semantics
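The "postman"/"mailman" case is the standard distributional-semantics demonstration: represent each word by counts of the words occurring around it, then compare the resulting vectors. A self-contained toy version, where the corpus, window size, and cosine measure are illustrative choices rather than the authors' model:

```python
from collections import Counter
from math import sqrt

corpus = ("the postman delivered the mail . the mailman delivered the mail . "
          "the dog chased the postman . the dog chased the mailman .").split()

def context_vector(word, window=1):
    """Count the words appearing within `window` positions of `word`."""
    ctx = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    ctx[corpus[j]] += 1
    return ctx

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    return dot / (sqrt(sum(x * x for x in u.values())) *
                  sqrt(sum(x * x for x in v.values())))

# Words with similar context distributions get high similarity scores.
print(cosine(context_vector("postman"), context_vector("mailman")))  # 1.0
print(cosine(context_vector("postman"), context_vector("dog")))      # ~0.58
```

In this tiny corpus "postman" and "mailman" occur in identical contexts, so their similarity is maximal, exactly the inference the abstract attributes to statistical models.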
Aslin, Richard N.; Newport, Elissa L. – Language Learning, 2014
In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, adults and (at least to some degree) in nonhuman animals as well. The present article briefly reviews this literature and then examines some of the fundamental questions that must be addressed for…
Descriptors: Linguistic Input, Grammar, Language Research, Computational Linguistics
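The distributional-learning mechanism this literature centers on, beginning with Saffran, Aslin, and Newport's word-segmentation studies, is the tracking of transitional probabilities between adjacent syllables. A minimal sketch, with a made-up syllable stream standing in for the artificial-language stimuli:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for adjacent syllable pairs -- the statistic
    infants are thought to track: transitional probabilities are high
    within words and drop at word boundaries."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy stream built from three "words" in varying order:
# bidaku, padoti, golabu (word boundaries unmarked in the input).
stream = "bi da ku pa do ti bi da ku go la bu pa do ti bi da ku".split()
tps = transitional_probabilities(stream)
print(tps[("bi", "da")])  # 1.0 -- within-word transition
print(tps[("ku", "pa")])  # 0.5 -- across a word boundary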
Hamrick, Phillip – Language Learning, 2014
Humans are remarkably sensitive to the statistical structure of language. However, different mechanisms have been proposed to account for such statistical sensitivities. The present study compared adult learning of syntax with the ability of two models of statistical learning to simulate human performance: Simple Recurrent Networks, which learn by…
Descriptors: Second Language Learning, Role, Syntax, Computational Linguistics
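Simple Recurrent Networks (Elman networks) learn by predicting the next element of a sequence, feeding a copy of the previous hidden state back in as a context layer. The excerpt is cut off before describing this, so here is a generic single-step forward pass with untrained toy weights, not the study's implementation:

```python
import numpy as np

def srn_step(x, h_prev, W_xh, W_hh, W_hy):
    """One step of an Elman Simple Recurrent Network: the hidden state
    combines the current input with the previous hidden state (the
    'context layer'), and the output predicts the next element."""
    h = np.tanh(W_xh @ x + W_hh @ h_prev)
    logits = W_hy @ h
    y = np.exp(logits) / np.exp(logits).sum()  # softmax over next element
    return y, h

# Toy setup: 4 possible elements (one-hot coded), 8 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W_xh = rng.normal(0, 0.1, (n_hid, n_in))
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))
W_hy = rng.normal(0, 0.1, (n_in, n_hid))

h = np.zeros(n_hid)
for element in [0, 2, 1]:           # a short input sequence
    x = np.eye(n_in)[element]       # one-hot input vector
    probs, h = srn_step(x, h, W_xh, W_hh, W_hy)
print(probs)  # predicted distribution over the next element
```

Training (not shown) would adjust the weights by backpropagating the prediction error, which is how such networks come to encode the sequence statistics the study examines.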