Keogh, Aislinn; Kirby, Simon; Culbertson, Jennifer – Cognitive Science, 2024
General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory--the component of short-term memory used for temporary…
Descriptors: Language Variation, Learning Processes, Short Term Memory, Schemata (Cognition)
Ouyang, Long; Boroditsky, Lera; Frank, Michael C. – Cognitive Science, 2017
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of…
Descriptors: Semiotics, Computational Linguistics, Syntax, Semantics
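To make the distributional idea concrete, here is a minimal sketch of context counting and cosine similarity over a toy corpus. It illustrates the general technique the abstract describes, not the authors' actual model; the corpus and the sentence-wide context window are invented for the example.

```python
# Minimal sketch of distributional similarity: words that occur in
# similar contexts get similar count vectors. Toy corpus, invented here.
from collections import Counter, defaultdict
import math

corpus = [
    "the postman delivered the mail today",
    "the mailman delivered the mail yesterday",
    "the dog chased the postman",
    "the dog chased the mailman",
]

# Count co-occurrences within each sentence (a crude context window).
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = lambda x: math.sqrt(sum(n * n for n in x.values()))
    return dot / (norm(u) * norm(v))

# Words with quantitatively similar context patterns score high.
print(cosine(cooc["postman"], cooc["mailman"]))    # ~0.95
print(cosine(cooc["postman"], cooc["delivered"]))  # lower
```

On this toy data, "postman" and "mailman" come out as near neighbors purely from their shared contexts, which is the statistical signal the abstract refers to.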
Culbertson, Jennifer; Smolensky, Paul – Cognitive Science, 2012
In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language-learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners' input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized…
Descriptors: Models, Bayesian Statistics, Artificial Languages, Language Acquisition
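The abstract's hierarchical Bayesian framing can be illustrated with a much simpler stand-in: a Beta-Binomial learner exposed to probabilistic input. The numbers below (a Beta(0.5, 0.5) prior that favors near-deterministic grammars, and input in which 14 of 20 utterances follow pattern A) are invented for the example and are not the paper's model, but they show how a prior can formalize a regularization bias.

```python
# Toy Beta-Binomial sketch of Bayesian learning from mixed input.
# Assumed setup, not the paper's hierarchical model.
import math

def beta_binomial_posterior(k, n, alpha, beta, grid=101):
    """Grid-approximate the posterior over theta, the probability that
    the learner's grammar produces pattern A, after observing k
    A-utterances out of n, under a Beta(alpha, beta) prior."""
    thetas = [i / (grid - 1) for i in range(grid)]
    def logpost(t):
        if t in (0.0, 1.0):
            return float("-inf")
        # log-likelihood + log-prior (up to a constant)
        return (k * math.log(t) + (n - k) * math.log(1 - t)
                + (alpha - 1) * math.log(t) + (beta - 1) * math.log(1 - t))
    weights = [math.exp(logpost(t)) for t in thetas]
    z = sum(weights)
    return thetas, [w / z for w in weights]

# Mixed input: 70% pattern A. The Beta(0.5, 0.5) prior puts extra mass
# near 0 and 1, i.e. it prefers near-deterministic grammars.
thetas, post = beta_binomial_posterior(k=14, n=20, alpha=0.5, beta=0.5)
map_theta = max(zip(post, thetas))[1]
print(map_theta)  # ~0.71, just above the 0.70 input rate
```

The MAP estimate lands slightly above the input proportion: the prior nudges the inferred grammar toward determinism, which is one way a Bayesian model can quantify the regularization biases studied in artificial-language experiments of this kind.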
Petersson, Karl Magnus; Forkstam, Christian; Ingvar, Martin – Cognitive Science, 2004
In the present study, using event-related functional magnetic resonance imaging, we investigated a group of participants on a grammaticality classification task after they had been exposed to well-formed consonant strings generated from an artificial regular grammar. We used an implicit acquisition paradigm in which the participants were exposed…
Descriptors: Grammar, Classification, Models, Language Processing
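For readers unfamiliar with the paradigm, the sketch below generates well-formed strings from a small finite-state (regular) grammar, the kind of generator behind Reber-style implicit artificial-grammar learning tasks. The transition table is hypothetical; the study's actual grammar is not reproduced here.

```python
# Sketch of generating consonant strings from a finite-state (regular)
# grammar, as in implicit artificial-grammar learning experiments.
# The grammar itself is invented for illustration.
import random

# Transition table: state -> list of (emitted symbol, next state).
# None marks the accepting state (generation stops).
GRAMMAR = {
    0: [("M", 1), ("V", 2)],
    1: [("T", 1), ("V", 3)],
    2: [("X", 2), ("R", 3)],
    3: [("X", 4), ("M", 2)],
    4: [("S", None), ("T", None)],
}

def generate(rng=random):
    """Random walk through the grammar from the start state,
    emitting one symbol per transition."""
    state, out = 0, []
    while state is not None:
        symbol, state = rng.choice(GRAMMAR[state])
        out.append(symbol)
    return "".join(out)

# Well-formed strings; strings violating the transitions are ungrammatical.
for _ in range(5):
    print(generate())
```

Participants exposed only to such well-formed strings can later classify new strings as grammatical or not above chance, without being able to state the rules, which is the implicit-acquisition effect the abstract describes.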