Showing all 7 results
Peer reviewed
Keogh, Aislinn; Kirby, Simon; Culbertson, Jennifer – Cognitive Science, 2024
General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory--the component of short-term memory used for temporary…
Descriptors: Language Variation, Learning Processes, Short Term Memory, Schemata (Cognition)
Peer reviewed
Trott, Sean; Jones, Cameron; Chang, Tyler; Michaelov, James; Bergen, Benjamin – Cognitive Science, 2023
Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models…
Descriptors: Models, Language Processing, Beliefs, Child Development
Peer reviewed
Cruz Blandón, María Andrea; Cristia, Alejandrina; Räsänen, Okko – Cognitive Science, 2023
Computational models of child language development can help us understand the cognitive underpinnings of the language learning process, which occurs along several linguistic levels at once (e.g., prosodic and phonological). However, in light of the replication crisis, modelers face the challenge of selecting representative and consolidated infant…
Descriptors: Meta Analysis, Infants, Language Acquisition, Computational Linguistics
Peer reviewed
Johns, Brendan T.; Mewhort, Douglas J. K.; Jones, Michael N. – Cognitive Science, 2019
Distributional models of semantics learn word meanings from contextual co-occurrence patterns across a large sample of natural language. Early models, such as LSA and HAL (Landauer & Dumais, 1997; Lund & Burgess, 1996), counted co-occurrence events; later models, such as BEAGLE (Jones & Mewhort, 2007), replaced counting co-occurrences…
Descriptors: Semantics, Learning Processes, Models, Prediction
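The co-occurrence counting that this abstract attributes to early models such as HAL can be illustrated with a toy sketch. This is not the paper's model: it omits the weighting and dimensionality reduction (e.g., the SVD step of LSA) that real systems use, and the corpus and function names are illustrative.

```python
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(corpus, window=2):
    """Count how often each word occurs within `window` words of another.

    A toy, count-based distributional model; real systems add weighting
    (e.g., PPMI) and dimensionality reduction before comparing vectors.
    """
    vectors = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
    norm1 = sqrt(sum(c * c for c in v1.values()))
    norm2 = sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

corpus = [
    "the postman delivered the letter today",
    "the mailman delivered the parcel today",
    "the dog chased the cat",
]
vecs = cooccurrence_vectors(corpus)
# Words used in similar contexts end up with similar vectors:
print(cosine(vecs["postman"], vecs["mailman"]) > cosine(vecs["postman"], vecs["dog"]))  # True
```

Words that share contexts ("postman", "mailman") end up closer in the count space than words that do not, which is the intuition the later prediction-based models in this abstract build on.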
Peer reviewed
Stevens, Jon Scott; Gleitman, Lila R.; Trueswell, John C.; Yang, Charles – Cognitive Science, 2017
We evaluate here the performance of four models of cross-situational word learning: two global models, which extract and retain multiple referential alternatives from each word occurrence; and two local models, which extract just a single referent from each occurrence. One of these local models, dubbed "Pursuit," uses an associative…
Descriptors: Semantics, Associative Learning, Probability, Computational Linguistics
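The "local" learning strategy this abstract describes can be sketched as follows. This is a simplified illustration in the spirit of Pursuit, not the paper's exact specification: the learner checks only its single best hypothesis per exposure, rewarding it if confirmed and otherwise penalizing it and sampling one new candidate from the scene. The update rule and parameter names here are my own illustrative choices.

```python
import random

def pursuit_learner(exposures, gamma=0.2, seed=0):
    """Single-hypothesis ("local") cross-situational word learner.

    `exposures` is a sequence of (word, set_of_candidate_referents) pairs.
    On each exposure the learner consults only its current best hypothesis
    for the word: present -> reward; absent -> penalize and resample one
    new candidate from the scene. Simplified relative to the published model.
    """
    rng = random.Random(seed)
    scores = {}  # word -> {referent: association score}
    for word, referents in exposures:
        table = scores.setdefault(word, {})
        if not table:
            # First encounter: guess a single referent from the scene.
            table[rng.choice(sorted(referents))] = gamma
            continue
        best = max(table, key=table.get)
        if best in referents:
            table[best] += gamma * (1 - table[best])  # reward confirmed hypothesis
        else:
            table[best] *= (1 - gamma)                # penalize, then resample
            new = rng.choice(sorted(referents))
            table[new] = table.get(new, 0) + gamma * (1 - table.get(new, 0))
    return {w: max(t, key=t.get) for w, t in scores.items()}

exposures = [
    ("ball", {"BALL", "DOG"}),
    ("ball", {"BALL", "CAT"}),
    ("dog",  {"DOG", "BALL"}),
    ("dog",  {"DOG", "CAT"}),
    ("ball", {"BALL", "DOG"}),
]
print(pursuit_learner(exposures))
```

The contrast with the "global" models in this abstract is that a global learner would retain and score every referent in every scene, whereas this learner commits to one hypothesis at a time and revises only on failure.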
Peer reviewed
Ouyang, Long; Boroditsky, Lera; Frank, Michael C. – Cognitive Science, 2017
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of…
Descriptors: Semiotics, Computational Linguistics, Syntax, Semantics
Peer reviewed
Martin, Andrew; Peperkamp, Sharon; Dupoux, Emmanuel – Cognitive Science, 2013
Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language-specific phoneme categories, but how these categories are learned largely remains a mystery.…
Descriptors: Phonemes, Language Acquisition, Infants, Learning Processes