Showing 1 to 15 of 21 results
Peer reviewed
Keogh, Aislinn; Kirby, Simon; Culbertson, Jennifer – Cognitive Science, 2024
General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory--the component of short-term memory used for temporary…
Descriptors: Language Variation, Learning Processes, Short Term Memory, Schemata (Cognition)
Peer reviewed
Trott, Sean; Jones, Cameron; Chang, Tyler; Michaelov, James; Bergen, Benjamin – Cognitive Science, 2023
Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models…
Descriptors: Models, Language Processing, Beliefs, Child Development
Peer reviewed
Cruz Blandón, María Andrea; Cristia, Alejandrina; Räsänen, Okko – Cognitive Science, 2023
Computational models of child language development can help us understand the cognitive underpinnings of the language learning process, which occurs along several linguistic levels at once (e.g., prosodic and phonological). However, in light of the replication crisis, modelers face the challenge of selecting representative and consolidated infant…
Descriptors: Meta Analysis, Infants, Language Acquisition, Computational Linguistics
Peer reviewed
Johns, Brendan T.; Mewhort, Douglas J. K.; Jones, Michael N. – Cognitive Science, 2019
Distributional models of semantics learn word meanings from contextual co-occurrence patterns across a large sample of natural language. Early models, such as LSA and HAL (Landauer & Dumais, 1997; Lund & Burgess, 1996), counted co-occurrence events; later models, such as BEAGLE (Jones & Mewhort, 2007), replaced counting co-occurrences…
Descriptors: Semantics, Learning Processes, Models, Prediction
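The count-based approach that Johns et al. attribute to early models such as HAL can be illustrated with a minimal sketch. The sliding-window scheme, window size, and toy corpus below are illustrative assumptions, not parameters from the models cited in the abstract:

```python
from collections import Counter

def cooccurrence_counts(tokens, window=2):
    """Count how often each ordered word pair co-occurs within a window.

    A minimal count-based model: each time a word appears within `window`
    positions after another word, that pair's count is incremented.
    """
    counts = Counter()
    for i, word in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[(word, tokens[j])] += 1
    return counts

# Toy corpus (hypothetical): distributionally similar words end up with
# similar count profiles, which is the basis for semantic similarity.
corpus = "the postman delivered mail the mailman delivered mail".split()
counts = cooccurrence_counts(corpus, window=2)
```

Words like "postman" and "mailman" here accumulate matching count profiles over "delivered" and "mail", which is the sense in which count models recover semantic similarity from context alone.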
Peer reviewed
Beekhuizen, Barend; Stevenson, Suzanne – Cognitive Science, 2018
We explore the following two cognitive questions regarding crosslinguistic variation in lexical semantic systems: Why are some linguistic categories--that is, the associations between a term and a portion of the semantic space--harder to learn than others? How does learning a language-specific set of lexical categories affect processing in that…
Descriptors: Color, Visual Discrimination, Semantics, Models
Peer reviewed
Paape, Dario; Avetisyan, Serine; Lago, Sol; Vasishth, Shravan – Cognitive Science, 2021
We present computational modeling results based on a self-paced reading study investigating number attraction effects in Eastern Armenian. We implement three novel computational models of agreement attraction in a Bayesian framework and compare their predictive fit to the data using k-fold cross-validation. We find that our data are better…
Descriptors: Computational Linguistics, Indo European Languages, Grammar, Bayesian Statistics
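The k-fold cross-validation procedure Paape et al. use to compare predictive fit can be sketched in miniature. The reading-time values and the mean-prediction "model" below are hypothetical stand-ins, not the authors' Bayesian attraction models:

```python
def k_fold_mse(data, k=5):
    """Estimate out-of-sample error of a mean-prediction model by k-fold CV.

    Split the data into k folds; for each fold, fit on the remaining folds
    (here, fitting = taking the training mean) and score squared error on
    the held-out fold. Returns the average held-out error across folds.
    """
    n = len(data)
    fold_size = n // k
    errors = []
    for f in range(k):
        held_out = data[f * fold_size:(f + 1) * fold_size]
        training = data[:f * fold_size] + data[(f + 1) * fold_size:]
        prediction = sum(training) / len(training)  # "fit" on training folds
        errors.extend((y - prediction) ** 2 for y in held_out)
    return sum(errors) / len(errors)

# Hypothetical self-paced reading times (ms); when comparing models, the
# one with lower held-out error has the better predictive fit.
rts = [350.0, 420.0, 390.0, 410.0, 370.0, 400.0, 380.0, 360.0, 430.0, 395.0]
cv_error = k_fold_mse(rts, k=5)
```

Because every data point is scored while held out of fitting, the comparison penalizes overfitting in a way that in-sample fit statistics do not.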
Peer reviewed
Kastner, Itamar; Adriaans, Frans – Cognitive Science, 2018
Statistical learning is often taken to lie at the heart of many cognitive tasks, including the acquisition of language. One particular task in which probabilistic models have achieved considerable success is the segmentation of speech into words. However, these models have mostly been tested against English data, and as a result little is known…
Descriptors: Role, Phonemes, Contrastive Linguistics, English
Peer reviewed
Brouwer, Harm; Crocker, Matthew W.; Venhuizen, Noortje J.; Hoeks, John C. J. – Cognitive Science, 2017
Ten years ago, researchers using event-related brain potentials (ERPs) to study language comprehension were puzzled by what looked like a "Semantic Illusion": Semantically anomalous, but structurally well-formed sentences did not affect the N400 component--traditionally taken to reflect semantic integration--but instead produced a P600…
Descriptors: Diagnostic Tests, Brain Hemisphere Functions, Language Processing, Semantics
Peer reviewed
Çöltekin, Çağrı – Cognitive Science, 2017
This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic…
Descriptors: Speech Communication, Phonemes, Prediction, Computational Linguistics
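The predictability-based segmentation strategy described above can be sketched with forward transitional probabilities between consecutive syllables, positing a word boundary wherever predictability dips. The syllable stream and threshold below are illustrative assumptions, not the study's actual materials or model:

```python
from collections import Counter

def segment_by_tp(syllables, threshold=0.75):
    """Segment a syllable stream at dips in forward transitional probability.

    TP(a -> b) = count(a, b) / count(a). A word boundary is posited before
    syllable b whenever TP(a -> b) falls below the threshold -- low
    predictability between units signals a likely lexical boundary.
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    unigram_counts = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / unigram_counts[a]
        if tp < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Hypothetical stream built from "baby", "doggy", "kitty": within-word
# transitions are fully predictable, between-word transitions are not.
stream = ["ba", "by", "dog", "gy", "ba", "by", "kit", "ty",
          "dog", "gy", "kit", "ty", "ba", "by"]
words = segment_by_tp(stream)
```

On this toy stream the within-word transitions have TP 1.0 while the between-word transitions have TP 0.5, so the dips line up exactly with the true word boundaries.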
Peer reviewed
Stevens, Jon Scott; Gleitman, Lila R.; Trueswell, John C.; Yang, Charles – Cognitive Science, 2017
We evaluate here the performance of four models of cross-situational word learning: two global models, which extract and retain multiple referential alternatives from each word occurrence; and two local models, which extract just a single referent from each occurrence. One of these local models, dubbed "Pursuit," uses an associative…
Descriptors: Semantics, Associative Learning, Probability, Computational Linguistics
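The "global" class of model described above, which retains multiple referential alternatives per word occurrence, can be sketched as a co-occurrence count learner. The scenes below are hypothetical, and this sketch is not the authors' Pursuit model (a local, single-hypothesis learner) or their specific global models:

```python
from collections import defaultdict

def cross_situational_learn(situations):
    """Accumulate word-referent co-occurrence counts across situations.

    A minimal global cross-situational learner: every referent visible in
    a scene is credited to every word heard in that scene. Each word's
    final mapping is the referent with the highest accumulated count.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in situations:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return {w: max(refs, key=refs.get) for w, refs in counts.items()}

# Hypothetical scenes pairing heard words with visible referents; no single
# scene disambiguates any word, but the aggregate counts do.
situations = [
    (["ball", "dog"], {"BALL", "DOG"}),
    (["ball", "cup"], {"BALL", "CUP"}),
    (["dog", "cup"], {"DOG", "CUP"}),
]
lexicon = cross_situational_learn(situations)
```

A local learner would instead commit to one referent per occurrence and revise only on disconfirmation, which is the contrast the abstract evaluates.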
Peer reviewed
Lau, Jey Han; Clark, Alexander; Lappin, Shalom – Cognitive Science, 2017
The question of whether humans represent grammatical knowledge as a binary condition on membership in a set of well-formed sentences, or as a probabilistic property has been the subject of debate among linguists, psychologists, and cognitive scientists for many decades. Acceptability judgments present a serious problem for both classical binary…
Descriptors: Grammar, Probability, Sentences, Language Research
Peer reviewed
Ouyang, Long; Boroditsky, Lera; Frank, Michael C. – Cognitive Science, 2017
Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of…
Descriptors: Semiotics, Computational Linguistics, Syntax, Semantics
Peer reviewed
Rafferty, Anna N.; Griffiths, Thomas L.; Klein, Dan – Cognitive Science, 2014
Analyzing the rate at which languages change can clarify whether similarities across languages are solely the result of cognitive biases or might be partially due to descent from a common ancestor. To demonstrate this approach, we use a simple model of language evolution to mathematically determine how long it should take for the distribution over…
Descriptors: Diachronic Linguistics, Models, Evolution, Language Acquisition
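The question Rafferty et al. pose, how long before the distribution over languages loses the influence of a common ancestor, can be illustrated with a simple Markov-chain sketch of transmission across generations. The two-variant setup and switching probability below are hypothetical, not the authors' actual model:

```python
def evolve(dist, transition, generations):
    """Iterate a distribution over language variants through generations.

    dist[i] is the probability a language is variant i; transition[i][j]
    is the probability a learner exposed to variant i acquires variant j.
    Repeated application washes out the ancestral starting distribution.
    """
    for _ in range(generations):
        dist = [sum(dist[i] * transition[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return dist

# Hypothetical two-variant chain with a 10% chance of switching variant
# per generation: regardless of the ancestral starting point, both runs
# converge on the same 50/50 stationary distribution.
transition = [[0.9, 0.1], [0.1, 0.9]]
from_a = evolve([1.0, 0.0], transition, 30)
from_b = evolve([0.0, 1.0], transition, 30)
```

How quickly the two runs become indistinguishable is governed by the chain's mixing rate, which is the quantity a convergence analysis of this kind computes.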
Peer reviewed
Phillips, Lawrence; Pearl, Lisa – Cognitive Science, 2015
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's "cognitive plausibility." We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition…
Descriptors: Language Acquisition, Models, Computational Linguistics, Credibility
Peer reviewed
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon – Cognitive Science, 2015
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
Descriptors: Grammar, Natural Language Processing, Computer Mediated Communication, Graphs