Showing all 7 results
Peer reviewed
Tessler, Michael Henry; Goodman, Noah D. – Cognitive Science, 2022
The meanings of natural language utterances depend heavily on context. Yet, what counts as context is often only implicit in conversation. The utterance "it's warm outside" signals that the temperature outside is relatively high, but the temperature could be high relative to a number of different "comparison classes": other…
Descriptors: Language Processing, Speech, Context Effect, Form Classes (Languages)
Peer reviewed
Lloyd, Kevin; Sanborn, Adam; Leslie, David; Lewandowsky, Stephan – Cognitive Science, 2019
Algorithms for approximate Bayesian inference, such as those based on sampling (i.e., Monte Carlo methods), provide a natural source of models of how people may deal with uncertainty with limited cognitive resources. Here, we consider the idea that individual differences in working memory capacity (WMC) may be usefully modeled in terms of the…
Descriptors: Short Term Memory, Bayesian Statistics, Cognitive Ability, Individual Differences
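The core idea in this abstract, that working memory capacity can be modeled as the number of samples available to a Monte Carlo approximation, can be illustrated with a minimal sketch (not the authors' actual model; the toy posterior and sample counts here are illustrative assumptions):

```python
import random

def monte_carlo_estimate(sampler, n_samples):
    """Approximate an expectation by averaging a limited number of samples.

    Fewer samples (a stand-in for lower working memory capacity)
    yield noisier, more variable estimates.
    """
    samples = [sampler() for _ in range(n_samples)]
    return sum(samples) / len(samples)

random.seed(0)
# Toy "posterior": a Gaussian with true mean 1.0.
posterior_sampler = lambda: random.gauss(1.0, 1.0)

low_capacity = monte_carlo_estimate(posterior_sampler, n_samples=5)
high_capacity = monte_carlo_estimate(posterior_sampler, n_samples=5000)
```

With only 5 samples the estimate can deviate substantially from the true mean, while 5000 samples land close to it; in the sampling framework, such variability maps onto individual differences in capacity.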
Peer reviewed
Danileiko, Irina; Lee, Michael D. – Cognitive Science, 2018
We apply the "wisdom of the crowd" idea to human category learning, using a simple approach that combines people's categorization decisions by taking the majority decision. We first show that the aggregated crowd category learning behavior found by this method performs well, learning categories more quickly than most or all individuals…
Descriptors: Group Experience, Classification, Learning Processes, Participative Decision Making
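The aggregation rule described here, combining individual categorization decisions by taking the majority choice, is simple enough to sketch directly (the participant responses below are hypothetical):

```python
from collections import Counter

def majority_decision(decisions):
    """Aggregate individual category choices by majority vote."""
    return Counter(decisions).most_common(1)[0][0]

# Hypothetical category choices from five participants for one stimulus:
responses = ["A", "B", "A", "A", "B"]
crowd_choice = majority_decision(responses)  # "A"
```

Applied trial by trial, this yields an aggregate "crowd learner" whose decisions can be compared against those of the individual participants.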
Peer reviewed
Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D. – Cognitive Science, 2018
Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…
Descriptors: Classification, Conditioning, Inferences, Novelty (Stimulus Dimension)
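The class-conditional independence assumption discussed in this abstract is the one underlying naive Bayes classification: the likelihood of a feature vector factorizes into per-feature likelihoods given the class. A minimal sketch, with made-up priors and likelihoods purely for illustration:

```python
def naive_bayes_posterior(priors, likelihoods, features):
    """Posterior over classes under class-conditional feature independence.

    priors: {class: P(class)}
    likelihoods: {class: {feature_index: P(feature=1 | class)}}
    features: list of binary feature values
    """
    scores = {}
    for c, prior in priors.items():
        p = prior
        for i, f in enumerate(features):
            pf = likelihoods[c][i]
            p *= pf if f == 1 else (1.0 - pf)
        scores[c] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Hypothetical two-class task with two binary features:
priors = {"A": 0.5, "B": 0.5}
likelihoods = {"A": {0: 0.9, 1: 0.8}, "B": {0: 0.2, 1: 0.3}}
post = naive_bayes_posterior(priors, likelihoods, [1, 1])  # favors "A"
```

Factorizing the likelihood this way is what simplifies the inference: the learner tracks one likelihood per feature per class rather than a joint distribution over all feature combinations.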
Peer reviewed
Jenkins, Gavin W.; Samuelson, Larissa K.; Smith, Jodi R.; Spencer, John P. – Cognitive Science, 2015
It is unclear how children learn labels for multiple overlapping categories such as "Labrador," "dog," and "animal." Xu and Tenenbaum (2007a) suggested that learners infer correct meanings with the help of Bayesian inference. They instantiated these claims in a Bayesian model, which they tested with preschoolers and…
Descriptors: Generalization, Young Children, Inferences, Models
Peer reviewed
Lee, Michael D.; Vanpaemel, Wolf – Cognitive Science, 2008
This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in…
Descriptors: Computation, Inferences, Cognitive Science, Models
Peer reviewed
Hadjichristidis, Constantinos; Sloman, Steven; Stevenson, Rosemary; Over, David – Cognitive Science, 2004
A feature is central to a concept to the extent that other features depend on it. Four studies tested the hypothesis that people will project a feature from a base concept to a target concept to the extent that they believe the feature is central to the two concepts. This centrality hypothesis implies that feature projection is guided by a…
Descriptors: Logical Thinking, Concept Formation, Inferences, Classification