Showing all 6 results
Lee, Steven Fong-yi – ProQuest LLC, 2019
In this dissertation I argue that truth-conditional semantics for vague predicates, combined with a Bayesian account of statistical inference incorporating knowledge of truth-conditions of utterances, generates false predictions regarding negations and metalinguistic inference. I thus propose a fundamentally probabilistic semantics for vagueness…
Descriptors: Semantics, Bayesian Statistics, Metalinguistics, Language Usage
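The abstract only names the ingredients of such an account, so the following is a minimal grid-based sketch of a Bayesian listener for a vague predicate like "tall" in the general probabilistic-threshold style; the priors, the 175 cm center, and the threshold rule are illustrative assumptions, not the dissertation's specific proposal.

```python
# Illustrative sketch only: a grid-based Bayesian listener for the vague
# predicate "tall" under probabilistic threshold semantics. All priors and
# parameter values are assumptions for illustration.
import numpy as np

heights = np.linspace(140, 210, 141)          # candidate heights in cm
thresholds = np.linspace(140, 210, 141)       # candidate "tall" thresholds

height_prior = np.exp(-0.5 * ((heights - 175) / 8) ** 2)
height_prior /= height_prior.sum()

threshold_prior = np.ones_like(thresholds)    # flat prior over thresholds
threshold_prior /= threshold_prior.sum()

# Truth conditions: "x is tall" is true iff height > threshold.
truth = heights[:, None] > thresholds[None, :]

# Joint posterior over (height, threshold) given the utterance "x is tall".
joint = height_prior[:, None] * threshold_prior[None, :] * truth
joint /= joint.sum()

posterior_height = joint.sum(axis=1)          # marginal belief about x's height
print("Expected height given 'tall':", (heights * posterior_height).sum())
```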
Peer reviewed
Schouwstra, Marieke; Swart, Henriëtte; Thompson, Bill – Cognitive Science, 2019
Natural languages make prolific use of conventional constituent-ordering patterns to indicate "who did what to whom," yet the mechanisms through which these regularities arise are not well understood. A series of recent experiments demonstrates that, when prompted to express meanings through silent gesture, people bypass native language…
Descriptors: Nonverbal Communication, Language Acquisition, Bayesian Statistics, Preferences
Peer reviewed
Wellwood, Alexis; Gagliardi, Annie; Lidz, Jeffrey – Language Learning and Development, 2016
Acquiring the correct meanings of words expressing quantities ("seven, most") and qualities ("red, spotty") presents a challenge to learners. Understanding how children succeed at this requires understanding not only what kinds of data are available to them, but also the biases and expectations they bring to the learning…
Descriptors: Syntax, Computational Linguistics, Task Analysis, Prediction
Boyd-Graber, Jordan – ProQuest LLC, 2010
Topic models like latent Dirichlet allocation (LDA) provide a framework for analyzing large datasets where observations are collected into groups. Although topic modeling has been fruitfully applied to problems in social science, biology, and computer vision, it has been most widely used to model datasets where documents are modeled as exchangeable…
Descriptors: Language Patterns, Semantics, Linguistics, Multilingualism
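For readers unfamiliar with LDA, here is a minimal topic-model run using scikit-learn's LatentDirichletAllocation; the four-document toy corpus and the choice of two topics are placeholders, not material from the dissertation.

```python
# Minimal LDA illustration with scikit-learn (assumes a recent version);
# the toy corpus and the two-topic setting are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "bayesian inference prior posterior likelihood",
    "posterior sampling gibbs inference model",
    "syntax semantics meaning sentence grammar",
    "grammar parsing sentence structure semantics",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)             # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)              # per-document topic proportions
print(doc_topics.round(2))

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):  # per-topic word weights
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```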
Peer reviewed
Xu, Fei; Tenenbaum, Joshua B. – Developmental Science, 2007
We report a new study testing our proposal that word learning may be best explained as an approximate form of Bayesian inference (Xu & Tenenbaum, in press). Children are capable of learning word meanings across a wide range of communicative contexts. In different contexts, learners may encounter different sampling processes generating the examples…
Descriptors: Semantics, Bayesian Statistics, Sampling, Inferences
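The point about sampling processes can be made concrete with a small likelihood comparison between strong sampling (examples drawn from the word's extension, contributing a 1/|h| factor per example) and weak sampling (examples encountered and labeled only incidentally); the hypothesis sizes below are invented for illustration, not taken from the study.

```python
# Illustration of how the sampling assumption changes the likelihood of the
# same labeled examples; the extension sizes are made up.
n_examples = 3

hypotheses = {          # hypothetical nested word meanings and the sizes
    "dalmatians": 10,   # of their extensions in the learner's world
    "dogs": 100,
    "animals": 1000,
}

for name, size in hypotheses.items():
    strong = (1.0 / size) ** n_examples  # examples drawn from the extension
    weak = 1.0                           # examples consistent, but not drawn from it
    print(f"{name:10s}  strong-sampling: {strong:.2e}   weak-sampling: {weak:.1f}")
```

Under strong sampling the more specific hypothesis is sharply favored as examples accumulate; under weak sampling every consistent hypothesis scores the same.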
Peer reviewed
Xu, Fei; Tenenbaum, Joshua B. – Psychological Review, 2007
The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word's referents, by making rational inductive inferences that integrate prior knowledge about plausible word meanings with…
Descriptors: Prior Learning, Inferences, Associative Learning, Vocabulary Development
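A compact sketch of this kind of Bayesian generalization: hypotheses are nested candidate extensions, the likelihood follows the size principle, and generalization to a new object is the posterior mass of hypotheses whose extension contains it. The taxonomy, priors, and extension sizes below are placeholders rather than the authors' stimuli.

```python
# Sketch of Bayesian generalization from a few positive examples using the
# size principle; taxonomy, priors, and extension sizes are placeholders.
examples = ["dalmatian1", "dalmatian2", "dalmatian3"]   # labeled with a novel word

hypotheses = {
    # name: (prior, extension size, objects known to be in the extension)
    "dalmatians": (0.3, 10,   {"dalmatian1", "dalmatian2", "dalmatian3", "dalmatian4"}),
    "dogs":       (0.4, 100,  {"dalmatian1", "dalmatian2", "dalmatian3",
                               "dalmatian4", "terrier1"}),
    "animals":    (0.3, 1000, {"dalmatian1", "dalmatian2", "dalmatian3",
                               "dalmatian4", "terrier1", "cat1"}),
}

posterior = {}
for name, (prior, size, extension) in hypotheses.items():
    if all(x in extension for x in examples):                     # consistency
        posterior[name] = prior * (1.0 / size) ** len(examples)   # size principle
    else:
        posterior[name] = 0.0
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

def p_in_extension(obj):
    """Probability the novel word applies to obj: posterior mass of
    hypotheses whose extension contains it."""
    return sum(p for h, p in posterior.items() if obj in hypotheses[h][2])

print(posterior)
print("dalmatian4:", p_in_extension("dalmatian4"))   # subordinate match: high
print("terrier1:  ", p_in_extension("terrier1"))     # basic-level match: lower
```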