Showing 1 to 15 of 19 results
Peer reviewed
Mahowald, Kyle; Kachergis, George; Frank, Michael C. – First Language, 2020
Ambridge calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec -- which have been extremely successful in modern natural language processing (NLP) applications -- count? Although these models often have ample parametric complexity to store exemplars from their training data, they also…
Descriptors: Models, Language Processing, Computational Linguistics, Language Acquisition
Peer reviewed
Schuler, Kathryn D.; Kodner, Jordan; Caplan, Spencer – First Language, 2020
In 'Against Stored Abstractions,' Ambridge uses neural and computational evidence to make his case against abstract representations. He argues that storing only exemplars is more parsimonious -- why bother with abstraction when exemplar models with on-the-fly calculation can do everything abstracting models can and more -- and implies that his…
Descriptors: Language Processing, Language Acquisition, Computational Linguistics, Linguistic Theory
Peer reviewed
Finley, Sara – First Language, 2020
In this commentary, I discuss why, despite the existence of gradience in phonetics and phonology, there is still a need for abstract representations. Most proponents of exemplar models assume multiple levels of abstraction, allowing for an integration of the gradient and the categorical. Ben Ambridge's dismissal of generative models such as…
Descriptors: Phonology, Phonetics, Abstract Reasoning, Linguistic Theory
Peer reviewed
Lieven, Elena; Ferry, Alissa; Theakston, Anna; Twomey, Katherine E. – First Language, 2020
During language acquisition children generalise at multiple layers of granularity. Ambridge argues that abstraction-based accounts suffer from lumping (over-general abstractions) or splitting (over-precise abstractions). Ambridge argues that the only way to overcome this conundrum is in a purely exemplar/analogy-based system in which…
Descriptors: Language Acquisition, Children, Generalization, Abstract Reasoning
Peer reviewed
Zettersten, Martin; Schonberg, Christina; Lupyan, Gary – First Language, 2020
This article reviews two aspects of human learning: (1) people draw inferences that appear to rely on hierarchical conceptual representations; (2) some categories are much easier to learn than others given the same number of exemplars, and some categories remain difficult despite extensive training. Both of these results are difficult to reconcile…
Descriptors: Models, Language Acquisition, Prediction, Language Processing
Peer reviewed
McClelland, James L. – First Language, 2020
Humans are sensitive to the properties of individual items, and exemplar models are useful for capturing this sensitivity. I am a proponent of an extension of exemplar-based architectures that I briefly describe. However, exemplar models are very shallow architectures in which it is necessary to stipulate a set of primitive elements that make up…
Descriptors: Models, Language Processing, Artificial Intelligence, Language Usage
Peer reviewed
Adger, David – First Language, 2020
The syntactic behaviour of human beings cannot be explained by analogical generalization on the basis of concrete exemplars: analogies in surface form are insufficient to account for human grammatical knowledge, because they fail to hold in situations where they should, and fail to extend in situations where they need to. [For Ben Ambridge's…
Descriptors: Syntax, Figurative Language, Models, Generalization
Peer reviewed
Hartshorne, Joshua K. – First Language, 2020
Ambridge argues that the existence of exemplar models for individual phenomena (words, inflection rules, etc.) suggests the feasibility of a unified, exemplars-everywhere model that eschews abstraction. The argument would be strengthened by a description of such a model. However, none is provided. I show that any attempt to do so would immediately…
Descriptors: Models, Language Acquisition, Language Processing, Bayesian Statistics
Peer reviewed
Messenger, Katherine; Hardy, Sophie M.; Coumel, Marion – First Language, 2020
The authors argue that Ambridge's radical exemplar account of language cannot clearly explain all syntactic priming evidence, such as inverse preference effects ("greater" priming for less frequent structures), and the contrast between short-lived lexical boost and long-lived abstract priming. Moreover, without recourse to a level of…
Descriptors: Language Acquisition, Syntax, Priming, Criticism
Peer reviewed
Chandler, Steve – First Language, 2020
Ambridge reviews and augments an impressive body of research demonstrating both the advantages and the necessity of an exemplar-based model of knowledge of one's language. He cites three computational models that have been applied successfully to issues of phonology and morphology. Focusing on Ambridge's discussion of sentence-level constructions,…
Descriptors: Models, Figurative Language, Language Processing, Language Acquisition
Peer reviewed
PDF on ERIC
Jones, Michael N. – Grantee Submission, 2018
Abstraction is a core principle of Distributional Semantic Models (DSMs) that learn semantic representations for words by applying dimensional reduction to statistical redundancies in language. Although the posited learning mechanisms vary widely, virtually all DSMs are prototype models in that they create a single abstract representation of a…
Descriptors: Abstract Reasoning, Semantics, Memory, Learning Processes
Peer reviewed
Hou, Lynn; Morford, Jill P. – First Language, 2020
The visual-manual modality of sign languages renders them a unique test case for language acquisition and processing theories. In this commentary the authors describe evidence from signed languages, and ask whether it is consistent with Ambridge's proposal. The evidence includes recent research on collocations in American Sign Language that reveal…
Descriptors: Sign Language, Phrase Structure, American Sign Language, Syntax
Peer reviewed
Ambridge, Ben – First Language, 2020
The goal of this article is to make the case for a radical exemplar account of child language acquisition, under which unwitnessed forms are produced and comprehended by on-the-fly analogy across multiple stored exemplars, weighted by their degree of similarity to the target with regard to the task at hand. Across the domains of (1) word meanings,…
Descriptors: Language Acquisition, Morphology (Languages), Phonetics, Phonology
Peer reviewed
Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel – Cognition, 2009
In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…
Descriptors: Semantics, Vocabulary Development, Semiotics, Models
Peer reviewed
Revlin, Russell; And Others – Journal of Educational Psychology, 1978
The conversion model of formal reasoning was examined for its ability to predict the decisions made by college students when solving concrete and abstract syllogisms. Results supported the model's contentions that reasoners' decisions reflect natural language processes in the encoding of syllogistic premises, and follow rationally from…
Descriptors: Abstract Reasoning, Classification, Cognitive Processes, Higher Education