Showing all 11 results
Peer reviewed
Lieven, Elena; Ferry, Alissa; Theakston, Anna; Twomey, Katherine E. – First Language, 2020
During language acquisition, children generalise at multiple levels of granularity. Ambridge argues that abstraction-based accounts suffer from either lumping (over-general abstractions) or splitting (over-precise abstractions), and that the only way to overcome this conundrum is a purely exemplar/analogy-based system in which…
Descriptors: Language Acquisition, Children, Generalization, Abstract Reasoning
Peer reviewed
McClelland, James L. – First Language, 2020
Humans are sensitive to the properties of individual items, and exemplar models are useful for capturing this sensitivity. I am a proponent of an extension of exemplar-based architectures that I briefly describe. However, exemplar models are very shallow architectures in which it is necessary to stipulate a set of primitive elements that make up…
Descriptors: Models, Language Processing, Artificial Intelligence, Language Usage
Peer reviewed
Adger, David – First Language, 2020
The syntactic behaviour of human beings cannot be explained by analogical generalization on the basis of concrete exemplars: analogies in surface form are insufficient to account for human grammatical knowledge, because they fail to hold in situations where they should, and fail to extend in situations where they need to. [For Ben Ambridge's…
Descriptors: Syntax, Figurative Language, Models, Generalization
Peer reviewed
Hartshorne, Joshua K. – First Language, 2020
Ambridge argues that the existence of exemplar models for individual phenomena (words, inflection rules, etc.) suggests the feasibility of a unified, exemplars-everywhere model that eschews abstraction. The argument would be strengthened by a description of such a model. However, none is provided. I show that any attempt to do so would immediately…
Descriptors: Models, Language Acquisition, Language Processing, Bayesian Statistics
Peer reviewed
Rose, Yvan – First Language, 2020
Ambridge's proposal cannot account for the most basic observations about phonological patterns in human languages. Outside of the earliest stages of phonological production by toddlers, the phonological systems of speakers/learners exhibit internal behaviours that point to the representation and processing of inter-related units ranging in size…
Descriptors: Phonology, Language Patterns, Toddlers, Language Processing
Peer reviewed
Mahowald, Kyle; Kachergis, George; Frank, Michael C. – First Language, 2020
Ambridge calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec -- which have been extremely successful in modern natural language processing (NLP) applications -- count? Although these models often have ample parametric complexity to store exemplars from their training data, they also…
Descriptors: Models, Language Processing, Computational Linguistics, Language Acquisition
Peer reviewed
Chandler, Steve – First Language, 2020
Ambridge reviews and augments an impressive body of research demonstrating both the advantages and the necessity of an exemplar-based model of knowledge of one's language. He cites three computational models that have been applied successfully to issues of phonology and morphology. Focusing on Ambridge's discussion of sentence-level constructions,…
Descriptors: Models, Figurative Language, Language Processing, Language Acquisition
Peer reviewed
Hou, Lynn; Morford, Jill P. – First Language, 2020
The visual-manual modality of sign languages makes them a unique test case for theories of language acquisition and processing. In this commentary the authors describe evidence from signed languages and ask whether it is consistent with Ambridge's proposal. The evidence includes recent research on collocations in American Sign Language that reveal…
Descriptors: Sign Language, Phrase Structure, American Sign Language, Syntax
Peer reviewed
Schuler, Kathryn D.; Kodner, Jordan; Caplan, Spencer – First Language, 2020
In 'Against Stored Abstractions,' Ambridge uses neural and computational evidence to make his case against abstract representations. He argues that storing only exemplars is more parsimonious -- why bother with abstraction when exemplar models with on-the-fly calculation can do everything abstracting models can and more -- and implies that his…
Descriptors: Language Processing, Language Acquisition, Computational Linguistics, Linguistic Theory
Peer reviewed
Knabe, Melina L.; Vlach, Haley A. – First Language, 2020
Ambridge argues that there is widespread agreement among child language researchers that learners store linguistic abstractions. In this commentary the authors first argue that this assumption is incorrect; anti-representationalist/exemplar views are pervasive in theories of child language. Next, the authors outline what has been learned from this…
Descriptors: Child Language, Children, Language Acquisition, Models
Peer reviewed
Ambridge, Ben – First Language, 2020
The goal of this article is to make the case for a radical exemplar account of child language acquisition, under which unwitnessed forms are produced and comprehended by on-the-fly analogy across multiple stored exemplars, weighted by their degree of similarity to the target with regard to the task at hand. Across the domains of (1) word meanings,…
Descriptors: Language Acquisition, Morphology (Languages), Phonetics, Phonology