Showing 1 to 15 of 28 results
Peer reviewed
Belmon, Johanne; Noyer-Martin, Magali; Jhean-Larose, Sandra – First Language, 2024
The relationship between emotion and language in children is an emerging field of research. To carry out this type of study, researchers need to precisely manipulate the emotional parameters of the words in their experimental material. However, the number of affective norms for words in this population is still limited. To fill this gap, the…
Descriptors: Language Acquisition, Child Language, Correlation, Emotional Response
Peer reviewed
De Cat, Cécile – First Language, 2022
The development of the Multilingual Assessment Instrument for Narratives (MAIN) has no doubt contributed to prompting a renewed interest in children's narratives. This carefully controlled test of narrative abilities elicits a rich set of measures spanning multiple linguistic domains and their interaction, including lexis, morphosyntax,…
Descriptors: Multilingualism, Narration, Measurement Techniques, Morphology (Languages)
Peer reviewed
Ambridge, Ben – First Language, 2020
In this response to commentators, I agree with those who suggested that the distinction between exemplar- and abstraction-based accounts is something of a false dichotomy and therefore move to an abstractions-made-of-exemplars account under which (a) we store all the exemplars that we hear (subject to attention, decay, interference, etc.) but (b)…
Descriptors: Language Acquisition, Syntax, Computational Linguistics, Language Research
Peer reviewed
MacWhinney, Brian – First Language, 2020
Ambridge argues persuasively for the importance in language learning of a rich database of input exemplars. However, a fuller account must also consider the importance of on-line and developmental competition between rote exemplar-based storage and emergent patterns that can optimize retrieval. [For Ben Ambridge's "Against Stored…
Descriptors: Competition, Language Acquisition, Rote Learning, Models
Peer reviewed
Koring, Loes; Giblin, Iain; Thornton, Rosalind; Crain, Stephen – First Language, 2020
This response argues against the proposal that novel utterances are formed by analogy with stored exemplars that are close in meaning. Strings of words that are similar in meaning or even identical can behave very differently once inserted into different syntactic environments. Furthermore, phrases with similar meanings but different underlying…
Descriptors: Language Acquisition, Figurative Language, Syntax, Phrase Structure
Peer reviewed
Rose, Yvan – First Language, 2020
Ambridge's proposal cannot account for the most basic observations about phonological patterns in human languages. Outside of the earliest stages of phonological production by toddlers, the phonological systems of speakers/learners exhibit internal behaviours that point to the representation and processing of inter-related units ranging in size…
Descriptors: Phonology, Language Patterns, Toddlers, Language Processing
Peer reviewed
Naigles, Letitia R. – First Language, 2020
This commentary critiques Ambridge's radical exemplar model of language acquisition using research from the Longitudinal Study of Early Language, which has tracked the language development of 30+ children with Autism Spectrum Disorders (ASD) since 2002. This research has demonstrated that the children's capacity for abstraction at the grammatical…
Descriptors: Language Acquisition, Longitudinal Studies, Grammar, Models
Peer reviewed
Hartshorne, Joshua K. – First Language, 2020
Ambridge argues that the existence of exemplar models for individual phenomena (words, inflection rules, etc.) suggests the feasibility of a unified, exemplars-everywhere model that eschews abstraction. The argument would be strengthened by a description of such a model. However, none is provided. I show that any attempt to do so would immediately…
Descriptors: Models, Language Acquisition, Language Processing, Bayesian Statistics
Peer reviewed
Lieven, Elena; Ferry, Alissa; Theakston, Anna; Twomey, Katherine E. – First Language, 2020
During language acquisition children generalise at multiple layers of granularity. Ambridge argues that abstraction-based accounts suffer from lumping (over-general abstractions) or splitting (over-precise abstractions). Ambridge argues that the only way to overcome this conundrum is in a purely exemplar/analogy-based system in which…
Descriptors: Language Acquisition, Children, Generalization, Abstract Reasoning
Peer reviewed
Mahowald, Kyle; Kachergis, George; Frank, Michael C. – First Language, 2020
Ambridge calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec -- which have been extremely successful in modern natural language processing (NLP) applications -- count? Although these models often have ample parametric complexity to store exemplars from their training data, they also…
Descriptors: Models, Language Processing, Computational Linguistics, Language Acquisition
Peer reviewed
Chandler, Steve – First Language, 2020
Ambridge reviews and augments an impressive body of research demonstrating both the advantages and the necessity of an exemplar-based model of knowledge of one's language. He cites three computational models that have been applied successfully to issues of phonology and morphology. Focusing on Ambridge's discussion of sentence-level constructions,…
Descriptors: Models, Figurative Language, Language Processing, Language Acquisition
Peer reviewed
Zettersten, Martin; Schonberg, Christina; Lupyan, Gary – First Language, 2020
This article reviews two aspects of human learning: (1) people draw inferences that appear to rely on hierarchical conceptual representations; (2) some categories are much easier to learn than others given the same number of exemplars, and some categories remain difficult despite extensive training. Both of these results are difficult to reconcile…
Descriptors: Models, Language Acquisition, Prediction, Language Processing
Peer reviewed
McClelland, James L. – First Language, 2020
Humans are sensitive to the properties of individual items, and exemplar models are useful for capturing this sensitivity. I am a proponent of an extension of exemplar-based architectures that I briefly describe. However, exemplar models are very shallow architectures in which it is necessary to stipulate a set of primitive elements that make up…
Descriptors: Models, Language Processing, Artificial Intelligence, Language Usage
Peer reviewed
Schuler, Kathryn D.; Kodner, Jordan; Caplan, Spencer – First Language, 2020
In 'Against Stored Abstractions,' Ambridge uses neural and computational evidence to make his case against abstract representations. He argues that storing only exemplars is more parsimonious -- why bother with abstraction when exemplar models with on-the-fly calculation can do everything abstracting models can and more -- and implies that his…
Descriptors: Language Processing, Language Acquisition, Computational Linguistics, Linguistic Theory
Peer reviewed
Adger, David – First Language, 2020
The syntactic behaviour of human beings cannot be explained by analogical generalization on the basis of concrete exemplars: analogies in surface form are insufficient to account for human grammatical knowledge, because they fail to hold in situations where they should, and fail to extend in situations where they need to. [For Ben Ambridge's…
Descriptors: Syntax, Figurative Language, Models, Generalization