Dadi Ramesh; Suresh Kumar Sanampudi – European Journal of Education, 2024
Automatic essay scoring (AES) is an essential educational application of natural language processing. Automating the process alleviates the grading burden while increasing the reliability and consistency of assessment. With advances in text-embedding libraries and neural network models, AES systems have achieved good results in terms of accuracy.…
Descriptors: Scoring, Essays, Writing Evaluation, Memory
Dragos Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Mahowald, Kyle; Kachergis, George; Frank, Michael C. – First Language, 2020
Ambridge calls for exemplar-based accounts of language acquisition. Do modern neural networks such as transformers or word2vec -- which have been extremely successful in modern natural language processing (NLP) applications -- count? Although these models often have ample parametric complexity to store exemplars from their training data, they also…
Descriptors: Models, Language Processing, Computational Linguistics, Language Acquisition
Schuler, Kathryn D.; Kodner, Jordan; Caplan, Spencer – First Language, 2020
In 'Against Stored Abstractions,' Ambridge uses neural and computational evidence to make his case against abstract representations. He argues that storing only exemplars is more parsimonious -- why bother with abstraction when exemplar models with on-the-fly calculation can do everything abstracting models can and more -- and implies that his…
Descriptors: Language Processing, Language Acquisition, Computational Linguistics, Linguistic Theory
Zhurkenovich, Saurbayev Rishat; Kozhamuratkyzy, Zhetpisbay Aliya; Khatipovna, Demessinova Galina; Tasbulatovna, Kulbayeva Baglan; Aisovich, Vafeev Ravil – Arab World English Journal, 2021
The article studies the principles of linguistic economy in modern English word formation. The most productive word-formation processes are highlighted, illustrating the language's tendency to compress nominative units. In the English word-formation system, the most effective means of economizing speech are affixal word formation,…
Descriptors: Language Styles, English, Morphemes, Vocabulary
Divjak, Dagmar; Milin, Petar; Medimorec, Srdan; Borowski, Maciej – Cognitive Science, 2022
Although there is a broad consensus that both the procedural and declarative memory systems play a crucial role in language learning, use, and knowledge, the mapping between linguistic types and memory structures remains underspecified: by default, a dual-route mapping of language systems to memory systems is assumed, with declarative memory…
Descriptors: Memory, Grammar, Vocabulary Development, Language Processing
Corlatescu, Dragos-Georgian; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2021
Reading comprehension is key to knowledge acquisition and to reinforcing memory for previous information. While reading, a mental representation is constructed in the reader's mind. The mental model comprises the words in the text, the relations between the words, and inferences linking to concepts in prior knowledge. The automated model of…
Descriptors: Reading Comprehension, Memory, Inferences, Syntax
Messenger, Katherine; Hardy, Sophie M.; Coumel, Marion – First Language, 2020
The authors argue that Ambridge's radical exemplar account of language cannot clearly explain all syntactic priming evidence, such as inverse preference effects ("greater" priming for less frequent structures), and the contrast between short-lived lexical boost and long-lived abstract priming. Moreover, without recourse to a level of…
Descriptors: Language Acquisition, Syntax, Priming, Criticism
Brooks, Patricia J.; Kempe, Vera – First Language, 2020
The radical exemplar model resonates with work on perceptual classification and categorization highlighting the role of exemplars in memory representations. Further development of the model requires acknowledgment of both the fleeting and fragile nature of perceptual representations and the gist-based, good-enough quality of long-term memory…
Descriptors: Models, Language Acquisition, Classification, Memory
Johns, Brendan T.; Jones, Michael N.; Mewhort, D. J. K. – Grantee Submission, 2019
To account for natural variability in cognitive processing, it is standard practice to optimize a model's parameters by fitting it to behavioral data. Although most language-related theories acknowledge a large role for experience in language processing, variability reflecting that knowledge is usually ignored when evaluating a model's fit to…
Descriptors: Language Processing, Models, Information Sources, Linguistics
Jones, Michael N. – Grantee Submission, 2018
Abstraction is a core principle of Distributional Semantic Models (DSMs) that learn semantic representations for words by applying dimensional reduction to statistical redundancies in language. Although the posited learning mechanisms vary widely, virtually all DSMs are prototype models in that they create a single abstract representation of a…
Descriptors: Abstract Reasoning, Semantics, Memory, Learning Processes
Brouwer, Harm; Crocker, Matthew W.; Venhuizen, Noortje J.; Hoeks, John C. J. – Cognitive Science, 2017
Ten years ago, researchers using event-related brain potentials (ERPs) to study language comprehension were puzzled by what looked like a "Semantic Illusion": Semantically anomalous, but structurally well-formed sentences did not affect the N400 component--traditionally taken to reflect semantic integration--but instead produced a P600…
Descriptors: Diagnostic Tests, Brain Hemisphere Functions, Language Processing, Semantics
Ambridge, Ben – First Language, 2020
The goal of this article is to make the case for a radical exemplar account of child language acquisition, under which unwitnessed forms are produced and comprehended by on-the-fly analogy across multiple stored exemplars, weighted by their degree of similarity to the target with regard to the task at hand. Across the domains of (1) word meanings,…
Descriptors: Language Acquisition, Morphology (Languages), Phonetics, Phonology
Heyselaar, Evelien; Wheeldon, Linda; Segaert, Katrien – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2021
Structural priming is the tendency to repeat syntactic structure across sentences and can be divided into short-term (prime to immediately following target) and long-term (across an experimental session) components. This study investigates how nondeclarative memory could support both the transient, short-term and the persistent, long-term…
Descriptors: Priming, Memory, Short Term Memory, Perception
Jones, Michael N.; Dye, Melody; Johns, Brendan T. – Grantee Submission, 2017
Classic accounts of lexical organization posit that humans are sensitive to environmental frequency, suggesting a mechanism for word learning based on repetition. However, a recent spate of evidence has revealed that it is not simply frequency but the diversity and distinctiveness of contexts in which a word occurs that drives lexical…
Descriptors: Word Frequency, Vocabulary Development, Context Effect, Semantics