Showing 1 to 15 of 54 results
Peer reviewed
Q. Feltgen; G. Cislaru – Discourse Processes: A Multidisciplinary Journal, 2025
The broader aim of this study is the corpus-based investigation of the written language production process. To this end, temporal markers were keylog-recorded alongside the writing process so that pauses could be used to segment the written product into linear units of performance. However, identifying these pauses requires selecting the relevant…
Descriptors: Writing Processes, Writing Skills, Written Language, Intervals
Peer reviewed
Nathan Lowien; Damon P. Thomas – Australian Journal of Language and Literacy, 2025
Cognitive-informed reading education research utilises models that are underpinned by the notion that reading is a mental process of word recognition multiplied by language comprehension. Examples of these models include the Simple View of Reading, the Cognitive Foundations Framework, the Reading Rope and the Active Model of Reading. These models…
Descriptors: Reading Research, Reading Instruction, Reading Processes, Word Recognition
Peer reviewed
Gerald Gartlehner; Leila Kahwati; Rainer Hilscher; Ian Thomas; Shannon Kugley; Karen Crotty; Meera Viswanathan; Barbara Nussbaumer-Streit; Graham Booth; Nathaniel Erskine; Amanda Konet; Robert Chew – Research Synthesis Methods, 2024
Data extraction is a crucial, yet labor-intensive and error-prone part of evidence synthesis. To date, efforts to harness machine learning for enhancing efficiency of the data extraction process have fallen short of achieving sufficient accuracy and usability. With the release of large language models (LLMs), new possibilities have emerged to…
Descriptors: Data Collection, Evidence, Synthesis, Language Processing
Peer reviewed
John Hollander; Andrew Olney – Cognitive Science, 2024
Recent investigations on how people derive meaning from language have focused on task-dependent shifts between two cognitive systems. The symbolic (amodal) system represents meaning as the statistical relationships between words. The embodied (modal) system represents meaning through neurocognitive simulation of perceptual or sensorimotor systems…
Descriptors: Verbs, Symbolic Language, Language Processing, Semantics
Peer reviewed
Duncan Gillard; Sarah Cassidy; Ben Anderson – Educational Psychology in Practice, 2025
B. F. Skinner's work in the field of verbal behaviour represented a movement of global significance. However, in today's age, even those who appreciate its profound importance in the archives of psychology accept that it did not sufficiently account for complex human language. Recent advances in psychological science have led to the emergence of a…
Descriptors: Educational Psychology, Behavior Theories, Mental Health, Models
Peer reviewed
Abu-Zhaya, Rana; Arnon, Inbal; Borovsky, Arielle – Cognitive Science, 2022
Meaning in language emerges from multiple words, and children are sensitive to multi-word frequency from infancy. While children successfully use cues from single words to generate linguistic predictions, it is less clear whether and how they use multi-word sequences to guide real-time language processing and whether they form predictions on the…
Descriptors: Sentences, Language Processing, Semantics, Prediction
Peer reviewed
Stefan E. Huber; Kristian Kiili; Steve Nebel; Richard M. Ryan; Michael Sailer; Manuel Ninaus – Educational Psychology Review, 2024
This perspective piece explores the transformative potential and associated challenges of large language models (LLMs) in education and how those challenges might be addressed utilizing playful and game-based learning. While providing many opportunities, the stochastic elements incorporated in how present LLMs process text require domain…
Descriptors: Artificial Intelligence, Language Processing, Models, Play
Nika Jurov – ProQuest LLC, 2024
Speech is a complex, redundant and variable signal happening in a noisy and ever changing world. How do listeners navigate these complex auditory scenes and continuously and effortlessly understand most of the speakers around them? Studies show that listeners can quickly adapt to new situations, accents and even to distorted speech. Although prior…
Descriptors: Models, Auditory Perception, Speech Communication, Cognitive Processes
Peer reviewed
Stephen J. Lupker; Giacomo Spinelli – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
Rastle et al. (2004) reported that true (e.g., walker) and pseudo (e.g., corner) multi-morphemic words prime their stem words more than form controls do (e.g., brothel priming BROTH) in a masked priming lexical decision task. This data pattern has led a number of models to propose that both of these word types are "decomposed" into…
Descriptors: Models, Morphemes, Priming, Vocabulary
Peer reviewed
Tal Ness; Valerie J. Langlois; Albert E. Kim; Jared M. Novick – Perspectives on Psychological Science, 2025
Understanding language requires readers and listeners to cull meaning from fast-unfolding messages that often contain conflicting cues pointing to incompatible ways of interpreting the input (e.g., "The cat was chased by the mouse"). This article reviews mounting evidence from multiple methods demonstrating that cognitive control plays…
Descriptors: Cognitive Ability, Language Processing, Psycholinguistics, Cues
Huteng Dai – ProQuest LLC, 2024
In this dissertation, I establish a research program that uses computational modeling as a testbed for theories of phonological learning. This dissertation focuses on a fundamental question: how do children acquire sound patterns from noisy, real-world data, especially in the presence of lexical exceptions that defy regular patterns? For instance,…
Descriptors: Phonology, Language Acquisition, Computational Linguistics, Linguistic Theory
Peer reviewed
Brehm, Laurel; Cho, Pyeong Whan; Smolensky, Paul; Goldrick, Matthew A. – Cognitive Science, 2022
Subject-verb agreement errors are common in sentence production. Many studies have used experimental paradigms targeting the production of subject-verb agreement from a sentence preamble ("The key to the cabinets") and eliciting verb errors (… "*were shiny"). Through reanalysis of previous data (50 experiments; 102,369…
Descriptors: Sentences, Sentence Structure, Grammar, Verbs
Peer reviewed
Stanojevic, Miloš; Brennan, Jonathan R.; Dunagan, Donald; Steedman, Mark; Hale, John T. – Cognitive Science, 2023
To model behavioral and neural correlates of language comprehension in naturalistic environments, researchers have turned to broad-coverage tools from natural-language processing and machine learning. Where syntactic structure is explicitly modeled, prior work has relied predominantly on context-free grammars (CFGs), yet such formalisms are not…
Descriptors: Correlation, Language Processing, Brain Hemisphere Functions, Natural Language Processing
Peer reviewed
Full text PDF available on ERIC
Mohsen Dolatabadi – Australian Journal of Applied Linguistics, 2023
Many datasets of participant ratings for word norms, including concreteness ratings, are available. However, concreteness information for infrequent words and non-words is rare. This work proposes a model for estimating the concreteness of infrequent and novel lexical items. Here, we used the Lancaster Sensorimotor Norms to predict…
Descriptors: Prediction, Validity, Models, Computational Linguistics
Peer reviewed
Full text PDF available on ERIC
Paul Meara; Imma Miralpeix – Vocabulary Learning and Instruction, 2025
This paper is part 5 of a series of workshops that examines the properties of some simple models of vocabulary networks. While previous workshops dealt with activating words in the network, this workshop focuses on vocabulary loss. We will simulate two possible ways of modelling attrition: (a) explicitly turning active words OFF, and (b) raising…
Descriptors: Vocabulary Development, Workshops, Models, Networks