Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 3
  Since 2006 (last 20 years): 3

Descriptor
  Classification: 3
  Memory: 3
  Natural Language Processing: 3
  Abstract Reasoning: 2
  Artificial Intelligence: 2
  Models: 2
  Vocabulary Development: 2
  Automation: 1
  Brain Hemisphere Functions: 1
  Child Language: 1
  Children: 1
Source
  Grantee Submission: 2
  First Language: 1

Author
  Baker, Doris Luft: 1
  Caplan, Spencer: 1
  Collazo, Marlen: 1
  Jones, Michael N.: 1
  Kamata, Akihito: 1
  Kodner, Jordan: 1
  Le, Nancy: 1
  Sano, Makoto: 1
  Schuler, Kathryn D.: 1

Publication Type
  Journal Articles: 2
  Reports - Research: 2
  Reports - Evaluative: 1

Education Level
  Early Childhood Education: 1
  Elementary Education: 1
  Grade 2: 1
  Primary Education: 1
Jones, Michael N. – Grantee Submission, 2018
Abstraction is a core principle of Distributional Semantic Models (DSMs) that learn semantic representations for words by applying dimensional reduction to statistical redundancies in language. Although the posited learning mechanisms vary widely, virtually all DSMs are prototype models in that they create a single abstract representation of a…
Descriptors: Abstract Reasoning, Semantics, Memory, Learning Processes
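The abstract describes DSMs as prototype models: dimensionality reduction over co-occurrence statistics collapses every context a word appears in into one abstract vector per word. A minimal sketch of that idea (LSA-style truncated SVD over a toy co-occurrence matrix; the corpus, window size, and rank here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Toy corpus: each sentence is a list of tokens (illustrative, not from the paper).
corpus = [
    "the dog chased the cat".split(),
    "the cat chased the mouse".split(),
    "a dog bit a mailman".split(),
    "the mouse ate the cheese".split(),
]

# Word-by-word co-occurrence counts within a symmetric window of 2 words.
vocab = sorted({w for sent in corpus for w in sent})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                counts[index[w], index[sent[j]]] += 1

# Dimensionality reduction via truncated SVD: each word collapses to a single
# low-dimensional prototype vector abstracting over all of its contexts.
U, S, _ = np.linalg.svd(counts)
k = 3
vectors = U[:, :k] * S[:k]

def similarity(a, b):
    va, vb = vectors[index[a]], vectors[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Words sharing contexts ("cat", "mouse") land closer than unrelated pairs.
print(similarity("cat", "mouse"), similarity("cat", "cheese"))
```

The prototype character is the point: after the SVD, the individual contexts are gone and only the single abstracted vector per word remains.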
Schuler, Kathryn D.; Kodner, Jordan; Caplan, Spencer – First Language, 2020
In 'Against Stored Abstractions,' Ambridge uses neural and computational evidence to make his case against abstract representations. He argues that storing only exemplars is more parsimonious -- why bother with abstraction when exemplar models with on-the-fly calculation can do everything abstracting models can and more -- and implies that his…
Descriptors: Language Processing, Language Acquisition, Computational Linguistics, Linguistic Theory
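The exemplar-vs-abstraction contrast at issue can be made concrete with a toy category-learning sketch: a prototype model stores one abstract summary per category, while an exemplar model stores every instance and computes similarity on the fly (GCM-style exponential similarity). The data and parameters below are hypothetical, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy categories of 2-D "stimuli" (hypothetical data).
cat_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
cat_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(20, 2))

def prototype_classify(x):
    """Abstraction-based: compare x to one stored summary per category."""
    protos = {"A": cat_a.mean(axis=0), "B": cat_b.mean(axis=0)}
    return min(protos, key=lambda c: np.linalg.norm(x - protos[c]))

def exemplar_classify(x, c=1.0):
    """Exemplar-based: no stored abstraction; sum similarity to every
    remembered instance at decision time (GCM-style)."""
    sims = {
        "A": np.exp(-c * np.linalg.norm(cat_a - x, axis=1)).sum(),
        "B": np.exp(-c * np.linalg.norm(cat_b - x, axis=1)).sum(),
    }
    return max(sims, key=sims.get)

x = np.array([0.4, -0.2])
print(prototype_classify(x), exemplar_classify(x))
```

On clean, well-separated data the two models agree, which is exactly why the debate turns on parsimony and edge cases rather than on easy classifications.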
Sano, Makoto; Baker, Doris Luft; Collazo, Marlen; Le, Nancy; Kamata, Akihito – Grantee Submission, 2020
Purpose: Explore how reliably different automated scoring (AS) models score the expressive language and in-depth vocabulary knowledge of young second-grade Latino English learners. Design/methodology/approach: Analyze a total of 13,471 English utterances from 217 Latino English learners with random forest, end-to-end memory networks, long…
Descriptors: English Language Learners, Hispanic American Students, Elementary School Students, Grade 2
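The study compares several automated-scoring learners (random forests, memory networks, and others). As a minimal illustration of the general pipeline only — featurize an utterance, then map features to a score — here is a bag-of-words nearest-centroid scorer on invented toy data; this is not the study's data, features, or any of its models:

```python
import numpy as np

# Hypothetical toy data: utterances hand-labeled 0 (low) or 1 (high) for
# expressive-language quality. Purely illustrative.
train = [
    ("the dog run", 0),
    ("dog go", 0),
    ("the dog is running quickly through the park", 1),
    ("she carefully placed the book on the shelf", 1),
]

vocab = sorted({w for text, _ in train for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    # Bag-of-words counts plus utterance length, a crude stand-in for
    # real expressive-language features.
    v = np.zeros(len(vocab) + 1)
    toks = text.split()
    for w in toks:
        if w in index:
            v[index[w]] += 1
    v[-1] = len(toks)
    return v

# Nearest-centroid scorer: one mean feature vector per score class.
X = np.array([featurize(t) for t, _ in train])
y = np.array([s for _, s in train])
centroids = {s: X[y == s].mean(axis=0) for s in np.unique(y)}

def score(text):
    v = featurize(text)
    return min(centroids, key=lambda s: np.linalg.norm(v - centroids[s]))

print(score("dog run fast"))
print(score("the children are playing happily in the garden"))
```

Swapping the nearest-centroid step for a random forest or a memory network changes the learner, not the pipeline shape, which is the comparison the study performs at scale.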