Publication Date
| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 3 |
Descriptor
| Descriptor | Results |
| --- | --- |
| Written Language | 3 |
| Computational Linguistics | 2 |
| Contrastive Linguistics | 2 |
| Language Processing | 2 |
| Models | 2 |
| Anatomy | 1 |
| Automation | 1 |
| Chinese | 1 |
| Cognitive Mapping | 1 |
| Computer Software | 1 |
| English (Second Language) | 1 |
Source
| Source | Results |
| --- | --- |
| Grantee Submission | 3 |
Author
| Author | Results |
| --- | --- |
| Andrew M. Olney | 1 |
| Arthur C. Graesser | 1 |
| Danielle S. McNamara | 1 |
| David C. Plaut | 1 |
| Laura K. Allen | 1 |
| Patience Stevens | 1 |
Publication Type
| Publication Type | Results |
| --- | --- |
| Journal Articles | 1 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
| Reports - Research | 1 |
| Speeches/Meeting Papers | 1 |
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide a wealth of information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, which deters researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
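The automated textual analysis this abstract describes can be sketched minimally as feature extraction over free responses. The feature set below (word count, type-token ratio, mean word length) is a hypothetical illustration of the general approach, not the authors' method.

```python
# A minimal sketch of automated textual analysis: scoring an open-ended
# response with simple lexical features instead of human raters.
# The specific features are hypothetical, chosen only for illustration.
import re

def lexical_features(text: str) -> dict:
    """Compute simple lexical features for an open-ended response."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = len(tokens)
    return {
        "word_count": n,
        "type_token_ratio": len(set(tokens)) / n if n else 0.0,
        "mean_word_length": sum(map(len, tokens)) / n if n else 0.0,
    }

feats = lexical_features("The cat sat on the mat because the cat was tired.")
```

In practice such features would feed a scoring model validated against human ratings; the sketch only shows the automation step that removes hand scoring from the loop.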
Patience Stevens; David C. Plaut – Grantee Submission, 2022
The morphological structure of complex words impacts how they are processed during visual word recognition. This impact varies over the course of reading acquisition and across languages and writing systems. Many theories of morphological processing rely on a decomposition mechanism, in which words are decomposed into explicit…
Descriptors: Written Language, Morphology (Languages), Word Recognition, Reading Processes
Andrew M. Olney – Grantee Submission, 2021
This paper explores a general approach to paraphrase generation using a pre-trained seq2seq model fine-tuned using a back-translated anatomy and physiology textbook. Human ratings indicate that the paraphrase model generally preserved meaning and grammaticality/fluency: 70% of meaning ratings were above 75, and 40% of paraphrases were considered…
Descriptors: Translation, Language Processing, Error Analysis (Language), Grammar
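The back-translation step behind this paraphrase data can be sketched as a round trip: translate a sentence into a pivot language and back, then pair the result with the original as (input, paraphrase) training data. The word-level lookup "translators" below are hypothetical stand-ins for the real machine-translation models a fine-tuning pipeline would use.

```python
# A minimal sketch of back-translation for paraphrase data: round-trip a
# sentence through a pivot language and pair the result with the original.
# The lookup tables are toy stand-ins for actual MT models.
PIVOT = {"heart": "coeur", "pumps": "pompe", "blood": "sang"}
BACK = {"coeur": "the heart", "pompe": "pushes", "sang": "blood"}

def translate(sentence: str, table: dict) -> str:
    """Word-by-word lookup translation; unknown words pass through."""
    return " ".join(table.get(w, w) for w in sentence.split())

def back_translate(sentence: str) -> tuple[str, str]:
    """Return (original, round-trip paraphrase) as a training pair."""
    pivot = translate(sentence, PIVOT)
    return sentence, translate(pivot, BACK)

pair = back_translate("heart pumps blood")
```

Pairs like these would then fine-tune a pretrained seq2seq model so it learns to generate the paraphrase side directly from the original side.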
