Publication Date
In 2025 | 0
Since 2024 | 1
Since 2021 (last 5 years) | 2
Since 2016 (last 10 years) | 4
Since 2006 (last 20 years) | 4
Descriptor
Language Processing | 4
Language Usage | 4
Computational Linguistics | 3
Computer Software | 2
Contrastive Linguistics | 2
English (Second Language) | 2
Models | 2
Science Instruction | 2
Teaching Methods | 2
Accuracy | 1
Anatomy | 1
Source
Grantee Submission | 4
Author
Allen, Laura K. | 1
Banjade, Rabin | 1
Chapagain, Jeevan | 1
Hassany, Mohammad | 1
Klauda, Susan Lutz | 1
Lekshmi-Narayanan, Arun-Balajiee | 1
McNamara, Danielle S. | 1
Mills, Caitlin | 1
Oli, Priti | 1
Olney, Andrew M. | 1
Perret, Cecile | 1
Publication Type
Reports - Research | 4
Speeches/Meeting Papers | 3
Tests/Questionnaires | 1
Education Level
Elementary Education | 1
Grade 4 | 1
Grade 5 | 1
Higher Education | 1
Intermediate Grades | 1
Middle Schools | 1
Postsecondary Education | 1
Assessments and Surveys
Flesch Reading Ease Formula | 1

Arun-Balajiee Lekshmi-Narayanan; Priti Oli; Jeevan Chapagain; Mohammad Hassany; Rabin Banjade; Vasile Rus – Grantee Submission, 2024
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide…
Descriptors: Coding, Computer Science Education, Computational Linguistics, Artificial Intelligence

Allen, Laura K.; Mills, Caitlin; Perret, Cecile; McNamara, Danielle S. – Grantee Submission, 2019
This study examines the extent to which instructions to self-explain vs. "other"-explain a text lead readers to produce different forms of explanations. Natural language processing was used to examine the content and characteristics of the explanations produced as a function of instruction condition. Undergraduate students (n = 146)…
Descriptors: Language Processing, Science Instruction, Computational Linguistics, Teaching Methods

Olney, Andrew M. – Grantee Submission, 2021
This paper explores a general approach to paraphrase generation using a pre-trained seq2seq model fine-tuned using a back-translated anatomy and physiology textbook. Human ratings indicate that the paraphrase model generally preserved meaning and grammaticality/fluency: 70% of meaning ratings were above 75, and 40% of paraphrases were considered…
Descriptors: Translation, Language Processing, Error Analysis (Language), Grammar
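
Since this entry names a concrete technique, a brief illustration may help: the following is a minimal Python sketch of back-translation (English to a pivot language and back), the data-generation step the abstract describes, assuming the Hugging Face transformers library. The Helsinki-NLP checkpoints are real public models, but their use here, along with the example sentence, is an illustrative assumption rather than the paper's exact setup.

    # Minimal back-translation sketch: English -> German -> English.
    # An assumption-laden illustration, not the paper's actual pipeline.
    from transformers import pipeline

    en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
    de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

    sentence = "The heart pumps oxygenated blood to the body's tissues."
    pivot = en_de(sentence)[0]["translation_text"]    # English -> German
    paraphrase = de_en(pivot)[0]["translation_text"]  # German -> English
    print(paraphrase)  # a near-paraphrase of the original sentence

As the abstract indicates, pairs of original and back-translated sentences like these can then serve as fine-tuning data for a seq2seq paraphrase model.
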
Taboada Barber, Ana; Klauda, Susan Lutz; Stapleton, Laura – Grantee Submission, 2020
Previous studies offer mixed evidence regarding whether a unified model of reading comprehension predictors applies to Dual Language Learners (DLLs) and English Speakers (ESs), or whether distinctive models across language groups are empirically supported. The present study adds another dimension to this body of work by examining multiple reading…
Descriptors: Reading Comprehension, Bilingualism, Reading Motivation, Predictor Variables