Publication Date
In 2025 (0)
Since 2024 (1)
Since 2021, last 5 years (3)
Since 2016, last 10 years (6)
Since 2006, last 20 years (6)
Source
Grantee Submission (6)
Author
Allen, Laura K. (1)
Almoubayyed, Husni (1)
Balyan, Renu (1)
Crossley, Scott A. (1)
De Ley, Logan (1)
Karter, Andrew J. (1)
Liu, Jennifer Y. (1)
McNamara, Danielle S. (4)
Murphy, April (1)
Norberg, Kole A. (1)
Weldon, Kyle (1)
Publication Type
Reports - Research (6)
Speeches/Meeting Papers (1)
Tests/Questionnaires (1)
Education Level
High Schools (2)
Secondary Education (2)
Elementary Secondary Education (1)
Higher Education (1)
Postsecondary Education (1)
Audience
Researchers (1)
Teachers (1)
Location
California (2)
Idaho (1)
Oklahoma (1)
Assessments and Surveys
Flesch Reading Ease Formula (3)
Flesch Kincaid Grade Level… (1)
Gates MacGinitie Reading Tests (1)
Nelson Denny Reading Tests (1)
Olney, Andrew M. – Grantee Submission, 2022
Cloze items are a foundational approach to assessing readability. However, they require human data collection, which makes them impractical for automated readability metrics. The present study revisits the idea of assessing readability with cloze items and compares human cloze scores and readability judgments with predictions made by T5, a popular deep…
Descriptors: Readability, Cloze Procedure, Scores, Prediction
Kole A. Norberg; Husni Almoubayyed; Logan De Ley; April Murphy; Kyle Weldon; Steve Ritter – Grantee Submission, 2024
Large language models (LLMs) offer an opportunity to make large-scale changes to educational content that would otherwise be too costly to implement. The work here highlights how LLMs (in particular GPT-4) can be prompted to revise educational math content so that it is ready for large-scale deployment in real-world learning environments. We tested the ability…
Descriptors: Artificial Intelligence, Computer Software, Computational Linguistics, Educational Change
Wang, Zuowei; O'Reilly, Tenaha; Sabatini, John; McCarthy, Kathryn S.; McNamara, Danielle S. – Grantee Submission, 2021
We compared high school students' performance in a traditional comprehension assessment requiring them to identify key information and draw inferences from single texts, and a scenario-based assessment (SBA) requiring them to integrate, evaluate, and apply information across multiple sources. Both assessments focused on a non-academic topic…
Descriptors: Comparative Analysis, High School Students, Inferences, Reading Tests
Schillinger, Dean; Balyan, Renu; Crossley, Scott A.; McNamara, Danielle S.; Liu, Jennifer Y.; Karter, Andrew J. – Grantee Submission, 2020
Objective: To develop novel, scalable, and valid literacy profiles for identifying limited health literacy patients by harnessing natural language processing. Data Source: With respect to the linguistic content, we analyzed 283,216 secure messages sent by 6,941 diabetes patients to physicians within an integrated system's electronic portal.…
Descriptors: Literacy, Profiles, Computational Linguistics, Syntax
McNamara, Danielle S. – Grantee Submission, 2017
This study demonstrates the generalization of previous laboratory results showing the benefits of self-explanation reading training (SERT) to college students' course exam performance. The participants were 265 students enrolled in an Introductory Biology course, 59 of whom were provided with SERT training. The results showed that SERT benefited…
Descriptors: Biology, Correlation, Introductory Courses, Knowledge Level
Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S. – Grantee Submission, 2016
A commonly held belief among educators, researchers, and students is that high-quality texts are easier to read than low-quality texts, as they contain more engaging narrative and story-like elements. Interestingly, these assumptions have typically failed to be supported by the literature on writing. Previous research suggests that higher quality…
Descriptors: Role, Writing (Composition), Natural Language Processing, Hypothesis Testing