Publication Date
  In 2025: 1
  Since 2024: 2
  Since 2021 (last 5 years): 6
  Since 2016 (last 10 years): 8
  Since 2006 (last 20 years): 9
Descriptor
  Automation: 9
  Computer Assisted Testing: 9
  Elementary School Students: 7
  Grade 4: 5
  Scoring: 5
  Grade 5: 4
  Writing Evaluation: 4
  Test Items: 3
  Artificial Intelligence: 2
  Evaluation Methods: 2
  Foreign Countries: 2
Source
  Grantee Submission: 3
  American Educational Research…: 1
  International Journal of…: 1
  Journal of Educational…: 1
  ProQuest LLC: 1
  Reading & Writing Quarterly: 1
  Reading Research Quarterly: 1
Author
  Aaron McVay: 1
  Araya, Roberto: 1
  Ayfer Sayin: 1
  Burstein, Jill: 1
  Carla Wood: 1
  Chen, Dandan: 1
  Christopher Schatschneider: 1
  Correnti, Richard: 1
  Douglas K. Hartman: 1
  Hasan Kagan Keskin: 1
  Hebert, Michael: 1
Publication Type
  Reports - Research: 8
  Journal Articles: 5
  Speeches/Meeting Papers: 2
  Dissertations/Theses -…: 1
Education Level
  Elementary Education: 9
  Intermediate Grades: 9
  Grade 4: 5
  Middle Schools: 5
  Grade 5: 4
  Early Childhood Education: 2
  Grade 3: 2
  Grade 6: 2
  Primary Education: 2
  Grade 2: 1
  Grade 7: 1
Assessments and Surveys
  Flesch Kincaid Grade Level…: 1
  easyCBM: 1
Mustafa Yildiz; Hasan Kagan Keskin; Saadin Oyucu; Douglas K. Hartman; Murat Temur; Mücahit Aydogmus – Reading & Writing Quarterly, 2025
This study examined whether an artificial intelligence-based automatic speech recognition system can accurately assess students' reading fluency and reading level. Participants were 120 fourth-grade students attending public schools in Türkiye. Students read a grade-level text out loud while their voice was recorded. Two experts and the artificial…
Descriptors: Artificial Intelligence, Reading Fluency, Human Factors Engineering, Grade 4
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a greater long-term effect on learning than multiple-choice questions. However, it is critical that teachers immediately review the answers and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
Aaron McVay – ProQuest LLC, 2021
As assessments move toward computerized testing and continuous testing becomes available, the need for rapid assembly of forms is increasing. The objective of this study was to investigate variability in assembled forms through the lens of first- and second-order equity properties of equating, by examining three factors and their interactions. Two…
Descriptors: Automation, Computer Assisted Testing, Test Items, Reaction Time
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure using an automated open-access tool for calculating morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of students in fifth grade was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight into best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Nese, Joseph F. T.; Kahn, Josh; Kamata, Akihito – Grantee Submission, 2017
Despite prevalent use and practical application, the current and standard assessment of oral reading fluency (ORF) presents considerable limitations, which reduce its validity in estimating growth and monitoring student progress, including: (a) high cost of implementation; (b) tenuous passage equivalence; and (c) bias, large standard error, and…
Descriptors: Automation, Speech, Recognition (Psychology), Scores
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension