Publication Date
In 2025: 1
Since 2024: 4
Since 2021 (last 5 years): 11
Since 2016 (last 10 years): 14
Since 2006 (last 20 years): 19
Descriptor
Automation: 19
Computer Assisted Testing: 19
Elementary School Students: 11
Scoring: 11
Writing Evaluation: 10
Grade 4: 6
Scores: 6
Writing Tests: 6
Foreign Countries: 5
Artificial Intelligence: 4
Evaluation Methods: 4
Author
Wilson, Joshua: 3
Chen, Dandan: 2
Hebert, Michael: 2
Araya, Roberto: 1
Barnes, Tiffany, Ed.: 1
Ben-Simon, Anat: 1
Bennett, Randy Elliott: 1
Dasgupta, Abhishek: 1
Humphrey, Bryn: 1
McVay, Aaron: 1
Sayin, Ayfer: 1
Publication Type
Reports - Research: 16
Journal Articles: 13
Dissertations/Theses -…: 2
Speeches/Meeting Papers: 2
Collected Works - Proceedings: 1
Education Level
Elementary Education: 19
Intermediate Grades: 9
Grade 4: 7
Middle Schools: 7
Secondary Education: 6
Grade 5: 4
Early Childhood Education: 3
Grade 6: 3
Grade 7: 3
Grade 8: 3
Junior High Schools: 3
Location
Turkey: 2
Utah: 2
Australia: 1
Czech Republic: 1
Florida: 1
Israel: 1
Massachusetts: 1
Netherlands: 1
North Carolina: 1
Oregon: 1
Pennsylvania: 1
Assessments and Surveys
Flesch Kincaid Grade Level…: 1
Massachusetts Comprehensive…: 1
easyCBM: 1
Cathy Cavanaugh; Bryn Humphrey; Paige Pullen – International Journal on E-Learning, 2024
To address one US state's need to provide a professional development micro-credential for tens of thousands of educators, we automated an assignment-scoring workflow in an online course by developing and refining an AI model that scans submitted assignments and scores them against a rubric. This article outlines the AI model development process and…
Descriptors: Artificial Intelligence, Automation, Scoring, Microcredentials
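For readers curious about the scan-and-score shape of such a workflow, the sketch below is a deliberately toy version: the rubric, criterion names, and evidence phrases are all invented, and the study's workflow used a refined AI model rather than keyword matching.

```python
# Hypothetical rubric: criterion -> evidence phrases a scorer might look for.
RUBRIC = {
    "cites_student_data": ["student data", "assessment results"],
    "describes_strategy": ["instructional strategy", "intervention"],
    "reflects_on_practice": ["next time", "i learned", "as a result"],
}

def score_submission(text: str) -> dict:
    """Toy rubric scorer: 1 point per criterion whose evidence phrases
    appear in the submission. Illustrates only the scan-against-a-rubric
    step; the study trained and refined an AI model instead."""
    lower = text.lower()
    return {criterion: int(any(phrase in lower for phrase in phrases))
            for criterion, phrases in RUBRIC.items()}

print(score_submission("I used an intervention and reviewed student data."))
# {'cites_student_data': 1, 'describes_strategy': 1, 'reflects_on_practice': 0}
```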
Mustafa Yildiz; Hasan Kagan Keskin; Saadin Oyucu; Douglas K. Hartman; Murat Temur; Mücahit Aydogmus – Reading & Writing Quarterly, 2025
This study examined whether an artificial intelligence-based automatic speech recognition system can accurately assess students' reading fluency and reading level. Participants were 120 fourth-grade students attending public schools in Türkiye. Students read a grade-level text aloud while their voices were recorded. Two experts and the artificial…
Descriptors: Artificial Intelligence, Reading Fluency, Human Factors Engineering, Grade 4
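The excerpt does not say how fluency was quantified; a common metric in oral reading fluency research is words correct per minute (WCPM). Below is a minimal sketch of computing WCPM from an ASR transcript, assuming a generic difflib alignment; a production system would use forced alignment and explicit miscue rules.

```python
from difflib import SequenceMatcher

def words_correct_per_minute(passage: str, transcript: str,
                             seconds: float) -> float:
    """Estimate WCPM by aligning the ASR transcript to the target
    passage with a generic sequence alignment and counting the words
    the reader produced correctly."""
    target = passage.lower().split()
    spoken = transcript.lower().split()
    matcher = SequenceMatcher(None, target, spoken)
    correct = sum(block.size for block in matcher.get_matching_blocks())
    return correct / (seconds / 60.0)

passage = "the cat sat on the mat and looked at the dog"
transcript = "the cat sat on a mat and looked at the dog"  # one miscue
print(words_correct_per_minute(passage, transcript, 6.0))  # 100.0
```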
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a greater long-term effect on learning than multiple-choice questions. However, it is critical that teachers review the answers immediately and ask students to redo those that are incoherent, which can be a difficult and time-consuming task. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
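As a rough illustration of automating such detection: the heuristic and thresholds below are invented for the example; the study's detector is a trained model, not a vocabulary-overlap rule.

```python
def looks_incoherent(question: str, answer: str,
                     min_words: int = 3, min_overlap: float = 0.1) -> bool:
    """Toy screen for incoherent answers: flag responses that are very
    short or share almost no vocabulary with the question."""
    answer_tokens = [t.strip(".,;!?").lower() for t in answer.split()]
    if len(answer_tokens) < min_words:
        return True
    question_tokens = {t.strip(".,;!?").lower() for t in question.split()}
    overlap = len(question_tokens & set(answer_tokens)) / len(set(answer_tokens))
    return overlap < min_overlap

print(looks_incoherent("Why is 3/4 greater than 2/4?",
                       "because three parts of four is more than two parts"))
# False
```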
Aaron McVay – ProQuest LLC, 2021
As assessments move toward computerized testing and continuously available testing, the need for rapid assembly of forms increases. The objective of this study was to investigate variability in assembled forms through the lens of the first- and second-order equity properties of equating, by examining three factors and their interactions. Two…
Descriptors: Automation, Computer Assisted Testing, Test Items, Reaction Time
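For reference, the first- and second-order equity properties mentioned here require that equating scores from form X to form Y preserve the conditional mean and the conditional standard deviation at every ability level (standard equating notation, not the dissertation's own):

```latex
% First-order equity: equated scores have the same expected value as
% target-form scores at every ability theta.
E\left[ e_Y(X) \mid \theta \right] = E\left[ Y \mid \theta \right]
% Second-order equity: the conditional variability matches as well.
\mathrm{SD}\left[ e_Y(X) \mid \theta \right] = \mathrm{SD}\left[ Y \mid \theta \right]
```

Here e_Y(X) denotes the equating function that converts form-X scores to the form-Y scale.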
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure that uses an open-access tool to calculate morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of fifth-grade students was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
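The open-access tool and the MC measure validated in the study are not reproduced here; the toy counter below, with an invented suffix list, only illustrates what scoring morphological complexity from a transcript can look like.

```python
# Invented derivational-suffix list for the toy example.
DERIVATIONAL_SUFFIXES = ("ness", "ment", "tion", "able", "ful", "less", "er")

def morphological_complexity(transcript: str) -> float:
    """Toy proxy for MC: mean derivational-suffix hits per word.
    The study validated a purpose-built open-access tool; this only
    sketches the idea of automated MC scoring."""
    words = [w.strip(".,;!?").lower() for w in transcript.split()]
    hits = sum(1 for w in words
               for suffix in DERIVATIONAL_SUFFIXES if w.endswith(suffix))
    return hits / len(words) if words else 0.0

print(morphological_complexity(
    "The teacher showed endless kindness and encouragement."))  # ~0.57
```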
Charles Hulme; Joshua McGrane; Mihaela Duta; Gillian West; Denise Cripps; Abhishek Dasgupta; Sarah Hearne; Rachel Gardner; Margaret Snowling – Language, Speech, and Hearing Services in Schools, 2024
Purpose: Oral language skills provide a critical foundation for formal education and especially for the development of children's literacy (reading and spelling) skills. It is therefore important for teachers to be able to assess children's language skills, especially if they are concerned about their learning. We report the development and…
Descriptors: Automation, Language Tests, Standardized Tests, Test Construction
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The research method followed the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
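In template-based AIG, a parent item is abstracted into an item model whose variable elements are crossed systematically to generate new items. The template and element values below are invented, and the study generated non-verbal figure items rather than text, but the generation stage has the same shape.

```python
from itertools import product

# Hypothetical item model; the study's items were non-verbal figures.
TEMPLATE = ("A {shape} is rotated {angle} degrees {n} times. "
            "Which figure shows the result?")
ELEMENTS = {
    "shape": ["triangle", "square", "pentagon"],
    "angle": [45, 90, 135],
    "n": [2, 3],
}

# Generation stage of template-based AIG: cross all element values to
# produce every variant the item model defines (3 x 3 x 2 = 18 here).
items = [TEMPLATE.format(shape=shape, angle=angle, n=n)
         for shape, angle, n in product(ELEMENTS["shape"],
                                        ELEMENTS["angle"], ELEMENTS["n"])]
print(len(items))  # 18
```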
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
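For orientation: in the simplest persons (p) x raters (r) design, the generalizability coefficient weighs universe-score variance against rater-linked error; the article's multivariate design extends this with covariance components across scoring methods. The univariate form, in standard G-theory notation (not the article's exact model):

```latex
% Generalizability coefficient for a p x r design with n'_r raters
% averaged into each person's score:
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pr,e} / n'_r}
```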
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
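The abstract names the technique directly: sentence-level randomization via Python's NLTK. A minimal sketch of that step follows (not the authors' exact script; the sample essay is invented).

```python
import random
import nltk

nltk.download("punkt", quiet=True)  # sentence-tokenizer model

def randomize_sentences(essay, seed=None):
    """Shuffle an essay's sentence order. In the study's construct-
    validity logic, organization-sensitive trait scores should drop for
    shuffled versions while surface-level traits stay stable."""
    sentences = nltk.sent_tokenize(essay)
    random.Random(seed).shuffle(sentences)
    return " ".join(sentences)

essay = ("Dogs make the best pets. They are loyal and friendly. "
         "For these reasons, everyone should adopt a dog.")
# The study generated 30 randomizations per essay:
variants = [randomize_sentences(essay, seed=i) for i in range(30)]
```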
Sterett H. Mercer; Joanna E. Cannon – Grantee Submission, 2022
We evaluated the validity of an automated approach to learning progress assessment (aLPA) for English written expression. Participants (n = 105) were students in Grades 2-12 who had parent-identified learning difficulties and received academic tutoring through a community-based organization. Participants completed narrative writing samples in the…
Descriptors: Elementary School Students, Secondary School Students, Learning Problems, Learning Disabilities
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to research that would provide insight into best practices for teaching this form of writing is the lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, both overall and among samples of struggling and nonstruggling writers. The present study also extended nascent research on the reliability and practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Yue Huang – ProQuest LLC, 2023
Automated writing evaluation (AWE) is a cutting-edge, technology-based intervention designed to help teachers meet the challenges of writing classrooms and improve students' writing proficiency. The rapid development of AWE systems, along with the encouragement of technology use in the U.S. K-12 education system by the Common Core State Standards…
Descriptors: Computer Assisted Testing, Writing Tests, Automation, Writing Evaluation
Nese, Joseph F. T.; Kahn, Josh; Kamata, Akihito – Grantee Submission, 2017
Despite prevalent use and practical application, the current standard assessment of oral reading fluency (ORF) presents considerable limitations that reduce its validity for estimating growth and monitoring student progress, including: (a) the high cost of implementation; (b) tenuous passage equivalence; and (c) bias, large standard error, and…
Descriptors: Automation, Speech, Recognition (Psychology), Scores
Yarnell, Jordy B.; Pfeiffer, Steven I. – Journal of Psychoeducational Assessment, 2015
The present study examined the psychometric equivalence of administering a computer-based version of the Gifted Rating Scale (GRS) compared with the traditional paper-and-pencil GRS-School Form (GRS-S). The GRS-S is a teacher-completed rating scale used in gifted assessment. The GRS-Electronic Form provides an alternative method of administering…
Descriptors: Gifted, Psychometrics, Rating Scales, Computer Assisted Testing