Showing 1 to 15 of 70 results
Kristen Panzarella; Angela Walmsley – Phi Delta Kappan, 2025
Computer-based testing is becoming the dominant mode of assessment in education. In New York, students take state assessments that are now administered digitally. While this technological transition offers advantages, it also presents challenges, including insufficient digital literacy for students to adequately meet the technological demands of the…
Descriptors: Computer Assisted Testing, Standardized Tests, Barriers, Tests
Peer reviewed
Direct link
Mustafa Yildiz; Hasan Kagan Keskin; Saadin Oyucu; Douglas K. Hartman; Murat Temur; Mücahit Aydogmus – Reading & Writing Quarterly, 2025
This study examined whether an artificial intelligence-based automatic speech recognition system can accurately assess students' reading fluency and reading level. Participants were 120 fourth-grade students attending public schools in Türkiye. Students read a grade-level text out loud while their voice was recorded. Two experts and the artificial…
Descriptors: Artificial Intelligence, Reading Fluency, Human Factors Engineering, Grade 4
Peer reviewed
Direct link
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers immediately review the answers and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
Peer reviewed
Download full text (PDF on ERIC)
Araneda, Sergio; Lee, Dukjae; Lewis, Jennifer; Sireci, Stephen G.; Moon, Jung Aa; Lehman, Blair; Arslan, Burcu; Keehner, Madeleine – Education Sciences, 2022
Students exhibit many behaviors when responding to items on a computer-based test, but only some of these behaviors are relevant to estimating their proficiencies. In this study, we analyzed data from computer-based math achievement tests administered to elementary school students in grades 3 (ages 8-9) and 4 (ages 9-10). We investigated students'…
Descriptors: Student Behavior, Academic Achievement, Computer Assisted Testing, Mathematics Achievement
Peer reviewed
Direct link
Turner, Megan I.; Van Norman, Ethan R.; Hojnoski, Robin L. – Journal of Psychoeducational Assessment, 2022
Star Math (SM) is a popular computer adaptive test (CAT) that schools use to screen students for academic risk. Despite its popularity, few independent investigations of its diagnostic accuracy have been conducted. We evaluated the diagnostic accuracy of SM based upon vendor-provided cut scores (25th and 40th percentiles nationally) in predicting…
Descriptors: Accuracy, Adaptive Testing, Computer Assisted Testing, High Stakes Tests
Peer reviewed
Direct link
Hess, Stefan; Mousikou, Petroula; Schroeder, Sascha – Reading and Writing: An Interdisciplinary Journal, 2022
In this study, we investigated effects of morphological processing on handwriting production in beginning writers of German. Children from Grades 3 and 4 were asked to copy words from a computer screen onto a pen tablet, while we recorded their handwriting with high spatiotemporal resolution. Words involved a syllable-congruent visual disruption…
Descriptors: Morphology (Languages), Language Processing, Handwriting, Morphemes
Peer reviewed
Direct link
Van Norman, Ethan R.; Forcht, Emily R. – Assessment for Effective Intervention, 2023
This study explored the validity of growth on two computer adaptive tests, Star Reading and Star Math, in explaining performance on an end-of-year achievement test for a sample of students in Grades 3 through 6. Results from quantile regression analyses indicate that growth on Star Reading explained a statistically significant amount of variance…
Descriptors: Test Validity, Computer Assisted Testing, Adaptive Testing, Grade Prediction
Peer reviewed
Direct link
Kayla V. Campaña; Benjamin G. Solomon – Assessment for Effective Intervention, 2025
The purpose of this study was to compare the classification accuracy of data produced by the previous year's end-of-year New York state assessment, a computer-adaptive diagnostic assessment ("i-Ready"), and the gating combination of both assessments to predict the rate of students passing the following year's end-of-year state assessment…
Descriptors: Accuracy, Classification, Diagnostic Tests, Adaptive Testing
Peer reviewed
Download full text (PDF on ERIC)
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The research method followed the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
Peer reviewed
Direct link
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
Peer reviewed
Download full text (PDF on ERIC)
Márió Tibor Nagy; Erzsébet Korom – Journal of Baltic Science Education, 2023
The assessment of student performance has become increasingly technology-based, a trend that can also be observed in the evaluation of scientific reasoning, as more and more formerly paper-based assessment tools move into the digital space. The study aimed to examine the reliability and validity of the paper-based and…
Descriptors: Science Process Skills, Elementary School Students, Grade 4, Science Tests
Peer reviewed
Direct link
Ethan R. Van Norman; Emily R. Forcht – Journal of Education for Students Placed at Risk, 2024
This study evaluated the forecasting accuracy of trend estimation methods applied to time-series data from computer adaptive tests (CATs). Data were collected roughly once a month over the course of a school year. We evaluated the forecasting accuracy of two regression-based growth estimation methods (ordinary least squares and Theil-Sen). The…
Descriptors: Data Collection, Predictive Measurement, Predictive Validity, Predictor Variables
Peer reviewed
Direct link
Jiang, Yang; Gong, Tao; Saldivia, Luis E.; Cayton-Hodges, Gabrielle; Agard, Christopher – Large-scale Assessments in Education, 2021
In 2017, the mathematics assessments that are part of the National Assessment of Educational Progress (NAEP) program underwent a transformation shifting the administration from paper-and-pencil formats to digitally-based assessments (DBA). This shift introduced new interactive item types that bring rich process data and tremendous opportunities to…
Descriptors: Data Use, Learning Analytics, Test Items, Measurement
Peer reviewed
Direct link
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Peer reviewed
Direct link
Yishen Song; Liming Guo; Qinhua Zheng – Education and Information Technologies, 2025
Scientific inquiry ability is closely related to the process of hands-on inquiry practice. However, its assessment is often separated from this practice because of technical limitations and labor costs. The development of multimodal data analysis provides a new opportunity for automated assessment based on hands-on practice.…
Descriptors: Elementary School Students, Grade 4, Hands on Science, Experiential Learning