Showing 1 to 15 of 18 results
Peer reviewed | PDF full text on ERIC
Burhan Ogut; Ruhan Circi; Huade Huo; Juanita Hicks; Michelle Yin – International Electronic Journal of Elementary Education, 2025
This study explored the effectiveness of extended time (ET) accommodations in the 2017 NAEP Grade 8 Mathematics assessment in promoting educational equity. Analyzing NAEP process data with an XGBoost model, we examined whether early interactions with assessment items could predict students' likelihood of requiring ET by identifying those who received…
Descriptors: Identification, Testing Accommodations, National Competency Tests, Equal Education
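The abstract names an XGBoost model trained on early item interactions to predict which students would need extended time. A minimal Python sketch of that kind of setup follows; the feature names, input file, and label column are hypothetical placeholders, not details taken from the study.

```python
# Sketch only: a hedged illustration of the abstract's XGBoost setup.
# Feature names, file, and label are assumptions, not the study's data.
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("naep_process_features.csv")  # hypothetical extract
X = df[["first_item_time", "early_revisit_count", "early_answer_changes"]]
y = df["received_extended_time"]  # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```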
Peer reviewed | Direct link
Petrilli, Michael J. – Education Next, 2022
In the late 1960s, when federal officials and eminent psychologists were first designing the National Assessment of Educational Progress (NAEP), they probably never contemplated testing students younger than nine. The technology for mass testing at the time--bubble sheets and No. 2 pencils--only worked if students could read the instructions and…
Descriptors: Kindergarten, Student Evaluation, National Competency Tests, Computer Assisted Testing
Peer reviewed | PDF full text on ERIC
Blair Lehman; Jesse R. Sparks; Jonathan Steinberg – ETS Research Report Series, 2024
Over the last 20 years, many methods have been proposed to use process data (e.g., response time) to detect changes in engagement during the test-taking process. However, many of these methods were developed and evaluated in highly similar testing contexts: 30 or more single-select multiple-choice items presented in a linear, fixed sequence in…
Descriptors: National Competency Tests, Secondary School Mathematics, Secondary School Students, Mathematics Tests
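A common family of response-time methods in this literature flags responses faster than a fixed fraction of each item's typical time as probable rapid guessing. The sketch below shows a normative-threshold variant for illustration only; whether it resembles the specific methods the authors evaluate is an assumption, and the 10% fraction is a conventional placeholder.

```python
import numpy as np

def flag_rapid_guesses(response_times: np.ndarray, frac: float = 0.10) -> np.ndarray:
    """Flag responses faster than `frac` of each item's mean time.

    Rows are examinees, columns are items; returns a boolean mask of
    probable rapid guesses (a simple normative-threshold rule).
    """
    thresholds = frac * response_times.mean(axis=0)  # per-item threshold
    return response_times < thresholds

rt = np.array([[42.0, 55.0, 30.0],
               [ 3.0, 60.0,  2.5],   # very fast responses on items 1 and 3
               [38.0, 47.0, 29.0]])
print(flag_rapid_guesses(rt))
```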
Peer reviewed | PDF full text on ERIC
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening research opportunities that did not previously exist. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
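Answer changes on multiple-choice items are one of the process-data signals this line of work examines. Below is a small sketch of counting answer changes from a clickstream log; the event schema is a made-up stand-in, not NAEP's actual log format.

```python
from collections import defaultdict

# Hypothetical event log rows: (student, item, event_type, selected_option)
events = [
    ("s1", "q1", "select", "A"),
    ("s1", "q1", "select", "C"),   # an answer change on q1
    ("s1", "q2", "select", "B"),
    ("s2", "q1", "select", "D"),
]

changes = defaultdict(int)   # (student, item) -> number of answer changes
last_choice = {}
for student, item, event_type, option in events:
    if event_type != "select":
        continue
    key = (student, item)
    if key in last_choice and last_choice[key] != option:
        changes[key] += 1
    last_choice[key] = option

print(dict(changes))  # {('s1', 'q1'): 1}
```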
Peer reviewed | Direct link
Jiang, Yang; Gong, Tao; Saldivia, Luis E.; Cayton-Hodges, Gabrielle; Agard, Christopher – Large-scale Assessments in Education, 2021
In 2017, the mathematics assessments that are part of the National Assessment of Educational Progress (NAEP) program underwent a transformation shifting the administration from paper-and-pencil formats to digitally-based assessments (DBA). This shift introduced new interactive item types that bring rich process data and tremendous opportunities to…
Descriptors: Data Use, Learning Analytics, Test Items, Measurement
Peer reviewed | Direct link
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
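For orientation, "simple structure" means each item loads on exactly one latent dimension. A standard two-parameter logistic form of that constraint is sketched below; the article's exact parameterization may differ. Separate UIRT calibrations then fit each dimension's items in isolation, ignoring the correlations among the latent variables.

```latex
% Simple-structure 2PL MIRT: item j measures only dimension d(j).
% A generic formulation for orientation, not necessarily the paper's.
P\bigl(X_{ij} = 1 \mid \boldsymbol{\theta}_i\bigr)
  = \frac{\exp\bigl(a_j\,\theta_{i,d(j)} + b_j\bigr)}
         {1 + \exp\bigl(a_j\,\theta_{i,d(j)} + b_j\bigr)}
```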
Peer reviewed | Direct link
Bergner, Yoav; von Davier, Alina A. – Journal of Educational and Behavioral Statistics, 2019
This article reviews how the National Assessment of Educational Progress (NAEP) has come to collect and analyze data about cognitive and behavioral processes (process data) in the transition to digital assessment technologies over the past two decades. An ordered five-level structure is proposed for describing the uses of process data. The levels in…
Descriptors: National Competency Tests, Data Collection, Data Analysis, Cognitive Processes
Peer reviewed | PDF full text on ERIC
Patel, Nirmal; Sharma, Aditya; Shah, Tirth; Lomas, Derek – Journal of Educational Data Mining, 2021
Process Analysis is an emerging approach for discovering meaningful knowledge in temporal educational data. The study presented in this paper shows how we used Process Analysis methods on National Assessment of Educational Progress (NAEP) test data for modeling and predicting student test-taking behavior. Our process-oriented data exploration…
Descriptors: Learning Analytics, National Competency Tests, Evaluation Methods, Prediction
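Process-analysis work of this kind typically recodes raw logs into per-student action sequences and summarizes them, for instance as a first-order transition matrix. The sketch below illustrates that step on invented sequences; the action labels are hypothetical, not the paper's taxonomy.

```python
import numpy as np
import pandas as pd

# Invented action sequences standing in for mined NAEP process logs
sequences = [
    ["open_item", "select", "review", "select", "submit"],
    ["open_item", "calculator", "select", "submit"],
]

states = sorted({a for seq in sequences for a in seq})
idx = {s: i for i, s in enumerate(states)}
counts = np.zeros((len(states), len(states)))

for seq in sequences:
    for a, b in zip(seq, seq[1:]):   # consecutive action pairs
        counts[idx[a], idx[b]] += 1

# Row-normalize into first-order transition probabilities
row_sums = counts.sum(axis=1, keepdims=True).clip(min=1)
print(pd.DataFrame(counts / row_sums, index=states, columns=states).round(2))
```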
Peer reviewed | Direct link
Xin Wei – Educational Researcher, 2024
This study investigates the relationship between text-to-speech (TTS) usage and item-by-item performance in the 2017 eighth-grade National Assessment of Educational Progress (NAEP) math assessment, focusing on students with disabilities (SWDs), English language learners (ELLs), and their general education (GE) peers. Results indicate that all…
Descriptors: Assistive Technology, Students with Disabilities, English Language Learners, Regular and Special Education Relationship
Peer reviewed | Direct link
Tate, Tamara P.; Warschauer, Mark – Technology, Knowledge and Learning, 2019
The quality of students' writing skills continues to concern educators. Because writing is essential to success in both college and career, poor writing can have lifelong consequences. Writing is now primarily done digitally, but students receive limited explicit instruction in digital writing. This lack of instruction means that students fail to…
Descriptors: Writing Tests, Computer Assisted Testing, Writing Skills, Writing Processes
Peer reviewed | Direct link
Ling, Guangming – International Journal of Testing, 2016
To investigate a possible iPad-related mode effect, we tested 403 8th graders in Indiana, Maryland, and New Jersey under three mode conditions through random assignment: a desktop computer, an iPad alone, and an iPad with an external keyboard. All students had used an iPad or computer for six months or longer. The 2-hour test included reading, math,…
Descriptors: Educational Testing, Computer Assisted Testing, Handheld Devices, Computers
Peer reviewed | Direct link
Thompson, Meredith Myra; Braude, Eric John – Journal of Educational Computing Research, 2016
The assessment of learning in large online courses requires tools that are valid, reliable, easy to administer, and can be automatically scored. We have evaluated an online assessment and learning tool called Knowledge Assembly, or Knowla. Knowla measures a student's knowledge in a particular subject by having the student assemble a set of…
Descriptors: Computer Assisted Testing, Teaching Methods, Online Courses, Critical Thinking
Peer reviewed | Direct link
Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T. – Literacy Research and Instruction, 2013
The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…
Descriptors: Test Validity, Writing Skills, Computer Assisted Testing, Prediction
Peer reviewed | Direct link
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
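Agreement claims like these are often quantified with quadratic-weighted kappa, comparing a human-human rater pair against a human-machine pair. A minimal sketch follows; the scores are toy values for illustration, and this is a generic statistic rather than the article's specific analysis.

```python
from sklearn.metrics import cohen_kappa_score

# Toy essay scores (0-4) from two human raters and a scoring engine
human_1 = [3, 2, 4, 1, 3, 2, 0, 4]
human_2 = [3, 3, 4, 1, 2, 2, 1, 4]
machine = [3, 2, 3, 2, 3, 2, 1, 4]

# Quadratic-weighted kappa is a common agreement statistic in essay scoring
hh = cohen_kappa_score(human_1, human_2, weights="quadratic")
hm = cohen_kappa_score(human_1, machine, weights="quadratic")
print(f"human-human: {hh:.2f}  human-machine: {hm:.2f}")
```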
Peer reviewed | PDF full text on ERIC
Bennett, Randy Elliot; Persky, Hilary; Weiss, Andy; Jenkins, Frank – Journal of Technology, Learning, and Assessment, 2010
This paper describes a study intended to demonstrate how an emerging skill, problem solving with technology, might be measured in the National Assessment of Educational Progress (NAEP). Two computer-delivered assessment scenarios were designed, one on solving science-related problems through electronic information search and the other on solving…
Descriptors: National Competency Tests, Problem Solving, Technology Uses in Education, Computer Assisted Testing