Showing 1 to 15 of 24 results
Peer reviewed
Direct link
Niimi, Noriyasu; Matsuura, Nobukazu – Language Testing in Asia, 2022
Introduction: This paper describes an exploratory case study and initial evaluation of a computer-based testing (CBT) prototype. The advantage of CBT over paper-based testing (PBT) is that it allows control over the order of questions and presents test takers with continuous tasks that capture their thought processes. Additionally, their response…
Descriptors: Foreign Countries, Junior High School Students, Computer Assisted Testing, Reaction Time
Peer reviewed
Direct link
Han, Areum; Krieger, Florian; Borgonovi, Francesca; Greiff, Samuel – Large-scale Assessments in Education, 2023
Process data are becoming increasingly popular in education research. In the field of computer-based assessments of collaborative problem solving (ColPS), process data have been used to identify students' test-taking strategies while working on the assessment, and such data can complement data collected on accuracy and overall…
Descriptors: Behavior Patterns, Cooperative Learning, Problem Solving, Reaction Time
Peer reviewed
PDF on ERIC (full text)
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data track students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
Peer reviewed
Direct link
Wolf, Mikyung Kim; Yoo, Hanwook; Guzman-Orth, Danielle; Abedi, Jamal – Educational Assessment, 2022
Implementing a randomized controlled trial design, the present study investigated the effects of two types of accommodations, linguistic modification and a glossary, for English learners (ELs) taking a computer-based mathematics assessment. Process data including response time and clicks on glossary words were also examined to better interpret…
Descriptors: Testing Accommodations, English Language Learners, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Direct link
Song, Yi; Zhu, Mengxiao; Sparks, Jesse R. – Journal of Educational Computing Research, 2023
In this research, we use a process data analysis approach to gather additional evidence about students' argumentation skills beyond their performance scores in a computer-based assessment. This game-enhanced scenario-based assessment (named Seaball) included five activities that require students to demonstrate their argumentation skills within a…
Descriptors: Data Analysis, Academic Achievement, Interaction, Performance
Peer reviewed
Direct link
Takahiro Terao – Applied Measurement in Education, 2024
This study aimed to compare item characteristics and response time across stimulus conditions in computer-delivered listening tests. Listening materials had three variants: regular videos, frame-by-frame videos, and audio only without visuals. Participants were 228 Japanese high school students who were asked to complete one of nine…
Descriptors: Computer Assisted Testing, Audiovisual Aids, Reaction Time, High School Students
Peer reviewed
Direct link
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
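The threshold-setting idea summarized in the Soland, Kuhfeld, and Rios entry can be illustrated with a common normative rule from this literature: flag a response as noneffortful when its response time falls below 10% of the item's median response time (often called NT10). This is a minimal sketch of that rule, not the authors' exact method; the function names and timing data are illustrative assumptions.

```python
# Minimal sketch of a normative response-time threshold (NT10 rule):
# a response is flagged as noneffortful if it is faster than 10% of
# the median response time for that item. Data are hypothetical.
from statistics import median

def nt10_threshold(item_times):
    """Return the NT10 threshold for one item: 10% of the median RT."""
    return 0.10 * median(item_times)

def flag_noneffortful(item_times):
    """Flag responses faster than the item's NT10 threshold."""
    t = nt10_threshold(item_times)
    return [rt < t for rt in item_times]

times = [42.0, 3.1, 55.8, 61.2, 2.4, 48.9]  # seconds per response
print(flag_noneffortful(times))  # → [False, True, False, False, True, False]
```

Normative thresholds like this are item-specific, which matters because a 5-second response may be plausible on a short item but clearly noneffortful on a long reading passage.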
Peer reviewed
Direct link
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
In statistical inference, changepoints are abrupt variations in a sequence of data. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
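The Lu et al. entry concerns detecting the point where an examinee switches from solution behavior to aberrant behavior. Their method is a sequential Bayesian algorithm; as a simpler stand-in for the same idea, the sketch below uses a CUSUM-style detector on log response times to locate an abrupt drop, as when an examinee begins rapid guessing. All parameter values and timing data are illustrative assumptions.

```python
# CUSUM-style sketch (not the authors' Bayesian algorithm): detect an
# abrupt downward shift in log response times, suggesting a switch
# from solution behavior to rapid guessing.
import math

def cusum_changepoint(times, k=0.5, h=3.0):
    """Return the index where accumulated downward drift exceeds h,
    or None. k is the allowance and h the decision threshold, both in
    log-seconds; the values here are illustrative."""
    logs = [math.log(t) for t in times]
    mu = sum(logs[:5]) / 5            # baseline from the first 5 items
    s = 0.0
    for i, x in enumerate(logs):
        s = max(0.0, s + (mu - x) - k)  # accumulate downward shifts only
        if s > h:
            return i
    return None

rts = [40, 55, 38, 60, 45, 41, 2, 3, 2, 2, 3]  # seconds, hypothetical
print(cusum_changepoint(rts))  # → 7 (shortly after the drop at index 6)
```

A sequential detector like this processes responses one at a time, so in principle the flag can be raised during the test session rather than only in post hoc analysis.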
Peer reviewed
Direct link
Sahin, Füsun; Colvin, Kimberly F. – Large-scale Assessments in Education, 2020
The item responses of examinees who rapid-guess, that is, who do not spend enough time reading and engaging with an item, will not reflect their true ability on that item. Rapid disengagement refers to rapidly selecting a response to multiple-choice items (i.e., rapid-guessing), omitting items, or providing short, unrelated answers to open-ended items in an…
Descriptors: Guessing (Tests), Item Response Theory, Reaction Time, Learner Engagement
Peer reviewed
Direct link
Ponce, Héctor R.; Mayer, Richard E.; Loyola, María Soledad – Journal of Educational Computing Research, 2021
One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test…
Descriptors: Computer Assisted Testing, Test Construction, Standardized Tests, Elementary School Students
Peer reviewed
Direct link
Lundgren, Erik; Eklöf, Hanna – Educational Research and Evaluation, 2020
The present study used process data from a computer-based problem-solving task as indications of behavioural level of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and…
Descriptors: Computer Assisted Testing, Problem Solving, Response Style (Tests), Test Items
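The Lundgren and Eklöf entry describes extracting variables from raw process data and clustering them into behavioural effort profiles. The sketch below illustrates that pipeline in miniature: derive simple features from a log of events, then group examinees with a deterministic one-dimensional 2-means. The event names, feature choices, and clustering method are illustrative assumptions, not the authors' procedure.

```python
# Sketch of a process-data pipeline: extract behavioural features from
# raw log events, then cluster examinees. Event names are hypothetical.
def features(log):
    """Total time on task and number of click interactions."""
    total_time = log[-1]["t"] - log[0]["t"]
    n_actions = sum(1 for e in log if e["event"] == "click")
    return total_time, n_actions

def two_means_1d(xs, iters=20):
    """Deterministic 2-means on one feature: centroids start at the
    min and max, then alternate assignment and centroid updates."""
    c = [min(xs), max(xs)]
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            groups[abs(x - c[0]) > abs(x - c[1])].append(x)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return [int(abs(x - c[0]) > abs(x - c[1])) for x in xs]

log = [{"t": 0, "event": "start"}, {"t": 5, "event": "click"},
       {"t": 30, "event": "click"}, {"t": 90, "event": "submit"}]
print(features(log))  # → (90, 2)

times = [120, 115, 8, 130, 10, 12, 140]  # seconds on task per examinee
print(two_means_1d(times))  # → [1, 1, 0, 1, 0, 0, 1]
```

In practice studies of this kind cluster on several features at once (time, interactions, answer changes), but even this one-feature version separates a plausibly engaged group from a plausibly disengaged one.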
Peer reviewed
Direct link
Kuhfeld, Megan; Soland, James – Journal of Research on Educational Effectiveness, 2020
Educational stakeholders have long known that students might not be fully engaged when taking an achievement test and that such disengagement could undermine the inferences drawn from observed scores. Thanks to the growing prevalence of computer-based tests and the new forms of metadata they produce, researchers have developed and validated…
Descriptors: Metadata, Computer Assisted Testing, Achievement Tests, Reaction Time
Peer reviewed
Direct link
Costa, Denise Reis; Chen, Chia-Wen – Large-scale Assessments in Education, 2023
Given the ongoing development of computer-based tasks, there has been increasing interest in modelling students' behaviour indicators from log file data with contextual variables collected via questionnaires. In this work, we apply a latent regression model to analyse the relationship between latent constructs (i.e., performance, speed, and…
Descriptors: Achievement Tests, Secondary School Students, International Assessment, Foreign Countries
Peer reviewed
PDF on ERIC (full text)
Auphan, Pauline; Ecalle, Jean; Magnan, Annie – Canadian Journal of Learning and Technology, 2020
The aim of this study is to demonstrate the advantages of computerized tools for assessing reading ability. A new computer-based reading assessment evaluating both word reading and reading comprehension processes was administered to 687 children in primary (N=400) and secondary (N=287) schools. Accuracy (weighted scores) and speed of access…
Descriptors: Computer Assisted Testing, Reading Tests, Reading Achievement, Reading Comprehension