Publication Date
In 2025 (0)
Since 2024 (1)
Since 2021, last 5 years (6)
Since 2016, last 10 years (9)
Descriptor
Adaptive Testing (9)
Reaction Time (9)
Computer Assisted Testing (6)
Item Response Theory (5)
Test Items (5)
Accuracy (3)
Reading Tests (3)
Achievement Tests (2)
Middle School Students (2)
Ability Identification (1)
Algorithms (1)
Source
Journal of Educational Measurement (2)
Large-scale Assessments in Education (2)
Applied Measurement in Education (1)
Educational Measurement: Issues and Practice (1)
Educational Technology & Society (1)
Grantee Submission (1)
Journal of Educational and Behavioral Statistics (1)
Author
Ben Stenhaug (2)
Benjamin W. Domingue (2)
Chris Piech (2)
James Soland (2)
Klint Kanopka (2)
Megan Kuhfeld (2)
Steve Wise (2)
Bulut, Okan (1)
Chang, Hua-Hua (1)
Gorgun, Guher (1)
He, Yinhong (1)
Publication Type
Reports - Research (9)
Journal Articles (8)
Numerical/Quantitative Data (1)
Education Level
Elementary Education (2)
Junior High Schools (2)
Middle Schools (2)
Secondary Education (2)
Grade 8 (1)
Location
China (1)
Assessments and Surveys
Measures of Academic Progress (2)
He, Yinhong; Qi, Yuanyuan – Journal of Educational Measurement, 2023
In multidimensional computerized adaptive testing (MCAT), item selection strategies are generally constructed based on responses, and they do not consider the response times required by items. This study constructed two new criteria (referred to as DT-inc and DT) for MCAT item selection by utilizing information from response times. The new designs…
Descriptors: Reaction Time, Adaptive Testing, Computer Assisted Testing, Test Items
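The abstract does not spell out the DT-inc and DT criteria, so the sketch below only illustrates the general idea of time-aware item selection: rank candidate items by Fisher information per unit of expected response time, assuming a 2PL response model and a lognormal response time model. Every function, parameter name, and value here is an illustrative assumption, not the authors' method.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def expected_rt(tau, alpha, beta):
    """Expected response time under a lognormal RT model:
    log T ~ N(beta - tau, 1 / alpha**2)."""
    return np.exp(beta - tau + 0.5 / alpha**2)

def select_item(theta_hat, tau_hat, pool):
    """Pick the unadministered item with the largest information per
    unit of expected response time (illustrative criterion only)."""
    best, best_ratio = None, -np.inf
    for item in pool:
        ratio = (info_2pl(theta_hat, item["a"], item["b"])
                 / expected_rt(tau_hat, item["alpha"], item["beta"]))
        if ratio > best_ratio:
            best, best_ratio = item, ratio
    return best

pool = [
    {"id": 1, "a": 1.2, "b": 0.3, "alpha": 1.5, "beta": 4.0},
    {"id": 2, "a": 0.9, "b": -0.5, "alpha": 2.0, "beta": 3.2},
]
print(select_item(theta_hat=0.0, tau_hat=0.1, pool=pool)["id"])
```

A conventional CAT would pick the item maximizing information alone; dividing by expected time trades some information for a shorter expected testing time.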
Benjamin W. Domingue; Klint Kanopka; Ben Stenhaug; James Soland; Megan Kuhfeld; Steve Wise; Chris Piech – Grantee Submission, 2021
The more frequent collection of response time data is leading to an increased need for an understanding of how such data can be included in measurement models. Models for response time have been advanced, but relatively limited large-scale empirical investigations have been conducted. We take advantage of a large dataset from the adaptive NWEA MAP…
Descriptors: Achievement Tests, Reaction Time, Reading Tests, Accuracy
Benjamin W. Domingue; Klint Kanopka; Ben Stenhaug; James Soland; Megan Kuhfeld; Steve Wise; Chris Piech – Journal of Educational Measurement, 2021
The more frequent collection of response time data is leading to an increased need for an understanding of how such data can be included in measurement models. Models for response time have been advanced, but relatively limited large-scale empirical investigations have been conducted. We take advantage of a large data set from the adaptive NWEA…
Descriptors: Achievement Tests, Reaction Time, Reading Tests, Accuracy
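These two records describe the same study, once as a Grantee Submission preprint and once as the published Journal of Educational Measurement article. The abstracts do not name the response time model fit to the NWEA MAP data; a common choice in this literature is van der Linden's lognormal response time model, sketched below with made-up parameter values.

```python
import numpy as np

def lognormal_rt_loglik(log_t, tau, alpha, beta):
    """Log-likelihood of observed response times under the lognormal model
    log T_i ~ N(beta_i - tau, 1 / alpha_i**2).
    log_t, alpha, beta are arrays over items; tau is the person's speed."""
    resid = log_t - (beta - tau)
    return np.sum(
        np.log(alpha) - 0.5 * np.log(2 * np.pi)
        - log_t                          # Jacobian: density of T, not log T
        - 0.5 * (alpha * resid) ** 2
    )

# Toy example: three items answered by one examinee (illustrative values).
log_t = np.log(np.array([12.0, 45.0, 8.0]))   # seconds
alpha = np.array([1.8, 1.5, 2.0])             # time discrimination
beta  = np.array([2.6, 3.7, 2.2])             # time intensity
print(lognormal_rt_loglik(log_t, tau=0.2, alpha=alpha, beta=beta))
```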
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
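The abstract argues that rapid guesses can distort interim ability estimates and hence item selection in a CAT. As a minimal sketch of the remedial idea, the following drops responses faster than a per-item threshold before computing an EAP ability estimate; the thresholds, item parameters, and data are all invented for illustration and are not the article's procedure.

```python
import numpy as np

def eap_engaged_only(resp, rt, thresh, a, b):
    """EAP ability estimate that drops rapid-guess responses.
    resp: 0/1 responses; rt: response times in seconds;
    thresh: per-item rapid-guess thresholds; a, b: 2PL item parameters."""
    engaged = rt >= thresh                       # keep only engaged responses
    grid = np.linspace(-4, 4, 161)               # quadrature grid for theta
    prior = np.exp(-0.5 * grid**2)               # standard normal prior (unnormalized)
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))   # (grid points x items)
    like = np.prod(np.where(resp[engaged], p[:, engaged], 1 - p[:, engaged]), axis=1)
    post = prior * like
    return np.sum(grid * post) / np.sum(post)

resp   = np.array([1, 0, 1, 1])
rt     = np.array([14.0, 2.1, 33.0, 1.4])        # two suspiciously fast responses
thresh = np.array([5.0, 5.0, 5.0, 5.0])
a      = np.array([1.1, 0.8, 1.4, 1.0])
b      = np.array([-0.2, 0.5, 0.0, 0.9])
print(eap_engaged_only(resp, rt, thresh, a, b))
```

In an operational CAT this filtered interim estimate would then feed the next item-selection step.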
Yi-Hsuan Lee; Yue Jia – Applied Measurement in Education, 2024
Test-taking experience is a consequence of the interaction between students and assessment properties. We define a new notion, rapid-pacing behavior, to reflect two types of test-taking experience -- disengagement and speededness. To identify rapid-pacing behavior, we extend existing methods to develop response-time thresholds for individual items…
Descriptors: Adaptive Testing, Reaction Time, Item Response Theory, Test Format
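The specific threshold extensions developed in the article are not given in the abstract. A widely used baseline for item-level thresholds is a normative rule such as 10% of an item's median response time; the sketch below applies that assumed rule and tabulates flag rates by test position, since rapid pacing concentrated near the end of a test points to speededness while rapid pacing spread throughout points to disengagement.

```python
import numpy as np

def rapid_pacing_summary(rt, position, item_ids, frac=0.10):
    """Flag responses faster than a normative per-item threshold
    (here 10% of the item's median RT, an assumed baseline rule, not
    necessarily the article's thresholds) and summarize by position."""
    rt, position, item_ids = map(np.asarray, (rt, position, item_ids))
    thresholds = {i: frac * np.median(rt[item_ids == i]) for i in np.unique(item_ids)}
    flagged = np.array([rt[k] < thresholds[item_ids[k]] for k in range(len(rt))])
    for pos in np.unique(position):
        sel = position == pos
        print(f"position {pos}: {flagged[sel].mean():.2%} rapid-pacing responses")

# Tiny illustrative dataset: six responses to three items at three positions.
rapid_pacing_summary(
    rt=[22.0, 1.1, 30.0, 2.0, 18.0, 1.5],
    position=[1, 1, 2, 2, 3, 3],
    item_ids=["A", "B", "C", "A", "B", "C"],
)
```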
Kang, Hyeon-Ah; Zheng, Yi; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2020
With the widespread use of computers in modern assessment, online calibration has become increasingly popular as a way of replenishing an item pool. The present study discusses online calibration strategies for a joint model of responses and response times. The study proposes likelihood inference methods for item parameter estimation and evaluates…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Reaction Time
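As an illustration of what online calibration of a provisional item under a joint model can look like, the sketch below maximizes a joint 2PL-plus-lognormal likelihood for one new item, treating examinees' ability and speed as known from the operational items. The model, parameterization, and optimizer are assumptions for illustration; the article's likelihood inference methods may differ.

```python
import numpy as np
from scipy.optimize import minimize

def neg_joint_loglik(params, x, log_t, theta, tau):
    """Negative joint log-likelihood of one provisional item's responses (x)
    and log response times (log_t), given examinees' known ability (theta)
    and speed (tau): 2PL for responses, lognormal for times."""
    a, b, log_alpha, beta = params
    alpha = np.exp(log_alpha)                        # keep alpha positive
    p = np.clip(1.0 / (1.0 + np.exp(-a * (theta - b))), 1e-9, 1 - 1e-9)
    ll_resp = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    resid = log_t - (beta - tau)
    ll_time = np.sum(np.log(alpha) - 0.5 * np.log(2 * np.pi)
                     - log_t - 0.5 * (alpha * resid) ** 2)
    return -(ll_resp + ll_time)

# Simulated calibration sample for one new item (illustrative values).
rng = np.random.default_rng(0)
n = 500
theta, tau = rng.normal(size=n), rng.normal(scale=0.3, size=n)
x = rng.binomial(1, 1 / (1 + np.exp(-1.2 * (theta - 0.4))))
log_t = rng.normal(3.5 - tau, 1 / 1.8)
fit = minimize(neg_joint_loglik, x0=[1.0, 0.0, 0.0, 3.0],
               args=(x, log_t, theta, tau), method="Nelder-Mead")
print(fit.x)   # calibrated a, b, log(alpha), beta for the new item
```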
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
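Several threshold-setting approaches exist in this literature, including fixed cutoffs, normative percentages of typical item time, and mixture models on log response times. The sketch below shows one generic option, a two-component Gaussian mixture on an item's log response times with the threshold placed where the posterior probability of the fast component drops below 0.5; the data and the choice of method are illustrative, not necessarily what the authors evaluate.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated log response times for one item: a fast "noneffortful" cluster
# and a slower "effortful" cluster (purely illustrative data).
rng = np.random.default_rng(1)
log_rt = np.concatenate([rng.normal(0.5, 0.3, 60),     # rapid responses (~1.6 s)
                         rng.normal(3.2, 0.5, 540)])   # effortful responses (~25 s)

gm = GaussianMixture(n_components=2, random_state=0).fit(log_rt.reshape(-1, 1))
fast = np.argmin(gm.means_.ravel())                    # component with the smaller mean

# Threshold: smallest log RT at which the posterior probability of the
# fast component drops below 0.5, found by scanning a grid.
grid = np.linspace(log_rt.min(), log_rt.max(), 1000).reshape(-1, 1)
post_fast = gm.predict_proba(grid)[:, fast]
threshold = np.exp(grid[post_fast < 0.5].min())
print(f"noneffortful-response threshold is about {threshold:.1f} seconds")
```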
Wang, Chao; Lu, Hong – Educational Technology & Society, 2018
This study focused on the effect of examinees' ability levels on the relationship between Reflective-Impulsive (RI) cognitive style and item response time in computerized adaptive testing (CAT). A total of 56 students majoring in Educational Technology from Shandong Normal University participated in this study, and their RI cognitive styles were…
Descriptors: Item Response Theory, Computer Assisted Testing, Cognitive Style, Correlation
Qian, Hong; Staniewska, Dorota; Reckase, Mark; Woo, Ada – Educational Measurement: Issues and Practice, 2016
This article addresses the issue of how to detect item preknowledge using item response time data in two computer-based large-scale licensure examinations. Item preknowledge is indicated by an unexpected short response time and a correct response. Two samples were used for detecting item preknowledge for each examination. The first sample was from…
Descriptors: Reaction Time, Licensing Examinations (Professions), Computer Assisted Testing, Prior Learning
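The detection logic described in the abstract, an unexpectedly short response time paired with a correct answer, can be sketched as flagging responses whose standardized log response time residual under a lognormal model falls below a cutoff while the response is correct. The model, cutoff, and values below are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def flag_preknowledge(rt, correct, tau, alpha, beta, z_cut=-2.0):
    """Flag responses that are both correct and unexpectedly fast.
    A response time is 'unexpectedly fast' when its standardized residual
    under a lognormal RT model falls below z_cut."""
    z = alpha * (np.log(rt) - (beta - tau))   # standardized log-RT residual
    return (z < z_cut) & (correct == 1)

# Toy example: the second response is correct and far faster than expected.
rt      = np.array([9.0, 3.0, 40.0])
correct = np.array([1, 1, 0])
tau     = 0.0                                  # examinee speed
alpha   = np.array([1.6, 1.6, 1.4])            # time discrimination
beta    = np.array([3.1, 3.1, 3.4])            # time intensity
print(flag_preknowledge(rt, correct, tau, alpha, beta))
```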