| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 2 |
| Since 2007 (last 20 years) | 4 |
| Descriptor | Count |
| --- | --- |
| Comparative Analysis | 6 |
| Computer Assisted Testing | 6 |
| Sample Size | 6 |
| Simulation | 4 |
| Test Items | 4 |
| Adaptive Testing | 3 |
| Correlation | 2 |
| Item Analysis | 2 |
| Pretests Posttests | 2 |
| Scores | 2 |
| Test Format | 2 |
| Source | Count |
| --- | --- |
| ETS Research Report Series | 1 |
| Educational and Psychological… | 1 |
| International Journal of… | 1 |
| Journal of Educational… | 1 |
| Quality Assurance in… | 1 |
| Author | Count |
| --- | --- |
| Ban, Jae-Chun | 1 |
| Breyer, F. Jay | 1 |
| Brooks, Thomas | 1 |
| Chen, Shu-Ying | 1 |
| Cikrikci, Rahime Nukhet | 1 |
| Cokluk Bokeoglu, Omay | 1 |
| Hanson, Bradley A. | 1 |
| Harris, Deborah J. | 1 |
| Jiao, Hong | 1 |
| Kárász, Judit T. | 1 |
| Lei, Pui-Wa | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 5 |
| Reports - Research | 4 |
| Reports - Evaluative | 2 |
| Tests/Questionnaires | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 1 |
Sahin Kursad, Merve; Cokluk Bokeoglu, Omay; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
Item parameter drift (IPD) is the systematic change in item parameter values over time, arising for various reasons. When it occurs in computerized adaptive testing (CAT), it introduces errors into the estimation of item and ability parameters. Identifying the conditions under which it arises in CAT is important for estimating item and…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Error of Measurement
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) that determines the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes, large-scale English language testing program. We examined the effectiveness of generic scoring and two variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Wang, Shudong; Jiao, Hong; Young, Michael J.; Brooks, Thomas; Olson, John – Educational and Psychological Measurement, 2008
In recent years, computer-based testing (CBT) has grown in popularity, is increasingly being implemented across the United States, and will likely become the primary mode for delivering tests in the future. Although CBT offers many advantages over traditional paper-and-pencil testing, assessment experts, researchers, practitioners, and users have…
Descriptors: Elementary Secondary Education, Reading Achievement, Computer Assisted Testing, Comparative Analysis
Ban, Jae-Chun; Hanson, Bradley A.; Wang, Tianyou; Yi, Qing; Harris, Deborah J. – 2000
The purpose of this study was to compare and evaluate five online pretest item calibration/scaling methods in computerized adaptive testing (CAT): (1) the marginal maximum likelihood estimate with one-EM cycle (OEM); (2) the marginal maximum likelihood estimate with multiple EM cycles (MEM); (3) Stocking's Method A (M. Stocking, 1988); (4)…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Lei, Pui-Wa; Chen, Shu-Ying; Yu, Lan – Journal of Educational Measurement, 2006
Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional…
Descriptors: Evaluation Methods, Test Bias, Computer Assisted Testing, Multiple Regression Analysis