Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for use of e-rater[R] to score the TOEFL iBT[R] Writing test. These approaches involve alternate criteria. In the first approach, the predicted variable is the expected rater score of the examinee's two essays. In the second approach, the predicted variable is the expected rater score of two essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
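The prediction task the abstract describes, regressing an expected human rater score on automated essay features, can be sketched as ordinary least squares. The features, weights, and data below are invented for illustration; they are not e-rater's actual feature set or model.

```python
import numpy as np

# Hypothetical automated essay features (e.g., length, error counts) and a
# simulated "expected rater score" per examinee -- illustrative data only.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                       # 100 essays, 3 features
true_w = np.array([0.8, -0.5, 0.3])                 # assumed true weights
y = X @ true_w + rng.normal(scale=0.2, size=100)    # noisy rater score

# Fit least-squares weights, as an automated scorer might, to predict the
# expected human rater score from the essay features.
design = np.column_stack([np.ones(100), X])         # add an intercept column
w, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ w
print(round(float(np.corrcoef(pred, y)[0, 1]), 2))  # prediction-score correlation
```

With the low noise level assumed here, the fitted predictions correlate highly with the simulated rater scores, which is the criterion such alternate approaches are compared on.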
Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine comparability of an online version to the original paper-pencil version of Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory
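A minimal sketch of the CTT reliability comparison the study describes: computing Cronbach's alpha for a paper and an online administration. The data are simulated from a single common trait purely for illustration; they are not DIT2 responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Classical-test-theory internal consistency; items is (persons, items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Simulated scores for two administrations of the same 8-item instrument:
# each person's items share a common ability plus independent noise.
rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))
paper = ability + rng.normal(scale=0.7, size=(200, 8))
online = ability + rng.normal(scale=0.7, size=(200, 8))
print(round(cronbach_alpha(paper), 2), round(cronbach_alpha(online), 2))
```

Comparable alpha values across the two modes is the kind of evidence the CTT analyses used to support reliability of both versions.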

Harrison, David A. – Journal of Educational Statistics, 1986
Multidimensional item response data were created. The strength of a general factor, the number of common factors, the distribution of items loading on common factors, and the number of items in simulated tests were manipulated. LOGIST effectively recovered both item and trait parameters in nearly all of the experimental conditions. (Author/JAZ)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Correlation
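Generating multidimensional item response data of the kind manipulated in the study can be sketched as follows. The factor structure (one general factor plus two group factors) and all parameter values are assumptions for illustration, not the study's actual design or the LOGIST recovery step.

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 500, 20

# Traits: a general factor plus two smaller group factors (assumed structure).
theta = rng.normal(size=(n_persons, 3))
loadings = np.zeros((n_items, 3))
loadings[:, 0] = 1.0      # general factor loads on every item
loadings[:10, 1] = 0.5    # first group factor on items 1-10
loadings[10:, 2] = 0.5    # second group factor on items 11-20
difficulty = rng.normal(size=n_items)

# Multidimensional logistic model: P(correct) = logistic(a'theta - b).
logits = theta @ loadings.T - difficulty
p = 1 / (1 + np.exp(-logits))
responses = (rng.random(p.shape) < p).astype(int)
print(responses.shape)  # (persons, items) matrix of 0/1 responses
```

Varying the general-factor loading relative to the group-factor loadings is how the strength of the general factor would be manipulated in such a simulation.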
Abedi, Jamal; Bruno, James – Journal of Computer-Based Instruction, 1989
Reports the results of several test-reliability experiments which compared a modified confidence weighted-admissible probability measurement (MCW-APM) with conventional forced choice or binary type (R-W) test scoring methods. Psychometric properties using G theory and conventional correlational methods are examined, and their implications for…
Descriptors: Ability Grouping, Analysis of Variance, Computer Assisted Testing, Correlation
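The contrast between conventional right-wrong scoring and probability-based admissible scoring can be sketched with a logarithmic proper scoring rule, one standard admissible probability measure. This is a generic illustration, not the MCW-APM procedure from the article.

```python
import numpy as np

def binary_score(chosen: int, key: int) -> float:
    """Conventional forced-choice right-wrong (R-W) scoring."""
    return 1.0 if chosen == key else 0.0

def log_admissible_score(probs: np.ndarray, key: int) -> float:
    """Logarithmic proper scoring rule: the examinee reports a probability
    distribution over the options and is scored on the probability assigned
    to the keyed answer. Honest reporting maximizes the expected score."""
    return float(np.log(probs[key]))

# A confident-and-correct examinee vs. a hedging one on a 3-option item:
# both would earn the same binary score, but different admissible scores.
confident = np.array([0.9, 0.05, 0.05])
hedging = np.array([0.4, 0.3, 0.3])
print(binary_score(0, 0) == binary_score(0, 0))  # R-W cannot distinguish them
print(log_admissible_score(confident, 0) > log_admissible_score(hedging, 0))
```

The extra information the probability response carries is what such experiments examine for its effect on test reliability.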
Vale, C. David; Gialluca, Kathleen A. – 1985
ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated that procedure using Monte Carlo simulation techniques. The current version of ASCAL was then compared to…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
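The Monte Carlo setup for such a calibration study can be sketched by generating responses from the three-parameter logistic (3PL) model with known parameters; a calibration program like ASCAL would then try to recover them. The parameter values below are assumptions for illustration, and the recovery step itself is omitted.

```python
import numpy as np

def p3pl(theta: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """Three-parameter logistic item characteristic curve:
    P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# Monte Carlo data generation: simulate examinee abilities, then draw 0/1
# responses for one item with known discrimination a, difficulty b, guessing c.
rng = np.random.default_rng(3)
theta = rng.normal(size=1000)
a, b, c = 1.2, 0.0, 0.2
p = p3pl(theta, a, b, c)
responses = (rng.random(1000) < p).astype(int)
print(round(float(responses.mean()), 2))  # observed proportion correct
```

With b at the mean ability and guessing level 0.2, the expected proportion correct is about 0.6, which the simulated responses should approximate.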