Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 2 |
Descriptor
| Simulation | 5 |
| Comparative Analysis | 3 |
| Item Response Theory | 3 |
| Accuracy | 2 |
| Methods | 2 |
| Scoring | 2 |
| Tables (Data) | 2 |
| Test Items | 2 |
| Ability | 1 |
| Adaptive Testing | 1 |
| Bayesian Statistics | 1 |
Author
| Baker, Eva L. | 1 |
| Chen, Hanwei | 1 |
| Chung, Gregory K. W. K. | 1 |
| Cui, Zhongmin | 1 |
| DeCarlo, Lawrence T. | 1 |
| Fang, Yu | 1 |
| Glas, Cees A. W. | 1 |
| Topczewski, Anna | 1 |
| Vos, Hans J. | 1 |
| Woodruff, David | 1 |
| Zhang, Jinming | 1 |
Publication Type
| Numerical/Quantitative Data | 5 |
| Reports - Research | 3 |
| Journal Articles | 2 |
| Reports - Evaluative | 2 |
Topczewski, Anna; Cui, Zhongmin; Woodruff, David; Chen, Hanwei; Fang, Yu – ACT, Inc., 2013
This paper investigates four methods of linear equating under the common item nonequivalent groups design. Three of the methods are well known: Tucker, Angoff-Levine, and Congeneric-Levine. A fourth method is presented as a variant of the Congeneric-Levine method. Using simulation data generated from the three-parameter logistic IRT model, we…
Descriptors: Comparative Analysis, Equated Scores, Methods, Simulation
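As a rough illustration of the Tucker method named in the entry above, here is a minimal sketch assuming the standard Kolen and Brennan synthetic-population formulas and purely illustrative toy data; it is not the authors' code, and all variable names are assumptions.

    # Hedged sketch: Tucker linear equating under the common-item
    # nonequivalent groups (CINEG) design.
    import numpy as np

    def tucker_linear_equating(x, v1, y, v2, w1=0.5):
        """Return a function mapping Form X scores onto the Form Y scale.
        x, v1: total and anchor scores for the group taking Form X.
        y, v2: total and anchor scores for the group taking Form Y.
        w1: weight of population 1 in the synthetic population."""
        w2 = 1.0 - w1
        g1 = np.cov(x, v1, bias=True)[0, 1] / np.var(v1)  # slope of X on anchor
        g2 = np.cov(y, v2, bias=True)[0, 1] / np.var(v2)  # slope of Y on anchor
        dmu = v1.mean() - v2.mean()
        dvar = np.var(v1) - np.var(v2)
        mu_xs = x.mean() - w2 * g1 * dmu                   # synthetic-population
        mu_ys = y.mean() + w1 * g2 * dmu                   # means and variances
        var_xs = np.var(x) - w2 * g1**2 * dvar + w1 * w2 * (g1 * dmu)**2
        var_ys = np.var(y) + w1 * g2**2 * dvar + w1 * w2 * (g2 * dmu)**2
        a = np.sqrt(var_ys / var_xs)
        return lambda score: a * (score - mu_xs) + mu_ys

    # Toy example with normally distributed scores (purely illustrative).
    rng = np.random.default_rng(0)
    v1 = rng.normal(20, 4, 2000); x = v1 * 1.5 + rng.normal(10, 3, 2000)
    v2 = rng.normal(18, 4, 2000); y = v2 * 1.4 + rng.normal(12, 3, 2000)
    equate = tucker_linear_equating(x, v1, y, v2)
    print(equate(40.0))  # Form X raw score 40 expressed on the Form Y scale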
DeCarlo, Lawrence T. – ETS Research Report Series, 2008
Rater behavior in essay grading can be viewed as a signal-detection task, in that raters attempt to discriminate between latent classes of essays, with the latent classes being defined by a scoring rubric. The present report examines basic aspects of an approach to constructed-response (CR) scoring via a latent-class signal-detection model. The…
Descriptors: Scoring, Responses, Test Format, Bias
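A minimal sketch of the general idea behind a latent-class signal-detection view of rating: each essay belongs to a latent rubric class, the rater perceives a noisy signal whose location depends on that class, and fixed response criteria turn the perception into an observed score. The parameter names and values below are illustrative assumptions, not estimates from the report.

    # Hedged sketch: simulating ratings under a latent-class
    # signal-detection model of constructed-response scoring.
    import numpy as np

    rng = np.random.default_rng(1)
    n_essays, n_classes = 5000, 4
    latent = rng.integers(0, n_classes, n_essays)   # true rubric class per essay

    d = 1.5                                          # rater discrimination
    criteria = np.array([0.8, 2.2, 3.7])             # response criteria on the
    perception = d * latent + rng.normal(0, 1, n_essays)  # perceptual continuum
    observed = np.digitize(perception, criteria)     # observed scores 0..3

    # Rater accuracy: how often the observed score recovers the latent class.
    print("exact agreement:", (observed == latent).mean())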
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT-based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
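One way to combine IRT with Bayesian sequential decisions for mastery testing can be sketched as follows: keep a grid posterior over ability under a Rasch model, update it after every response, and stop as soon as the posterior mastery probability crosses an upper or lower threshold. The thresholds, prior, and item difficulties below are illustrative assumptions, not the framework developed in the report.

    # Hedged sketch: Bayesian sequential mastery classification with a Rasch model.
    import numpy as np

    def sequential_mastery(responses, difficulties, cutoff=0.0, lo=0.05, hi=0.95):
        theta = np.linspace(-4, 4, 161)
        post = np.exp(-0.5 * theta**2)                 # standard-normal prior
        post /= post.sum()
        for k, (u, b) in enumerate(zip(responses, difficulties), start=1):
            p = 1.0 / (1.0 + np.exp(-(theta - b)))     # Rasch success probability
            post *= p if u == 1 else (1.0 - p)
            post /= post.sum()
            p_master = post[theta >= cutoff].sum()
            if p_master >= hi:
                return "master", k
            if p_master <= lo:
                return "nonmaster", k
        return "undecided", len(responses)

    # Toy run: a fairly able examinee answering items of mixed difficulty.
    rng = np.random.default_rng(2)
    b = rng.uniform(-1.5, 1.5, 30)
    u = (rng.random(30) < 1 / (1 + np.exp(-(0.8 - b)))).astype(int)
    print(sequential_mastery(u, b))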
Zhang, Jinming – ETS Research Report Series, 2005
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Descriptors: Statistical Bias, Maximum Likelihood Statistics, Computation, Ability
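To illustrate the contrast the entry above draws, here is a minimal sketch comparing the maximum likelihood estimate of ability with Warm's weighted likelihood estimate under a 2PL model with known item parameters; for the 2PL, the weighted likelihood criterion reduces to the log-likelihood plus half the log of the test information. The item parameters and response pattern are illustrative assumptions.

    # Hedged sketch: MLE vs. weighted likelihood estimation (WLE) of ability.
    import numpy as np

    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])       # known discriminations
    b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])     # known difficulties
    u = np.array([1, 1, 1, 0, 1])                 # observed responses

    theta = np.linspace(-4, 4, 2001)
    p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))          # 2PL probabilities
    loglik = (u * np.log(p) + (1 - u) * np.log(1 - p)).sum(axis=1)
    info = (a**2 * p * (1 - p)).sum(axis=1)                  # test information

    mle = theta[np.argmax(loglik)]
    wle = theta[np.argmax(loglik + 0.5 * np.log(info))]      # Warm's correction
    print(f"MLE: {mle:.2f}  WLE: {wle:.2f}")  # the WLE counteracts the outward
                                              # bias of the MLE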
Chung, Gregory K. W. K.; Baker, Eva L. – 1997
This report documents the technology initiatives of the Center for Research on Evaluation, Standards, and Student Testing (CRESST) in two broad areas: (1) using technology to improve the quality, utility, and feasibility of existing measures; and (2) using technology to design and develop new assessments and measurement approaches available…
Descriptors: Computer Assisted Testing, Constructed Response, Educational Planning, Educational Technology
