Li, Dongmei – Journal of Educational Measurement, 2022
Equating error is usually small relative to measurement error, but it can be a major source of error in the mean scores of large groups in educational measurement, such as year-to-year fluctuations in state mean scores. Though testing programs may routinely calculate the standard error of equating (SEE), the…
Descriptors: Error Patterns, Educational Testing, Group Testing, Statistical Analysis
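The SEE is often estimated by resampling. Below is a minimal bootstrap sketch of that general idea (an illustration, not necessarily the method this article studies), assuming simple mean-sigma linear equating of two simulated forms taken by randomly equivalent groups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw scores on two forms taken by randomly equivalent groups.
scores_x = rng.binomial(40, 0.6, size=2000)   # new form
scores_y = rng.binomial(40, 0.65, size=2000)  # reference form

def linear_equate(x, y):
    """Mean-sigma linear equating: map the x scale onto the y scale."""
    a = y.std(ddof=1) / x.std(ddof=1)
    b = y.mean() - a * x.mean()
    return a, b

# Bootstrap the equated value of one raw score point; the SD of the
# bootstrap replications estimates the SEE at that point.
raw_point = 25
boot_vals = []
for _ in range(1000):
    bx = rng.choice(scores_x, size=scores_x.size, replace=True)
    by = rng.choice(scores_y, size=scores_y.size, replace=True)
    a, b = linear_equate(bx, by)
    boot_vals.append(a * raw_point + b)

print("bootstrap SEE at raw score 25:", np.std(boot_vals, ddof=1))
```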
Sinharay, Sandip – Journal of Educational Measurement, 2023
Technical difficulties and other unforeseen events occasionally lead to incomplete data on educational tests, which necessitates reporting imputed scores to some examinees. While several approaches exist for reporting imputed scores, there is little guidance on reporting their uncertainty. In this paper,…
Descriptors: Evaluation Methods, Scores, Standardized Tests, Simulation
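One standard way to quantify such uncertainty is multiple imputation combined through Rubin's rules; the sketch below illustrates that general approach (an assumption here, not necessarily the paper's proposal):

```python
import numpy as np

# Hypothetical: m plausible (imputed) scores for one examinee, each with a
# within-imputation error variance from the scoring model.
imputed_scores = np.array([512.0, 518.0, 509.0, 515.0, 521.0])
within_vars = np.array([25.0, 24.0, 26.0, 25.5, 24.5])

m = len(imputed_scores)
point_estimate = imputed_scores.mean()
W = within_vars.mean()            # average within-imputation variance
B = imputed_scores.var(ddof=1)    # between-imputation variance
T = W + (1 + 1 / m) * B           # Rubin's total variance

print(f"reported score: {point_estimate:.1f}, SE: {np.sqrt(T):.2f}")
```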
Hong, Seong Eun; Monroe, Scott; Falk, Carl F. – Journal of Educational Measurement, 2020
In educational and psychological measurement, a person-fit statistic (PFS) is designed to identify aberrant response patterns. For parametric PFSs, valid inference depends on several assumptions, one of which is that the item response theory (IRT) model is correctly specified. Previous studies have used empirical data sets to explore the effects…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Error of Measurement
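For context, a classic parametric PFS of the kind studied in this literature is the standardized log-likelihood statistic l_z; a minimal sketch for the 2PL model (treating theta as known, which is the textbook simplification):

```python
import numpy as np

def lz_statistic(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic l_z under a 2PL model.

    responses: 0/1 item scores; theta: ability (treated as known here);
    a, b: item discrimination and difficulty parameters.
    """
    p = 1 / (1 + np.exp(-a * (theta - b)))
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - mean) / np.sqrt(var)

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 30)
b = rng.normal(0, 1, 30)
theta = 0.5
p_true = 1 / (1 + np.exp(-a * (theta - b)))
u = (rng.random(30) < p_true).astype(int)
print("l_z:", lz_statistic(u, theta, a, b))  # near 0 for well-fitting data
```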
Sinharay, Sandip – Journal of Educational Measurement, 2018
Response-time models are of increasing interest in educational and psychological testing. This article focuses on the lognormal model for response times, which is one of the most popular response-time models, and suggests a simple person-fit statistic for the model. The distribution of the statistic under the null hypothesis of no misfit is proved…
Descriptors: Reaction Time, Educational Testing, Psychological Testing, Models
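Under the lognormal model, log response times are normal with item-specific time-intensity and time-discrimination parameters; the sketch below uses a simple sum of squared standardized log-time residuals as a person-fit check (an illustration of the idea, not necessarily the article's statistic):

```python
import numpy as np
from scipy import stats

# Lognormal response-time model: log T_j ~ N(beta_j - tau, 1 / alpha_j^2).
rng = np.random.default_rng(2)
J = 40
alpha = rng.uniform(1.0, 2.5, J)   # time-discrimination parameters
beta = rng.normal(0, 0.5, J)       # time-intensity parameters
tau = 0.3                          # examinee speed

log_t = rng.normal(beta - tau, 1 / alpha)      # simulated log times
z = alpha * (log_t - (beta - tau))             # standardized residuals
chi2 = np.sum(z ** 2)                          # ~ chi-square(J) if params known
p_value = stats.chi2.sf(chi2, df=J)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3f}")  # small p flags aberrant speed
```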
Veldkamp, Bernard P. – Journal of Educational Measurement, 2016
Many standardized tests are now administered via computer rather than paper-and-pencil format. The computer-based delivery mode brings with it certain advantages. One advantage is the ability to adapt the difficulty level of the test to the ability level of the test taker in what has been termed computerized adaptive testing (CAT). A second…
Descriptors: Computer Assisted Testing, Reaction Time, Standardized Tests, Difficulty Level
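The core of CAT item selection is choosing the item most informative at the current ability estimate; a minimal sketch for the 2PL model with a hypothetical item pool:

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """2PL item information: I(theta) = a^2 * p * (1 - p)."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at theta_hat."""
    info = fisher_info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf   # exclude items already given
    return int(np.argmax(info))

rng = np.random.default_rng(3)
a = rng.uniform(0.7, 2.0, 200)
b = rng.normal(0, 1, 200)
print("next item:", select_next_item(0.0, a, b, administered={5, 17}))
```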
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
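One standard option in such nonequivalent-groups-with-anchor-test (NEAT) designs is chained linear equating: link the new form to the anchor in one group, then the anchor to the reference form in the other. A minimal sketch with hypothetical summary statistics (not necessarily among the design variations this study compared):

```python
import numpy as np

def linear_link(x, mu_from, sd_from, mu_to, sd_to):
    """Map a score from one scale to another by matching mean and SD."""
    return mu_to + (sd_to / sd_from) * (x - mu_from)

# Hypothetical moments from a NEAT design; the anchor V is taken by both groups.
mu_x1, sd_x1, mu_v1, sd_v1 = 30.0, 6.0, 15.0, 3.0   # group 1: form X + anchor
mu_y2, sd_y2, mu_v2, sd_v2 = 32.0, 5.5, 14.0, 3.2   # group 2: form Y + anchor

def chained_linear(x):
    v = linear_link(x, mu_x1, sd_x1, mu_v1, sd_v1)     # X scale -> anchor scale
    return linear_link(v, mu_v2, sd_v2, mu_y2, sd_y2)  # anchor scale -> Y scale

print("equated Y score for X = 28:", round(chained_linear(28), 2))
```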
Kang, Taehoon; Chen, Troy T. – Journal of Educational Measurement, 2008
Orlando and Thissen's S-X² item-fit index has performed better than traditional item-fit statistics such as Yen's Q₁ and McKinley and Mills' G² for dichotomous item response theory (IRT) models. This study extends the utility of S-X² to polytomous IRT models, including the generalized partial…
Descriptors: Item Response Theory, Models, Rating Scales, Generalization
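For the dichotomous case, S-X² compares observed and model-expected proportions correct within summed-score groups, with the expected values obtained via the Lord-Wingersky recursion. A sketch of that idea (the operational cell-collapsing rules are omitted):

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL success probabilities, shape (n_theta, n_items)."""
    return 1 / (1 + np.exp(-(theta[:, None] - b[None, :]) * a[None, :]))

def score_dist(p):
    """Lord-Wingersky recursion: P(summed score = k | theta_q)."""
    n_q, n_i = p.shape
    dist = np.zeros((n_q, n_i + 1))
    dist[:, 0] = 1.0
    for j in range(n_i):
        new = dist * (1 - p[:, [j]])
        new[:, 1:] += dist[:, :-1] * p[:, [j]]
        dist = new
    return dist

def s_x2(item, data, a, b, n_quad=61):
    """S-X^2 for one dichotomous item (a sketch of the Orlando-Thissen idea)."""
    theta = np.linspace(-4, 4, n_quad)
    w = np.exp(-0.5 * theta ** 2); w /= w.sum()        # N(0,1) quadrature
    p = p_2pl(theta, a, b)
    n_items = data.shape[1]
    full = score_dist(p)                                # all items
    rest = score_dist(np.delete(p, item, axis=1))       # all items but `item`
    total = data.sum(axis=1)
    stat = 0.0
    for k in range(1, n_items):                         # interior scores only
        in_group = total == k
        n_k = in_group.sum()
        if n_k == 0:
            continue
        o_k = data[in_group, item].mean()               # observed prop. correct
        e_k = (w * p[:, item] * rest[:, k - 1]).sum() / (w * full[:, k]).sum()
        if 0 < e_k < 1:
            stat += n_k * (o_k - e_k) ** 2 / (e_k * (1 - e_k))
    return stat

rng = np.random.default_rng(4)
a_par = rng.uniform(0.8, 1.8, 20); b_par = rng.normal(0, 1, 20)
th = rng.normal(0, 1, 1000)
dat = (rng.random((1000, 20)) < p_2pl(th, a_par, b_par)).astype(int)
print("S-X^2 for item 0:", round(s_x2(0, dat, a_par, b_par), 1))
```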
Myford, Carol M.; Wolfe, Edward W. – Journal of Educational Measurement, 2009
In this study, we describe a framework for monitoring rater performance over time. We present several statistical indices to identify raters whose standards drift and explain how to use those indices operationally. To illustrate the use of the framework, we analyzed rating data from the 2002 Advanced Placement English Literature and Composition…
Descriptors: English Literature, Advanced Placement, Measures (Individuals), Writing (Composition)
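A simple drift index of the kind such monitoring frameworks use is a per-window z-statistic comparing a rater's windowed mean rating to their overall mean; a minimal sketch with simulated ratings (a simplified stand-in, not the article's Rasch-based indices):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical ratings from one rater across four scoring windows; the
# rater becomes more lenient in window 4.
windows = np.repeat([1, 2, 3, 4], 50)
ratings = np.concatenate([rng.normal(3.0, 0.8, 150), rng.normal(3.5, 0.8, 50)])

grand_mean, grand_sd = ratings.mean(), ratings.std(ddof=1)
for w in np.unique(windows):
    r = ratings[windows == w]
    z = (r.mean() - grand_mean) / (grand_sd / np.sqrt(r.size))
    print(f"window {w}: mean={r.mean():.2f}, z={z:+.2f}")  # |z| > 2 flags drift
```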
Clauser, Brian E.; Mee, Janet; Baldwin, Su G.; Margolis, Melissa J.; Dillon, Gerard F. – Journal of Educational Measurement, 2009
Although the Angoff procedure is among the most widely used standard setting procedures for tests comprising multiple-choice items, research has shown that subject matter experts have considerable difficulty accurately making the required judgments in the absence of examinee performance data. Some authors have viewed the need to provide…
Descriptors: Standard Setting (Scoring), Program Effectiveness, Expertise, Health Personnel
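In the Angoff procedure, each judge estimates the probability that a minimally competent examinee answers each item correctly, and the cut score aggregates those judgments; a minimal sketch with hypothetical panelist ratings:

```python
import numpy as np

# panelist_ratings[j, i]: judge j's estimated probability that a minimally
# competent examinee answers item i correctly.
panelist_ratings = np.array([
    [0.6, 0.7, 0.4, 0.8, 0.5],
    [0.5, 0.8, 0.5, 0.7, 0.6],
    [0.7, 0.6, 0.3, 0.9, 0.5],
])

# Each judge's implied cut score is the sum of their item probabilities;
# the panel cut score is the mean across judges.
judge_cuts = panelist_ratings.sum(axis=1)
print("judge cut scores:", judge_cuts)
print("panel cut score:", judge_cuts.mean().round(2))
```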
Cui, Ying; Leighton, Jacqueline P. – Journal of Educational Measurement, 2009
In this article, we introduce a person-fit statistic called the hierarchy consistency index (HCI) to help detect misfitting item response vectors for tests developed and analyzed based on a cognitive model. The HCI ranges from -1.0 to 1.0, with values close to -1.0 indicating that students respond unexpectedly or differently from the responses…
Descriptors: Test Length, Simulation, Correlation, Research Methodology
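The HCI counts, among items an examinee answered correctly, how often the items requiring a subset of the same attributes were also answered correctly; a sketch under a simple subset-based reading of the index:

```python
import numpy as np

def hci(responses, q_matrix):
    """Hierarchy consistency index (a sketch of the idea): if an examinee
    answers item j correctly, they should also answer correctly any item
    requiring a subset of item j's attributes.

    responses: 0/1 vector; q_matrix[j, k] = 1 if item j requires attribute k.
    Returns a value in [-1, 1]; values near -1 indicate misfit.
    """
    n_items = len(responses)
    misfits = comparisons = 0
    for j in range(n_items):
        if responses[j] != 1:
            continue
        for s in range(n_items):
            if s != j and np.all(q_matrix[s] <= q_matrix[j]):
                comparisons += 1          # item s is a prerequisite of item j
                misfits += responses[s] == 0
    return 1.0 if comparisons == 0 else 1 - 2 * misfits / comparisons

q = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]])
print(hci(np.array([1, 1, 1]), q))  #  1.0: perfectly consistent
print(hci(np.array([0, 0, 1]), q))  # -1.0: hard item right, prerequisites wrong
```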

Trentham, Landa L. – Journal of Educational Measurement, 1975
Descriptors: Comparative Testing, Educational Testing, Elementary Education, Grade 6
Armstrong, Ronald D.; Shi, Min – Journal of Educational Measurement, 2009
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
Descriptors: Probability, Simulation, Models, Psychometrics
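A CUSUM scheme accumulates log-likelihood ratios item by item and resets at zero, flagging an examinee when the sum crosses a threshold; a generic sketch (the probabilities and alternative model below are hypothetical, not the article's model-free formulation):

```python
import numpy as np

def cusum_flags(responses, p_null, p_alt, threshold=3.0):
    """One-sided CUSUM on log-likelihood ratios of aberrant vs. normal
    responding. p_null: success probabilities under normal responding;
    p_alt: under the suspected aberrance."""
    c, path = 0.0, []
    for u, p0, p1 in zip(responses, p_null, p_alt):
        llr = np.log((p1 if u else 1 - p1) / (p0 if u else 1 - p0))
        c = max(0.0, c + llr)          # accumulate evidence, reset at zero
        path.append(c)
    path = np.array(path)
    return path, bool(np.any(path > threshold))

rng = np.random.default_rng(6)
p0 = rng.uniform(0.4, 0.8, 50)          # expected success probabilities
u = (rng.random(50) < p0).astype(int)   # normal responding
u[30:] = 1                              # simulated aberrance: all correct late
path, flagged = cusum_flags(u, p0, np.minimum(p0 + 0.15, 0.95))
print("flagged:", flagged, "max CUSUM:", path.max().round(2))
```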
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego – Journal of Educational Measurement, 2007
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer-grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Descriptors: Inferences, Models, Item Response Theory, Cognitive Measurement
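A toy example of Bayesian-network-style skills diagnosis: enumerate the joint distribution over binary skills (here with a prerequisite link) and condition on observed item responses. All structure and rates below are illustrative assumptions, not the paper's models:

```python
from itertools import product

# Two binary skills with a prerequisite relation; three items that depend
# on one or both skills through slip/guess rates.
p_s1 = 0.6                        # P(skill 1 mastered)
p_s2_given_s1 = {1: 0.7, 0: 0.1}  # skill 2 is easier to master given skill 1
slip, guess = 0.1, 0.2
item_skills = [(0,), (1,), (0, 1)]  # skills each item requires

def p_correct(skills, required):
    mastered = all(skills[k] for k in required)
    return 1 - slip if mastered else guess

def posterior(responses):
    """Enumerate the joint over (s1, s2) and condition on the responses."""
    joint = {}
    for s1, s2 in product([0, 1], repeat=2):
        p = (p_s1 if s1 else 1 - p_s1) * \
            (p_s2_given_s1[s1] if s2 else 1 - p_s2_given_s1[s1])
        for u, req in zip(responses, item_skills):
            pc = p_correct((s1, s2), req)
            p *= pc if u else 1 - pc
        joint[(s1, s2)] = p
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}

post = posterior([1, 0, 0])   # correct on item 1 only
print({k: round(v, 3) for k, v in post.items()})
```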

Lennon, Roger T. – Journal of Educational Measurement, 1975
Reviews the 1974 Standards, an update serving as a guide to test development and publishing and to the training of persons for these endeavors. (DEP)
Descriptors: Educational Testing, Psychological Testing, Scoring, Standards

Wang, Tianyou; Kolen, Michael J. – Journal of Educational Measurement, 2001
Reviews research literature on comparability issues in computerized adaptive testing (CAT) and synthesizes issues specific to comparability and test security. Develops a framework for evaluating comparability that contains three categories of criteria: (1) validity; (2) psychometric property/reliability; and (3) statistical assumption/test…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Criteria