Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) model; for triplets, it is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
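The information functions discussed in this entry build on the standard 2PL item response function, whose Fisher information is I(θ) = a²P(θ)(1 − P(θ)). A minimal sketch of that building block (parameter values are hypothetical, not taken from the paper):

```python
import math

def p_2pl(theta, a, b):
    # 2PL probability of endorsing/answering an item:
    # P(theta) = 1 / (1 + exp(-a * (theta - b)))
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Information peaks at theta = b, where P = 0.5 and I = a^2 / 4
print(item_information(0.0, a=1.5, b=0.0))  # 0.5625
```

Test information is the sum of such item informations over the items on the form; the paper extends this idea to ranked pairs and triplets of statements.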
DeCarlo, Lawrence T.; Zhou, Xiaoliang – Journal of Educational Measurement, 2021
In signal detection rater models for constructed response (CR) scoring, it is assumed that raters discriminate equally well between different latent classes defined by the scoring rubric. An extended model that relaxes this assumption is introduced; the model recognizes that a rater may not discriminate equally well between some of the scoring…
Descriptors: Scoring, Models, Bias, Perception
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate constructs to be measured and response tendencies, specific constraints have to be imposed on varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
Ranger, Jochen; Kuhn, Jörg-Tobias; Wolgast, Anett – Journal of Educational Measurement, 2021
Van der Linden's hierarchical model for responses and response times can be used to infer the ability and mental speed of test takers from their responses and response times in an educational test. A standard approach for this is maximum likelihood estimation. In real-world applications, the data of some test takers might be partly…
Descriptors: Models, Reaction Time, Item Response Theory, Tests
Wesolowski, Brian C.; Wind, Stefanie A. – Journal of Educational Measurement, 2019
Rater-mediated assessments are a common methodology for measuring persons, investigating rater behavior, and/or defining latent constructs. The purpose of this article is to provide a pedagogical framework for examining rater variability in the context of rater-mediated assessments using three distinct models. The first model is the observation…
Descriptors: Interrater Reliability, Models, Observation, Measurement
DeCarlo, Lawrence T. – Journal of Educational Measurement, 2021
In a signal detection theory (SDT) approach to multiple choice exams, examinees are viewed as choosing, for each item, the alternative that is perceived as being the most plausible, with perceived plausibility depending in part on whether or not an item is known. The SDT model is a process model and provides measures of item difficulty, item…
Descriptors: Perception, Bias, Theories, Test Items
Fujimoto, Ken A. – Journal of Educational Measurement, 2020
Multilevel bifactor item response theory (IRT) models are commonly used to account for features of the data that are related to the sampling and measurement processes used to gather those data. These models conventionally make assumptions about the portions of the data structure that represent these features. Unfortunately, when data violate these…
Descriptors: Bayesian Statistics, Item Response Theory, Achievement Tests, Secondary School Students
Brinkhuis, Matthieu J. S.; Bakker, Marjan; Maris, Gunter – Journal of Educational Measurement, 2015
The amount of data available in the context of educational measurement has vastly increased in recent years. Such data are often incomplete, involve tests administered at different time points and during the course of many years, and can therefore be quite challenging to model. In addition, intermediate results like grades or report cards being…
Descriptors: Educational Assessment, Measures (Individuals), Data, Robustness (Statistics)
Wiberg, Marie; González, Jorge – Journal of Educational Measurement, 2016
Equating methods make use of an appropriate transformation function to map the scores of one test form into the scale of another so that scores are comparable and can be used interchangeably. The equating literature shows that the ways of judging the success of an equating (i.e., the score transformation) might differ depending on the adopted…
Descriptors: Statistical Analysis, Equated Scores, Scores, Models
Mislevy, Robert J.; Zwick, Rebecca – Journal of Educational Measurement, 2012
A new entry in the testing lexicon is through-course summative assessment, a system consisting of components administered periodically during the academic year. As defined in the Race to the Top program, these assessments are intended to yield a yearly summative score for accountability purposes. They must provide for both individual and group…
Descriptors: National Competency Tests, Inferences, Item Response Theory, Summative Evaluation
Lee, Won-Chan – Journal of Educational Measurement, 2010
In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…
Descriptors: Classification, Item Response Theory, Comparative Analysis, Models
van der Linden, Wim J. – Journal of Educational Measurement, 2010
Although response times on test items are recorded on a natural scale, the scale for some of the parameters in the lognormal response-time model (van der Linden, 2006) is not fixed. As a result, when the model is used to periodically calibrate new items in a testing program, the parameters are not automatically mapped onto a common scale. Several…
Descriptors: Test Items, Testing Programs, Measures (Individuals), Item Response Theory
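The scale indeterminacy this entry refers to is easy to see in the lognormal model itself, where log T is normal with mean β − τ (item time intensity minus person speed): shifting all β and τ by the same constant leaves the likelihood unchanged. A small sketch under that standard parameterization (values are illustrative):

```python
import math

def loglik_rt(t, alpha, beta, tau):
    # Log-density of the lognormal response-time model:
    # log T ~ Normal(beta - tau, 1 / alpha**2), with
    # beta = item time intensity, tau = person speed, alpha = discrimination.
    mu = beta - tau
    z = alpha * (math.log(t) - mu)
    return math.log(alpha) - math.log(t) - 0.5 * math.log(2.0 * math.pi) - 0.5 * z * z

# Shifting every beta and tau by the same constant c changes nothing,
# so items calibrated in separate runs need an explicit linking step
# to end up on a common scale.
c = 3.0
print(loglik_rt(2.0, 1.5, 4.0, 3.8))
print(loglik_rt(2.0, 1.5, 4.0 + c, 3.8 + c))  # identical value
```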
Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David – Journal of Educational Measurement, 2010
In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
Descriptors: Test Bias, Models, Test Items, Difficulty Level
de la Torre, Jimmy; Lee, Young-Sun – Journal of Educational Measurement, 2010
Cognitive diagnosis models (CDMs), as alternative approaches to unidimensional item response models, have received increasing attention in recent years. CDMs are developed for the purpose of identifying the mastery or nonmastery of multiple fine-grained attributes or skills required for solving problems in a domain. For CDMs to receive wider use,…
Descriptors: Ability Grouping, Item Response Theory, Models, Problem Solving
Muckle, Timothy J.; Karabatsos, George – Journal of Educational Measurement, 2009
It is known that the Rasch model is a special two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…
Descriptors: Test Items, Item Response Theory, Models, Regression (Statistics)
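The HGLM view described in this entry amounts to writing the many-faceted Rasch model as a logistic regression with an examinee intercept and fixed effects for items and judges. A minimal sketch of that link function (the facet structure is the standard MFRM form; values are illustrative):

```python
import math

def mfrm_prob(theta, delta_item, severity_judge):
    # Many-facet Rasch model as a logistic (GLM) link:
    # logit P = theta - delta_item - severity_judge,
    # i.e., a random intercept for examinee ability plus fixed effects
    # for item difficulty and judge severity.
    eta = theta - delta_item - severity_judge
    return 1.0 / (1.0 + math.exp(-eta))

# A harsher judge (larger severity) lowers the modeled success probability
print(mfrm_prob(1.0, 0.5, 0.5))  # 0.5
print(mfrm_prob(1.0, 0.5, 1.5))
```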