Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 4
Since 2006 (last 20 years): 11
Descriptor
Difficulty Level: 18
Test Items: 14
Models: 13
Item Response Theory: 11
Test Bias: 5
Comparative Analysis: 4
Mathematical Models: 4
Foreign Countries: 3
Mathematics Tests: 3
Multiple Choice Tests: 3
Probability: 3
Source
Journal of Educational Measurement: 18
Author
Jin, Kuan-Yu: 2
Airasian, Peter W.: 1
Andrich, David: 1
Bart, William M.: 1
Beretvas, S. Natasha: 1
Bolt, Daniel M.: 1
Cohen, Allan: 1
De Boeck, Paul: 1
DeCarlo, Lawrence T.: 1
DeMars, Christine E.: 1
Debeer, Dries: 1
Publication Type
Journal Articles: 16
Reports - Research: 10
Reports - Descriptive: 3
Reports - Evaluative: 3
Education Level
Secondary Education: 2
Grade 8: 1
Assessments and Surveys
Program for International Student Assessment: 2
Early Childhood Longitudinal…: 1
Liao, Xiangyi; Bolt, Daniel M.; Kim, Jee-Seon – Journal of Educational Measurement, 2024
Item difficulty and dimensionality often correlate, implying that unidimensional IRT approximations to multidimensional data (i.e., reference composites) can take a curvilinear form in the multidimensional space. Although this issue has been previously discussed in the context of vertical scaling applications, we illustrate how such a phenomenon…
Descriptors: Difficulty Level, Simulation, Multidimensional Scaling, Graphs
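
For context, a sketch of the setup in conventional MIRT notation (assumed here, not taken from the article): under a compensatory two-dimensional 2PL,

    P(X_{ij} = 1 \mid \theta_{1j}, \theta_{2j}) = \frac{\exp(a_{1i}\theta_{1j} + a_{2i}\theta_{2j} + d_i)}{1 + \exp(a_{1i}\theta_{1j} + a_{2i}\theta_{2j} + d_i)}

the reference composite is usually taken along the principal eigenvector of A'A, where A collects the item discrimination vectors. If the slope ratio a_{2i}/a_{1i} increases with item difficulty, the best-fitting unidimensional direction shifts across the difficulty range, which is one way a curvilinear composite can arise.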
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is common test-taker behavior when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
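
Distractor analysis of this kind is commonly carried out with Bock's nominal response model, under which each option k of an m-option item gets its own trace line:

    P(X_{ij} = k \mid \theta_j) = \frac{\exp(a_{ik}\theta_j + c_{ik})}{\sum_{h=1}^{m} \exp(a_{ih}\theta_j + c_{ih})}

A useful distractor shows a trace line that falls as \theta_j rises. One way to accommodate random guessing, in the direction the abstract points, is a latent class whose option probabilities are flat at 1/m; that sketch is an illustration, not the authors' model.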
DeCarlo, Lawrence T. – Journal of Educational Measurement, 2021
In a signal detection theory (SDT) approach to multiple choice exams, examinees are viewed as choosing, for each item, the alternative that is perceived as being the most plausible, with perceived plausibility depending in part on whether or not an item is known. The SDT model is a process model and provides measures of item difficulty, item…
Descriptors: Perception, Bias, Theories, Test Items
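
A minimal sketch of an SDT choice rule for an m-alternative item (a max-of-normals formulation; DeCarlo's exact parameterization may differ): the examinee picks the option with the highest perceived plausibility, with the correct option shifted by a detection parameter d_i, so

    P(\text{correct}) = \int_{-\infty}^{\infty} \phi(x - d_i)\,\Phi(x)^{m-1}\,dx

i.e., the probability that a N(d_i, 1) draw exceeds the maximum of m - 1 independent N(0, 1) draws.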
Andrich, David; Marais, Ida – Journal of Educational Measurement, 2018
Even though guessing biases difficulty estimates as a function of item difficulty in the dichotomous Rasch model, assessment programs with tests that include multiple-choice items often construct scales using this model. Research has shown that when all items are multiple-choice, this bias can largely be eliminated. However, many assessments have…
Descriptors: Multiple Choice Tests, Test Items, Guessing (Tests), Test Bias
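
The mechanism can be sketched as follows. If responses are generated with a guessing floor c but scaled with the Rasch model,

    \text{data: } P = c + (1 - c)\,\frac{e^{\theta - b}}{1 + e^{\theta - b}} \qquad \text{fitted: } P = \frac{e^{\theta - b}}{1 + e^{\theta - b}}

guessing inflates success rates mainly where the logistic term is small, that is, on hard items, so their Rasch difficulty estimates are biased downward and the difficulty scale is compressed at the hard end.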
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performance on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
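
The phenomenon is easy to reproduce in simulation. A minimal sketch, assuming a Rasch generating model and an invented effort-decline rule (20% of examinees guess at random on the last ten items; none of this is the authors' model):

    import numpy as np

    rng = np.random.default_rng(0)
    n_persons, n_items = 1000, 40
    theta = rng.normal(0, 1, n_persons)   # person abilities
    b = np.linspace(-2, 2, n_items)       # item difficulties

    # Rasch success probabilities under full effort
    p_full = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))

    # Invented effort model: 20% of examinees guess (4 options) on items 31-40
    low_effort = rng.random(n_persons) < 0.2
    p = p_full.copy()
    p[np.ix_(low_effort, np.arange(30, n_items))] = 0.25

    x = (rng.random((n_persons, n_items)) < p).astype(int)
    print("observed p-values, items 31-40:", x[:, 30:].mean(axis=0).round(2))
    print("full-effort p-values          :", p_full[:, 30:].mean(axis=0).round(2))

End-of-test items look harder than the generating model says they are, a distortion that a standard IRT calibration would absorb into its difficulty estimates.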
Li, Feiming; Cohen, Allan; Shen, Linjun – Journal of Educational Measurement, 2012
Computer-based tests (CBTs) often use random ordering of items in order to minimize item exposure and reduce the potential for answer copying. Little research has been done, however, to examine item position effects for these tests. In this study, different versions of a Rasch model and different response time models were examined and applied to…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Models
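
A common choice of response-time model in this line of work is van der Linden's lognormal model; a sketch in conventional notation (assumed, not taken from the article):

    \ln t_{ij} \sim N(\beta_i - \tau_j,\; \alpha_i^{-2})

with time intensity \beta_i and time discrimination \alpha_i for item i, and speed \tau_j for person j. Position effects can then be probed jointly on the response side (shifts in difficulty by position) and the time side (shifts in \beta_i by position).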
Jiao, Hong; Wang, Shudong; He, Wei – Journal of Educational Measurement, 2013
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Descriptors: Computation, Item Response Theory, Models, Monte Carlo Methods
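
The Rasch testlet model in question can be written, in standard notation, as

    \text{logit}\,P(X_{ij} = 1) = \theta_j - b_i + \gamma_{j d(i)}, \qquad \gamma_{jd} \sim N(0, \sigma_d^2)

where d(i) is the testlet containing item i. Reading \theta_j and \gamma_{jd(i)} as nested random intercepts yields the three-level one-parameter formulation whose equivalence the study demonstrates.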
Debeer, Dries; Janssen, Rianne – Journal of Educational Measurement, 2013
Changing the order of items between alternate test forms to prevent copying and to enhance test security is a common practice in achievement testing. However, these changes in item order may affect item and test characteristics. Several procedures have been proposed for studying these item-order effects. The present study explores the use of…
Descriptors: Item Response Theory, Test Items, Test Format, Models
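
One common parameterization of a linear item-position effect (assumed here for illustration; the article compares several procedures) is

    \text{logit}\,P(X_{ijk} = 1) = \theta_j - b_i - \delta\,(k - 1)

where k is the position at which person j sees item i and \delta is the per-position shift in difficulty. Setting \delta = 0 recovers the ordinary Rasch model, and \delta can be given a person index to capture individual differences in, e.g., fatigue.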
Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David – Journal of Educational Measurement, 2010
In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (in addition to the usual random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…
Descriptors: Test Bias, Models, Test Items, Difficulty Level
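
In outline (conventional notation, hedged): the RIM keeps the Rasch response function but makes the item difficulties random draws from a mixture,

    \text{logit}\,P(X_{pi} = 1) = \theta_p - \beta_i, \qquad \theta_p \sim N(0, \sigma_\theta^2), \qquad \beta_i \sim \sum_{g=1}^{G} \pi_g\, N(\mu_g, \sigma_g^2)

so DIF detection becomes a classification question: an item whose difficulty shows a group-specific shift is assigned to the DIF component rather than flagged by a per-item significance test.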
Muckle, Timothy J.; Karabatsos, George – Journal of Educational Measurement, 2009
It is known that the Rasch model is a special case of the two-level hierarchical generalized linear model (HGLM). This article demonstrates that the many-faceted Rasch model (MFRM) is also a special case of the two-level HGLM, with a random intercept representing examinee ability on a test, and fixed effects for the test items, judges, and possibly other…
Descriptors: Test Items, Item Response Theory, Models, Regression (Statistics)
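
The correspondence can be sketched as a Bernoulli HGLM with a logit link (notation assumed for illustration):

    \text{logit}\,P(X_{ijk} = 1) = \theta_j - b_i - c_k, \qquad \theta_j \sim N(0, \sigma^2)

where the examinee intercept \theta_j is random and the item effects b_i and judge effects c_k enter as fixed effects; further facets add further fixed-effect terms.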
DeMars, Christine E. – Journal of Educational Measurement, 2006
Four item response theory (IRT) models were compared using data from tests where multiple items were grouped into testlets focused on a common stimulus. In the bi-factor model each item was treated as a function of a primary trait plus a nuisance trait due to the testlet; in the testlet-effects model the slopes in the direction of the testlet…
Descriptors: Item Response Theory, Reliability, Item Analysis, Factor Analysis
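
In bi-factor form (standard notation, assumed),

    \text{logit}\,P(X_{ij} = 1) = a_i\theta_j + a_i^{(s)}\gamma_{s(i),j} + d_i

each item loads on the primary trait \theta_j and on the nuisance trait \gamma for its testlet s(i). The testlet-effects model is the constrained case in which the testlet slopes are proportional to the primary slopes within a testlet, which is what makes the two models directly comparable.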

Airasian, Peter W.; Bart, William M. – Journal of Educational Measurement, 1975
Validation studies of learning hierarchies usually examine whether task relationships posited a priori are confirmed by student learning data. This method was compared with a non-posited approach in which all possible task relationships were generated and investigated. A learning hierarchy in a seventh grade mathematics study reported by…
Descriptors: Difficulty Level, Intellectual Development, Junior High Schools, Learning Theories
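
The all-possible-relationships approach can be sketched in a few lines. A minimal illustration (the 5% disconfirmation tolerance and the data are hypothetical, not from the study):

    import numpy as np

    def prerequisite(x_a, x_b, tol=0.05):
        """Ordering-theoretic check: A is a candidate prerequisite of B
        if passing B while failing A is rare in the response data."""
        x_a, x_b = np.asarray(x_a), np.asarray(x_b)
        return np.mean((x_a == 0) & (x_b == 1)) <= tol

    # hypothetical 0/1 mastery data on two tasks for eight students
    a = np.array([1, 1, 0, 1, 0, 1, 1, 0])
    b = np.array([1, 0, 0, 1, 0, 1, 0, 0])
    print(prerequisite(a, b))  # True: nobody passed B while failing A

Running the check over every ordered pair of tasks generates the full set of data-supported relationships, which can then be compared with the hierarchy posited a priori.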

Kelderman, Henk; Macready, George B. – Journal of Educational Measurement, 1990
Loglinear latent class models are used to detect differential item functioning (DIF). Likelihood ratio tests for assessing the presence of various types of DIF are described, and these methods are illustrated through the analysis of a "real world" data set.
Descriptors: Difficulty Level, Equations (Mathematics), Item Bias, Item Response Theory
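
The test in question is the familiar likelihood-ratio statistic,

    G^2 = 2 \sum_{s} O_s \ln(O_s / E_s)

comparing a loglinear latent class model that includes a group-by-item term against the restricted model without it; the difference in G^2 is referred to a chi-square distribution with degrees of freedom equal to the number of constrained parameters.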

Spray, Judith A.; Welch, Catherine J. – Journal of Educational Measurement, 1990
The effect of large, within-examinee item difficulty variability on estimates of the proportion of consistent classification of examinees into mastery categories was studied over 2 test administrations for 100 simulated examinees. The proportion of consistent classifications was adequately estimated using the technique proposed by M. Subkoviak…
Descriptors: Classification, Difficulty Level, Estimation (Mathematics), Item Response Theory
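
Subkoviak-style single-administration estimates have a simple core: if p is an examinee's estimated probability of being classified a master on one form, the chance of the same classification on two independent forms is p^2 + (1 - p)^2, averaged over examinees. A sketch with illustrative values:

    import numpy as np

    def consistency(p_pass):
        # p_pass: per-examinee probability of a "master" classification
        p = np.asarray(p_pass, dtype=float)
        return np.mean(p**2 + (1 - p)**2)

    print(consistency([0.9, 0.5, 0.1]))  # (0.82 + 0.50 + 0.82) / 3 ≈ 0.713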
Beretvas, S. Natasha; Williams, Natasha J. – Journal of Educational Measurement, 2004
To assess item dimensionality, the following two approaches are described and compared: hierarchical generalized linear model (HGLM) and multidimensional item response theory (MIRT) model. Two generating models are used to simulate dichotomous responses to a 17-item test: the unidimensional and compensatory two-dimensional (C2D) models. For C2D…
Descriptors: Item Response Theory, Test Items, Mathematics Tests, Reading Ability
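
A Kamata-style HGLM sketch of the comparison (the article's exact specification may differ): item dummies D_{qij} enter level 1, and dimensionality is carried by the random part,

    \text{logit}\,P(X_{ij} = 1) = \sum_{q} \beta_q D_{qij} + u_{1j} + u_{2j}\,z_i

where z_i flags the items loading on the second dimension; fixing u_{2j} at zero gives the unidimensional model, while the C2D generating model corresponds to both person effects being active.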