Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 2
Since 2016 (last 10 years) | 3
Since 2006 (last 20 years) | 5
Descriptor
Error Patterns | 5
Item Response Theory | 5
Monte Carlo Methods | 5
Simulation | 3
Test Items | 3
Test Length | 3
Evaluation Methods | 2
Goodness of Fit | 2
Sample Size | 2
Computation | 1
Computer Assisted Testing | 1
Source
Annenberg Institute for School Reform at Brown University | 1
Applied Measurement in Education | 1
Educational and Psychological Measurement | 1
International Journal of Assessment Tools in Education | 1
Journal of Educational Measurement | 1
Author
Basman, Munevver | 1
Bolt, Daniel M. | 1
Drasgow, Fritz | 1
Gilbert, Joshua B. | 1
Kim, James S. | 1
Miratrix, Luke W. | 1
Sinharay, Sandip | 1
Tay, Louis | 1
Wells, Craig S. | 1
Publication Type
Journal Articles | 4
Reports - Research | 3
Reports - Descriptive | 1
Reports - Evaluative | 1
Education Level
Early Childhood Education | 1
Elementary Education | 1
Grade 3 | 1
Primary Education | 1
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when individuals of equal ability from different groups perform differently on the same test item. Based on Item Response Theory and Classical Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
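The DIF definition above lends itself to a small worked example. The sketch below uses the Mantel-Haenszel procedure, a standard score-stratified DIF screen, as a generic illustration; it is not necessarily the method used in the article, and the function name and data layout are assumptions for the example.

```python
import numpy as np

def mh_dif(resp_ref, resp_foc, item):
    """Mantel-Haenszel DIF for one dichotomous item.

    resp_ref, resp_foc: 0/1 response matrices (persons x items) for the
    reference and focal groups. Examinees are stratified ("matched") by
    total score, so the groups are compared at equal ability levels.
    """
    scores_ref = resp_ref.sum(axis=1)
    scores_foc = resp_foc.sum(axis=1)
    num = den = 0.0
    for k in np.union1d(scores_ref, scores_foc):
        r = resp_ref[scores_ref == k, item]   # reference group at stratum k
        f = resp_foc[scores_foc == k, item]   # focal group at stratum k
        if r.size == 0 or f.size == 0:
            continue                          # stratum empty in one group
        n = r.size + f.size
        a, b = r.sum(), r.size - r.sum()      # reference: correct, incorrect
        c, d = f.sum(), f.size - f.sum()      # focal: correct, incorrect
        num += a * d / n
        den += b * c / n
    alpha = num / den                         # MH common odds ratio
    return -2.35 * np.log(alpha)              # ETS delta scale (MH D-DIF)
```

Values near zero indicate little DIF on the item; by the usual ETS convention, an absolute MH D-DIF of roughly 1.5 or more is treated as large.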
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for detecting Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
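To make "HTE within an outcome measure" concrete, here is a toy simulation assuming a Rasch model in which the treatment affects only a subset of items, so the aggregate sum-score effect masks item-level variation. The model, the effect sizes, and every variable name are invented for illustration and are not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 10
theta = rng.normal(0.0, 1.0, n_persons)        # latent ability
b = np.linspace(-1.5, 1.5, n_items)            # Rasch item difficulties
treat = rng.integers(0, 2, n_persons)          # random assignment

# Hypothetical item-level effects: the treatment helps only on the
# first five items (e.g., taught content), so HTE exists *within*
# the outcome measure even though assignment is uniform.
delta = np.where(np.arange(n_items) < 5, 0.6, 0.0)

logit = theta[:, None] - b[None, :] + treat[:, None] * delta[None, :]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# The sum-score effect averages over items and hides the pattern...
print("sum-score effect:",
      y[treat == 1].sum(axis=1).mean() - y[treat == 0].sum(axis=1).mean())
# ...while naive per-item contrasts reveal it.
print("per-item effects:",
      y[treat == 1].mean(axis=0) - y[treat == 0].mean(axis=0))
```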
Sinharay, Sandip – Journal of Educational Measurement, 2016
De la Torre and Deng suggested a resampling-based approach for person-fit assessment (PFA). The approach involves the use of the l[subscript z] statistic, a corrected expected a posteriori estimate of the examinee ability, and the Monte Carlo (MC) resampling method. The Type I error rate of the approach was closer to the nominal level…
Descriptors: Sampling, Research Methodology, Error Patterns, Monte Carlo Methods
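The resampling idea can be sketched for a simple Rasch case: compute the standardized log-likelihood person-fit statistic l[subscript z] and calibrate it by simulating response vectors at the examinee's ability estimate. This is a plain parametric bootstrap with item difficulties assumed known; the corrected EAP estimate and the other refinements discussed in the article are omitted.

```python
import numpy as np

def lz(y, p):
    """Standardized log-likelihood person-fit statistic l_z."""
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (loglik - mean) / np.sqrt(var)

def mc_pvalue(y, theta_hat, b, n_rep=2000, seed=0):
    """Monte Carlo reference distribution for l_z under a Rasch model.

    y: observed 0/1 response vector; theta_hat: ability estimate;
    b: item difficulties (assumed known for this sketch).
    """
    rng = np.random.default_rng(seed)
    p = 1.0 / (1.0 + np.exp(-(theta_hat - b)))
    observed = lz(y, p)
    sims = np.array([lz(rng.binomial(1, p), p) for _ in range(n_rep)])
    return np.mean(sims <= observed)   # small values flag aberrant responding
```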
Tay, Louis; Drasgow, Fritz – Educational and Psychological Measurement, 2012
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues; because of problems with that method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
Descriptors: Test Length, Monte Carlo Methods, Goodness of Fit, Item Response Theory
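For context, the statistic the abstract refers to can be sketched as follows: compute a chi-square comparing observed and model-expected joint response frequencies for each item pair, rescale it to a common sample size (the adjustment attributed to Drasgow and colleagues rescales to N = 3,000), and average the resulting X[superscript 2]/df ratios. The degrees of freedom and the expected-count computation below are simplified assumptions, not the exact published procedure.

```python
import numpy as np

def pair_chi2(y, p, i, j):
    """Chi-square for the 2x2 joint response table of items i and j.

    y: observed 0/1 responses (persons x items); p: model-implied
    probabilities at each person's ability. Expected counts use local
    independence; df is fixed at 3 here, ignoring estimation corrections.
    """
    chi2 = 0.0
    for a in (0, 1):
        for c in (0, 1):
            obs = np.sum((y[:, i] == a) & (y[:, j] == c))
            pa = p[:, i] if a else 1 - p[:, i]
            pc = p[:, j] if c else 1 - p[:, j]
            exp = np.sum(pa * pc)
            chi2 += (obs - exp) ** 2 / exp
    return chi2, 3

def mean_adjusted_ratio(y, p, target_n=3000):
    """Mean adjusted X^2/df over all item pairs."""
    n, k = y.shape
    ratios = []
    for i in range(k):
        for j in range(i + 1, k):
            chi2, df = pair_chi2(y, p, i, j)
            adj = (target_n / n) * (chi2 - df) + df   # rescale to N = 3,000
            ratios.append(adj / df)
    return float(np.mean(ratios))
```

The sample-size rescaling is what lets the ratio be compared against a fixed benchmark regardless of how many examinees were observed.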
Wells, Craig S.; Bolt, Daniel M. – Applied Measurement in Education, 2008
Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…
Descriptors: Test Length, Test Items, Monte Carlo Methods, Nonparametric Statistics
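Douglas and Cohen's general idea is to compare the fitted parametric curve with a nonparametric estimate of the same item response function. Below is a minimal sketch: a Nadaraya-Watson kernel smooth of item responses against ability estimates, with RMSD from the 2PL curve as the discrepancy measure. The bandwidth, grid, and the Monte Carlo null distribution needed for a formal test are simplifications for illustration.

```python
import numpy as np

def two_pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def kernel_icc(theta_hat, y_item, grid, bandwidth=0.3):
    """Nadaraya-Watson kernel estimate of the item response curve."""
    w = np.exp(-0.5 * ((grid[:, None] - theta_hat[None, :]) / bandwidth) ** 2)
    return (w * y_item[None, :]).sum(axis=1) / w.sum(axis=1)

def misfit_rmsd(theta_hat, y_item, a, b):
    """RMSD between the fitted 2PL curve and its nonparametric estimate."""
    grid = np.linspace(-3, 3, 61)
    return np.sqrt(np.mean((kernel_icc(theta_hat, y_item, grid)
                            - two_pl(grid, a, b)) ** 2))
```

A large RMSD relative to values simulated from the fitted model would suggest item misfit; generating that reference distribution is the Monte Carlo step the entry's descriptors allude to.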