Cari F. Herrmann-Abell; George E. DeBoer – Grantee Submission, 2023
This study describes the role that Rasch measurement played in the development of assessments aligned to the "Next Generation Science Standards," tasks that require students to use the three dimensions of science practices, disciplinary core ideas, and crosscutting concepts to make sense of energy-related phenomena. A set of 27…
Descriptors: Item Response Theory, Computer Simulation, Science Tests, Energy
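For reference, the dichotomous Rasch model named in this entry (standard notation, assumed here rather than quoted from the study) gives the probability that person n answers item i correctly as

  % theta_n: ability of person n; b_i: difficulty of item i (notation assumed)
  P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}

so that calibration places persons and items on a single logit scale.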
Chen, Binglin; West, Matthew; Zilles, Craig – International Educational Data Mining Society, 2018
This paper attempts to quantify the accuracy limit of "next-item-correct" prediction by using numerical optimization to estimate the student's probability of getting each question correct given a complete sequence of item responses. This optimization is performed without an explicit parameterized model of student behavior, but with the…
Descriptors: Accuracy, Probability, Student Behavior, Test Items
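As a rough illustration of what an accuracy ceiling means here (a toy sketch with assumed simulated data, not the authors' optimization procedure): if the true probability p of a correct next response were known, no predictor could beat the Bayes-optimal accuracy E[max(p, 1 - p)].

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-response correctness probabilities, standing in
# for the estimates the paper obtains by numerical optimization.
p = rng.beta(a=5, b=2, size=10_000)

# An oracle predicting "correct" whenever p >= 0.5 attains accuracy
# E[max(p, 1 - p)], an upper bound for any next-item-correct predictor.
ceiling = np.maximum(p, 1 - p).mean()
print(f"accuracy ceiling: {ceiling:.3f}")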

Parshall, Cynthia G.; Miller, Timothy R. – Journal of Educational Measurement, 1995
Exact testing was evaluated as a method for conducting Mantel-Haenszel differential item functioning (DIF) analyses with relatively small samples. A series of computer simulations found that the asymptotic Mantel-Haenszel and the exact method yielded very similar results across sample size, levels of DIF, and data sets. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Identification, Item Bias
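For context, the Mantel-Haenszel DIF statistics being compared take a standard form (notation assumed): with a 2x2 table at each matched score level k (A_k, B_k reference correct/incorrect; C_k, D_k focal correct/incorrect; T_k the level total),

  % common odds ratio and chi-square across matched score levels (standard forms, assumed)
  \hat{\alpha}_{MH} = \frac{\sum_k A_k D_k / T_k}{\sum_k B_k C_k / T_k},
  \qquad
  \chi^2_{MH} = \frac{\bigl(\lvert \sum_k A_k - \sum_k E(A_k) \rvert - 0.5 \bigr)^2}{\sum_k \mathrm{Var}(A_k)},

where E(A_k) and Var(A_k) come from the hypergeometric distribution; the exact method evaluates hypergeometric tail probabilities directly instead of relying on the asymptotic chi-square approximation.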
Ackerman, Terry A. – 1987
Concern has been expressed over the item response theory (IRT) assumption that a person's ability can be estimated in a unidimensional latent space. To examine whether or not the response to an item requires only a single latent ability, unidimensional ability estimates were compared for data generated from the multidimensional item response…
Descriptors: Ability, Computer Simulation, Difficulty Level, Item Analysis
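A standard compensatory multidimensional model of the kind used to generate such data (assumed form, not quoted from the report) is

  % a_i: vector of discriminations; d_i: intercept; theta: ability vector (assumed notation)
  P(X_i = 1 \mid \boldsymbol{\theta}) = \frac{1}{1 + \exp[-(\mathbf{a}_i'\boldsymbol{\theta} + d_i)]},

so fitting a unidimensional model to such responses forces the separate abilities into a single composite estimate.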
Flowers, Claudia P.; And Others – 1997
An item response theory-based parametric procedure proposed by N. S. Raju, W. J. van der Linden, and P. F. Fleer (1995) known as differential functioning of items and tests (DFIT) can be used with unidimensional and multidimensional data with dichotomous or polytomous scoring. This study describes the polytomous DFIT framework and evaluates and…
Descriptors: Chi Square, Computer Simulation, Item Bias, Item Response Theory
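In the Raju, van der Linden, and Fleer framework cited here, the indices are typically defined (standard form, assumed) through the difference in expected item scores under focal- and reference-group parameters,

  % d_i: difference in expected item scores; expectations taken over the focal ability distribution (assumed)
  d_i(\theta) = ES_{iF}(\theta) - ES_{iR}(\theta), \qquad
  NCDIF_i = E_F[d_i(\theta)^2], \qquad
  DTF = E_F\Bigl[\bigl(\textstyle\sum_i d_i(\theta)\bigr)^2\Bigr],

which extends naturally to polytomous items because expected scores are defined for any scoring rule.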
Mazor, Kathleen M.; And Others – 1993
The Mantel-Haenszel (MH) procedure has become one of the most popular procedures for detecting differential item functioning (DIF). One of the most troublesome criticisms of this procedure is that while detection rates for uniform DIF are very good, the procedure is not sensitive to non-uniform DIF. In this study, examinee responses were generated…
Descriptors: Comparative Testing, Computer Simulation, Item Bias, Item Response Theory
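The uniform/non-uniform distinction can be written in two-parameter logistic terms (a standard formulation, assumed here), with group-specific parameters:

  % g indexes group; uniform DIF: a_R = a_F, b_R != b_F; non-uniform DIF: a_R != a_F (assumed notation)
  P_g(\theta) = \frac{1}{1 + \exp[-a_g(\theta - b_g)]}

Uniform DIF shifts the item characteristic curves apart (b_R \neq b_F with a_R = a_F), while non-uniform DIF makes them cross (a_R \neq a_F), so group differences cancel across the ability range and total-score matching in the MH procedure tends to miss them.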
Lecointe, Darius A. – 1995
The purpose of this Item Response Theory study was to investigate how the expected reduction in item information, due to the collapsing of response categories in performance assessment data, was affected by varying testing conditions: item difficulty, item discrimination, inter-rater reliability, and direction of collapsing. The investigation used…
Descriptors: Classification, Computer Simulation, Difficulty Level, Interrater Reliability
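The item information whose reduction is at issue can be written, for a polytomous item with category response functions P_{ik}(\theta) (standard form, assumed),

  % sum over response categories k; P'_{ik} is the derivative with respect to theta (assumed notation)
  I_i(\theta) = \sum_k \frac{[P'_{ik}(\theta)]^2}{P_{ik}(\theta)},

and collapsing adjacent categories replaces several category functions with their sum, a coarsening of the data that can only preserve or reduce this quantity.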
Ackerman, Terry A.; Evans, John A. – 1993
A didactic example is provided, using a Monte Carlo method, of how differential item functioning (DIF) can be eliminated (and thus better understood) when the complete latent space is used. The main source of DIF is that the single matching criterion used in some DIF procedures, Mantel-Haenszel or Simultaneous Item Bias (SIBTEST), does not account…
Descriptors: Computer Simulation, Equations (Mathematics), Item Bias, Item Response Theory
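The underlying point can be stated compactly (a sketch, with assumed notation): when an item measures two abilities but examinees are matched only on \theta_1, each group's observed item performance is the marginal

  % f_g: group g's conditional density of the unmatched ability theta_2 given theta_1 (assumed)
  P_g(X = 1 \mid \theta_1) = \int P(X = 1 \mid \theta_1, \theta_2)\, f_g(\theta_2 \mid \theta_1)\, d\theta_2,

which differs across groups whenever the conditional distributions f_g differ, producing apparent DIF that vanishes once the complete latent space is used in matching.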

Smith, Richard M. – Educational and Psychological Measurement, 1991
This study reports the results of an investigation, based on simulated data, of the distributional properties of the item fit statistics commonly used in Rasch model calibration programs as indices of how well responses to individual items fit the measurement model. (SLD)
Descriptors: Computer Simulation, Equations (Mathematics), Goodness of Fit, Item Response Theory
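The fit statistics in question are typically built from standardized residuals (standard Rasch forms, assumed here):

  % E_ni, W_ni: model expectation and variance of response x_ni (assumed notation)
  z_{ni} = \frac{x_{ni} - E_{ni}}{\sqrt{W_{ni}}}, \qquad
  \mathrm{Outfit}_i = \frac{1}{N}\sum_n z_{ni}^2, \qquad
  \mathrm{Infit}_i = \frac{\sum_n (x_{ni} - E_{ni})^2}{\sum_n W_{ni}},

with values near 1 indicating responses consistent with the model.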
Kim, Seock-Ho; Cohen, Allan S. – 1997
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) were investigated using Monte Carlo simulations. The graded response model with five ordered categories was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. All DIF comparisons were simulated by…
Descriptors: Ability, Classification, Computer Simulation, Estimation (Mathematics)
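The likelihood ratio test evaluated here compares a compact model, with an item's parameters constrained equal across groups, to an augmented model that frees them (standard form, assumed):

  % df equals the number of parameters freed in the augmented model (assumed)
  G^2 = -2\,[\ln L_{\text{compact}} - \ln L_{\text{augmented}}] \;\sim\; \chi^2_{df},

so Type I error rates can be checked against the nominal chi-square reference distribution.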
Nandakumar, Ratna – 1994
By definition, differential item functioning (DIF) refers to unequal probabilities of a correct response to a test item by examinees from two groups when controlled for their ability differences. Simulation results are presented for an attempt to purify a test by separating out multidimensional items under the assumption that the intent of the…
Descriptors: Ability, Computer Simulation, Construct Validity, Educational Assessment
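That definition has a direct formal statement (standard notation, assumed): item i shows DIF when, for reference group R and focal group F,

  % equality for all theta would mean no DIF (assumed notation)
  P(X_i = 1 \mid \theta, G = R) \neq P(X_i = 1 \mid \theta, G = F) \quad \text{for some } \theta.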
Hwang, Chi-en; Cleary, T. Anne – 1986
The results obtained from two basic types of test pre-equating were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…
Descriptors: Computer Simulation, Equated Scores, Latent Trait Theory, Mathematical Models
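The generating model described here is presumably the standard three-parameter logistic form with the guessing parameter c held constant across items (assumed form):

  % a_i: discrimination; b_i: difficulty; c: common guessing parameter; D: scaling constant, about 1.7 (assumed)
  P_i(\theta) = c + (1 - c)\,\frac{\exp[D a_i(\theta - b_i)]}{1 + \exp[D a_i(\theta - b_i)]}.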

Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1993
Item parameter estimation errors in test development are highlighted. The problem is illustrated with several simulated data sets, and a conservative solution is offered for addressing the problem in item response theory test development practice. Steps that reduce the problem of capitalizing on chance in item selections are suggested. (SLD)
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
Hambleton, Ronald K.; Jones, Russell W. – 1993
Errors in item parameter estimates have a negative impact on the accuracy of item and test information functions. The estimation errors may be random, but because items with higher levels of discriminating power are more likely to be selected for a test, and these items are most apt to contain positive errors, the result is that item information…
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
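Both Hambleton entries turn on the test information function (standard forms, assumed): with item information

  % P_i' is the derivative of the item response function; Q_i = 1 - P_i (assumed notation)
  I_i(\theta) = \frac{[P_i'(\theta)]^2}{P_i(\theta)\,Q_i(\theta)}, \qquad
  I(\theta) = \sum_i I_i(\theta),

selecting items on positively biased discrimination estimates inflates the apparent I(\theta), so the assembled test delivers less information than the estimates promise.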
Reinhardt, Brian M. – 1991
Factors affecting a lower-bound estimate of internal consistency reliability, Cronbach's coefficient alpha, are explored. Theoretically, coefficient alpha is an estimate of the correlation between two tests drawn at random from a pool of items like the items in the test under consideration. As a practical matter, coefficient alpha can be an index…
Descriptors: Computer Simulation, Correlation, Difficulty Level, Estimation (Mathematics)
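A minimal sketch of the computation behind coefficient alpha (the formula is standard; the data below are hypothetical):

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for an (examinees x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 response matrix: 200 examinees, 10 items.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
responses = (rng.normal(size=(200, 10)) < ability).astype(float)
print(f"alpha: {cronbach_alpha(responses):.3f}")

Alpha rises with the number of items and with the size of the inter-item covariances relative to total score variance, which is why conditions like those the study varies move the estimate.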