Showing all 7 results
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) often come from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown ability growth across attempts, creating a complex scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Soysal, Sümeyra; Arikan, Çigdem Akin; Inal, Hatice – Online Submission, 2016
This study investigates the effect of methods for dealing with missing data on item difficulty estimates under different test lengths and sample sizes. To this end, data sets of 10, 20, and 40 items with sample sizes of 100 and 5000 were prepared. Deletion was applied at rates of 5%, 10%, and 20% under conditions…
Descriptors: Research Problems, Data Analysis, Item Response Theory, Test Items
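The deletion procedure that abstract describes can be sketched as a simple MCAR (missing completely at random) mechanism — a minimal illustration with hypothetical data, not the authors' actual code:

```python
import numpy as np

def delete_mcar(responses, rate, rng):
    """Replace a given fraction of item responses with np.nan,
    missing completely at random (MCAR)."""
    data = responses.astype(float).copy()
    data[rng.random(data.shape) < rate] = np.nan
    return data

rng = np.random.default_rng(1)
resp = rng.integers(0, 2, size=(100, 20))  # 100 examinees x 20 dichotomous items
for rate in (0.05, 0.10, 0.20):            # the deletion rates used in the study
    miss = np.isnan(delete_mcar(resp, rate, rng)).mean()
    print(f"target rate {rate:.0%}, observed missing {miss:.1%}")
```

After deletion, the incomplete data sets would be handled by each missing-data method under comparison before item difficulties are re-estimated.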
Custer, Michael – Online Submission, 2015
This study examines the relationship between sample size and the precision of item parameter estimation under the one-parameter model. Item parameter estimates are compared with their "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Descriptors: Sample Size, Item Response Theory, Computation, Accuracy
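The RMSD criterion mentioned above is straightforward to compute; a minimal sketch follows, where the sample sizes and the 1/sqrt(n) noise model are illustrative assumptions, not the study's estimation procedure:

```python
import numpy as np

def rmsd(true_params, est_params):
    """Root mean squared deviation between true and estimated item parameters."""
    diff = np.asarray(true_params, dtype=float) - np.asarray(est_params, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustration: if estimation error shrinks roughly as 1/sqrt(n),
# RMSD declines toward zero as the calibration sample grows.
rng = np.random.default_rng(0)
true_b = rng.normal(0.0, 1.0, size=20)               # "true" Rasch difficulties
for n in (100, 500, 5000):
    est_b = true_b + rng.normal(0.0, 1.0 / np.sqrt(n), size=20)
    print(f"n={n:>5}  RMSD={rmsd(true_b, est_b):.4f}")
```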
Peer reviewed
Li, Tongyun; Jiao, Hong; Macready, George B. – Educational and Psychological Measurement, 2016
The present study investigates different approaches to adding covariates and their impact on fitting mixture item response theory models. Mixture item response theory models serve as an important methodology for tackling several psychometric issues in test development, including the detection of latent differential item functioning. A Monte Carlo…
Descriptors: Item Response Theory, Psychometrics, Test Construction, Monte Carlo Methods
Peer reviewed
Levy, Roy – Educational Psychologist, 2016
In this article, I provide a conceptually oriented overview of Bayesian approaches to statistical inference and contrast them with frequentist approaches that currently dominate conventional practice in educational research. The features and advantages of Bayesian approaches are illustrated with examples spanning several statistical modeling…
Descriptors: Bayesian Statistics, Models, Educational Research, Innovation
Peer reviewed
Deygers, Bart; Van Gorp, Koen – Language Testing, 2015
Considering scoring validity as encompassing both reliable rating scale use and valid descriptor interpretation, this study reports on the validation of a CEFR-based scale that was co-constructed and used by novice raters. The research questions this paper wishes to answer are (a) whether it is possible to construct a CEFR-based rating scale with…
Descriptors: Rating Scales, Scoring, Validity, Interrater Reliability
Scheuneman, Janice Dowd – 1990
The current status of item response theory (IRT) is discussed. Several IRT methods exist for assessing whether an item is biased. The focus is on methods proposed by L. M. Rudner (1975), F. M. Lord (1977), D. Thissen et al. (1988), and R. L. Linn and D. Harnisch (1981). Rudner suggested a measure of the area lying between the two item characteristic…
Descriptors: Chi Square, Error of Measurement, Estimation (Mathematics), Goodness of Fit
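Rudner's index is the area between the two groups' item characteristic curves: a larger area means the item behaves more differently across groups. A minimal numerical sketch under a 2PL model (the parameter values and integration range are hypothetical):

```python
import numpy as np

def icc(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def area_between_iccs(a1, b1, a2, b2, lo=-6.0, hi=6.0, n=2001):
    """Unsigned area between two ICCs over [lo, hi], by simple quadrature."""
    theta = np.linspace(lo, hi, n)
    gap = np.abs(icc(theta, a1, b1) - icc(theta, a2, b2))
    return float(np.sum(gap) * (hi - lo) / (n - 1))

# Identical parameters give zero area (no bias signal); when only the
# difficulty differs, the area approaches |b2 - b1|.
print(area_between_iccs(1.0, 0.0, 1.0, 0.0))
print(area_between_iccs(1.0, 0.0, 1.0, 0.5))
```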