Showing all 9 results
Peer reviewed
Wang, Xi; Liu, Yang – Journal of Educational and Behavioral Statistics, 2020
In continuous testing programs, some items are repeatedly used across test administrations, and statistical methods are often used to evaluate whether items have become compromised through examinees' preknowledge. In this study, we propose a residual method to detect compromised items when a test can be partitioned into two subsets of items: secure…
Descriptors: Test Items, Information Security, Error of Measurement, Cheating
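The core idea invites a short sketch: estimate each examinee's ability from the secure subset, then check whether performance on the reused items is better than that ability predicts. Below is a minimal illustration under the Rasch model; the secure/suspect split and item parameters are hypothetical, and this is not the article's exact procedure.

```python
# Residual-style screen for possibly compromised items (illustrative only).
# Assumes a Rasch model with known item difficulties; Wang and Liu's actual
# method will differ in its estimation and standardization details.
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def ability_mle(x, b, grid=np.linspace(-4, 4, 161)):
    """Crude grid-search MLE of ability from the secure items only."""
    p = rasch_prob(grid[:, None], b[None, :])            # (grid, items)
    loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

def item_flags(X_secure, X_suspect, b_secure, b_suspect):
    """Aggregate standardized residuals per reused item; large positive
    values mean examinees outperform their secure-item ability."""
    theta = np.array([ability_mle(x, b_secure) for x in X_secure])
    p = rasch_prob(theta[:, None], b_suspect[None, :])
    resid = (X_suspect - p) / np.sqrt(p * (1 - p))
    return resid.mean(axis=0) * np.sqrt(len(theta))      # rough z per item
```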
Peer reviewed
Gilbert, Joshua B.; Kim, James S.; Miratrix, Luke W. – Journal of Educational and Behavioral Statistics, 2023
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing heterogeneous treatment effects (HTE) fail to address the HTE that may exist "within" outcome measures. In…
Descriptors: Test Items, Item Response Theory, Computer Assisted Testing, Program Effectiveness
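One simple way to see "within-measure" heterogeneity is to let the treatment effect vary across the items of the outcome, for example via a treatment-by-item interaction. The simulation below is ours and sidesteps the IRT machinery the authors work in; it only illustrates the contrast between a pooled effect and item-specific effects.

```python
# Pooled vs. item-specific treatment effects on simulated item responses.
# Illustrative sketch only: plain logistic regression, ignoring person
# clustering, rather than the authors' IRT framework.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_persons, n_items = 300, 5
treat = rng.integers(0, 2, n_persons)
effects = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # only item 0 responds
theta = rng.normal(size=n_persons)

rows = [dict(item=f"i{j}", treat=int(treat[i]),
             y=int(rng.random() < 1 / (1 + np.exp(-(theta[i] + treat[i] * effects[j])))))
        for i in range(n_persons) for j in range(n_items)]
df = pd.DataFrame(rows)

pooled = smf.logit("y ~ treat + C(item)", df).fit(disp=0)    # one average effect
by_item = smf.logit("y ~ treat * C(item)", df).fit(disp=0)   # effect varies by item
print(pooled.params["treat"])
print(by_item.params.filter(like="treat"))
```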
Peer reviewed
Barrett, Michelle D.; van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2019
Parameter linking in item response theory is generally necessary to adjust for differences between the true values of the same item and ability parameters that arise when different calibrations impose different identifiability restrictions. The research reported in this article explores a precision-weighted (PW) approach to the problem of…
Descriptors: Item Response Theory, Computation, Error of Measurement, Test Items
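The weighting idea itself is easy to illustrate: each common item yields an estimate of the linking constant, and estimates with smaller sampling variance get more weight. A minimal Rasch-shift sketch (hypothetical numbers; the article's PW approach is more general):

```python
# Inverse-variance (precision) weighting of per-item linking shifts.
# Simplified illustration, not Barrett and van der Linden's full procedure.
import numpy as np

def pw_shift(b_old, b_new, se_old, se_new):
    """Precision-weighted estimate of the Rasch linking shift and its SE."""
    d = b_new - b_old                    # per-item shift estimates
    var = se_old**2 + se_new**2          # sampling variance of each shift
    w = 1.0 / var
    shift = np.sum(w * d) / np.sum(w)
    return shift, np.sqrt(1.0 / np.sum(w))

b_old = np.array([-0.5, 0.2, 1.1])       # common items, old calibration
b_new = np.array([ 0.1, 0.7, 1.8])       # same items, new calibration
se_old = np.array([0.08, 0.10, 0.15])
se_new = np.array([0.09, 0.11, 0.20])
print(pw_shift(b_old, b_new, se_old, se_new))
```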
Peer reviewed
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
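The iterative scheme can be sketched compactly: flag responders with a low person-fit statistic (here the classic l_z), drop them, re-estimate, and repeat until the flagged set stabilizes. The crude Rasch estimates and the -1.645 cutoff below are our simplifications, not the article's settings.

```python
# Iterative cleansing of careless responders via the l_z person-fit statistic.
# Illustration only: crude logit-based Rasch estimates stand in for a full
# calibration, and the one-sided cutoff of -1.645 is a conventional choice.
import numpy as np

def lz(x, theta, b):
    """Standardized log-likelihood person-fit statistic (Rasch model)."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p))**2)
    return (l0 - mean) / np.sqrt(var)

def cleanse(X, cutoff=-1.645, max_iter=10):
    """Flag, drop, re-estimate, repeat until the kept set stabilizes."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(max_iter):
        pbar = X[keep].mean(axis=0).clip(0.02, 0.98)
        b = -np.log(pbar / (1 - pbar))                  # crude item difficulties
        sbar = X.mean(axis=1).clip(0.02, 0.98)
        theta = np.log(sbar / (1 - sbar))               # crude person abilities
        new_keep = np.array([lz(x, t, b) >= cutoff for x, t in zip(X, theta)])
        if (new_keep == keep).all():
            break
        keep = new_keep
    return keep
```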
Oranje, Andreas; Kolstad, Andrew – Journal of Educational and Behavioral Statistics, 2019
The design and psychometric methodology of the National Assessment of Educational Progress (NAEP) is constantly evolving to meet the changing interests and demands stemming from a rapidly shifting educational landscape. NAEP has been built on strong research foundations that include conducting extensive evaluations and comparisons before new…
Descriptors: National Competency Tests, Psychometrics, Statistical Analysis, Computation
Peer reviewed
Magnus, Brooke E.; Thissen, David – Journal of Educational and Behavioral Statistics, 2017
Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…
Descriptors: Item Response Theory, Models, Multivariate Analysis, Questionnaires
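The zero-inflation component, at least, is easy to write down: each observed zero is either a "structural" zero or an ordinary count of zero. Below is a single-item zero-inflated Poisson log-likelihood; parameter names are ours, and the article's models are multivariate IRT extensions of this building block.

```python
# Zero-inflated Poisson (ZIP) log-likelihood for one count item.
# Illustrates only the zero-inflation idea, not the authors' full models.
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, pi, lam):
    """log P(y): structural zero with prob pi, else Poisson(lam)."""
    y = np.asarray(y)
    pois = -lam + y * np.log(lam) - gammaln(y + 1)   # Poisson log-pmf
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),   # two ways to be zero
                  np.log(1 - pi) + pois)
    return ll.sum()

print(zip_loglik([0, 0, 3, 1, 0, 7], pi=0.4, lam=2.5))
```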
Peer reviewed
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Peer reviewed
Zwick, Rebecca; Thayer, Dorothy T. – Journal of Educational and Behavioral Statistics, 1996
Two possible standard error formulas for the polytomous differential item functioning index proposed by N. J. Dorans and A. P. Schmitt (1991) were derived. These standard errors, and associated hypothesis-testing procedures, were evaluated through simulated data. The standard error that performed better is based on N. Mantel's (1963)…
Descriptors: Error of Measurement, Evaluation Methods, Hypothesis Testing, Item Bias
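For orientation, the Mantel (1963) machinery that the better-performing standard error builds on can be sketched as follows; the notation is ours, not necessarily the article's.

```latex
% Mantel (1963) ingredients: focal-group score sum in matching stratum k,
% its null expectation, and the hypergeometric variance. Notation ours:
% y_t are the item score levels, n_{Ftk} and n_{Rtk} count focal and
% reference examinees at level t in stratum k, and a + subscript sums
% over that index.
\begin{align*}
F_k &= \sum_t y_t\, n_{Ftk}, &
E(F_k) &= \frac{n_{F+k}}{n_{++k}} \sum_t y_t\, n_{+tk}, \\
\operatorname{Var}(F_k) &=
  \frac{n_{F+k}\, n_{R+k}}{n_{++k}^{2}\,(n_{++k}-1)}
  \left( n_{++k} \sum_t y_t^{2}\, n_{+tk}
       - \Bigl( \sum_t y_t\, n_{+tk} \Bigr)^{2} \right).
\end{align*}
```

The usual hypothesis test sums these quantities over strata and refers $\bigl(\sum_k F_k - \sum_k E(F_k)\bigr)/\sqrt{\sum_k \operatorname{Var}(F_k)}$ to a standard normal.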
Peer reviewed
Liou, Michelle; Cheng, Philip E. – Journal of Educational and Behavioral Statistics, 1995
Simplified formulas are proposed for computing the standard errors of equipercentile equating for continuous and discrete test scores. These formulas are easily extended to more complicated equating designs. Results from a study of 719 subjects taking an English test indicated that the formulas work reasonably well for moderate-size samples. (SLD)
Descriptors: College Students, Equated Scores, Equations (Mathematics), Error of Measurement
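As a point of reference, the textbook delta-method result for an equivalent-groups design has the following form; this is a generic expression of the kind such simplified formulas approximate, not necessarily the article's exact one.

```latex
% Equipercentile equating function and its delta-method standard error
% (equivalent-groups design). F and G are the cumulative score
% distributions of forms X and Y, g is the density of G, and n_X, n_Y
% are the two sample sizes.
\begin{align*}
e_Y(x) &= G^{-1}\!\bigl(F(x)\bigr), \qquad p = F(x), \\
\operatorname{SE}\!\bigl[\hat e_Y(x)\bigr] &\approx
  \frac{1}{g\bigl(e_Y(x)\bigr)}
  \sqrt{p(1-p)\left(\frac{1}{n_X}+\frac{1}{n_Y}\right)}.
\end{align*}
```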