Showing all 10 results
Peer reviewed
Lang, Joseph B. – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with the statistical detection of copying on multiple-choice exams. As an alternative to existing permutation- and model-based copy-detection approaches, a simple randomization p-value (RP) test is proposed. The RP test, which is based on an intuitive match-score statistic, makes no assumptions about the distribution of…
Descriptors: Identification, Cheating, Multiple Choice Tests, Item Response Theory
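The abstract is truncated, but the general shape of a Monte Carlo randomization p-value test for a match-score statistic can be sketched as follows. The match-score definition (count of identical responses) and the permutation scheme (shuffling one examinee's answer string over items) are illustrative assumptions for this sketch, not Lang's exact procedure.

```python
import random

def match_score(a, b):
    """Match score: number of items answered identically (a hypothetical choice)."""
    return sum(x == y for x, y in zip(a, b))

def randomization_p_value(copier, source, n_perm=10000, seed=0):
    """Monte Carlo randomization p-value: the share of permuted datasets whose
    match score is at least as large as the observed one, with an add-one
    correction so the p-value is never exactly zero."""
    rng = random.Random(seed)
    observed = match_score(copier, source)
    exceed = 0
    for _ in range(n_perm):
        shuffled = source[:]
        rng.shuffle(shuffled)  # break item-level association under the null
        if match_score(copier, shuffled) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Hypothetical answer strings: 9 of 10 responses match
copier = list("ABCDABCDAB")
source = list("ABCDABCDAC")
print(randomization_p_value(copier, source))
```

Because the null distribution is generated by resampling rather than derived analytically, this style of test makes no assumption about the distribution of match scores, which is the selling point the abstract highlights.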
Peer reviewed
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
Peer reviewed
Joshua B. Gilbert; Luke W. Miratrix; Mridul Joshi; Benjamin W. Domingue – Journal of Educational and Behavioral Statistics, 2025
Analyzing heterogeneous treatment effects (HTEs) plays a crucial role in understanding the impacts of educational interventions. A standard practice for HTE analysis is to examine interactions between treatment status and preintervention participant characteristics, such as pretest scores, to identify how different groups respond to treatment.…
Descriptors: Causal Models, Item Response Theory, Statistical Inference, Psychometrics
Peer reviewed
Liao, Xiangyi; Bolt, Daniel M. – Journal of Educational and Behavioral Statistics, 2021
Four-parameter models have received increasing psychometric attention in recent years, as a reduced upper asymptote for item characteristic curves can be appealing for measurement applications such as adaptive testing and person-fit assessment. However, applications can be challenging due to the large number of parameters in the model. In this…
Descriptors: Test Items, Models, Mathematics Tests, Item Response Theory
Peer reviewed
Longford, Nicholas T. – Journal of Educational and Behavioral Statistics, 2015
An equating procedure for a testing program with evolving distribution of examinee profiles is developed. No anchor is available because the original scoring scheme was based on expert judgment of the item difficulties. Pairs of examinees from two administrations are formed by matching on coarsened propensity scores derived from a set of…
Descriptors: Equated Scores, Testing Programs, College Entrance Examinations, Scoring
Peer reviewed
Andrich, David; Marais, Ida; Humphry, Stephen – Journal of Educational and Behavioral Statistics, 2012
Andersen (1995, 2002) proves a theorem relating variances of parameter estimates from samples and subsamples and shows its use as an adjunct to standard statistical analyses. The authors show an application where the theorem is central to the hypothesis tested, namely, whether random guessing to multiple choice items affects their estimates in the…
Descriptors: Test Items, Item Response Theory, Multiple Choice Tests, Guessing (Tests)
Peer reviewed
Wainer, Howard; Bradlow, Eric; Wang, Xiaohui – Journal of Educational and Behavioral Statistics, 2010
Confucius pointed out that the first step toward wisdom is calling things by the right name. The term "Differential Item Functioning" (DIF) did not arise fully formed from the miasma of psychometrics; it evolved from a variety of less accurate terms. Among its forebears was "item bias," but that term has a pejorative connotation…
Descriptors: Test Bias, Difficulty Level, Test Items, Statistical Analysis
Peer reviewed
Roussos, Louis A.; Schnipke, Deborah L.; Pashley, Peter J. – Journal of Educational and Behavioral Statistics, 1999
Derives a general formula for the population parameter being estimated by the Mantel-Haenszel differential item functioning (DIF) statistics. The formula is appropriate for uniform or nonuniform DIF and can be used regardless of the form of the item response function. Shows the relationship between this parameter and item difficulty. (SLD)
Descriptors: Difficulty Level, Estimation (Mathematics), Item Bias, Test Items
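The Mantel-Haenszel machinery the abstract refers to is standard: examinees are stratified by matched total score, a 2x2 correct/incorrect table is formed per stratum for the reference and focal groups, and the stratum tables are pooled into a common odds ratio, often reported on the ETS delta scale as MH D-DIF = -2.35 ln(alpha_MH). The counts below are hypothetical; this sketch shows the pooled estimator, not the specific population parameter derived in the article.

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio pooled across score strata.
    Each stratum is (A, B, C, D): reference-correct, reference-incorrect,
    focal-correct, focal-incorrect counts."""
    num = den = 0.0
    for a, b, c, d in strata:
        t = a + b + c + d
        num += a * d / t
        den += b * c / t
    return num / den

def mh_d_dif(strata):
    """ETS delta-scale DIF statistic: MH D-DIF = -2.35 * ln(alpha_MH).
    Negative values indicate the item favors the reference group."""
    return -2.35 * math.log(mh_odds_ratio(strata))

# Hypothetical counts at three matched-score levels
strata = [(40, 10, 30, 20), (30, 20, 20, 30), (20, 30, 10, 40)]
print(mh_odds_ratio(strata))  # pooled odds ratio
print(mh_d_dif(strata))       # delta-scale DIF
```

An odds ratio of 1 (MH D-DIF of 0) corresponds to no DIF; the article's contribution is characterizing exactly which population quantity this pooled estimator targets when DIF is nonuniform.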
Peer reviewed
Revuelta, Javier – Journal of Educational and Behavioral Statistics, 2004
This article presents a psychometric model for estimating ability and item-selection strategies in self-adapted testing. In contrast to computer adaptive testing, in self-adapted testing the examinees are allowed to select the difficulty of the items. The item-selection strategy is defined as the distribution of difficulty conditional on the…
Descriptors: Psychometrics, Adaptive Testing, Test Items, Evaluation Methods
Peer reviewed
Van den Noortgate, Wim; De Boeck, Paul – Journal of Educational and Behavioral Statistics, 2005
Although differential item functioning (DIF) theory traditionally focuses on the behavior of individual items in two (or a few) specific groups, in educational measurement contexts, it is often plausible to regard the set of items as a random sample from a broader category. This article presents logistic mixed models that can be used to model…
Descriptors: Test Bias, Item Response Theory, Educational Assessment, Mathematical Models