Showing all 8 results
Peer reviewed
Dadey, Nathan; Lyons, Susan; DePascale, Charles – Applied Measurement in Education, 2018
Evidence of comparability is generally needed whenever there are variations in the conditions of an assessment administration, including variations introduced by the administration of an assessment on multiple digital devices (e.g., tablet, laptop, desktop). This article is meant to provide a comprehensive examination of issues relevant to the…
Descriptors: Evaluation Methods, Computer Assisted Testing, Educational Technology, Technology Uses in Education
Peer reviewed
Lee, Won-Chan; Ban, Jae-Chun – Applied Measurement in Education, 2010
Various applications of item response theory often require linking to achieve a common scale for item parameter estimates obtained from different groups. This article used a simulation to examine the relative performance of four different item response theory (IRT) linking procedures in a random groups equating design: concurrent calibration with…
Descriptors: Item Response Theory, Simulation, Comparative Analysis, Measurement Techniques
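As a concrete illustration of the scale-linking problem examined in the Lee and Ban (2010) entry above, the sketch below applies a simple mean/sigma transformation to hypothetical common-item parameter estimates. The item parameters are invented for illustration, and mean/sigma linking is only one of the simpler alternatives to the concurrent-calibration and characteristic-curve procedures the article compares.

```python
import numpy as np

# Hypothetical 2PL item parameters estimated separately in two groups
# (values are invented for illustration only).
a_new = np.array([1.10, 0.85, 1.40, 0.95])   # discriminations, new-form scale
b_new = np.array([-0.50, 0.20, 0.80, 1.30])  # difficulties, new-form scale
a_ref = np.array([1.00, 0.80, 1.30, 0.90])   # same items on the reference scale
b_ref = np.array([-0.80, -0.05, 0.55, 1.00])

# Mean/sigma linking: find A, B such that b_ref ~= A * b_new + B,
# using the mean and standard deviation of the common-item difficulties.
A = b_ref.std(ddof=1) / b_new.std(ddof=1)
B = b_ref.mean() - A * b_new.mean()

# Transform the new-form parameters onto the reference scale.
b_linked = A * b_new + B
a_linked = a_new / A

print(f"A = {A:.3f}, B = {B:.3f}")
print("linked difficulties:", np.round(b_linked, 3))
print("linked discriminations:", np.round(a_linked, 3))
```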
Peer reviewed
Finch, Holmes; Stage, Alan Kirk; Monahan, Patrick – Applied Measurement in Education, 2008
A primary assumption underlying several of the common methods for modeling item response data is unidimensionality, that is, that test items tap only one latent trait. This assumption can be assessed in several ways, including nonlinear factor analysis and DETECT, a method based on item conditional covariances. When multidimensionality is identified,…
Descriptors: Test Items, Factor Analysis, Item Response Theory, Comparative Analysis
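The conditional-covariance idea behind DETECT, mentioned in the Finch, Stage, and Monahan (2008) entry above, can be sketched as follows. This is not the DETECT index itself, only the basic quantity it aggregates: the covariance of an item pair conditional on the rest score, averaged over score groups. The toy data are random and purely illustrative.

```python
import numpy as np

def mean_conditional_covariance(responses: np.ndarray, i: int, j: int) -> float:
    """Average covariance of items i and j conditional on the rest score
    (total score with items i and j removed). Values near zero for all
    pairs are consistent with unidimensionality; systematically positive
    values within clusters of items suggest multidimensionality."""
    rest = responses.sum(axis=1) - responses[:, i] - responses[:, j]
    covs, weights = [], []
    for s in np.unique(rest):
        group = responses[rest == s]
        if len(group) < 2:
            continue
        covs.append(np.cov(group[:, i], group[:, j])[0, 1])
        weights.append(len(group))
    return float(np.average(covs, weights=weights))

# Toy data: 500 simulated examinees, 10 dichotomous items (random, illustrative only).
rng = np.random.default_rng(0)
data = (rng.random((500, 10)) < 0.6).astype(int)
print(round(mean_conditional_covariance(data, 0, 1), 4))
```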
Peer reviewed
Briggs, Derek C. – Applied Measurement in Education, 2008
This article illustrates the use of an explanatory item response modeling (EIRM) approach in the context of measuring group differences in science achievement. The distinction between item response models and EIRMs, recently elaborated by De Boeck and Wilson (2004), is presented within the statistical framework of generalized linear mixed models.…
Descriptors: Science Achievement, Science Tests, Measurement, Error of Measurement
Peer reviewed
Penfield, Randall D. – Applied Measurement in Education, 2006
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Descriptors: Bayesian Statistics, Adaptive Testing, Computer Assisted Testing, Test Items
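The MEI and MPI rules studied in the Penfield (2006) entry above are refinements of information-based item selection in computerized adaptive testing. The sketch below shows only the simpler building block: Fisher information for a partial credit item at a provisional ability estimate, with the most informative item selected. The item pool, step difficulties, and ability value are invented for illustration; MEI and MPI go further by averaging information over possible responses or over the posterior distribution of ability rather than evaluating it at a single point estimate.

```python
import numpy as np

def pcm_probs(theta: float, deltas: np.ndarray) -> np.ndarray:
    """Category probabilities under the partial credit model.
    `deltas` holds the step difficulties for categories 1..m."""
    steps = np.concatenate(([0.0], np.cumsum(theta - deltas)))
    num = np.exp(steps - steps.max())          # stabilized exponentials
    return num / num.sum()

def pcm_information(theta: float, deltas: np.ndarray) -> float:
    """Fisher information of a PCM item at theta, which equals the
    conditional variance of the item score, Var(X | theta)."""
    p = pcm_probs(theta, deltas)
    k = np.arange(len(p))
    return float(np.sum(k**2 * p) - np.sum(k * p) ** 2)

# Hypothetical item pool: step difficulties for three 4-category items.
pool = [np.array([-1.0, 0.0, 1.0]),
        np.array([-0.5, 0.5, 1.5]),
        np.array([0.0, 1.0, 2.0])]

theta_hat = 0.3  # provisional ability estimate
info = [pcm_information(theta_hat, d) for d in pool]
print("item information:", np.round(info, 3), "-> select item", int(np.argmax(info)))
```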
Peer reviewed
Clauser, Brian; And Others – Applied Measurement in Education, 1993
The usefulness of a two-step version of the Mantel-Haenszel procedure for distinguishing between differential item functioning (DIF) and item impact was studied by comparing the single-step and two-step procedures using a simulated data set. Results show changes in the identification rate for the two-step methods. (SLD)
Descriptors: Comparative Analysis, Evaluation Methods, Identification, Item Bias
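The two-step procedure in the Clauser et al. (1993) entry above builds on the standard Mantel-Haenszel common odds ratio, sketched below for a single studied item with examinees stratified by total score. The data are randomly generated for illustration; a purification (two-step) version would flag items after this first pass, remove them from the matching score, and recompute.

```python
import numpy as np

def mantel_haenszel_odds_ratio(correct: np.ndarray, group: np.ndarray,
                               matching_score: np.ndarray) -> float:
    """Mantel-Haenszel common odds ratio for one studied item.
    `correct` is 0/1 on the studied item, `group` is 0 (reference) / 1 (focal),
    and `matching_score` is the stratifying total score."""
    num, den = 0.0, 0.0
    for s in np.unique(matching_score):
        idx = matching_score == s
        a = np.sum((group[idx] == 0) & (correct[idx] == 1))  # reference, correct
        b = np.sum((group[idx] == 0) & (correct[idx] == 0))  # reference, incorrect
        c = np.sum((group[idx] == 1) & (correct[idx] == 1))  # focal, correct
        d = np.sum((group[idx] == 1) & (correct[idx] == 0))  # focal, incorrect
        t = a + b + c + d
        if t == 0:
            continue
        num += a * d / t
        den += b * c / t
    return num / den

# Toy data, invented for illustration only.
rng = np.random.default_rng(1)
n, n_items = 1000, 20
resp = (rng.random((n, n_items)) < 0.6).astype(int)
grp = rng.integers(0, 2, size=n)

total = resp.sum(axis=1)
alpha = mantel_haenszel_odds_ratio(resp[:, 0], grp, total)
print("MH odds ratio for item 1:", round(alpha, 3))
# A two-step (purification) pass would flag items at this stage, then
# recompute the matching score without flagged items and repeat.
```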
Peer reviewed
Martinez, Michael E. – Applied Measurement in Education, 1993
Figural response (FR) items in architecture were compared with multiple-choice (MC) counterparts for their ability to predict architectural problem-solving proficiency of 33 practicing architects, 34 architecture interns, and 53 architecture students. Although both FR and MC predicted verbal design problem solving, only FR scores predicted…
Descriptors: Architects, Architectural Drafting, College Students, Comparative Analysis
Peer reviewed
Plake, Barbara S. – Applied Measurement in Education, 1995
This article provides a framework for the remaining articles in this special issue, which compare the utility of three standard-setting methods for complex performance assessments. The context of the standard-setting study is described, and the methods are outlined. (SLD)
Descriptors: Comparative Analysis, Criteria, Decision Making, Educational Assessment