Showing 1 to 15 of 46 results
Peer reviewed
Direct link
Cole, Ki; Paek, Insu – Measurement: Interdisciplinary Research and Perspectives, 2022
Statistical Analysis Software (SAS) is a widely used tool for data management and analysis across a variety of fields. Its item response theory procedure (PROC IRT) performs unidimensional and multidimensional item response theory (IRT) analyses for dichotomous and polytomous data. This review provides a summary of the features of PROC…
Descriptors: Item Response Theory, Computer Software, Item Analysis, Statistical Analysis
Peer reviewed
PDF on ERIC Download full text
Metsämuuronen, Jari – Practical Assessment, Research & Evaluation, 2022
This article discusses visual techniques for identifying, on the one hand, test items that are optimal to select for the final compilation and, on the other hand, items that would lower the quality of the compilation and should be screened out. Some classic visual tools are discussed, first, in a practical manner in diagnosing the logical,…
Descriptors: Test Items, Item Analysis, Item Response Theory, Cutting Scores
Peer reviewed
Direct link
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each form has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important issue is increasing the number of assembled uniform tests. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
Peer reviewed
Direct link
Matta, Tyler H.; Rutkowski, Leslie; Rutkowski, David; Liaw, Yuan-Ling – Large-scale Assessments in Education, 2018
This article provides an overview of the R package lsasim, designed to facilitate the generation of data that mimics a large scale assessment context. The package features functions for simulating achievement data according to a number of common IRT models with known parameters. A clear advantage of lsasim over other simulation software is that…
Descriptors: Measurement, Data, Simulation, Item Response Theory
Peer reviewed
Direct link
Schumacker, Randall – Measurement: Interdisciplinary Research and Perspectives, 2019
The R software provides packages and functions for data analysis under classical true score theory, generalizability theory, item response theory, and Rasch measurement theory. A brief list of notable articles in each measurement theory and of the first measurement journals is followed by a list of R psychometric software packages. Each psychometric…
Descriptors: Psychometrics, Computer Software, Measurement, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Lee, Sunbok; Choi, Youn-Jeng; Cohen, Allan S. – International Journal of Assessment Tools in Education, 2018
A simulation study is a useful tool in examining how validly item response theory (IRT) models can be applied in various settings. Typically, a large number of replications are required to obtain the desired precision. However, many standard software packages in IRT, such as MULTILOG and BILOG, are not well suited for a simulation study requiring…
Descriptors: Item Response Theory, Simulation, Replication (Evaluation), Automation
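The replication loop described in this entry is easy to picture: item parameters are fixed, and a fresh sample of examinees is drawn for each replication. A minimal plain-Python sketch under the standard two-parameter logistic (2PL) model follows; all sizes and parameter ranges below are illustrative, not taken from the article:

```python
import math
import random

def simulate_2pl(thetas, a, b, rng):
    """Dichotomous responses under the 2PL model:
    P(X_ij = 1) = 1 / (1 + exp(-a_j * (theta_i - b_j)))."""
    data = []
    for theta in thetas:
        row = []
        for a_j, b_j in zip(a, b):
            p = 1.0 / (1.0 + math.exp(-a_j * (theta - b_j)))
            row.append(1 if rng.random() < p else 0)
        data.append(row)
    return data

rng = random.Random(2024)
n_items, n_persons, n_reps = 20, 500, 100
a = [rng.uniform(0.5, 2.0) for _ in range(n_items)]   # discrimination parameters
b = [rng.gauss(0.0, 1.0) for _ in range(n_items)]     # difficulty parameters

# Replication loop: new N(0, 1) examinee sample each time, fixed item parameters.
datasets = [
    simulate_2pl([rng.gauss(0.0, 1.0) for _ in range(n_persons)], a, b, rng)
    for _ in range(n_reps)
]
```

Each element of `datasets` is one persons-by-items 0/1 response matrix, ready to be written out for calibration by whatever IRT program the study uses.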
Peer reviewed
Direct link
Zheng, Xiaying; Yang, Ji Seung – Measurement: Interdisciplinary Research and Perspectives, 2021
The purpose of this paper is to briefly introduce the two most common applications of multiple-group item response theory (IRT) models, namely differential item functioning (DIF) analysis and nonequivalent-group score linking with simultaneous calibration. We illustrate how to conduct those analyses using the "Stata" item…
Descriptors: Item Response Theory, Test Bias, Computer Software, Statistical Analysis
Peer reviewed
Direct link
Levy, Roy – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Roy Levy describes Bayesian approaches to psychometric modeling. He discusses how Bayesian inference is a mechanism for reasoning in a probability-modeling framework and is well-suited to core problems in educational measurement: reasoning from student performances on an assessment to make inferences about their…
Descriptors: Bayesian Statistics, Psychometrics, Item Response Theory, Statistical Inference
Peer reviewed
Direct link
Sheng, Yanyan – Measurement: Interdisciplinary Research and Perspectives, 2019
The classical approach to test theory has been the foundation of educational and psychological measurement for over 90 years. This approach concerns measurement error and hence test reliability, which in part relies on individual test items. The CTT package, developed in light of this, provides functions for test- and item-level analyses of…
Descriptors: Item Response Theory, Test Reliability, Item Analysis, Error of Measurement
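The test- and item-level quantities this entry refers to are standard classical test theory statistics. As a reference point (not the CTT package's own code), a stdlib-Python sketch of item difficulty and Cronbach's alpha for 0/1-scored data:

```python
from statistics import pvariance

def item_difficulty(scores):
    """Proportion-correct (p-value) per item for a persons-by-items 0/1 matrix."""
    n = len(scores)
    return [sum(col) / n for col in zip(*scores)]

def cronbach_alpha(scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / total variance)."""
    k = len(scores[0])                          # number of items
    item_vars = sum(pvariance(col) for col in zip(*scores))
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 6 examinees, 4 items.
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(item_difficulty(data))              # [0.667, 0.667, 0.667, 0.333] (approx.)
print(round(cronbach_alpha(data), 3))     # 0.571
```

Population variances are used throughout; switching to sample variances changes alpha only slightly for realistic sample sizes.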
Peer reviewed
Direct link
Choi, Jinnie – Journal of Educational and Behavioral Statistics, 2017
This article reviews PROC IRT, which was added to Statistical Analysis Software in 2014. We provide an introductory overview of a free version of SAS, describe what PROC IRT offers for item response theory (IRT) analysis and how one can use it, and discuss how other SAS macros and procedures may complement the IRT functionality of PROC IRT.
Descriptors: Item Response Theory, Computer Software, Statistical Analysis, Computation
Peer reviewed
Direct link
Paek, Insu; Cui, Mengyao; Öztürk Gübes, Nese; Yang, Yanyun – Educational and Psychological Measurement, 2018
The purpose of this article is twofold. The first is to provide evaluative information on the recovery of model parameters and their standard errors for the two-parameter item response theory (IRT) model using different estimation methods by Mplus. The second is to provide easily accessible information for practitioners, instructors, and students…
Descriptors: Item Response Theory, Computation, Factor Analysis, Statistical Analysis
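The two-parameter model whose parameter recovery this entry evaluates has a standard closed form, which also yields the item information function used to judge estimation precision. A minimal sketch (values illustrative):

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: P(X = 1 | theta) = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# At theta == b the response probability is 0.5 and information peaks at a^2 / 4.
print(p_2pl(0.0, 1.5, 0.0))     # 0.5
print(info_2pl(0.0, 1.5, 0.0))  # 0.5625
```

Summing `info_2pl` over a test's items at a given theta gives the test information, whose inverse square root is the asymptotic standard error of the ability estimate there.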
Peer reviewed
Direct link
Diao, Hongyu; Sireci, Stephen G. – Journal of Applied Testing Technology, 2018
Whenever classification decisions are made on educational tests, such as pass/fail, or basic, proficient, or advanced, the consistency and accuracy of those decisions should be estimated and reported. Methods for estimating the reliability of classification decisions made on the basis of educational tests are well-established (e.g., Rudner, 2001;…
Descriptors: Classification, Item Response Theory, Accuracy, Reliability
Peer reviewed
Direct link
Choi, Youn-Jeng; Asilkalkan, Abdullah – Measurement: Interdisciplinary Research and Perspectives, 2019
About 45 R packages for analyzing data with item response theory (IRT) have been developed over the last decade. This article introduces these 45 R packages with their descriptions and features. It also describes R packages for advanced IRT models as well as for dichotomous and polytomous IRT models, and R packages that contain applications…
Descriptors: Item Response Theory, Data Analysis, Computer Software, Test Bias
Peer reviewed
Direct link
Wang, Jue; Engelhard, George, Jr. – Educational Measurement: Issues and Practice, 2019
In this digital ITEMS module, Dr. Jue Wang and Dr. George Engelhard Jr. describe the Rasch measurement framework for the construction and evaluation of new measures and scales. From a theoretical perspective, they discuss the historical and philosophical perspectives on measurement with a focus on Rasch's concept of specific objectivity and…
Descriptors: Item Response Theory, Evaluation Methods, Measurement, Goodness of Fit
Peer reviewed
Direct link
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2018
This article outlines a procedure for examining the degree to which a common factor may be dominating additional factors in a multicomponent measuring instrument consisting of binary items. The procedure rests on an application of the latent variable modeling methodology and accounts for the discrete nature of the manifest indicators. The method…
Descriptors: Measurement Techniques, Factor Analysis, Item Response Theory, Likert Scales