Publication Date
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 3 |
| Since 2017 (last 10 years) | 11 |
| Since 2007 (last 20 years) | 15 |
Descriptor
| Simulation | 16 |
| Item Response Theory | 9 |
| Comparative Analysis | 8 |
| Models | 6 |
| Scores | 5 |
| Test Items | 5 |
| Bias | 3 |
| Computation | 3 |
| Correlation | 3 |
| Error of Measurement | 3 |
| Psychometrics | 3 |
Source
| Educational Measurement:… | 16 |
Publication Type
| Journal Articles | 16 |
| Reports - Research | 12 |
| Reports - Descriptive | 3 |
| Opinion Papers | 1 |
Education Level
| Higher Education | 1 |
| Secondary Education | 1 |
Assessments and Surveys
| Program for International… | 1 |
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. Most previous research has focused on investigating IPE under linear testing; IPE under adaptive testing remains understudied. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
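The basic IPE check the abstract describes can be sketched with a toy calculation (the response data and positions below are invented, not from the study): compare an item's proportion-correct when it appears early in a form versus late.

```python
# Toy item position effect (IPE) check: the same item's classical
# p-value (proportion correct) in two administration positions.
# All data are invented for illustration.

# responses[position] = list of 0/1 scores on the same item
responses = {
    "early": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],   # e.g., position 3 of 40
    "late":  [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],   # e.g., position 38 of 40
}

def p_value(scores):
    """Classical item p-value: proportion answering correctly."""
    return sum(scores) / len(scores)

p_early = p_value(responses["early"])  # 0.8
p_late = p_value(responses["late"])    # 0.4
ipe_shift = p_early - p_late           # the item looks 0.4 harder late
```

A position effect of this kind is what complicates adaptive testing, where item parameters calibrated in one position are reused in many others.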
Sweeney, Sandra M.; Sinharay, Sandip; Johnson, Matthew S.; Steinhauer, Eric W. – Educational Measurement: Issues and Practice, 2022
The focus of this paper is on the empirical relationship between item difficulty and item discrimination. Two studies, an empirical investigation and a simulation study, were conducted to examine the association between item difficulty and item discrimination under classical test theory and item response theory (IRT), and the effects of the…
Descriptors: Correlation, Item Response Theory, Item Analysis, Difficulty Level
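The two CTT indices whose association the study examines can be computed directly from a response matrix. A minimal sketch, using simulated data (not the study's): difficulty as proportion-correct, discrimination as the item-total point-biserial correlation.

```python
import numpy as np

# Toy CTT computation of item difficulty (p-value) and item
# discrimination (item-total point-biserial), then their empirical
# association. Responses are simulated from a Rasch-type model.
rng = np.random.default_rng(0)
n_persons, n_items = 500, 20
ability = rng.normal(size=(n_persons, 1))
difficulty = np.linspace(-2, 2, n_items)           # easy -> hard
prob = 1 / (1 + np.exp(-(ability - difficulty)))   # P(correct)
resp = (rng.random((n_persons, n_items)) < prob).astype(int)

p_values = resp.mean(axis=0)                       # CTT difficulty
total = resp.sum(axis=1)                           # total score
discrim = np.array([np.corrcoef(resp[:, j], total)[0, 1]
                    for j in range(n_items)])      # point-biserial

# Empirical association between the two indices across items
r = np.corrcoef(p_values, discrim)[0, 1]
```

Note that the point-biserial is mechanically constrained by the p-value (it shrinks toward zero for very easy or very hard items), which is one reason the difficulty-discrimination relationship is worth studying.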
Cuhadar, Ismail; Binici, Salih – Educational Measurement: Issues and Practice, 2022
This study employs the 4-parameter logistic item response theory model to account for the unexpected incorrect responses or slipping effects observed in a large-scale Algebra 1 End-of-Course assessment, including several innovative item formats. It investigates whether modeling the misfit at the upper asymptote has any practical impact on the…
Descriptors: Item Response Theory, Measurement, Student Evaluation, Algebra
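The 4-parameter logistic model the study employs adds an upper asymptote below 1 to the familiar 3PL, so high-ability examinees can "slip." A minimal sketch of the item response function (parameter values are illustrative only):

```python
import math

# 4-parameter logistic (4PL) IRT model: the upper asymptote d < 1
# absorbs unexpected incorrect responses ("slipping") at the upper
# tail of ability. Parameter values below are invented.

def p_4pl(theta, a, b, c, d):
    """Probability of a correct response under the 4PL model.
    a: discrimination, b: difficulty,
    c: lower asymptote (guessing), d: upper asymptote (slipping)."""
    return c + (d - c) / (1 + math.exp(-a * (theta - b)))

# With d = 0.95, even a very able examinee tops out below 1.0.
p_high = p_4pl(theta=4.0, a=1.2, b=0.0, c=0.2, d=0.95)
p_3pl = p_4pl(theta=4.0, a=1.2, b=0.0, c=0.2, d=1.0)  # d = 1 recovers 3PL
```

Setting d = 1 recovers the 3PL, and c = 0, d = 1 recovers the 2PL, which is what makes comparing fit at the upper asymptote straightforward.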
Zhan, Peida; He, Keren – Educational Measurement: Issues and Practice, 2021
In learning diagnostic assessments, the attribute hierarchy specifies a sequential network of interrelated attribute mastery processes, which makes a test blueprint consistent with the cognitive theory. One of the most important functions of attribute hierarchy is to guide or limit the developmental direction of students and then form a…
Descriptors: Longitudinal Studies, Models, Comparative Analysis, Diagnostic Tests
Leventhal, Brian; Ames, Allison – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of "Monte Carlo simulation studies" (MCSS) in "item response theory" (IRT). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because…
Descriptors: Item Response Theory, Monte Carlo Methods, Simulation, Test Items
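The MCSS workflow the module covers has a fixed skeleton: generate data from known ("true") parameters, estimate, repeat, and summarize recovery. A toy version under a Rasch model with known abilities (this is an illustration, not the module's own code):

```python
import math
import random

# Minimal Monte Carlo simulation study (MCSS) in IRT: simulate
# responses from a Rasch item with known difficulty, re-estimate the
# difficulty each replication, and report the bias of the estimator.
random.seed(1)
TRUE_B = 0.5
abilities = [random.gauss(0, 1) for _ in range(1000)]  # treated as known

def p_rasch(theta, b):
    return 1 / (1 + math.exp(-(theta - b)))

def estimate_b(scores, thetas):
    """MLE of b with known thetas: solve sum(p) = observed total
    by bisection (expected score is decreasing in b)."""
    target = sum(scores)
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if sum(p_rasch(t, mid) for t in thetas) > target:
            lo = mid           # expected score too high -> item is harder
        else:
            hi = mid
    return (lo + hi) / 2

estimates = []
for rep in range(40):                      # Monte Carlo replications
    scores = [1 if random.random() < p_rasch(t, TRUE_B) else 0
              for t in abilities]
    estimates.append(estimate_b(scores, abilities))

bias = sum(estimates) / len(estimates) - TRUE_B   # recovery summary
```

Here the "analytic solution" exists, so recovery can be checked exactly; the value of MCSS is that the same loop works when it does not.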
Wind, Stefanie A.; Walker, A. Adrienne – Educational Measurement: Issues and Practice, 2021
Many large-scale performance assessments include score resolution procedures for resolving discrepancies in rater judgments. The goal of score resolution is conceptually similar to person fit analyses: to identify students for whom observed scores may not accurately reflect their achievement. Previously, researchers have observed that…
Descriptors: Goodness of Fit, Performance Based Assessment, Evaluators, Decision Making
Yocarini, Iris E.; Bouwmeester, Samantha; Smeets, Guus; Arends, Lidia R. – Educational Measurement: Issues and Practice, 2018
This real-data-guided simulation study systematically evaluated the decision accuracy of complex decision rules combining multiple tests within different realistic curricula. Specifically, complex decision rules combining conjunctive aspects and compensatory aspects were evaluated. A conjunctive aspect requires a minimum level of performance,…
Descriptors: Comparative Analysis, Decision Making, Accuracy, Higher Education
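The two decision-rule aspects the study combines are easy to state in code. A minimal sketch (the thresholds and scores are invented): a conjunctive aspect demands a minimum on every test, while a compensatory aspect lets a strong score offset a weak one.

```python
# Toy conjunctive vs. compensatory decision rules, and a complex
# rule combining both aspects. Thresholds and scores are invented.

def conjunctive_pass(scores, minimum=5.5):
    """Conjunctive aspect: a minimum level on every test."""
    return all(s >= minimum for s in scores)

def compensatory_pass(scores, mean_required=6.0):
    """Compensatory aspect: a high score can offset a low one."""
    return sum(scores) / len(scores) >= mean_required

def complex_pass(scores):
    """A complex rule requiring both aspects at once."""
    return conjunctive_pass(scores, minimum=5.0) and \
           compensatory_pass(scores, mean_required=6.0)

student = [5.2, 7.8, 6.4]            # one weak score, strong average
conj = conjunctive_pass(student)     # False: 5.2 < 5.5
comp = compensatory_pass(student)    # True: mean ~6.47 >= 6.0
both = complex_pass(student)         # True: all >= 5.0 and mean ok
```

Decision accuracy in the study's sense is about how often such rules pass or fail students whose true performance level warrants the opposite decision.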
Feinberg, Richard A.; Rubright, Jonathan D. – Educational Measurement: Issues and Practice, 2016
Simulation studies are fundamental to psychometric discourse and play a crucial role in operational and academic research. Yet, resources for psychometricians interested in conducting simulations are scarce. This Instructional Topics in Educational Measurement Series (ITEMS) module is meant to address this deficiency by providing a comprehensive…
Descriptors: Simulation, Psychometrics, Vocabulary, Research Design
Luecht, Richard; Ackerman, Terry A. – Educational Measurement: Issues and Practice, 2018
Simulation studies are extremely common in the item response theory (IRT) research literature. This article presents a didactic discussion of "truth" and "error" in IRT-based simulation studies. We ultimately recommend that future research focus less on the simple recovery of parameters from a convenient generating IRT model,…
Descriptors: Item Response Theory, Simulation, Ethics, Error of Measurement
Madison, Matthew J. – Educational Measurement: Issues and Practice, 2019
Recent advances have enabled diagnostic classification models (DCMs) to accommodate longitudinal data. These longitudinal DCMs were developed to study how examinees change, or transition, between different attribute mastery statuses over time. This study examines using longitudinal DCMs as an approach to assessing growth and serves three purposes:…
Descriptors: Longitudinal Studies, Item Response Theory, Psychometrics, Criterion Referenced Tests
Rutkowski, David; Rutkowski, Leslie; Liaw, Yuan-Ling – Educational Measurement: Issues and Practice, 2018
Participation in international large-scale assessments has grown over time with the largest, the Programme for International Student Assessment (PISA), including more than 70 education systems that are economically and educationally diverse. To help accommodate for large achievement differences among participants, in 2009 PISA offered…
Descriptors: Educational Assessment, Foreign Countries, Achievement Tests, Secondary School Students
Wyse, Adam E. – Educational Measurement: Issues and Practice, 2017
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Descriptors: Cutting Scores, Item Response Theory, Bayesian Statistics, Maximum Likelihood Statistics
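The "most commonly used approach" the abstract mentions, translating ratings through the test characteristic curve (TCC), can be sketched briefly: sum the panelists' Angoff ratings to a raw cut, then invert the TCC to find the corresponding theta. Item parameters and ratings below are invented.

```python
import math

# Angoff cut score via the test characteristic curve (TCC) under a
# 2PL model: raw cut = sum of item ratings; theta cut = the ability
# at which the expected raw score equals that cut. Toy values only.

items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]   # (a, b) per 2PL item
ratings = [0.9, 0.6, 0.3]                        # Angoff ratings per item
raw_cut = sum(ratings)                           # expected raw score 1.8

def tcc(theta):
    """Expected raw score at theta: sum of 2PL item probabilities."""
    return sum(1 / (1 + math.exp(-a * (theta - b))) for a, b in items)

def invert_tcc(target, lo=-4.0, hi=4.0):
    """Find theta where the (increasing) TCC equals the raw cut."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if tcc(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

theta_cut = invert_tcc(raw_cut)
```

The ML, EAP, MAP, and WML alternatives the article compares instead treat the rating vector like a response pattern and estimate theta directly, which is where the methods can diverge.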
Shang, Yi; VanIwaarden, Adam; Betebenner, Damian W. – Educational Measurement: Issues and Practice, 2015
In this study, we examined the impact of covariate measurement error (ME) on the estimation of quantile regression and student growth percentiles (SGPs), and found that SGPs tend to be overestimated among students with higher prior achievement and underestimated among those with lower prior achievement, a problem we describe as ME endogeneity in…
Descriptors: Error of Measurement, Regression (Statistics), Achievement Gains, Students
McCaffrey, Daniel F.; Castellano, Katherine E.; Lockwood, J. R. – Educational Measurement: Issues and Practice, 2015
Student growth percentiles (SGPs) express students' current observed scores as percentile ranks in the distribution of scores among students with the same prior-year scores. A common concern about SGPs at the student level, and mean or median SGPs (MGPs) at the aggregate level, is potential bias due to test measurement error (ME). Shang,…
Descriptors: Error of Measurement, Accuracy, Achievement Gains, Students
Monroe, Scott; Cai, Li – Educational Measurement: Issues and Practice, 2015
Student growth percentiles (SGPs, Betebenner, 2009) are used to locate a student's current score in a conditional distribution based on the student's past scores. Currently, following Betebenner (2009), quantile regression (QR) is most often used operationally to estimate the SGPs. Alternatively, multidimensional item response theory (MIRT) may…
Descriptors: Item Response Theory, Reliability, Growth Models, Computation
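The SGP definition shared by the three entries above can be made concrete with a toy lookup (invented data): the percentile rank of a student's current score among peers with the same prior score. Operational SGPs smooth this conditional distribution with quantile regression rather than matching exactly.

```python
# Toy student growth percentile (SGP): percentile rank of the
# current score among students with the same prior score. Data are
# invented; operational SGPs use quantile regression smoothing.

# (prior_score, current_score) pairs for a small cohort
cohort = [(50, 48), (50, 52), (50, 55), (50, 60), (50, 63),
          (60, 58), (60, 62), (60, 70)]

def sgp(prior, current, data):
    """Percentile rank of `current` among students sharing `prior`."""
    peers = sorted(c for p, c in data if p == prior)
    below = sum(1 for c in peers if c < current)
    return round(100 * below / len(peers))

growth = sgp(prior=50, current=60, data=cohort)   # 3 of 5 peers below
```

Because the conditioning variable is an error-prone test score, measurement error propagates into the conditional distribution itself, which is the bias mechanism the Shang et al. and McCaffrey et al. entries analyze.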
