Publication Date
In 2025: 1
Since 2024: 3
Since 2021 (last 5 years): 14
Since 2016 (last 10 years): 31
Since 2006 (last 20 years): 53
Descriptor
Error of Measurement: 61
Item Response Theory: 61
Sample Size: 61
Test Items: 33
Models: 20
Test Length: 20
Simulation: 18
Monte Carlo Methods: 16
Test Bias: 14
Comparative Analysis: 13
Computation: 13
Author
Kelecioglu, Hülya: 3
Cai, Li: 2
Custer, Michael: 2
Hambleton, Ronald K.: 2
Lee, Won-Chan: 2
Lee, Yi-Hsuan: 2
Paek, Insu: 2
Stark, Stephen: 2
Wang, Wen-Chung: 2
Zhang, Jinming: 2
Huggins-Manley, A. Corinne: 1
Publication Type
Journal Articles: 49
Reports - Research: 45
Reports - Evaluative: 9
Speeches/Meeting Papers: 8
Dissertations/Theses -…: 4
Reports - Descriptive: 3
Guides - Non-Classroom: 1
Numerical/Quantitative Data: 1
Education Level
Elementary Secondary Education: 2
Grade 8: 2
Elementary Education: 1
Grade 4: 1
Intermediate Grades: 1
Junior High Schools: 1
Kindergarten: 1
Middle Schools: 1
Secondary Education: 1
Assessments and Surveys
Early Childhood Longitudinal…: 1
Graduate Management Admission…: 1
National Assessment of…: 1
SAT (College Admission Test): 1
Student Teacher Relationship…: 1
Christine E. DeMars; Paulius Satkus – Educational and Psychological Measurement, 2024
Marginal maximum likelihood, a common estimation method for item response theory models, is not inherently a Bayesian procedure. However, due to estimation difficulties, Bayesian priors are often applied to the likelihood when estimating 3PL models, especially with small samples. Little focus has been placed on choosing the priors for marginal…
Descriptors: Item Response Theory, Statistical Distributions, Error of Measurement, Bayesian Statistics
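For readers unfamiliar with the setup this abstract refers to, the 3PL model and the marginal likelihood that MML maximizes can be written as follows. This is the standard textbook formulation, not notation taken from the article.

```latex
% 3PL item response function for item j
P_j(\theta) = c_j + (1 - c_j)\,\frac{1}{1 + \exp\!\left[-a_j(\theta - b_j)\right]}

% Marginal likelihood maximized by MML, integrating the latent trait out
% against the population density g(\theta) (usually standard normal)
L(\boldsymbol{\xi}) = \prod_{i=1}^{N} \int \prod_{j=1}^{J}
  P_j(\theta)^{u_{ij}} \left[1 - P_j(\theta)\right]^{1 - u_{ij}} g(\theta)\, d\theta
```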
Sample Size and Item Parameter Estimation Precision When Utilizing the Masters' Partial Credit Model
Custer, Michael; Kim, Jongpil – Online Submission, 2023
This study uses an analysis of diminishing returns to examine the relationship between sample size and item parameter estimation precision when applying the Masters' Partial Credit Model to polytomous items. Item data from the standardization of the Battelle Developmental Inventory, 3rd Edition were used. Each item was scored with a…
Descriptors: Sample Size, Item Response Theory, Test Items, Computation
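As background, the Partial Credit Model used in the study assigns category probabilities as follows; this is the standard formulation, and the step parameters are what the sample-size analysis is concerned with estimating.

```latex
% Probability of a response in category x (0, 1, ..., m_j) of item j
P(X_{ij} = x \mid \theta_i) =
  \frac{\exp\!\left[\sum_{k=0}^{x} (\theta_i - \delta_{jk})\right]}
       {\sum_{h=0}^{m_j} \exp\!\left[\sum_{k=0}^{h} (\theta_i - \delta_{jk})\right]},
\qquad \text{with } \sum_{k=0}^{0} (\theta_i - \delta_{j0}) \equiv 0 .
```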
Ayse Bilicioglu Gunes; Bayram Bicak – International Journal of Assessment Tools in Education, 2023
The main purpose of this study is to examine the Type I error and statistical power rates of Differential Item Functioning (DIF) techniques based on different theories under different conditions. For this purpose, a simulation study was conducted using Mantel-Haenszel (MH), Logistic Regression (LR), Lord's χ², and Raju's Areas…
Descriptors: Test Items, Item Response Theory, Error of Measurement, Test Bias
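Of the DIF techniques listed, the Mantel-Haenszel procedure is the most compact to illustrate. The sketch below is a generic implementation on simulated data; the variable names, item difficulties, and sample settings are assumptions made for the example, not taken from the study.

```python
# Minimal sketch of the Mantel-Haenszel DIF statistic on a 2x2xK table
# stratified by total score; reports MH D-DIF on the ETS delta scale.
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """item: 0/1 responses; group: 0 = reference, 1 = focal; total: matching score."""
    alpha_num, alpha_den = 0.0, 0.0
    for k in np.unique(total):
        m = total == k
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        alpha_num += a * d / n
        alpha_den += b * c / n
    alpha_mh = alpha_num / alpha_den          # MH common odds ratio
    return -2.35 * np.log(alpha_mh)           # MH D-DIF (ETS delta scale)

# Toy usage: 10 Rasch items, all with difficulty 0, and no true DIF
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)
theta = rng.normal(0, 1, n)
p = 1 / (1 + np.exp(-(theta[:, None] - 0.0)))
responses = (rng.random((n, 10)) < p).astype(int)
print(mantel_haenszel_dif(responses[:, 0], group, responses.sum(axis=1)))
```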
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
Fatih Orçan – International Journal of Assessment Tools in Education, 2025
Factor analysis is a statistical method to explore the relationships among observed variables and identify latent structures. It is crucial in scale development and validity analysis. Key factors affecting the accuracy of factor analysis results include the type of data, sample size, and the number of response categories. While some studies…
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Sample Size
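As a loose illustration of the sample-size point (not the study's design), the sketch below simulates one-factor continuous data and tracks how the error in estimated loadings shrinks as N grows. The loadings, replication count, and use of scikit-learn's FactorAnalysis are all assumptions chosen for the example.

```python
# How sample size can affect the stability of estimated loadings
# for a one-factor model with continuous indicators.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
true_loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])

def loading_rmse(n, reps=100):
    errs = []
    for _ in range(reps):
        f = rng.normal(size=(n, 1))                       # latent factor scores
        e = rng.normal(size=(n, 5)) * np.sqrt(1 - true_loadings**2)
        x = f @ true_loadings[None, :] + e                # observed indicators
        est = FactorAnalysis(n_components=1).fit(x).components_.ravel()
        est = est * np.sign(est[0])                       # fix sign indeterminacy
        errs.append(np.sqrt(np.mean((est - true_loadings) ** 2)))
    return np.mean(errs)

for n in (100, 250, 500, 1000):
    print(n, round(loading_rmse(n), 3))
```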
Paek, Insu; Lin, Zhongtian; Chalmers, Robert Philip – Educational and Psychological Measurement, 2023
To reduce the chance of Heywood cases or nonconvergence when estimating the 2PL or the 3PL model with marginal maximum likelihood via the expectation-maximization (MML-EM) estimation method, priors for the item slope parameter in the 2PL model or for the pseudo-guessing parameter in the 3PL model can be used, and the marginal maximum a posteriori…
Descriptors: Models, Item Response Theory, Test Items, Intervals
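The prior-augmented criterion the abstract alludes to is typically written as the marginal log-likelihood plus log-prior terms for the problematic parameters. The specific priors shown below (lognormal on the slope, Beta on the pseudo-guessing parameter) are common defaults, not necessarily the article's exact choices.

```latex
% Marginal maximum a posteriori (MMAP) criterion: marginal log-likelihood
% plus log-priors on the parameters prone to Heywood cases / nonconvergence
\ell_{P}(\boldsymbol{\xi}) = \log L(\boldsymbol{\xi})
  + \sum_{j} \log p(a_j) + \sum_{j} \log p(c_j),
\qquad
\log a_j \sim N(\mu_a, \sigma_a^2), \quad c_j \sim \mathrm{Beta}(\alpha, \beta).
```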
Silva Diaz, John Alexander; Köhler, Carmen; Hartig, Johannes – Applied Measurement in Education, 2022
Testing item fit is central in item response theory (IRT) modeling, since a good fit is necessary to draw valid inferences from estimated model parameters. "Infit" and "outfit" fit statistics, widespread indices for detecting deviations from the Rasch model, are affected by data factors, such as sample size. Consequently, the…
Descriptors: Intervals, Item Response Theory, Item Analysis, Inferences
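For concreteness, the infit and outfit mean-square statistics mentioned in the abstract can be computed from Rasch residuals as below; the simulated data, sample size, and difficulty values are assumptions for the illustration.

```python
# Infit and outfit mean-square statistics from Rasch-model residuals.
import numpy as np

def rasch_prob(theta, b):
    return 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))

def infit_outfit(u, theta, b):
    p = rasch_prob(theta, b)              # expected scores
    w = p * (1 - p)                       # model variances
    z2 = (u - p) ** 2 / w                 # squared standardized residuals
    outfit = z2.mean(axis=0)              # unweighted mean square per item
    infit = ((u - p) ** 2).sum(axis=0) / w.sum(axis=0)  # information-weighted
    return infit, outfit

rng = np.random.default_rng(1)
theta = rng.normal(size=500)
b = np.linspace(-2, 2, 10)
u = (rng.random((500, 10)) < rasch_prob(theta, b)).astype(int)
infit, outfit = infit_outfit(u, theta, b)
print(np.round(infit, 2), np.round(outfit, 2))
```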
An Analysis of Differential Bundle Functioning in Multidimensional Tests Using the SIBTEST Procedure
Özdogan, Didem; Kelecioglu, Hülya – International Journal of Assessment Tools in Education, 2022
This study aims to analyze differential bundle functioning in multidimensional tests, with the specific purpose of detecting this effect while varying the location of the DIF item in the test, the correlation between the dimensions, the sample size, and the ratio of reference to focal group size. The first 10 items of the test that is…
Descriptors: Correlation, Sample Size, Test Items, Item Analysis
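The SIBTEST statistic referenced here is conventionally written as a focal-group-weighted sum of regression-corrected mean differences on the studied bundle. This is the generic form, not the study's specific notation.

```latex
% SIBTEST effect estimate: k indexes matched valid-subtest score levels,
% \hat{p}_k is the focal-group proportion at level k, and \bar{Y}^{*} are
% regression-corrected mean scores on the studied item bundle
\hat{\beta}_{\mathrm{UNI}} = \sum_{k} \hat{p}_k
  \left( \bar{Y}^{*}_{Rk} - \bar{Y}^{*}_{Fk} \right),
\qquad
z = \frac{\hat{\beta}_{\mathrm{UNI}}}{\widehat{\sigma}\!\left(\hat{\beta}_{\mathrm{UNI}}\right)} .
```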
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
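A generic way to convey the weighting idea is a Haebara-style characteristic curve criterion in which each squared discrepancy is multiplied by an information weight, so regions where parameters are imprecisely estimated count less. The exact weights used by IWCC and TWCC are defined in the article; the sketch below only shows the structure.

```latex
% Characteristic-curve linking criterion with information weights w_j(\theta_q):
% A and B are the slope and intercept of the scale transformation, and the
% starred parameters are the new-form estimates placed on the base scale
F(A, B) = \sum_{q} \sum_{j} w_j(\theta_q)
  \left[ P_j\!\left(\theta_q;\, \hat{a}_j, \hat{b}_j\right)
       - P_j\!\left(\theta_q;\, \hat{a}^{*}_j / A,\, A\hat{b}^{*}_j + B\right) \right]^2
```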
Peabody, Michael R. – Applied Measurement in Education, 2020
The purpose of the current article is to introduce the equating and evaluation methods used in this special issue. Although a comprehensive review of all existing models and methodologies would be impractical given the format, a brief introduction to some of the more popular models will be provided. A brief discussion of the conditions required…
Descriptors: Evaluation Methods, Equated Scores, Sample Size, Item Response Theory
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications, in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters, were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
Koçak, Duygu – Pedagogical Research, 2020
The number of iterations in the Monte Carlo simulation method, which is commonly used in educational research, affects Item Response Theory test and item parameters. Related studies show that the number of iterations is left to the researcher's discretion, and no specific number of iterations is suggested in the related literature.…
Descriptors: Monte Carlo Methods, Item Response Theory, Educational Research, Test Items
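As a rough illustration of why the number of replications matters (assumptions only, not the study's design), the sketch below tracks how the Monte Carlo standard error of a crudely recovered Rasch difficulty shrinks roughly as one over the square root of the replication count.

```python
# Monte Carlo standard error of a recovered difficulty vs. number of replications.
import numpy as np

rng = np.random.default_rng(7)
TRUE_B, N = 0.5, 300

def estimate_difficulty():
    theta = rng.normal(size=N)
    p = 1 / (1 + np.exp(-(theta - TRUE_B)))
    u = (rng.random(N) < p).astype(int)
    p_correct = u.mean()
    return -np.log(p_correct / (1 - p_correct))   # crude logit-based estimate

for reps in (25, 100, 400, 1600):
    est = np.array([estimate_difficulty() for _ in range(reps)])
    print(reps, round(est.std(ddof=1) / np.sqrt(reps), 4))  # MC SE of the mean
```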
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) often come from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth ability across attempts, leading to a complicated scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Lu, Ru; Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2021
Two families of analysis methods can be used for differential item functioning (DIF) analysis. One family is DIF analysis based on observed scores, such as the Mantel-Haenszel (MH) and the standardized proportion-correct metric for DIF procedures; the other is analysis based on latent ability, in which the statistic is a measure of departure from…
Descriptors: Robustness (Statistics), Weighted Scores, Test Items, Item Analysis
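Of the observed-score procedures named, the standardized proportion-correct index (STD P-DIF) has the simplest closed form; stated generically here, not in the report's notation.

```latex
% Standardized P-DIF: k indexes matching-score levels, \hat{P}_{Fk} and
% \hat{P}_{Rk} are focal- and reference-group proportions correct at level k,
% and the weights come from the focal group's score distribution
\mathrm{STD\ P\text{-}DIF} = \sum_{k} w_k \left( \hat{P}_{Fk} - \hat{P}_{Rk} \right),
\qquad
w_k = \frac{n_{Fk}}{\sum_{k} n_{Fk}} .
```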
Fan Pan – ProQuest LLC, 2021
This dissertation informed researchers about the performance of different level-specific and target-specific model fit indices in the Multilevel Latent Growth Model (MLGM) under an unbalanced design and different trajectories. As the use of MLGMs is relatively new, this study helped further the field by informing researchers interested in using…
Descriptors: Goodness of Fit, Item Response Theory, Growth Models, Monte Carlo Methods