Showing 1 to 15 of 61 results
Peer reviewed
Direct link
Dubravka Svetina Valdivia; Shenghai Dai – Journal of Experimental Education, 2024
Applications of polytomous IRT models in applied fields (e.g., health, education, psychology) abound. However, little is known about the impact of the number of categories and sample size requirements for precise parameter recovery. In a simulation study, we investigated the impact of the number of response categories and required sample size…
Descriptors: Item Response Theory, Sample Size, Models, Classification
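To make the polytomous-model setting above concrete, here is a minimal illustrative sketch (not taken from the study itself) of Samejima's graded response model, one common polytomous IRT model: category probabilities are differences of adjacent cumulative logistic curves.

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category probabilities under Samejima's graded response model.

    theta: latent ability; a: item discrimination; thresholds: ordered
    boundary parameters b_1 < ... < b_{K-1} for K response categories.
    """
    # Cumulative probability of responding in category k or above.
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    # Adjacent differences give the probability of each single category.
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Four-category item; parameter values are purely illustrative.
probs = grm_category_probs(theta=0.0, a=1.5, thresholds=[-1.0, 0.0, 1.0])
```

Increasing the number of thresholds is exactly the "number of response categories" condition varied in simulation studies of this kind.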
Peer reviewed
Direct link
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
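Parallel analysis, as referenced above, retains dimensions whose observed eigenvalues exceed those expected from random data of the same size. A hedged sketch of the traditional (factor-analytic) version, with illustrative data, might look like this:

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Count dimensions whose observed correlation-matrix eigenvalues
    exceed the mean eigenvalues from random normal data of equal shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # eigvalsh returns ascending order; reverse to descending.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        rand = rng.standard_normal((n, p))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = rand_eigs.mean(axis=0)
    return int(np.sum(obs_eig > threshold))

# Illustrative data: two correlated blocks of items -> two dimensions.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
data = f @ loadings.T + 0.5 * rng.standard_normal((500, 6))
n_factors = parallel_analysis(data)
```

The open question the study raises is how well this linear-correlation criterion transfers when the data are generated by categorical IRT processes rather than continuous factors.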
Peer reviewed
Direct link
Paek, Insu; Lin, Zhongtian; Chalmers, Robert Philip – Educational and Psychological Measurement, 2023
To reduce the chance of Heywood cases or nonconvergence in estimating the 2PL or the 3PL model in the marginal maximum likelihood with the expectation-maximization (MML-EM) estimation method, priors for the item slope parameter in the 2PL model or for the pseudo-guessing parameter in the 3PL model can be used and the marginal maximum a posteriori…
Descriptors: Models, Item Response Theory, Test Items, Intervals
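For reference, the 2PL and 3PL item response functions discussed above can be sketched as follows; the slope parameter a and pseudo-guessing parameter c are the quantities that priors would be placed on in MML-EM estimation (parameter values here are illustrative only):

```python
import math

def p_2pl(theta, a, b):
    """2PL item response function: P(correct) with slope a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """3PL adds a pseudo-guessing lower asymptote c to the 2PL curve."""
    return c + (1.0 - c) * p_2pl(theta, a, b)
```

A Heywood case here corresponds to the slope estimate diverging; a prior on a (or on c) keeps the posterior mode finite, which is the motivation for the MMAP approach the abstract describes.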
Peer reviewed
Direct link
Matayoshi, Jeffrey; Cosyn, Eric; Uzun, Hasan – International Journal of Artificial Intelligence in Education, 2021
Many recent studies have looked at the viability of applying recurrent neural networks (RNNs) to educational data. In most cases, this is done by comparing their performance to existing models in the artificial intelligence in education (AIED) and educational data mining (EDM) fields. While there is increasing evidence that, in many situations,…
Descriptors: Artificial Intelligence, Data Analysis, Student Evaluation, Adaptive Testing
Peer reviewed
Direct link
Kalkan, Ömür Kaya – Measurement: Interdisciplinary Research and Perspectives, 2022
The four-parameter logistic (4PL) Item Response Theory (IRT) model has recently been reconsidered in the literature due to the advances in the statistical modeling software and the recent developments in the estimation of the 4PL IRT model parameters. The current simulation study evaluated the performance of expectation-maximization (EM),…
Descriptors: Comparative Analysis, Sample Size, Test Length, Algorithms
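The 4PL model evaluated above extends the 3PL with an upper asymptote below 1, often interpreted as slipping or inattention among high-ability examinees. A minimal sketch (illustrative parameterization, not the study's code):

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL item response function: lower asymptote c (pseudo-guessing)
    and upper asymptote d < 1 (slipping / inattention)."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```

Estimating both asymptotes is what makes the 4PL numerically demanding, hence the comparison of EM and related algorithms in the study.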
Haimiao Yuan – ProQuest LLC, 2022
The application of diagnostic classification models (DCMs) in the field of educational measurement has received growing attention in recent years. To make a valid inference from the model, it is important to ensure that the model fits the data. The purpose of the present study was to investigate the performance of the limited information…
Descriptors: Goodness of Fit, Educational Assessment, Educational Diagnosis, Models
Peer reviewed
Direct link
Fu, Yanyan; Strachan, Tyler; Ip, Edward H.; Willse, John T.; Chen, Shyh-Huei; Ackerman, Terry – International Journal of Testing, 2020
This research examined correlation estimates between latent abilities when using the two-dimensional and three-dimensional compensatory and noncompensatory item response theory models. Simulation study results showed that the recovery of the latent correlation was best when the test contained 100% of simple structure items for all models and…
Descriptors: Item Response Theory, Models, Test Items, Simulation
Peer reviewed
Direct link
Su, Shiyang; Wang, Chun; Weiss, David J. – Educational and Psychological Measurement, 2021
S-X² is a popular item fit index that is available in commercial software packages such as flexMIRT. However, no research has systematically examined the performance of S-X² for detecting item misfit within the context of the multidimensional graded response model (MGRM). The primary goal of this study was…
Descriptors: Statistics, Goodness of Fit, Test Items, Models
Peer reviewed
Direct link
Ellis, Jules L. – Educational and Psychological Measurement, 2021
This study develops a theoretical model for the costs of an exam as a function of its duration. Two kinds of costs are distinguished: (1) the costs of measurement errors and (2) the costs of the measurement. Both costs are expressed in time of the student. Based on a classical test theory model, enriched with assumptions on the context, the costs…
Descriptors: Test Length, Models, Error of Measurement, Measurement
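The trade-off the abstract describes can be illustrated with a deliberately simplified stand-in (this is not Ellis's actual formulation): suppose error cost shrinks roughly as 1/t with duration t, in Spearman-Brown fashion, while measurement cost grows linearly in student time. The total cost then has an interior minimum.

```python
import math

def optimal_duration(c_error, c_time):
    """Hypothetical cost model: C(t) = c_error / t + c_time * t.
    The first term is the cost of measurement error (falling with a
    longer test), the second the cost of the student's time. Setting
    C'(t) = 0 gives t* = sqrt(c_error / c_time)."""
    return math.sqrt(c_error / c_time)

t_star = optimal_duration(c_error=4.0, c_time=1.0)
```

The point of such a model, as in the study, is that exam duration can in principle be chosen to minimize total cost rather than fixed by convention.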
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) are often from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth ability across attempts, leading to a complex scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Peer reviewed
Direct link
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not applicable scale responses and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
Peer reviewed
PDF on ERIC: Download full text
Baris Pekmezci, Fulya; Gulleroglu, H. Deniz – Eurasian Journal of Educational Research, 2019
Purpose: This study aims to investigate the orthogonality assumption, which restricts the use of Bifactor item response theory, under different conditions. Method: The study data were generated in accordance with the Bifactor model, simulated under two different models (Model 1 and Model 2).…
Descriptors: Item Response Theory, Accuracy, Item Analysis, Correlation
Peer reviewed
PDF on ERIC: Download full text
Damrongpanit, Suntonrapot – Universal Journal of Educational Research, 2019
The purposes of this study were to test the structural validity and the parameter invariance of the self-discipline measurement model for good student citizenship across models, using data from 1,047 complete questionnaires and reduced-length questionnaires constructed with a multiple matrix sampling technique. The sample size of this…
Descriptors: Factor Structure, Questionnaires, Test Length, Citizenship
Peer reviewed
PDF on ERIC: Download full text
Fu, Jianbin; Feng, Yuling – ETS Research Report Series, 2018
In this study, we propose aggregating test scores with unidimensional within-test structure and multidimensional across-test structure based on a 2-level, 1-factor model. In particular, we compare 6 score aggregation methods: average of standardized test raw scores (M1), regression factor score estimate of the 1-factor model based on the…
Descriptors: Comparative Analysis, Scores, Correlation, Standardized Tests
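Of the six aggregation methods compared above, the simplest (M1, the average of standardized test raw scores) can be sketched as follows; the function name and data are illustrative:

```python
import statistics

def aggregate_m1(score_lists):
    """M1-style aggregation: standardize each test's raw scores to
    z-scores, then average the z-scores for each examinee."""
    z = []
    for scores in score_lists:
        mu = statistics.mean(scores)
        sd = statistics.pstdev(scores)
        z.append([(s - mu) / sd for s in scores])
    n_examinees = len(score_lists[0])
    return [statistics.mean(zs[i] for zs in z) for i in range(n_examinees)]

# Two tests, two examinees: examinee 1 is below average on both tests.
composite = aggregate_m1([[0.0, 10.0], [5.0, 15.0]])
```

The factor-score alternatives (the 1-factor and 2-level model estimates) weight tests by their loadings instead of equally, which is the contrast the comparison turns on.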
Peer reviewed
Direct link
Lee, Soo; Bulut, Okan; Suh, Youngsuk – Educational and Psychological Measurement, 2017
A number of studies have found multiple indicators multiple causes (MIMIC) models to be an effective tool in detecting uniform differential item functioning (DIF) for individual items and item bundles. A recently developed MIMIC-interaction model is capable of detecting both uniform and nonuniform DIF in the unidimensional item response theory…
Descriptors: Test Bias, Test Items, Models, Item Response Theory