Showing 1 to 15 of 106 results
Uk Hyun Cho – ProQuest LLC, 2024
The present study investigates the influence of multidimensionality on linking and equating within a unidimensional IRT framework. Two hypothetical multidimensional scenarios are explored under a nonequivalent-group common-item equating design. The first scenario examines test forms designed to measure multiple constructs, while the second scenario examines a…
Descriptors: Item Response Theory, Classification, Correlation, Test Format
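A minimal sketch of one common linking step in this setting: a mean/sigma transformation of 2PL anchor-item parameters under a nonequivalent-group common-item design. The parameter values and the choice of mean/sigma linking are illustrative assumptions, not details taken from the dissertation.

```python
import numpy as np

# Hypothetical 2PL parameters for the common (anchor) items on two forms.
# a = discrimination, b = difficulty; the values are illustrative only.
a_old = np.array([1.10, 0.85, 1.40, 0.95])   # old-form scale
b_old = np.array([-0.50, 0.20, 0.80, -1.10])
a_new = np.array([1.00, 0.80, 1.30, 0.90])   # new-form scale (to be linked)
b_new = np.array([-0.30, 0.45, 1.05, -0.85])

# Mean/sigma linking: find A, B such that b_old ~ A * b_new + B.
A = b_old.std(ddof=1) / b_new.std(ddof=1)
B = b_old.mean() - A * b_new.mean()

# Transform the new-form parameters onto the old-form scale.
b_linked = A * b_new + B
a_linked = a_new / A

print(f"A = {A:.3f}, B = {B:.3f}")
print("linked difficulties:", np.round(b_linked, 3))
```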
Peer reviewed
Seyma Erbay Mermer – Pegem Journal of Education and Instruction, 2024
This study aims to compare item and student parameters of dichotomously scored multidimensional constructs estimated with unidimensional and multidimensional Item Response Theory (IRT) under different conditions of sample size, interdimensional correlation, and number of dimensions. This research, conducted with simulations, is of a basic…
Descriptors: Item Response Theory, Correlation, Error of Measurement, Comparative Analysis
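The simulation conditions named above (sample size, interdimensional correlation, number of dimensions) can be reproduced in outline by generating responses from a compensatory multidimensional 2PL model. The condition values and the loading structure below are assumptions chosen for illustration, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative simulation conditions (not the study's values).
n_persons, n_items, n_dims, rho = 1000, 20, 2, 0.5

# Correlated latent traits (the "interdimensional correlation" condition).
cov = np.full((n_dims, n_dims), rho) + (1 - rho) * np.eye(n_dims)
theta = rng.multivariate_normal(np.zeros(n_dims), cov, size=n_persons)

# Compensatory M2PL loadings: each item has one dominant dimension
# plus small cross-loadings.
a = np.full((n_items, n_dims), 0.2)
main_dim = rng.integers(0, n_dims, n_items)
a[np.arange(n_items), main_dim] = np.abs(rng.normal(1.2, 0.3, n_items))
d = rng.normal(0.0, 1.0, size=n_items)       # item intercepts

logits = theta @ a.T + d                     # compensatory combination of traits
responses = rng.binomial(1, 1 / (1 + np.exp(-logits)))
print(responses.shape, responses.mean().round(3))
```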
Peer reviewed
Xiaowen Liu – International Journal of Testing, 2024
Differential item functioning (DIF) often arises from multiple sources. Within the context of multidimensional item response theory, this study examined DIF items with varying secondary dimensions using three DIF methods: SIBTEST, Mantel-Haenszel, and logistic regression. The effect of the number of secondary dimensions on DIF detection rates…
Descriptors: Item Analysis, Test Items, Item Response Theory, Correlation
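Of the three DIF methods named, the Mantel-Haenszel procedure is the most compact to sketch: a common odds ratio across total-score strata, converted to the ETS delta scale. The simulated data and the choice of the total score as the matching variable below are illustrative assumptions, not the study's design.

```python
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """Mantel-Haenszel common odds ratio for one studied item.

    item  : 0/1 responses to the studied item
    group : 0 = reference, 1 = focal
    total : matching variable (e.g., total test score)
    """
    num = den = 0.0
    for score in np.unique(total):
        s = total == score
        n = s.sum()
        A = np.sum((group[s] == 0) & (item[s] == 1))   # reference, correct
        B = np.sum((group[s] == 0) & (item[s] == 0))   # reference, incorrect
        C = np.sum((group[s] == 1) & (item[s] == 1))   # focal, correct
        D = np.sum((group[s] == 1) & (item[s] == 0))   # focal, incorrect
        num += A * D / n
        den += B * C / n
    odds_ratio = num / den
    return odds_ratio, -2.35 * np.log(odds_ratio)      # ETS delta scale (MH D-DIF)

# Tiny illustration with simulated 2PL data and no DIF built in.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 2000)
theta = rng.normal(0, 1, 2000)
resp = rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - rng.normal(0, 1, 10)))))
or_mh, delta = mantel_haenszel_dif(resp[:, 0], group, resp.sum(axis=1))
print(round(or_mh, 3), round(delta, 3))
```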
Peer reviewed
Celen, Umit; Aybek, Eren Can – International Journal of Assessment Tools in Education, 2022
Item analysis is performed by developers as an integral part of the scale development process; items are excluded from the scale on the basis of item analysis prior to factor analysis. Existing item discrimination indices are calculated based on correlation, yet items with different response patterns are likely to have a similar item…
Descriptors: Likert Scales, Factor Analysis, Item Analysis, Correlation
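The correlation-based discrimination index the authors take as their point of departure is commonly computed as the corrected item-total correlation. A sketch under that assumption, with simulated 5-point Likert data standing in for a real scale:

```python
import numpy as np

def corrected_item_total(scores):
    """Corrected item-total correlations for a persons x items score matrix."""
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    out = []
    for j in range(scores.shape[1]):
        rest = total - scores[:, j]          # exclude the item itself
        out.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return np.array(out)

# Illustrative 5-point Likert responses to 6 items driven by one latent trait.
rng = np.random.default_rng(3)
latent = rng.normal(size=500)
items = np.clip(np.round(3 + latent[:, None] + rng.normal(0, 1, (500, 6))), 1, 5)
print(np.round(corrected_item_total(items), 3))
```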
Peer reviewed
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
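The first four indices listed are simple functions of the maximized log-likelihood, the number of estimated parameters, and the sample size. A sketch with hypothetical log-likelihoods for 1-, 2-, and 3-class mixture IRT solutions; the numbers are invented for illustration only.

```python
import numpy as np

def information_criteria(loglik, k, n):
    """AIC, AICc, BIC, and CAIC from a model's maximized log-likelihood.

    loglik : maximized log-likelihood
    k      : number of estimated parameters
    n      : sample size
    """
    aic = -2 * loglik + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    bic = -2 * loglik + k * np.log(n)
    caic = -2 * loglik + k * (np.log(n) + 1)
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}

# Hypothetical results; the model with the smallest index values is preferred.
for classes, (ll, k) in enumerate([(-10450.2, 40), (-10310.7, 81), (-10295.1, 122)], 1):
    ics = information_criteria(ll, k, n=1000)
    print(classes, {name: round(value, 1) for name, value in ics.items()})
```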
Peer reviewed
Polat, Murat; Turhan, Nihan S.; Toraman, Cetin – Pegem Journal of Education and Instruction, 2022
Testing English writing skills can be multi-dimensional; thus, the study aimed to compare students' writing scores calculated according to Classical Test Theory (CTT) and the Multi-Facet Rasch Model (MFRM). The research was carried out in 2019 with 100 university students studying at a foreign language preparatory class and four experienced…
Descriptors: Comparative Analysis, Test Theory, Item Response Theory, Student Evaluation
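For the Rasch side of the comparison, a dichotomous simplification of the many-facet model adds a rater-severity facet to the usual person-minus-item logit. The facet values below are assumptions for illustration; an operational MFRM for writing ratings would use a polytomous (rating scale or partial credit) formulation.

```python
import numpy as np

def mfrm_prob(theta, delta, rho):
    """P(success) under a dichotomous many-facet Rasch model:
    person ability theta, task difficulty delta, rater severity rho (all in logits)."""
    return 1 / (1 + np.exp(-(theta - delta - rho)))

# Hypothetical facet estimates: one examinee, one writing task, two raters.
theta, delta = 0.8, 0.2
for rho in (-0.5, 0.7):                    # lenient vs. severe rater
    print(f"rater severity {rho:+.1f}: P(success) = {mfrm_prob(theta, delta, rho):.3f}")
```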
Yoo Jeong Jang – ProQuest LLC, 2022
Despite the increasing demand for diagnostic information, observed subscores have often been reported to lack adequate psychometric qualities such as reliability, distinctiveness, and validity. Therefore, several statistical techniques based on CTT and IRT frameworks have been proposed to improve the quality of subscores. More recently, DCM has…
Descriptors: Classification, Accuracy, Item Response Theory, Correlation
Peer reviewed
Lúcio, Patrícia Silva; Vandekerckhove, Joachim; Polanczyk, Guilherme V.; Cogo-Moreira, Hugo – Journal of Psychoeducational Assessment, 2021
The present study compares the fit of two- and three-parameter logistic (2PL and 3PL) models of item response theory to the performance of preschool children on Raven's Colored Progressive Matrices. Raven's test is widely used for evaluating nonverbal intelligence (factor g). Studies comparing models with real data are scarce on the…
Descriptors: Guessing (Tests), Item Response Theory, Test Validity, Preschool Children
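The two models differ only in the lower asymptote (pseudo-guessing) parameter. A sketch of the two item response functions with illustrative parameter values, not values estimated from the Raven data:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """2PL item response function: discrimination a, difficulty b."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def irf_3pl(theta, a, b, c):
    """3PL adds a lower asymptote c (pseudo-guessing parameter)."""
    return c + (1 - c) * irf_2pl(theta, a, b)

theta = np.linspace(-3, 3, 7)
a, b, c = 1.2, 0.0, 0.20          # illustrative parameters
print(np.round(irf_2pl(theta, a, b), 3))
print(np.round(irf_3pl(theta, a, b, c), 3))   # never drops below c = 0.20
```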
Peer reviewed
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2024
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory
Peer reviewed
Paaßen, Benjamin; Dywel, Malwina; Fleckenstein, Melanie; Pinkwart, Niels – International Educational Data Mining Society, 2022
Item response theory (IRT) is a popular method to infer student abilities and item difficulties from observed test responses. However, IRT struggles with two challenges: How to map items to skills if multiple skills are present? And how to infer the ability of new students who have not been part of the training data? Inspired by recent advances…
Descriptors: Item Response Theory, Test Items, Item Analysis, Inferences
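The standard IRT step the paper builds on, inferring a new respondent's ability from already-calibrated item parameters, can be sketched as a grid-search maximum-likelihood estimate under a 2PL model. The item parameters below are invented, and the paper's own skill-mapping extension is not shown.

```python
import numpy as np

def theta_mle(responses, a, b, grid=np.linspace(-4, 4, 401)):
    """Grid-search ML estimate of ability for a new respondent,
    given calibrated 2PL discriminations a and difficulties b."""
    p = 1 / (1 + np.exp(-a * (grid[:, None] - b)))   # grid points x items
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Hypothetical calibrated item bank and one new response pattern.
a = np.array([1.0, 1.3, 0.8, 1.1, 0.9])
b = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])
print(theta_mle(np.array([1, 1, 1, 0, 0]), a, b))
```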
Peer reviewed
Stewart, John; Drury, Byron; Wells, James; Adair, Aaron; Henderson, Rachel; Ma, Yunfei; Perez-Lemonche, Ángel; Pritchard, David – Physical Review Physics Education Research, 2021
This study reports an analysis of the Force Concept Inventory (FCI) using item response curves (IRC)--the fraction of students selecting each response to an item as a function of their total score. Three large samples (N = 9606, 4360, and 1439) of calculus-based physics students were analyzed. These were drawn from three land-grant institutions…
Descriptors: Physics, Science Instruction, Scientific Concepts, Item Response Theory
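An item response curve in this sense is simply the proportion of examinees choosing each option at each total-score level. A sketch on made-up multiple-choice data (option "A" is assumed to be the keyed answer); the FCI data themselves are not reproduced here.

```python
import numpy as np

def item_response_curves(choices, total):
    """Fraction of students selecting each option, by total-score level.

    choices : 1-D array of selected option labels for one item
    total   : 1-D array of total scores for the same students
    """
    options = np.unique(choices)
    levels = np.unique(total)
    curves = {opt: np.array([(choices[total == s] == opt).mean() for s in levels])
              for opt in options}
    return levels, curves

# Made-up choices: the probability of picking the key rises with total score.
rng = np.random.default_rng(5)
total = rng.integers(0, 31, 600)
choices = np.where(rng.random(600) < total / 30, "A",
                   rng.choice(["B", "C", "D"], 600))
levels, curves = item_response_curves(choices, total)
print(levels[:5], {k: np.round(v[:5], 2) for k, v in curves.items()})
```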
Peer reviewed
Zhang, Ci; Xu, XiaoShu; Zhang, Yunfeng – Language Testing in Asia, 2023
This study presents the validation process of a listening test based on a communicative language test proposed by Bachman (Fundamental considerations in language testing, 1990). It was administered to third-grade high school students by the sixteen Korean Provincial Offices of Education for Curriculum and Evaluation in September 2012 to assess…
Descriptors: Language Tests, Second Language Learning, Second Language Instruction, Listening Comprehension Tests
Peer reviewed
Fu, Jianbin; Feng, Yuling – ETS Research Report Series, 2018
In this study, we propose aggregating test scores with unidimensional within-test structure and multidimensional across-test structure based on a 2-level, 1-factor model. In particular, we compare 6 score aggregation methods: average of standardized test raw scores (M1), regression factor score estimate of the 1-factor model based on the…
Descriptors: Comparative Analysis, Scores, Correlation, Standardized Tests
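Method M1, the average of standardized test raw scores, is straightforward to sketch. The three-test data below are simulated from a single common factor purely for illustration; the factor-score-based aggregation methods compared in the report are not shown.

```python
import numpy as np

def standardized_average(scores):
    """M1: average of z-standardized test raw scores (persons x tests)."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
    return z.mean(axis=1)

# Illustrative raw scores on three tests sharing one common factor g.
rng = np.random.default_rng(11)
g = rng.normal(size=400)
scores = np.column_stack([10 * (0.8 * g + 0.6 * rng.normal(size=400)) + 50,
                          5 * (0.7 * g + 0.7 * rng.normal(size=400)) + 20,
                          20 * (0.6 * g + 0.8 * rng.normal(size=400)) + 100])
m1 = standardized_average(scores)
print(np.round(np.corrcoef(m1, g)[0, 1], 3))   # how well M1 recovers the common factor
```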
Peer reviewed
Malec, Wojciech; Krzeminska-Adamek, Malgorzata – Practical Assessment, Research & Evaluation, 2020
The main objective of the article is to compare several methods of evaluating multiple-choice options through classical item analysis. The methods subjected to examination include the tabulation of choice distribution, the interpretation of trace lines, the point-biserial correlation, the categorical analysis of trace lines, and the investigation…
Descriptors: Comparative Analysis, Evaluation Methods, Multiple Choice Tests, Item Analysis
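One of the methods listed, the point-biserial correlation applied at the option level, can be sketched directly: correlate an indicator for choosing each option with the total score, expecting a positive value for the key and negative values for working distractors. The choice data below are simulated for illustration, with option "C" assumed to be the key.

```python
import numpy as np

def option_point_biserials(choices, total):
    """Point-biserial correlation between choosing each option and total score."""
    return {opt: np.corrcoef((choices == opt).astype(float), total)[0, 1]
            for opt in np.unique(choices)}

# Simulated data: higher-scoring examinees pick the key "C" more often.
rng = np.random.default_rng(2)
total = rng.integers(5, 41, 500)
pick_key = rng.random(500) < (total - 5) / 35
choices = np.where(pick_key, "C", rng.choice(["A", "B", "D"], 500))
print({k: round(v, 3) for k, v in option_point_biserials(choices, total).items()})
```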
Peer reviewed
Bazaldua, Diego A. Luna; Lee, Young-Sun; Keller, Bryan; Fellers, Lauren – Asia Pacific Education Review, 2017
The performance of various classical test theory (CTT) item discrimination estimators has been compared in the literature using both empirical and simulated data, resulting in mixed results regarding the preference of some discrimination estimators over others. This study analyzes the performance of various item discrimination estimators in CTT:…
Descriptors: Test Items, Monte Carlo Methods, Item Response Theory, Correlation
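A single Monte Carlo replication of the kind of comparison described: data are generated from a 2PL model with known discriminations, and two classical estimators are computed for each item. Which estimators the study actually includes is not visible in the truncated abstract, so the point-biserial correlation and the upper-lower 27% index used here are assumptions.

```python
import numpy as np

def discrimination_indices(resp, total):
    """Two classical discrimination estimators per item:
    point-biserial item-total correlation and the upper-lower 27% index."""
    pb = np.array([np.corrcoef(resp[:, j], total)[0, 1] for j in range(resp.shape[1])])
    upper = total >= np.quantile(total, 0.73)
    lower = total <= np.quantile(total, 0.27)
    ul = resp[upper].mean(axis=0) - resp[lower].mean(axis=0)
    return pb, ul

# One replication: 8 items with known, increasing discriminations.
rng = np.random.default_rng(9)
a_true = np.repeat([0.5, 1.0, 1.5, 2.0], 2)
theta = rng.normal(size=1000)
p = 1 / (1 + np.exp(-a_true * theta[:, None]))    # difficulties fixed at 0
resp = rng.binomial(1, p)
pb, ul = discrimination_indices(resp, resp.sum(axis=1))
print(np.round(pb, 3))
print(np.round(ul, 3))
```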