Showing all 7 results
Peer reviewed
Direct link
Carmen Köhler; Lale Khorramdel; Artur Pokropek; Johannes Hartig – Journal of Educational Measurement, 2024
For assessment scales applied to different groups (e.g., students from different states; patients in different countries), multigroup differential item functioning (MG-DIF) needs to be evaluated in order to ensure that respondents with the same trait level but from different groups have equal response probabilities on a particular item. The…
Descriptors: Measures (Individuals), Test Bias, Models, Item Response Theory
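For orientation only (the abstract above gives no formulas), the no-DIF condition it describes is conventionally stated as the requirement that an item's response function be identical across groups once the trait level is fixed. A minimal sketch, assuming a 2PL IRT parameterization with group-invariant item parameters:

\[
P(X_{i}=1 \mid \theta, g) \;=\; P(X_{i}=1 \mid \theta) \;=\; \frac{\exp\{a_{i}(\theta - b_{i})\}}{1 + \exp\{a_{i}(\theta - b_{i})\}} \qquad \text{for every group } g,
\]

i.e., the discrimination \(a_i\) and difficulty \(b_i\) of item \(i\) do not depend on group membership; MG-DIF evaluation checks this equality simultaneously across many groups rather than for a single reference/focal pair.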
Peer reviewed
PDF on ERIC: Download full text
Ahmad Suryadi; Sahal Fawaiz; Eka Kurniati; Ahmad Swandi – Journal of Pedagogical Research, 2024
The waning interest of students in science has become a global concern. The purpose of this research was to translate, adapt, and validate the My Attitude toward Science [MATS] questionnaire instrument, which was used to measure students' attitudes toward science in the Indonesian context. We also investigated the items that contributed to gender and…
Descriptors: Foreign Countries, Science Education, Achievement Tests, Secondary School Students
Peer reviewed
Direct link
Kaplan, David; Su, Dan – Large-scale Assessments in Education, 2018
Background: This paper extends a recent study by Kaplan and Su ("J Educ Behav Stat" 41: 51-80, 2016) examining the problem of matrix sampling of context questionnaire scales with respect to the generation of plausible values of cognitive outcomes in large-scale assessments. Methods: Following Weirich et al. ("Nested multiple…
Descriptors: Questionnaires, Measurement, Measurement Techniques, Evaluation Methods
Peer reviewed
Direct link
Rutkowski, Leslie; Rutkowski, David; Zhou, Yan – International Journal of Testing, 2016
Using an empirically-based simulation study, we show that typically used methods of choosing an item calibration sample have significant impacts on achievement bias and system rankings. We examine whether recent PISA accommodations, especially for lower performing participants, can mitigate some of this bias. Our findings indicate that standard…
Descriptors: Simulation, International Programs, Adolescents, Student Evaluation
Peer reviewed
Direct link
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
Peer reviewed
PDF on ERIC: Download full text
Fensham, Peter J.; Cumming, J. Joy – Education Sciences, 2013
Assessment of learning plays a dominant role in formal education in the forms of determining features of curriculum that are emphasized, pedagogic methods that teachers use with their students, and parents' and employers' understanding of how well students have performed. A common perception is that fair assessment applies the same mode of…
Descriptors: Student Evaluation, Evaluation Methods, Science Achievement, Science Tests
Peer reviewed
Direct link
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
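For context (the information similarity index itself is defined in the article and not reproduced here), the item information function being compared is a standard Rasch-model quantity. A minimal sketch, assuming the Rasch model with item difficulty \(b_i\):

\[
P_{i}(\theta) = \frac{\exp(\theta - b_{i})}{1 + \exp(\theta - b_{i})}, \qquad I_{i}(\theta) = P_{i}(\theta)\,\bigl(1 - P_{i}(\theta)\bigr),
\]

so an item is most informative near \(\theta = b_i\); a DIF index built on this idea compares how closely the reference- and focal-group information curves for the same item overlap.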