Showing 1 to 15 of 26 results
Peer reviewed
Direct link
Robitzsch, Alexander; Lüdtke, Oliver – Journal of Educational and Behavioral Statistics, 2022
One of the primary goals of international large-scale assessments in education is the comparison of country means in student achievement. This article introduces a framework for discussing differential item functioning (DIF) for such mean comparisons. We compare three different linking methods: concurrent scaling based on full invariance,…
Descriptors: Test Bias, International Assessment, Scaling, Comparative Analysis
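As a rough illustration of how a linking step feeds into country mean comparisons of the kind discussed above, the following is a minimal numpy sketch of mean-mean linking under the Rasch model, computed with and without dropping an item flagged for DIF. The item difficulties, DIF cutoff, and national mean are hypothetical and are not taken from the article.

```python
import numpy as np

# Hypothetical Rasch difficulties for the same items, estimated once on the
# international (reference) scale and once on a single country's own scale.
b_ref     = np.array([-1.2, -0.5,  0.0,  0.4,  1.1,  1.6])
b_country = np.array([-1.7, -1.0, -0.5, -0.1,  1.6,  1.1])  # fifth item shows DIF

# Mean-mean linking: the constant that places the national scale on the
# reference scale is the mean difference of the common-item difficulties.
diff = b_ref - b_country
shift_all = diff.mean()

# Crude DIF screen for illustration: drop items whose difficulty difference
# departs from the average shift by more than 0.5 logits, then re-link.
keep = np.abs(diff - shift_all) <= 0.5
shift_purified = diff[keep].mean()

# The linking constant is added to the national-scale country mean, so DIF
# in the common items propagates directly into the mean comparison.
national_mean = 0.30  # hypothetical country mean on its own scale
print(national_mean + shift_all, national_mean + shift_purified)
```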
Peer reviewed
PDF on ERIC Download full text
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, Lord's chi-square and Raju's unsigned area methods with the 3PL model were applied, both with and without item purification. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
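For context on the Raju unsigned-area index referenced above, here is a minimal, self-contained sketch that numerically integrates the gap between two 3PL item characteristic curves. The item parameters are hypothetical, and the sketch is not code from the study.

```python
import numpy as np

def icc_3pl(theta, a, b, c, D=1.7):
    """Item characteristic curve under the three-parameter logistic model."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def raju_unsigned_area(params_ref, params_focal, lo=-4.0, hi=4.0, n=2001):
    """Approximate Raju's unsigned area: the integral of the absolute
    difference between the reference- and focal-group ICCs."""
    theta = np.linspace(lo, hi, n)
    gap = np.abs(icc_3pl(theta, *params_ref) - icc_3pl(theta, *params_focal))
    dx = theta[1] - theta[0]
    return dx * (gap.sum() - 0.5 * (gap[0] + gap[-1]))  # trapezoid rule

# Hypothetical (a, b, c) estimates for the same item in two booklet positions.
area = raju_unsigned_area((1.1, 0.20, 0.18), (0.9, 0.55, 0.18))
print(round(area, 3))
```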
Peer reviewed
Direct link
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
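As a small illustration of the kind of engagement screening discussed in this line of work (not the authors' procedure; the data and the 5-second threshold are made up), rapid responses can be flagged with a response-time cutoff and item statistics compared with and without them:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical data for one item: response times in seconds, and correctness
# generated so that rapid responses behave like guesses.
times = rng.gamma(shape=2.0, scale=12.0, size=n)
rapid = times < 5.0                      # fixed-threshold engagement screen
p_correct = np.where(rapid, 0.25, 0.70)  # guesses vs. effortful responses
responses = rng.binomial(1, p_correct)

print(f"proportion flagged as disengaged: {rapid.mean():.3f}")
print(f"item p-value, all responses:      {responses.mean():.3f}")
print(f"item p-value, engaged only:       {responses[~rapid].mean():.3f}")
```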
Peer reviewed
PDF on ERIC Download full text
Uyar, Seyma – Eurasian Journal of Educational Research, 2020
Purpose: This study aimed to compare the performance of the latent class differential item functioning (DIF) approach and IRT-based DIF methods that use manifest grouping. The study was also intended to draw attention to latent class DIF studies in Turkey. Specifically, DIF was examined in the PISA 2015 science data set. Research…
Descriptors: Item Response Theory, Foreign Countries, Cross Cultural Studies, Item Analysis
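To make the manifest-grouping side concrete, here is a minimal sketch of a widely used manifest-group DIF statistic (Mantel-Haenszel, which is not one of the methods named in the entry above); the examinee data are simulated and purely illustrative.

```python
import numpy as np

def mantel_haenszel(item, total, group):
    """Mantel-Haenszel common odds ratio for one dichotomous item, stratified
    by the matching (total) score; group is 0 = reference, 1 = focal."""
    num = den = 0.0
    for k in np.unique(total):
        s = total == k
        A = np.sum(s & (group == 0) & (item == 1))   # reference, correct
        B = np.sum(s & (group == 0) & (item == 0))   # reference, incorrect
        C = np.sum(s & (group == 1) & (item == 1))   # focal, correct
        D = np.sum(s & (group == 1) & (item == 0))   # focal, incorrect
        N = A + B + C + D
        if N > 0:
            num += A * D / N
            den += B * C / N
    alpha = num / den
    return alpha, -2.35 * np.log(alpha)              # ETS delta-MH scale

# Hypothetical data: 1,000 examinees, one studied item, total score as matcher.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
total = rng.integers(0, 11, 1000)
item = rng.binomial(1, np.clip(0.1 + 0.07 * total - 0.10 * group, 0.05, 0.95))
print(mantel_haenszel(item, total, group))
```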
Peer reviewed
Direct link
Sachse, Karoline A.; Haag, Nicole – Applied Measurement in Education, 2017
Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…
Descriptors: Error of Measurement, Test Bias, International Assessment, Computation
Peer reviewed
Direct link
Ercikan, Kadriye; Guo, Hongwen; He, Qiwei – Educational Assessment, 2020
Comparing groups is one of the key uses of large-scale assessment results, which are used to gain insights that inform policy and practice and to examine the comparability of scores and score meaning. Such comparisons typically focus on examinees' final answers and responses to test questions, ignoring response process differences groups may engage…
Descriptors: Data Use, Responses, Comparative Analysis, Test Bias
Peer reviewed
PDF on ERIC Download full text
Dogan, Nuri; Hambleton, Ronald K.; Yurtcu, Meltem; Yavuz, Sinan – Cypriot Journal of Educational Sciences, 2018
Validity is one of the psychometric properties of achievement tests. One way to examine validity is through item bias studies, which are based on differential item functioning (DIF) analyses and field experts' opinions. In this study, field experts were asked to estimate the DIF levels of the items to compare the estimations…
Descriptors: Test Bias, Comparative Analysis, Predictor Variables, Statistical Analysis
Peer reviewed
Direct link
Ivanova, Alina; Kardanova, Elena; Merrell, Christine; Tymms, Peter; Hawker, David – Assessment in Education: Principles, Policy & Practice, 2018
Is it possible to compare the results in assessments of mathematics across countries with different curricula, traditions and age of starting school? As part of the iPIPS project, a Russian version of the iPIPS baseline assessment was developed and trial data were available from about 300 Russian children at the start and end of their first year…
Descriptors: Mathematics Instruction, Foreign Countries, Mathematics Tests, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Ozdemir, Burhanettin – International Journal of Progressive Education, 2017
The purpose of this study is to equate Trends in International Mathematics and Science Study (TIMSS) mathematics subtest scores obtained from TIMSS 2011 to scores obtained from the TIMSS 2007 form with different nonlinear observed-score equating methods under the Non-Equivalent Anchor Test (NEAT) design, where common items are used to link two or more test…
Descriptors: Achievement Tests, Elementary Secondary Education, Foreign Countries, International Assessment
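As a simplified illustration of observed-score equipercentile equating (a single-group shortcut that omits the anchor-test machinery an actual NEAT design requires; the score data below are simulated, not TIMSS scores):

```python
import numpy as np

def equipercentile(x_scores, y_scores, x_points):
    """Map form-X scores to the form-Y scale by matching percentile ranks."""
    pr = np.array([np.mean(x_scores <= x) for x in x_points])  # percentile ranks on X
    return np.quantile(y_scores, pr)                           # Y scores at those ranks

rng = np.random.default_rng(7)
form_x = rng.binomial(40, 0.55, size=2000)   # simulated raw scores, form X
form_y = rng.binomial(40, 0.60, size=2000)   # simulated raw scores, form Y (easier)

for x in (15, 20, 25, 30):
    print(x, "->", equipercentile(form_x, form_y, [x])[0])
```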
Peer reviewed
Direct link
Jones, Ian; Wheadon, Chris; Humphries, Sara; Inglis, Matthew – British Educational Research Journal, 2016
Advanced-level (A-level) mathematics is a high-profile qualification taken by many school leavers in England, Wales, Northern Ireland and around the world as preparation for university study. Concern has been expressed in these countries that standards in A-level mathematics have declined over time, and that school leavers enter university or the…
Descriptors: Foreign Countries, College Mathematics, Secondary School Mathematics, Academic Standards
Peer reviewed
Direct link
Arikan, Serkan; van de Vijver, Fons J. R.; Yagmur, Kutlay – Educational Assessment, Evaluation and Accountability, 2017
Lower reading and mathematics performance of Turkish immigrant students as compared to mainstream European students could reflect differential learning outcomes, differential socioeconomic backgrounds of the groups, differential mainstream language proficiency, and/or test bias. Using PISA reading and mathematics scores of these groups, we…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Rawls, Anita; Zhang, Xiuyuan; Hendrickson, Amy – College Board, 2016
The classification of test-takers into ethnic and racial groups ensures that individuals and groups identified in Titles VI and VII of the Civil Rights Act of 1964 and the 14th Amendment to the Constitution of the United States are protected from adverse treatment (Camilli, 2006). The United States Office of Management and Budget (OMB) suggests that…
Descriptors: Racial Identification, Ethnic Groups, Multiracial Persons, Test Bias
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that more accurately, compared to previous assessments, measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Peer reviewed
Direct link
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
Zoanetti, Nathan; Les, Magdalena; Leigh-Lancaster, David – Mathematics Education Research Group of Australasia, 2014
From 2011 to 2013 the VCAA conducted a trial aligning the use of computers in curriculum, pedagogy and assessment, culminating in a group of 62 volunteer students sitting their end of Year 12 technology-active Mathematical Methods (CAS) Examination 2 as a computer-based examination. This paper reports on statistical modelling undertaken to compare the…
Descriptors: Computer Assisted Testing, Comparative Analysis, Mathematical Concepts, Mathematics Tests