Showing 1 to 15 of 63 results
Peer reviewed
Direct link
Marjolein Muskens; Willem E. Frankenhuis; Lex Borghans – npj Science of Learning, 2024
In many countries, standardized math tests are important for achieving academic success. Here, we examine whether the content of items, the story that frames a mathematical question, biases the performance of low-SES students. In a large-scale cohort study of the Trends in International Mathematics and Science Study (TIMSS)--including data from 58…
Descriptors: Mathematics Tests, Standardized Tests, Test Items, Low Income Students
Peer reviewed
Direct link
Yi-Hsin Chen – Applied Measurement in Education, 2024
This study aims to apply the differential item functioning (DIF) technique with the deterministic inputs, noisy "and" gate (DINA) model to validate the mathematics construct and diagnostic attribute profiles across American and Singaporean students. Even with the same ability level, every single item is expected to show uniform DIF…
Descriptors: Foreign Countries, Achievement Tests, Elementary Secondary Education, International Assessment
Peer reviewed
Direct link
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large-scale international assessments depend on the invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Peer reviewed
Direct link
Witmer, Sara E.; Roschmann, Sarina – Measurement and Evaluation in Counseling and Development, 2020
It is critical to examine whether test accommodations function as intended in removing construct-irrelevant variance. The measurement comparability of a math test for students with emotional impairments and those without disabilities was examined. Results indicated the presence of limited differential item functioning (DIF) regardless of…
Descriptors: Testing Accommodations, Mathematics Tests, Emotional Disturbances, Students with Disabilities
Peer reviewed
Direct link
Witmer, Sara E.; Roschmann, Sarina – Education and Training in Autism and Developmental Disabilities, 2020
Although it is critical for students with autism to be included in large-scale assessment and accountability systems, it is not clear how to best measure their underlying academic skills and knowledge. Additional empirically-supported guidance is necessary to assist school teams that need to make decisions about how to best include students with…
Descriptors: Testing Accommodations, Autism, Pervasive Developmental Disorders, Students with Disabilities
Peer reviewed
PDF on ERIC Download full text
Paladino, Margaret – Journal for Leadership and Instruction, 2020
The opt-out movement, a grassroots coalition of opposition to high-stakes tests that are used to sort students, evaluate teachers, and rank schools, has its largest participation on Long Island, New York, where approximately 50% of the eligible students in grades three to eight opted out of the English Language Arts (ELA) and Mathematics tests in…
Descriptors: High Stakes Tests, Parent Attitudes, Racial Differences, Ethnicity
McLoud, Rachael – ProQuest LLC, 2019
An increasing number of parents are opting their children out of high-stakes tests. Accountability systems in education have used students' test scores to measure student learning, teacher effectiveness, and school district performance. Students who are opted out of high-stakes tests are not being evaluated by the state tests, making their level of…
Descriptors: Evaluation, High Stakes Tests, Parent Attitudes, Decision Making
Peer reviewed
Direct link
Chen, Yi-Jui Iva; Wilson, Mark; Irey, Robin C.; Requa, Mary K. – Language Testing, 2020
Orthographic processing -- the ability to perceive, access, differentiate, and manipulate orthographic knowledge -- is essential when learning to recognize words. Despite its critical importance in literacy acquisition, the field lacks a tool to assess this essential cognitive ability. The goal of this study was to design a computer-based…
Descriptors: Orthographic Symbols, Spelling, Word Recognition, Reading Skills
Peer reviewed
Direct link
Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2017
This study defines subpopulation item parameter drift (SIPD) as a change in item parameters over time that is dependent on subpopulations of examinees, and hypothesizes that the presence of SIPD in anchor items is associated with bias and/or lack of invariance in three psychometric outcomes. Results show that SIPD in anchor items is associated…
Descriptors: Psychometrics, Test Items, Item Response Theory, Hypothesis Testing
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
Peer reviewed
PDF on ERIC Download full text
Carvajal-Espinoza, Jorge; Welch, Greg W. – Online Submission, 2016
When tests are translated into one or more languages, the question of the equivalence of items across language forms arises. This equivalence can be assessed at the scale level by means of a multiple group confirmatory factor analysis (CFA) in the context of structural equation modeling. This study examined the measurement equivalence of a Spanish…
Descriptors: Translation, Spanish, English, Mathematics Tests
Peer reviewed
Direct link
Oliveri, María Elena; Ercikan, Kadriye; Zumbo, Bruno D.; Lawless, René – International Journal of Testing, 2014
In this study, we contrast results from two differential item functioning (DIF) approaches (manifest and latent class) by the number of items and sources of items identified as DIF using data from an international reading assessment. The latter approach yielded three latent classes, presenting evidence of heterogeneity in examinee response…
Descriptors: Test Bias, Comparative Analysis, Reading Tests, Effect Size
Peer reviewed
Direct link
Li, Hongli; Qin, Qi; Lei, Pui-Wa – Educational Assessment, 2017
In recent years, students' test scores have been used to evaluate teachers' performance. The assumption underlying this practice is that students' test performance reflects teachers' instruction. However, this assumption is generally not empirically tested. In this study, we examine the effect of teachers' instruction on test performance at the…
Descriptors: Achievement Tests, Foreign Countries, Elementary Secondary Education, Mathematics Achievement
Peer reviewed
Direct link
Finch, W. Holmes; Hernández Finch, Maria E.; French, Brian F. – International Journal of Testing, 2016
Differential item functioning (DIF) assessment is key in score validation. When DIF is present, scores may not accurately reflect the construct of interest for some groups of examinees, leading to incorrect conclusions from the scores. Given rising immigration and the increased reliance of educational policymakers on cross-national assessments…
Descriptors: Test Bias, Scores, Native Language, Language Usage
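Several entries above rely on DIF screening as their core method. As a minimal, hypothetical sketch, not drawn from any of the studies listed here, the widely used logistic-regression DIF screen (Swaminathan and Rogers) for one dichotomous item can be run on simulated data as follows; the variable names, the simulated responses, and the choice of matching variable are illustrative assumptions only.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                 # 0 = reference group, 1 = focal group
ability = rng.normal(0.0, 1.0, n)
matching = ability + rng.normal(0.0, 0.5, n)  # matching variable, e.g. a rest score
# simulate one item with uniform DIF: the focal group finds it 0.4 logits harder
p = 1.0 / (1.0 + np.exp(-(ability - 0.4 * group)))
item = rng.binomial(1, p)

def fit(*cols):
    # logistic regression of the item response on the supplied predictors
    X = sm.add_constant(np.column_stack(cols))
    return sm.Logit(item, X).fit(disp=0)

m0 = fit(matching)                            # matching variable only
m1 = fit(matching, group)                     # adds a uniform DIF term
m2 = fit(matching, group, matching * group)   # adds a nonuniform DIF term

# likelihood-ratio tests: m1 vs m0 screens for uniform DIF,
# m2 vs m1 screens for nonuniform DIF (each roughly chi-square with 1 df)
print("uniform DIF LR   :", round(2 * (m1.llf - m0.llf), 2))
print("nonuniform DIF LR:", round(2 * (m2.llf - m1.llf), 2))

In practice the matching variable would be a rest score or an IRT ability estimate, and items flagged by such a screen would then be inspected for content-related sources of bias, which is the substantive question the studies in this listing pursue.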
Peer reviewed
Direct link
Choi, Youn-Jeng; Alexeev, Natalia; Cohen, Allan S. – International Journal of Testing, 2015
The purpose of this study was to explore what may be contributing to differences in performance in mathematics on the Trends in International Mathematics and Science Study 2007. This was done by using a mixture item response theory modeling approach to first detect latent classes in the data and then to examine differences in performance on items…
Descriptors: Test Bias, Mathematics Achievement, Mathematics Tests, Item Response Theory