Showing 1 to 15 of 49 results
Peer reviewed
PDF on ERIC
Russell, Michael; Kaplan, Larry – Practical Assessment, Research & Evaluation, 2021
Differential Item Functioning (DIF) analysis is commonly employed to examine measurement bias in test scores. Current approaches to DIF compare item functioning separately for selected demographic identities such as gender, racial stratification, and economic status. Examining potential item bias one identity at a time fails to recognize and capture the intersecting configurations…
Descriptors: Test Bias, Test Items, Demography, Identification
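To make the intersectional framing concrete, here is a minimal sketch (hypothetical variable names and simulated data, not the authors' code) of a standard Mantel-Haenszel DIF check in which the focal group is defined by the intersection of two demographic labels rather than by a single one:

```python
# Minimal Mantel-Haenszel DIF sketch with an intersectionally defined
# focal group. All names and data are hypothetical/simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 4000
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], n),
    "income": rng.choice(["low", "high"], n),
    "score": rng.integers(0, 21, n),  # matching variable (total test score)
})
# Intersectional focal group; everyone else serves as the reference group.
df["focal"] = (df["gender"] == "F") & (df["income"] == "low")
# Simulate an item that is slightly harder for the focal group (true DIF).
logit = (df["score"] - 10) / 3.0 - 0.4 * df["focal"]
df["correct"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Accumulate Mantel-Haenszel quantities over the 2x2 table in each stratum.
num = den = a = ea = va = 0.0
for _, s in df.groupby("score"):
    A = (s["focal"] & s["correct"]).sum()    # focal, correct
    B = (s["focal"] & ~s["correct"]).sum()   # focal, incorrect
    C = (~s["focal"] & s["correct"]).sum()   # reference, correct
    D = (~s["focal"] & ~s["correct"]).sum()  # reference, incorrect
    N = A + B + C + D
    if N < 2:
        continue
    num += A * D / N
    den += B * C / N
    a += A
    ea += (A + B) * (A + C) / N                                      # E[A]
    va += (A + B) * (C + D) * (A + C) * (B + D) / (N * N * (N - 1))  # Var[A]

or_mh = num / den                       # common odds ratio (1.0 = no DIF)
chi_sq = (abs(a - ea) - 0.5) ** 2 / va  # MH chi-square, 1 df
delta = -2.35 * np.log(or_mh)           # ETS delta (sign depends on orientation)
print(f"MH odds ratio = {or_mh:.2f}, chi-square = {chi_sq:.2f}, delta = {delta:.2f}")
```

The same loop with "focal" defined by a single label (e.g., gender alone) reproduces the conventional one-at-a-time comparison the abstract critiques.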
Peer reviewed
Direct link
Russell, Michael; Szendey, Olivia; Kaplan, Larry – Educational Assessment, 2021
Differential Item Functioning (DIF) analysis is commonly employed to examine potential bias produced by a test item. Since its introduction, DIF analyses have focused on potential bias related to broad categories of oppression, including gender, racial stratification, economic class, and ableness. More recently, efforts to examine the effects of…
Descriptors: Test Bias, Achievement Tests, Individual Characteristics, Disadvantaged
Peer reviewed
Direct link
Witmer, Sara E.; Roschmann, Sarina – Measurement and Evaluation in Counseling and Development, 2020
It is critical to examine whether test accommodations function as intended in removing construct-irrelevant variance. The measurement comparability of a math test for students with emotional impairments and those without disabilities was examined. Results indicated the presence of limited differential item functioning (DIF) regardless of…
Descriptors: Testing Accommodations, Mathematics Tests, Emotional Disturbances, Students with Disabilities
Peer reviewed
Direct link
Russell, Michael; Szendey, Olivia; Li, Zhushan – Educational Assessment, 2022
Recent research provides evidence that an intersectional approach to defining reference and focal groups results in a higher percentage of comparisons flagged for potential DIF. The study presented here examined the generalizability of this pattern across methods for examining DIF. While the level of DIF detection differed among the four methods…
Descriptors: Comparative Analysis, Item Analysis, Test Items, Test Construction
Peer reviewed
Direct link
Rohm, Theresa; Carstensen, Claus H.; Fischer, Luise; Gnambs, Timo – Large-scale Assessments in Education, 2021
Background: After elementary school, students in Germany are separated into different school tracks (i.e., school types) with the aim of creating homogeneous student groups in secondary school. Consequently, the development of students' reading achievement diverges across school types. Findings on this achievement gap have been criticized as…
Descriptors: Achievement Gap, Reading Achievement, Test Bias, Error of Measurement
Peer reviewed
PDF on ERIC
Paladino, Margaret – Journal for Leadership and Instruction, 2020
The opt-out movement, a grassroots coalition of opposition to high-stakes tests that are used to sort students, evaluate teachers, and rank schools, has seen its largest participation on Long Island, New York, where approximately 50% of the eligible students in grades three to eight opted out of the English Language Arts (ELA) and Mathematics tests in…
Descriptors: High Stakes Tests, Parent Attitudes, Racial Differences, Ethnicity
Peer reviewed
PDF on ERIC
Özkan, Yesim Özer; Güvendir, Meltem Acar – Journal of Pedagogical Research, 2021
Large-scale assessment is conducted at different class levels for various purposes, such as identifying student success in education, observing the impacts of educational reforms on student achievement, and carrying out assessment, selection, and placement. It is expected that these tests and the items used in them do not display different traits with…
Descriptors: Foreign Countries, Test Bias, Student Evaluation, Test Items
McLoud, Rachael – ProQuest LLC, 2019
An increasing number of parents are opting their children out of high-stakes tests. Accountability systems in education have used students' test scores to measure student learning, teacher effectiveness, and school district performance. Students who are opted out of high-stakes tests are not being evaluated by the state tests, making their level of…
Descriptors: Evaluation, High Stakes Tests, Parent Attitudes, Decision Making
Peer reviewed
Direct link
Chen, Yi-Jui Iva; Wilson, Mark; Irey, Robin C.; Requa, Mary K. – Language Testing, 2020
Orthographic processing -- the ability to perceive, access, differentiate, and manipulate orthographic knowledge -- is essential when learning to recognize words. Despite its critical importance in literacy acquisition, the field lacks a tool to assess this essential cognitive ability. The goal of this study was to design a computer-based…
Descriptors: Orthographic Symbols, Spelling, Word Recognition, Reading Skills
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that, compared to previous assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Peer reviewed
Direct link
Goldhaber, Dan; Chaplin, Duncan Dunbar – Journal of Research on Educational Effectiveness, 2015
In an influential paper, Jesse Rothstein (2010) shows that standard value-added models (VAMs) suggest implausible and large future teacher effects on past student achievement. This is the basis of a falsification test that "appears" to indicate bias in typical VAM estimates of teacher contributions to student learning on standardized…
Descriptors: Teacher Evaluation, Teacher Effectiveness, Teacher Influence, Models
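As a rough illustration of the falsification logic (a hypothetical simulation, not the Rothstein or Goldhaber-Chaplin analysis): when students are sorted to future teachers on past performance, regressing past achievement gains on future-teacher indicators produces a spuriously significant joint effect; under random assignment it does not.

```python
# Sketch of a Rothstein-type falsification check (hypothetical simulation):
# a significant "effect" of next year's teacher on past gains signals
# sorting bias in standard value-added model (VAM) estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 3000
past_gain = rng.normal(0.0, 1.0, n)
# Tracked assignment: students with higher past gains tend to get teacher "B".
tracked = np.where(past_gain + rng.normal(0.0, 1.0, n) > 0, "B", "A")
random_asg = rng.choice(["A", "B"], n)

for label, asg in [("tracked", tracked), ("random", random_asg)]:
    d = pd.DataFrame({"past_gain": past_gain, "future_teacher": asg})
    fit = smf.ols("past_gain ~ C(future_teacher)", d).fit()
    # Small p-value => future teacher "predicts" past gains => bias suspected.
    print(f"{label}: joint F p-value = {fit.f_pvalue:.4f}")
```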
Peer reviewed
Direct link
French, Brian F.; Gotch, Chad M. – Journal of Psychoeducational Assessment, 2013
The Brigance Comprehensive Inventory of Basic Skills-II (CIBS-II) is a diagnostic battery intended for children in grades 1 through 6. The aim of this study was to test for item invariance, or differential item functioning (DIF), of the CIBS-II across sex in the standardization sample through the use of item response theory DIF detection…
Descriptors: Gender Differences, Elementary School Students, Test Bias, Item Response Theory
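For a runnable point of reference, the sketch below uses the common logistic-regression DIF screen for uniform and nonuniform DIF across sex, rather than the IRT-based detection methods the study applied; the data and variable names are simulated and hypothetical.

```python
# Logistic-regression DIF screen (Swaminathan-Rogers style), a lighter-weight
# stand-in for IRT-based DIF detection. Simulated/hypothetical data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({"score": rng.integers(0, 21, n),   # matching total score
                   "male": rng.integers(0, 2, n)})    # group indicator
# Simulate an item with uniform DIF by sex.
logit = (df["score"] - 10) / 3.0 - 0.3 * df["male"]
df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

base = smf.logit("correct ~ score", df).fit(disp=0)          # ability only
unif = smf.logit("correct ~ score + male", df).fit(disp=0)   # + group
nonu = smf.logit("correct ~ score * male", df).fit(disp=0)   # + interaction

lr_u = 2 * (unif.llf - base.llf)   # likelihood-ratio test for uniform DIF
lr_n = 2 * (nonu.llf - unif.llf)   # likelihood-ratio test for nonuniform DIF
print(f"uniform DIF:    LR = {lr_u:.2f}, p = {chi2.sf(lr_u, 1):.4f}")
print(f"nonuniform DIF: LR = {lr_n:.2f}, p = {chi2.sf(lr_n, 1):.4f}")
```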
Peer reviewed
PDF on ERIC
Kachchaf, Rachel; Noble, Tracy; Rosebery, Ann; Wang, Yang; Warren, Beth; O'Connor, Mary Catherine – Grantee Submission, 2014
Most research on linguistic features of test items negatively impacting English language learners' (ELLs') performance has focused on lexical and syntactic features, rather than discourse features that operate at the level of the whole item. This mixed-methods study identified two discourse features in 162 multiple-choice items on a standardized…
Descriptors: English Language Learners, Science Tests, Test Items, Discourse Analysis
Partnership for Assessment of Readiness for College and Careers, 2018
The purpose of this technical report is to describe the third operational administration of the Partnership for Assessment of Readiness for College and Careers (PARCC) assessments in the 2016-2017 academic year. PARCC is a state-led consortium creating next-generation assessments that, compared to traditional K-12 assessments, more accurately…
Descriptors: College Readiness, Career Readiness, Common Core State Standards, Language Arts