Engelhard, George, Jr.; Wind, Stefanie A.; Kobrin, Jennifer L.; Chajewski, Michael – College Board, 2013
The purpose of this study is to illustrate the use of explanatory models based on Rasch measurement theory to detect systematic relationships between student and item characteristics and achievement differences using differential item functioning (DIF), differential group functioning (DGF), and differential person functioning (DPF) techniques. The…
Descriptors: Test Bias, Evaluation Methods, Measurement Techniques, Writing Evaluation
Grol-Prokopczyk, Hanna; Freese, Jeremy; Hauser, Robert M. – Journal of Health and Social Behavior, 2011
This article addresses a potentially serious problem with the widely used self-rated health (SRH) survey item: that different groups have systematically different ways of using the item's response categories. Analyses based on unadjusted SRH may thus yield misleading results. The authors evaluate anchoring vignettes as a possible solution to this…
Descriptors: Vignettes, Differences, Health, Self Evaluation (Individuals)
Paek, Insu; Guo, Hongwen – Applied Psychological Measurement, 2011
This study examined how much improvement was attainable with respect to accuracy of differential item functioning (DIF) measures and DIF detection rates in the Mantel-Haenszel procedure when employing focal and reference groups with notably unbalanced sample sizes where the focal group has a fixed small sample which does not satisfy the minimum…
Descriptors: Test Bias, Accuracy, Reference Groups, Investigations
Qian, Xiaoyu; Nandakumar, Ratna; Glutting, Joseph; Ford, Danielle; Fifield, Steve – ETS Research Report Series, 2017
In this study, we investigated gender and minority achievement gaps on 8th-grade science items employing a multilevel item response methodology. Both gaps were wider on physics and earth science items than on biology and chemistry items. Larger gender gaps were found on items with specific topics favoring male students than other items, for…
Descriptors: Item Analysis, Gender Differences, Achievement Gap, Grade 8
Huhta, Ari; Alanen, Riikka; Tarnanen, Mirja; Martin, Maisa; Hirvelä, Tuija – Language Testing, 2014
There is still relatively little research on how well the CEFR and similar holistic scales work when they are used to rate L2 texts. Using both multifaceted Rasch analyses and qualitative data from rater comments and interviews, the ratings obtained by using a CEFR-based writing scale and the Finnish National Core Curriculum scale for L2 writing…
Descriptors: Foreign Countries, Writing Skills, Second Language Learning, Finno Ugric Languages
Zwick, Rebecca – ETS Research Report Series, 2012
Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…
Descriptors: Test Bias, Sample Size, Bayesian Statistics, Evaluation Methods
Goldhaber, Dan; Chaplin, Duncan – Mathematica Policy Research, Inc., 2012
In a provocative and influential paper, Jesse Rothstein (2010) finds that standard value-added models (VAMs) suggest implausible future teacher effects on past student achievement, a finding that obviously cannot be viewed as causal. This is the basis of a falsification test (the Rothstein falsification test) that appears to indicate bias in VAM…
Descriptors: Value Added Models, Academic Achievement, Teacher Effectiveness, Correlation
Banh, My K.; Crane, Paul K.; Rhew, Isaac; Gudmundsen, Gretchen; Stoep, Ann Vander; Lyon, Aaron; McCauley, Elizabeth – Journal of Abnormal Child Psychology, 2012
As research continues to document differences in the prevalence of mental health problems such as depression across racial/ethnic groups, the issue of measurement equivalence becomes increasingly important to address. The Mood and Feelings Questionnaire (MFQ) is a widely used screening tool for child and adolescent depression. This study applied a…
Descriptors: Ethnic Groups, Adolescents, Measures (Individuals), Grade 6
Gattamorta, Karina A.; Penfield, Randall D. – Applied Measurement in Education, 2012
The study of measurement invariance in polytomous items that targets individual score levels is known as differential step functioning (DSF). The analysis of DSF requires the creation of a set of dichotomizations of the item response variable. There are two primary approaches for creating the set of dichotomizations to conduct a DSF analysis: the…
Descriptors: Measurement, Item Response Theory, Test Bias, Test Items
Pommerich, Mary – Educational Measurement: Issues and Practice, 2012
Neil Dorans has made a career of advocating for the examinee. He continues to do so in his NCME career award address, providing a thought-provoking commentary on some current trends in educational measurement that could potentially affect the integrity of test scores. Concerns expressed in the address call attention to a conundrum that faces…
Descriptors: Testing, Scores, Measurement, Test Construction
Wang, Zhen; Yao, Lihua – ETS Research Report Series, 2013
The current study used simulated data to investigate the properties of a newly proposed method (Yao's rater model) for modeling rater severity and its distribution under different conditions. Our study examined the effects of rater severity, distributions of rater severity, the difference between item response theory (IRT) models with rater effect…
Descriptors: Test Format, Test Items, Responses, Computation
Gangi, Jane M.; Reilly, Mary Ann – Language and Literacy Spectrum, 2013
The authors question the answer the national Common Core State Standards (CCSS, 2010) claims. The questions center on the validity of the new standardized tests based on the CCSS and teachers' evaluations being tied to student test scores on flawed tests. The proposed tests on the CCSS will position children as deficient, and will not recognize…
Descriptors: Foreign Countries, Standardized Tests, Test Validity, Comparative Education
Jiao, Hong; Wang, Shudong; He, Wei – Journal of Educational Measurement, 2013
This study demonstrated the equivalence between the Rasch testlet model and the three-level one-parameter testlet model and explored the Markov Chain Monte Carlo (MCMC) method for model parameter estimation in WINBUGS. The estimation accuracy from the MCMC method was compared with those from the marginalized maximum likelihood estimation (MMLE)…
Descriptors: Computation, Item Response Theory, Models, Monte Carlo Methods
Fischer, Franziska T.; Schult, Johannes; Hell, Benedikt – Journal of Educational Psychology, 2013
This is the first meta-analysis that investigates the differential prediction of undergraduate and graduate college admission tests for women and men. Findings on 130 independent samples representing 493,048 students are summarized. The underprediction of women's academic performance (d = 0.14) and the overprediction of men's academic performance…
Descriptors: Academic Achievement, Females, College Entrance Examinations, College Admission
Shea, Christine A. – ProQuest LLC, 2013
The purpose of this study was to determine whether an eighth grade state-level math assessment contained items that function differentially (DIF) for English Learner students (EL) as compared to English Only students (EO) and if so, what factors might have caused DIF. To determine this, Differential Item Functioning (DIF) analysis was employed.…
Descriptors: Item Response Theory, English Language Learners, Grade 8, Mathematics Tests