Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 6 |
Descriptor
Evaluation Methods | 6 |
Test Bias | 6 |
Statistical Analysis | 4 |
Error of Measurement | 3 |
Test Items | 3 |
Correlation | 2 |
Equated Scores | 2 |
Reading Achievement | 2 |
Scores | 2 |
Simulation | 2 |
Ability Grouping | 1 |
Source
ETS Research Report Series | 6 |
Author
Braun, Henry | 1 |
Carey, Jill | 1 |
Curley, Edward | 1 |
Dorans, Neil J. | 1 |
Hill, Yao Zhang | 1 |
Holland, Paul | 1 |
Liu, Jinghua | 1 |
Liu, Ou Lydia | 1 |
Mapuranga, Raymond | 1 |
Middleton, Kyndra | 1 |
Sinharay, Sandip | 1 |
Publication Type
Journal Articles | 6 |
Reports - Research | 6 |
Numerical/Quantitative Data | 1 |
Education Level
Elementary Education | 1 |
Grade 4 | 1 |
Higher Education | 1 |
Intermediate Grades | 1 |
Postsecondary Education | 1 |
Location
United States | 1 |
Assessments and Surveys
National Assessment of Educational Progress | 1
SAT (College Admission Test) | 1 |
Test of English as a Foreign Language | 1
Liu, Jinghua; Zu, Jiyun; Curley, Edward; Carey, Jill – ETS Research Report Series, 2014
The purpose of this study is to investigate the impact of discrete anchor items versus passage-based anchor items on observed score equating using empirical data. This study compares an "SAT"® critical reading anchor that contains a proportionally larger share of discrete items than the total tests to be equated with another anchor that…
Descriptors: Equated Scores, Test Items, College Entrance Examinations, Comparative Analysis
Zwick, Rebecca – ETS Research Report Series, 2012
Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…
Descriptors: Test Bias, Sample Size, Bayesian Statistics, Evaluation Methods
Hill, Yao Zhang; Liu, Ou Lydia – ETS Research Report Series, 2012
This study investigated the effect of the interaction between test takers' background knowledge and language proficiency on their performance on the "TOEFL iBT"® reading section. Test takers with the target content background knowledge (the focal groups) and those without (the reference groups) were identified for each of the 5 selected…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Internet
Braun, Henry; Zhang, Jinming; Vezzu, Sailesh – ETS Research Report Series, 2008
At present, although the percentages of students with disabilities (SD) and/or students who are English language learners (ELL) excluded from a NAEP administration are reported, no statistical adjustment is made for these excluded students in the calculation of NAEP results. However, the exclusion rates for both SD and ELL students vary…
Descriptors: Research Methodology, Computation, Disabilities, English Language Learners
Mapuranga, Raymond; Dorans, Neil J.; Middleton, Kyndra – ETS Research Report Series, 2008
In many practical settings, essentially the same differential item functioning (DIF) procedures have been in use since the late 1980s. Since then, examinee populations have become more heterogeneous, and tests have included more polytomously scored items. This paper summarizes and classifies new DIF methods and procedures that have appeared since…
Descriptors: Test Bias, Educational Development, Evaluation Methods, Statistical Analysis
Sinharay, Sandip; Holland, Paul – ETS Research Report Series, 2006
It is a widely held belief that anchor tests should be miniature versions (i.e., minitests) of the tests being equated with respect to content and statistical characteristics. This paper examines the foundations of this belief, in particular the requirement of statistical representativeness for anchor tests that are content representative. The…
Descriptors: Test Items, Equated Scores, Evaluation Methods, Difficulty Level