Weeks, Jonathan; Baron, Patricia – Educational Testing Service, 2021
The current project, Exploring Math Education Relations by Analyzing Large Data Sets (EMERALDS) II, is an attempt to identify specific Common Core State Standards procedural, conceptual, and problem-solving competencies in earlier grades that best predict success in algebraic areas in later grades. The data for this study include two cohorts of…
Descriptors: Mathematics Education, Common Core State Standards, Problem Solving, Mathematics Tests
Tan, Xuan; Xiang, Bihua; Dorans, Neil J.; Qu, Yanxuan – Educational Testing Service, 2010
The nature of the matching criterion (usually the total score) in the study of differential item functioning (DIF) has been shown to impact the accuracy of different DIF detection procedures. One of the topics related to the nature of the matching criterion is whether the studied item should be included. Although many studies exist that suggest…
Descriptors: Test Bias, Test Items, Item Response Theory
Rose, Norman; von Davier, Matthias; Xu, Xueli – Educational Testing Service, 2010
Large-scale educational surveys are low-stakes assessments of educational outcomes conducted using nationally representative samples. In these surveys, students do not receive individual scores, and the outcome of the assessment is inconsequential for respondents. The low-stakes nature of these surveys, as well as variations in average performance…
Descriptors: Item Response Theory, Educational Assessment, Data Analysis, Case Studies
Sinharay, Sandip; Haberman, Shelby J.; Jia, Helena – Educational Testing Service, 2011
Standard 3.9 of the "Standards for Educational and Psychological Testing" (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999) demands evidence of model fit when an item response theory (IRT) model is used to make inferences from a data set. We applied two recently…
Descriptors: Item Response Theory, Goodness of Fit, Statistical Analysis, Language Tests