Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2019
The Mantel-Haenszel delta difference (MH D-DIF) and the standardized proportion difference (STD P-DIF) are two observed-score methods that have been used to assess differential item functioning (DIF) at Educational Testing Service since the early 1990s. Latent-variable approaches to assessing measurement invariance at the item level have been…
Descriptors: Test Bias, Educational Testing, Statistical Analysis, Item Response Theory
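The two observed-score DIF statistics named above have standard textbook forms: MH D-DIF rescales the Mantel-Haenszel common odds ratio onto the ETS delta metric, and STD P-DIF is a focal-group-weighted difference in proportions correct across matched score levels. A minimal sketch under those standard definitions (the stratum layout and field names are illustrative, not drawn from the report):

```python
import math

def mh_d_dif(strata):
    """Mantel-Haenszel delta difference (MH D-DIF).

    strata: list of dicts of counts at each matched score level:
      rr = reference right, rw = reference wrong,
      fr = focal right,     fw = focal wrong.
    """
    num = sum(s["rr"] * s["fw"] / (s["rr"] + s["rw"] + s["fr"] + s["fw"])
              for s in strata)
    den = sum(s["rw"] * s["fr"] / (s["rr"] + s["rw"] + s["fr"] + s["fw"])
              for s in strata)
    alpha_mh = num / den                 # common odds ratio across strata
    return -2.35 * math.log(alpha_mh)    # rescaled to the ETS delta metric

def std_p_dif(strata):
    """Standardized proportion difference, weighted by focal-group counts."""
    total_focal = sum(s["fr"] + s["fw"] for s in strata)
    return sum(
        (s["fr"] + s["fw"]) / total_focal                       # weight w_k
        * (s["fr"] / (s["fr"] + s["fw"])                        # focal P_k
           - s["rr"] / (s["rr"] + s["rw"]))                     # reference P_k
        for s in strata
    )
```

When the two groups perform identically at every matched level, both statistics are zero; negative MH D-DIF values indicate the item is harder for the focal group.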
Carlson, James E. – ETS Research Report Series, 2014
A little-known theorem, a generalization of Pythagoras's theorem, due to Pappus, is used to present a geometric explanation of various definitions of the contribution of component tests to their composite. I show that an unambiguous definition of the unique contribution of a component to the composite score variance is present if and only if the…
Descriptors: Geometric Concepts, Scores, Validity, Reliability
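The ambiguity the abstract points to arises because a composite's variance includes covariances between components, which can be allocated in several ways. One common convention, shown here as a generic sketch rather than the report's own definition, credits component i with its covariance with the composite; these credits partition the composite variance exactly:

```python
import statistics

def covariance(x, y):
    """Sample covariance (n - 1 denominator)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def component_contributions(components):
    """Credit each component with Cov(X_i, composite).

    Because Var(sum) = sum of Cov(X_i, sum), these contributions
    add up to the composite score variance exactly.
    """
    composite = [sum(vals) for vals in zip(*components)]
    return [covariance(c, composite) for c in components]
```

This allocation splits each covariance term evenly between the two components involved, which is precisely the kind of definitional choice the geometric argument in the report examines.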
Qian, Jiahe; Jiang, Yanming; von Davier, Alina A. – ETS Research Report Series, 2013
Several factors could cause variability in item response theory (IRT) linking and equating procedures, such as the variability across examinee samples and/or test items, seasonality, regional differences, native language diversity, gender, and other demographic variables. Hence, the following question arises: Is it possible to select optimal…
Descriptors: Item Response Theory, Test Items, Sampling, True Scores
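IRT linking places parameter estimates from a new administration onto the old scale via a linear transformation, and the variability the abstract lists (sampling, seasonality, demographics) enters through the estimated transformation constants. A minimal sketch of one standard approach, mean/sigma linking on common-item difficulties (the report's optimal-sampling question is separate from this mechanic):

```python
import statistics

def mean_sigma_linking(b_new, b_old):
    """Mean/sigma linking constants from common-item b-parameters.

    Returns (A, B) such that b_on_old_scale = A * b_new + B, chosen so
    the transformed new-form difficulties match the old form's mean
    and standard deviation on the common items.
    """
    A = statistics.stdev(b_old) / statistics.stdev(b_new)
    B = statistics.fmean(b_old) - A * statistics.fmean(b_new)
    return A, B
```

Different examinee samples yield different common-item estimates and therefore different (A, B), which is exactly the linking variability the report investigates.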
Li, Yanmei – ETS Research Report Series, 2012
In a common-item (anchor) equating design, the common items should be evaluated for item parameter drift, and drifted items are often removed. For a test that contains mostly dichotomous items and only a small number of polytomous items, removing some drifted polytomous anchor items may result in anchor sets that no longer resemble mini-versions of…
Descriptors: Scores, Item Response Theory, Equated Scores, Simulation
Qian, Jiahe – ETS Research Report Series, 2008
In survey research, the formation of groupings, or aggregations of cases on which to make an inference, is sometimes of importance. Of particular interest are situations where the aggregated cases carry useful information that has been transferred from a sample employed in a previous study. For example, a school to be included in the sample…
Descriptors: Surveys, Models, High Schools, School Effectiveness
Qian, Jiahe – ETS Research Report Series, 2008
This study explores the use of a mapping technique to test the invariance of proficiency standards over time for state performance tests. First, the state proficiency standards are mapped onto the National Assessment of Educational Progress (NAEP) scale. Then, rather than looking at whether there is a deviation in proficiency standards directly,…
Descriptors: National Competency Tests, State Standards, Scores, Achievement Tests
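The mapping step described above is typically done in an equipercentile spirit: find the NAEP score whose percentile rank matches the percentile rank of the state proficiency cut in the state score distribution. A simplified sketch of that matching (real NAEP mappings use weighted distributions and smoothing; scores here are illustrative):

```python
import bisect

def map_cut_score(state_scores, naep_scores, state_cut):
    """Map a state proficiency cut onto the NAEP scale by matching
    percentile ranks between the two score distributions."""
    state_sorted = sorted(state_scores)
    naep_sorted = sorted(naep_scores)
    # Fraction of state examinees scoring below the cut.
    p = bisect.bisect_left(state_sorted, state_cut) / len(state_sorted)
    # NAEP score at the same percentile rank.
    idx = min(int(p * len(naep_sorted)), len(naep_sorted) - 1)
    return naep_sorted[idx]
```

Tracking how this mapped NAEP equivalent moves across years then provides the indirect test of standards invariance the report describes.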
Qian, Jiahe – ETS Research Report Series, 2006
Weighting and variance estimation are two statistical issues involved in survey data analysis for large-scale assessment programs such as the Higher Education Information and Communication Technology (ICT) Literacy Assessment. Because survey data are always acquired by probability sampling, to draw unbiased or almost unbiased inferences for the…
Descriptors: Weighted Scores, Sampling, Statistical Analysis, Higher Education
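The weighting issue raised in the last abstract reduces to a familiar mechanic: when cases enter the sample with unequal probabilities, each is weighted by the inverse of its inclusion probability so that estimates generalize to the population. A minimal sketch of a Hajek-style weighted mean under that convention (illustrative, not the assessment's actual estimator):

```python
def weighted_mean(values, inclusion_probs):
    """Weighted mean with design weights w_i = 1 / pi_i, where pi_i is
    case i's probability of inclusion in the sample. Normalizing by the
    weight sum yields a (nearly) unbiased estimate of the population mean
    under unequal-probability sampling.
    """
    weights = [1.0 / p for p in inclusion_probs]
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)
```

Variance estimation for such weighted statistics, the report's second topic, then typically uses replication methods such as the jackknife rather than simple-random-sample formulas.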