Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K. – Practical Assessment, Research & Evaluation, 2015
In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items, since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…
Descriptors: Item Response Theory, Monte Carlo Methods, Scaling, Test Items
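The scale transformation this abstract refers to is the standard linear A/B adjustment of 3PL item parameters, under which the pseudo-guessing parameter c is left untouched. A minimal sketch (parameter values are illustrative, not from the study):

```python
def transform_3pl(a, b, c, A, B):
    """Place 3PL item parameters (a, b, c) onto a new scale defined
    by scaling coefficients A and B. Discrimination and difficulty
    are adjusted; the pseudo-guessing parameter c is not -- the
    point the study investigates."""
    a_new = a / A        # discrimination scales inversely with A
    b_new = A * b + B    # difficulty is rescaled and shifted
    c_new = c            # c is NOT transformed in scaling/equating
    return a_new, b_new, c_new

# Rescale one illustrative item with A = 2.0, B = -0.5
print(transform_3pl(1.5, 0.0, 0.2, 2.0, -0.5))  # → (0.75, -0.5, 0.2)
```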
Winters, Marcus A. – Manhattan Institute for Policy Research, 2017
Critics of charter schools in New York City, America's largest school district, often allege that charters score better on standardized tests, on average, than traditional public schools because charters "cream-skim" (i.e., attract) the brightest, most motivated students. Yet this accusation neglects the fact that not all traditional…
Descriptors: Charter Schools, Public Schools, School Effectiveness, Success
Walstad, William B.; Miller, Laurie A. – Journal of Economic Education, 2016
Survey results from a national sample of economics instructors describe the grading policies and practices in principles of economics courses. The survey results provide insights about absolute and relative grading systems used by instructors, the course components and their weights that determine grades, and the type of assessment items used for…
Descriptors: Grades (Scholastic), Grading, Economics Education, Educational Policy
New Jersey Department of Education, 2016
On March 22, 2016, the New Jersey Department of Education ("the Department") published a broadcast memo sharing secure district access to 2014-15 median Student Growth Percentile (mSGP) data for all qualifying teachers. These data describe student growth from the last school year, and comprise 10% of qualifying teachers' 2014-15…
Descriptors: Achievement Gains, Outcome Measures, Teacher Qualifications, Equated Scores
LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan – Language Testing, 2017
Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…
Descriptors: Language Tests, Equated Scores, Testing Programs, Comparative Analysis
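One of the simplest of the equating methods this abstract alludes to is linear equating, which maps Form X scores onto the Form Y scale by matching means and standard deviations. A minimal single-group sketch (the score vectors are invented for illustration):

```python
import statistics

def linear_equate(x_scores, y_scores):
    """Return a function mapping Form X scores onto the Form Y scale
    by matching means and standard deviations (linear equating)."""
    mu_x, mu_y = statistics.mean(x_scores), statistics.mean(y_scores)
    sd_x, sd_y = statistics.pstdev(x_scores), statistics.pstdev(y_scores)
    return lambda x: mu_y + (sd_y / sd_x) * (x - mu_x)

form_x = [10, 12, 14, 16, 18]   # hypothetical Form X scores
form_y = [11, 14, 17, 20, 23]   # hypothetical Form Y scores
to_y = linear_equate(form_x, form_y)
# The X mean (14) maps to the Y mean (17.0)
print(to_y(14))
```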
Kabasakal, Kübra Atalay; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2015
This study examines the effect of differential item functioning (DIF) items on test equating through multilevel item response models (MIRMs) and traditional IRMs. The performances of three different equating models were investigated under 24 different simulation conditions, and the variables whose effects were examined included sample size, test…
Descriptors: Test Bias, Equated Scores, Item Response Theory, Simulation
Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N. – Educational and Psychological Measurement, 2015
A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…
Descriptors: Personality Measures, Computer Assisted Testing, Measurement, Test Items
Cao, Yi; Lu, Ru; Tao, Wei – ETS Research Report Series, 2014
The local item independence assumption underlying traditional item response theory (IRT) models is often not met for tests composed of testlets. There are 3 major approaches to addressing this issue: (a) ignore the violation and use a dichotomous IRT model (e.g., the 2-parameter logistic [2PL] model), (b) combine the interdependent items to form a…
Descriptors: Item Response Theory, Equated Scores, Test Items, Simulation
Strietholt, Rolf; Rosén, Monica – Measurement: Interdisciplinary Research and Perspectives, 2016
Since the start of the new millennium, international comparative large-scale studies have become one of the most well-known areas in the field of education. However, the International Association for the Evaluation of Educational Achievement (IEA) has already been conducting international comparative studies for about half a century. The present…
Descriptors: Reading Tests, Comparative Analysis, Comparative Education, Trend Analysis
Sadler, Philip M.; Sonnert, Gerhard; Coyle, Harold P.; Miller, Kelly A. – Educational Assessment, 2016
The psychometrically sound development of assessment instruments requires pilot testing of candidate items as a first step in gauging their quality, typically a time-consuming and costly effort. Crowdsourcing offers the opportunity for gathering data much more quickly and inexpensively than from most targeted populations. In a simulation of a…
Descriptors: Test Items, Test Construction, Psychometrics, Biological Sciences
Northwest Evaluation Association, 2016
Northwest Evaluation Association™ (NWEA™) is committed to providing partners with useful tools to help make inferences from Measures of Academic Progress® (MAP®) interim assessment scores. One important tool is the concordance table between MAP and state summative assessments. Concordance tables have been used for decades to relate scores on…
Descriptors: Tables (Data), Benchmarking, Scoring Formulas, Scores
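A concordance table of the kind described here is typically built by equipercentile linking: each score on one test is paired with the score on the other test at the same percentile rank. A rough sketch on tiny made-up distributions (the score values are invented; operational concordances smooth the distributions and use large samples):

```python
def concordance(scores_a, scores_b):
    """Build a rough concordance: for each observed score on test A,
    find the test B score whose percentile rank is closest."""
    sa, sb = sorted(scores_a), sorted(scores_b)

    def prank(sorted_s, x):
        # mid-percentile rank: fraction below x plus half the ties
        below = sum(s < x for s in sorted_s)
        equal = sum(s == x for s in sorted_s)
        return (below + equal / 2) / len(sorted_s)

    table = {}
    for x in sorted(set(sa)):
        p = prank(sa, x)
        table[x] = min(sorted(set(sb)), key=lambda y: abs(prank(sb, y) - p))
    return table

map_a = [200, 210, 220, 230, 240]   # hypothetical interim-test scores
state = [300, 320, 340, 360, 380]   # hypothetical state-test scores
print(concordance(map_a, state))
# → {200: 300, 210: 320, 220: 340, 230: 360, 240: 380}
```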
Humphry, Stephen M.; McGrane, Joshua A. – Australian Educational Researcher, 2015
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
Descriptors: Writing Evaluation, Evaluation Methods, Comparative Analysis, Writing Tests
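Pairwise comparison data of the kind used here can be scaled with a Bradley-Terry-type model (the paper itself works in a Rasch pairwise formulation, so this is only an analogue). A sketch using the standard minorization-maximization updates on an invented win matrix:

```python
def bradley_terry(wins, iters=200):
    """Estimate a quality parameter for each script from a matrix
    wins[i][j] = number of times script i was preferred over j,
    via iterative minorization-maximization updates."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        for i in range(n):
            total_wins = sum(wins[i][j] for j in range(n) if j != i)
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total_wins / denom
        s = sum(p)
        p = [v * n / s for v in p]   # renormalize to fix the scale
    return p

# Three writing scripts judged pairwise; script 0 is preferred most often.
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
print(bradley_terry(wins))  # estimates ordered p[0] > p[1] > p[2]
```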
González, B. Jorge; von Davier, Matthias – Journal of Educational Measurement, 2013
Based on Lord's criterion of equity of equating, van der Linden (this issue) revisits the so-called local equating method and offers alternative as well as new thoughts on several topics including the types of transformations, symmetry, reliability, and population invariance appropriate for equating. A remarkable aspect is to define equating…
Descriptors: Equated Scores, Statistical Analysis, Models, Statistical Inference
Keller, Lisa A.; Hambleton, Ronald K. – Journal of Educational Measurement, 2013
Due to recent research in equating methodologies indicating that some methods may be more susceptible to the accumulation of equating error over multiple administrations, the sustainability of several item response theory methods of equating over time was investigated. In particular, the paper is focused on two equating methodologies: fixed common…
Descriptors: Item Response Theory, Scaling, Test Format, Equated Scores
Ali, Usama S.; Walker, Michael E. – ETS Research Report Series, 2014
Two methods are currently in use at Educational Testing Service (ETS) for equating observed item difficulty statistics. The first involves the linear equating of item statistics in an observed sample to reference statistics on the same items. The second, the item response curve (IRC) method, involves the summation of conditional…
Descriptors: Difficulty Level, Test Items, Equated Scores, Causal Models
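The first of the two methods, linear equating of observed item statistics to reference values, can be sketched as a mean-sigma transformation: choose the slope and intercept that make the common items match the reference scale in mean and standard deviation. The difficulty values below are invented for illustration:

```python
import statistics

def equate_item_stats(observed, reference):
    """Linearly map observed item difficulty statistics onto the
    reference scale so the common items match in mean and SD."""
    mo, mr = statistics.mean(observed), statistics.mean(reference)
    so, sr = statistics.pstdev(observed), statistics.pstdev(reference)
    slope = sr / so
    intercept = mr - slope * mo
    return [slope * d + intercept for d in observed]

obs = [0.2, 0.4, 0.6, 0.8]     # hypothetical observed difficulties
ref = [0.3, 0.45, 0.6, 0.75]   # hypothetical reference values
# Here obs is an exact linear shift of ref, so the equated values
# recover the reference statistics.
print(equate_item_stats(obs, ref))
```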

