Publication Date
In 2025: 4
Since 2024: 9
Since 2021 (last 5 years): 58
Since 2016 (last 10 years): 147
Since 2006 (last 20 years): 496
Author
Bianchini, John C.: 35
von Davier, Alina A.: 34
Dorans, Neil J.: 33
Kolen, Michael J.: 31
Loret, Peter G.: 31
Kim, Sooyeon: 26
Moses, Tim: 24
Livingston, Samuel A.: 22
Holland, Paul W.: 20
Puhan, Gautam: 20
Liu, Jinghua: 19
Location
Canada: 9
Australia: 8
Florida: 8
United Kingdom (England): 8
Netherlands: 7
New York: 7
United States: 7
Israel: 6
Turkey: 6
United Kingdom: 6
California: 5
Laws, Policies, & Programs
Elementary and Secondary…: 12
No Child Left Behind Act 2001: 5
Education Consolidation…: 3
Hawkins Stafford Act 1988: 1
Race to the Top: 1
What Works Clearinghouse Rating
Meets WWC Standards without Reservations: 1
Meets WWC Standards with or without Reservations: 1
Use of Continuous Exponential Families to Link Forms via Anchor Tests. Research Report. ETS RR-11-11
Haberman, Shelby J.; Yan, Duanli – Educational Testing Service, 2011
Continuous exponential families are applied to linking test forms via an internal anchor. This application combines work on continuous exponential families for single-group designs and work on continuous exponential families for equivalent-group designs. Results are compared to those for kernel and equipercentile equating in the case of chained…
Descriptors: Equated Scores, Statistical Analysis, Language Tests, Mathematics Tests
Liu, Jinghua; Sinharay, Sandip; Holland, Paul W.; Curley, Edward; Feigenbaum, Miriam – Journal of Educational Measurement, 2011
This study explores an anchor that is different from the traditional miniature anchor in test score equating. In contrast to a traditional "mini" anchor that has the same spread of item difficulties as the tests to be equated, the studied anchor, referred to as a "midi" anchor (Sinharay & Holland), has a smaller spread of…
Descriptors: Equated Scores, Case Studies, College Entrance Examinations, Test Items
Wang, Wei – ProQuest LLC, 2013
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests are often considered superior to tests containing only MC items, although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Descriptors: Equated Scores, Test Format, Test Items, Test Length
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated-measures data from a single group with the same test design. If operational or item-exposure problems are present, the same test may not be employed to collect data for longitudinal analyses, and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores
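When different tests are administered across time points, designs like the one described above depend on common items to place IRT parameter estimates on a shared scale. The sketch below is purely illustrative (a generic mean/sigma linking transformation, not the authors' method); the function names and use of difficulty estimates are assumptions for the example.

```python
# Illustrative sketch, not taken from the cited study: mean/sigma common-item
# linking of IRT discrimination (a) and difficulty (b) parameters.
import numpy as np

def mean_sigma_link(b_base, b_new):
    """Slope A and intercept B mapping the new form's scale onto the base scale,
    estimated from the common items' difficulty values on both scales."""
    b_base, b_new = np.asarray(b_base, float), np.asarray(b_new, float)
    A = b_base.std(ddof=1) / b_new.std(ddof=1)
    B = b_base.mean() - A * b_new.mean()
    return A, B

def rescale_parameters(a_new, b_new, A, B):
    """Place new-form item parameters on the base scale: b* = A*b + B, a* = a/A."""
    return np.asarray(a_new, float) / A, A * np.asarray(b_new, float) + B
```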
Haberman, Shelby J.; Dorans, Neil J. – Educational Testing Service, 2011
For testing programs that administer multiple forms within a year and across years, score equating is used to ensure that scores can be used interchangeably. In an ideal world, sample sizes are large and representative of populations that hardly change over time, and very reliable alternate test forms are built with nearly identical psychometric…
Descriptors: Scores, Reliability, Equated Scores, Test Construction
Puhan, Gautam – Educational Testing Service, 2011
The study evaluated the effectiveness of log-linear presmoothing (Holland & Thayer, 1987) on the accuracy of small-sample chained equipercentile equatings under two conditions (i.e., using small samples that differed randomly in ability from the target population versus using small samples that were distinctly different from the…
Descriptors: Equated Scores, Data Analysis, Accuracy, Sample Size
Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G. – Educational Assessment, 2011
This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…
Descriptors: Computer Assisted Testing, Scoring, Test Interpretation, Equated Scores
Puhan, Gautam – Journal of Educational Measurement, 2011
The impact of log-linear presmoothing on the accuracy of small sample chained equipercentile equating was evaluated under two conditions. In the first condition the small samples differed randomly in ability from the target population. In the second condition the small samples were systematically different from the target population. Results…
Descriptors: Equated Scores, Data Analysis, Sample Size, Accuracy
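The presmoothing step evaluated in the two studies above fits a low-order log-linear model to the observed score frequencies before equating. A minimal sketch of that idea, assuming a Poisson GLM with polynomial score terms; the function name, default degree, and example data are illustrative, not the studies' implementation.

```python
# A minimal sketch of log-linear presmoothing in the spirit of Holland & Thayer
# (1987): score frequencies are modeled as Poisson counts with a polynomial in
# the score, and the fitted counts serve as the presmoothed distribution.
import numpy as np
import statsmodels.api as sm

def presmooth_loglinear(freqs, degree=3):
    """Fit a degree-`degree` log-linear model to raw-score frequencies."""
    scores = np.arange(len(freqs), dtype=float)
    z = (scores - scores.mean()) / scores.std()          # standardize for stability
    X = sm.add_constant(np.column_stack([z ** p for p in range(1, degree + 1)]))
    fit = sm.GLM(np.asarray(freqs, dtype=float), X,
                 family=sm.families.Poisson()).fit()
    return fit.fittedvalues                               # smoothed frequencies

# Example: smooth a sparse 50-examinee distribution on a 0-20 raw-score scale.
rng = np.random.default_rng(0)
raw_freqs = np.bincount(rng.binomial(n=20, p=0.6, size=50), minlength=21)
smoothed_freqs = presmooth_loglinear(raw_freqs, degree=3)
```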
Wyse, Adam E.; Reckase, Mark D. – Applied Psychological Measurement, 2011
An essential concern in the application of any equating procedure is determining whether tests can be considered equated after the tests have been placed onto a common scale. This article clarifies one equating criterion, the first-order equity property of equating, and develops a new method for evaluating equating that is linked to this…
Descriptors: Lawyers, Licensing Examinations (Professions), Testing Programs, Graphs
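For readers unfamiliar with the criterion, first-order equity requires that examinees at a given ability expect the same score whether they take form Y or take form X and receive the equated score. Stated loosely in a standard textbook formulation (not the authors' specific evaluation statistic):

$$E\left[\,e_Y(X)\mid\theta\,\right] \;=\; E\left[\,Y\mid\theta\,\right] \quad \text{for all } \theta,$$

where $e_Y(\cdot)$ is the equating function from X to Y and $\theta$ is the latent ability.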
Keller, Lisa A.; Keller, Robert R. – Educational and Psychological Measurement, 2011
This article investigates the accuracy of examinee classification into performance categories and the estimation of the theta parameter for several item response theory (IRT) scaling techniques when applied to six administrations of a test. Previous research has investigated only two administrations; however, many testing programs equate tests…
Descriptors: Item Response Theory, Scaling, Sustainability, Classification
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, which contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability
Albano, Anthony D.; Rodriguez, Michael C. – Journal of School Psychology, 2012
Recent research on curriculum-based measurement of oral reading fluency has revealed important issues in current passage development procedures, highlighting how dissimilar passages are problematic for monitoring student progress. The purpose of this paper is to describe statistical equating as an option for achieving equivalent scores across…
Descriptors: Curriculum Based Assessment, Reading Fluency, Equated Scores, Psychometrics
Chen, Haiwen; Holland, Paul – Psychometrika, 2010
In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, which we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean-preserving linear transformation of…
Descriptors: Equated Scores, Test Theory, Test Construction, Guidelines
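As background for the Levine family of methods referenced above, observed-score linear equating maps form X scores to the form Y scale by matching means and standard deviations; Levine-type NEAT methods estimate these synthetic-population moments through the anchor under a classical test theory model. A standard formulation (background only, not the curvilinear method developed in the paper):

$$\operatorname{lin}_Y(x) \;=\; \mu_Y + \frac{\sigma_Y}{\sigma_X}\,\bigl(x - \mu_X\bigr).$$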
Sinharay, Sandip; Holland, Paul W. – Psychometrika, 2010
The Non-Equivalent Groups with Anchor Test (NEAT) design involves "missing data" that are "missing by design." Three nonlinear observed score equating methods used with a NEAT design are the "frequency estimation equipercentile equating" (FEEE), the "chain equipercentile equating" (CEE), and the "item-response-theory observed-score-equating" (IRT…
Descriptors: Equated Scores, Item Response Theory, Tests, Data Analysis
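Of the three methods named above, chained equipercentile equating (CEE) is the most direct to sketch: form X is equated to the anchor in the population that took X, and the resulting anchor score is equated to form Y in the population that took Y. The unsmoothed illustration below uses hypothetical helper functions and is not the paper's implementation; operational CEE adds presmoothing and continuization.

```python
# A minimal sketch of chained equipercentile equating (CEE) under a NEAT design.
# Form X and anchor A are observed in population P; anchor A and form Y are
# observed in population Q.
import numpy as np

def percentile_rank(x, sample):
    """Mid-percentile rank (0-100) of score x within an observed sample."""
    sample = np.asarray(sample, dtype=float)
    return 100.0 * (np.mean(sample < x) + 0.5 * np.mean(sample == x))

def equipercentile(x, from_sample, to_sample):
    """Map x to the score in `to_sample` holding the same percentile rank."""
    return np.percentile(np.asarray(to_sample, dtype=float),
                         percentile_rank(x, from_sample))

def chained_equipercentile(x, x_in_p, a_in_p, a_in_q, y_in_q):
    a_equiv = equipercentile(x, x_in_p, a_in_p)      # step 1: X -> anchor, in P
    return equipercentile(a_equiv, a_in_q, y_in_q)   # step 2: anchor -> Y, in Q
```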
Moses, Tim; Zhang, Wenmin – Journal of Educational and Behavioral Statistics, 2011
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
Descriptors: Equated Scores, Error Patterns, Evaluation Research, Statistical Analysis