Showing all 8 results
Bramley, Tom – Research Matters, 2020
The aim of this study was to compare, by simulation, the accuracy of mapping a cut-score from one test to another by expert judgement (using the Angoff method) versus the accuracy of a small-sample equating method (chained linear equating). As expected, the standard-setting method resulted in more accurate equating when we assumed a higher level…
Descriptors: Cutting Scores, Standard Setting (Scoring), Equated Scores, Accuracy
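The chained linear method named in this abstract is the standard two-link procedure described in Kolen and Brennan (2004): link the old form to the anchor in the old-form sample, then the anchor to the new form in the new-form sample. A minimal Python sketch, using hypothetical small samples and variable names rather than the study's data or code:

import numpy as np

def chained_linear_equate(x, old_form, old_anchor, new_form, new_anchor):
    # Link the old form to the anchor scale using the old-form sample ...
    v = old_anchor.mean() + old_anchor.std(ddof=1) / old_form.std(ddof=1) * (x - old_form.mean())
    # ... then link the anchor to the new form using the new-form sample.
    return new_form.mean() + new_form.std(ddof=1) / new_anchor.std(ddof=1) * (v - new_anchor.mean())

# Hypothetical anchor-test data for two small samples (n = 30 each).
rng = np.random.default_rng(0)
old_form = rng.normal(30, 6, 30)
old_anchor = 0.4 * old_form + rng.normal(0, 2, 30)
new_form = rng.normal(28, 6, 30)
new_anchor = 0.4 * new_form + rng.normal(0, 2, 30)
print("Cut-score of 25 mapped to the new form:",
      chained_linear_equate(25, old_form, old_anchor, new_form, new_anchor))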
Peer reviewed
Michaelides, Michalis P.; Haertel, Edward H. – Applied Measurement in Education, 2014
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Descriptors: Equated Scores, Test Items, Sampling, Statistical Inference
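The abstract concerns the variability of an estimated equating function under examinee sampling. One common way to quantify that variability is a bootstrap over examinee samples; the sketch below uses a simplified random-groups linear equating rather than the authors' common-item design, and all data and names are hypothetical.

import numpy as np

def linear_equate(x, ref_scores, new_scores):
    # Linear equating of score x from the new-form scale to the reference-form scale.
    return ref_scores.mean() + ref_scores.std(ddof=1) / new_scores.std(ddof=1) * (x - new_scores.mean())

def bootstrap_se(x, ref_scores, new_scores, n_boot=1000, seed=0):
    # Bootstrap standard error of the equated score at x, resampling examinees.
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_boot):
        ref_b = rng.choice(ref_scores, size=ref_scores.size, replace=True)
        new_b = rng.choice(new_scores, size=new_scores.size, replace=True)
        estimates.append(linear_equate(x, ref_b, new_b))
    return np.std(estimates, ddof=1)

rng = np.random.default_rng(1)
ref_scores = rng.normal(50, 10, 200)   # hypothetical reference-form sample
new_scores = rng.normal(48, 10, 200)   # hypothetical new-form sample
print("SE of equated score at x = 45:", bootstrap_se(45, ref_scores, new_scores))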
Peer reviewed
Duong, Minh Q.; von Davier, Alina A. – International Journal of Testing, 2012
Test equating is a statistical procedure for adjusting for test form differences in difficulty in a standardized assessment. Equating results are supposed to hold for a specified target population (Kolen & Brennan, 2004; von Davier, Holland, & Thayer, 2004) and to be (relatively) independent of the subpopulations from the target population (see…
Descriptors: Ability Grouping, Difficulty Level, Psychometrics, Statistical Analysis
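A standard index of the population invariance discussed here is the root mean square difference (RMSD) between subgroup and total-group equating functions (Dorans & Holland, 2000). A rough sketch using linear equating and two hypothetical subgroups, not the authors' procedure:

import numpy as np

def linear_equate(x, y_scores, x_scores):
    # Linear equating of x onto the Y scale within one (sub)population.
    return y_scores.mean() + y_scores.std(ddof=1) / x_scores.std(ddof=1) * (x - x_scores.mean())

def rmsd(x, subgroups, total_x, total_y):
    # RMSD between subgroup and total-group equated scores at x,
    # in standard-deviation units of Y in the total group.
    e_total = linear_equate(x, total_y, total_x)
    weights = np.array([g["x"].size for g in subgroups], dtype=float)
    weights /= weights.sum()
    diffs = np.array([linear_equate(x, g["y"], g["x"]) - e_total for g in subgroups])
    return np.sqrt(np.sum(weights * diffs**2)) / total_y.std(ddof=1)

rng = np.random.default_rng(2)
low = {"x": rng.normal(40, 8, 300), "y": rng.normal(42, 8, 300)}
high = {"x": rng.normal(55, 8, 300), "y": rng.normal(56, 8, 300)}
total_x = np.concatenate([low["x"], high["x"]])
total_y = np.concatenate([low["y"], high["y"]])
print("RMSD at x = 50:", rmsd(50, [low, high], total_x, total_y))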
Sunnassee, Devdass – ProQuest LLC, 2011
Small sample equating remains a largely unexplored area of research. This study attempts to fill in some of the research gaps via a large-scale, IRT-based simulation study that evaluates the performance of seven small-sample equating methods under various test characteristic and sampling conditions. The equating methods considered are typically…
Descriptors: Test Length, Test Format, Sample Size, Simulation
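The seven methods evaluated in the dissertation are not named in this snippet; typical small-sample candidates, though, include identity, mean, and linear equating. The sketch below contrasts those three on hypothetical samples of 20 examinees per form, purely for illustration.

import numpy as np

def identity_equate(x, y_scores, x_scores):
    return x  # identity: no adjustment, often competitive at very small n

def mean_equate(x, y_scores, x_scores):
    return x + (y_scores.mean() - x_scores.mean())

def linear_equate(x, y_scores, x_scores):
    return y_scores.mean() + y_scores.std(ddof=1) / x_scores.std(ddof=1) * (x - x_scores.mean())

rng = np.random.default_rng(3)
x_scores = rng.normal(30, 6, 20)   # hypothetical new-form sample
y_scores = rng.normal(31, 6, 20)   # hypothetical reference-form sample
for f in (identity_equate, mean_equate, linear_equate):
    print(f.__name__, f(25, y_scores, x_scores))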
Peer reviewed
Slinde, Jeffrey A.; Linn, Robert L. – Journal of Educational Measurement, 1979
The Rasch model was used to equate reading comprehension tests of widely different difficulty for three groups of fifth grade students of widely different ability. Under these extreme circumstances, the Rasch model equating was unsatisfactory. (Author/CTM)
Descriptors: Academic Ability, Bias, Difficulty Level, Equated Scores
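For context, Rasch equating of forms that differ in difficulty rests on the model P(correct) = exp(theta - b) / (1 + exp(theta - b)) and the resulting test characteristic curve. A minimal sketch of true-score equating under the model, with hypothetical item difficulties rather than the study's tests:

import numpy as np

def rasch_prob(theta, b):
    # Rasch probability of a correct response.
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def expected_score(theta, difficulties):
    # Test characteristic curve: expected raw score at ability theta.
    return rasch_prob(theta, np.asarray(difficulties)).sum()

# Hypothetical item difficulties (logits) for an easy and a hard form.
easy_form = np.array([-1.5, -1.0, -0.8, -0.3, 0.0, 0.2])
hard_form = easy_form + 1.2   # hard form shifted 1.2 logits harder

# Map a raw score on the easy form to the hard form: find the ability that
# reproduces the score on the easy form, then read off the hard-form curve.
thetas = np.linspace(-4, 4, 2001)
raw_easy = 4.0
theta_hat = thetas[np.argmin(np.abs([expected_score(t, easy_form) - raw_easy for t in thetas]))]
print("Equated score on the hard form:", expected_score(theta_hat, hard_form))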
Forster, Fred; And Others – 1978
The Rasch model of test and item analysis was applied to tests constructed from reading and mathematics item banks, with respect to five practical problems in scaling items and equating test forms. The questions were: (1) Does the Rasch model yield the same scale value regardless of the student sample? (2) How many students are…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores
Douglass, James B. – 1981
Methods and results relevant to the introduction of item characteristic curve (ICC) models into classroom achievement testing are provided. The overall objective was to compare several common ICC models for item calibration and test equating in a classroom examination system. Parameters for the one-, two- and three-parameter logistic ICC models…
Descriptors: Academic Achievement, Comparative Analysis, Difficulty Level, Equated Scores
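The models compared here are nested cases of the three-parameter logistic ICC, P(theta) = c + (1 - c) / (1 + exp(-D a (theta - b))): fixing c = 0 gives the two-parameter model, and additionally fixing a = 1 gives the one-parameter (Rasch-like) form. A brief sketch with hypothetical parameter values:

import numpy as np

def logistic_icc(theta, a=1.0, b=0.0, c=0.0, D=1.7):
    # 3PL item characteristic curve; c = 0 gives the 2PL, a = 1 and c = 0 the 1PL.
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

theta = 0.5
print("1PL:", logistic_icc(theta, b=-0.2))
print("2PL:", logistic_icc(theta, a=1.3, b=-0.2))
print("3PL:", logistic_icc(theta, a=1.3, b=-0.2, c=0.2))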
Wu, Margaret; Donovan, Jenny; Hutton, Penny; Lennon, Melissa – Ministerial Council on Education, Employment, Training and Youth Affairs (NJ1), 2008
In July 2001, the Ministerial Council on Education, Employment, Training and Youth Affairs (MCEETYA) agreed to the development of assessment instruments and key performance measures for reporting on student skills, knowledge and understandings in primary science. It directed the newly established Performance Measurement and Reporting Taskforce…
Descriptors: Foreign Countries, Scientific Literacy, Science Achievement, Comparative Analysis