ERIC Number: EJ1293918
Record Type: Journal
Publication Date: 2020
Pages: 5
Abstractor: As Provided
ISBN: N/A
ISSN: ISSN-1755-6031
EISSN: N/A
Available Date: N/A
Comparing Small-Sample Equating with Angoff Judgement for Linking Cut-Scores on Two Tests
Bramley, Tom
Research Matters, n29 p23-27 Spr 2020
The aim of this study was to compare, by simulation, the accuracy of mapping a cut-score from one test to another by expert judgement (using the Angoff method) versus the accuracy of a small-sample equating method (chained linear equating). As expected, the standard-setting method resulted in more accurate equating when we assumed a higher correlation between simulated expert judgements of item difficulty and empirical difficulty. For small-sample equating with 90 examinees per test, simple random sampling produced more accurate equating than cluster sampling at the same sample size. The overall equating error depended on where on the mark scale the cut-score was located. Simulations based on a realistic value for the correlation between judged and empirical difficulty (0.6) produced a similar overall error to small-sample equating with cluster sampling. Simulations of standard-setting based on a very optimistic correlation of 0.9 had the lowest error of all.
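The small-sample method named in the abstract, chained linear equating, links two test forms through an anchor: scores on form X are linearly mapped onto the anchor scale in one group, then the anchor is linearly mapped onto form Y in a second group, and the two linear functions are composed. A minimal sketch of the general method (not the paper's own code; all function and variable names here are hypothetical, and the data below are illustrative):

```python
import statistics

def linear_link(scores_from, scores_to):
    """Return the linear equating function from one score scale to another,
    matching means and standard deviations of the two score distributions."""
    m_f, s_f = statistics.mean(scores_from), statistics.pstdev(scores_from)
    m_t, s_t = statistics.mean(scores_to), statistics.pstdev(scores_to)
    return lambda x: m_t + (s_t / s_f) * (x - m_f)

def chained_linear_cut(x_group1, anchor_group1, anchor_group2, y_group2, cut_x):
    """Map a cut-score on form X to form Y via the anchor test:
    group 1 took X and the anchor; group 2 took the anchor and Y."""
    x_to_anchor = linear_link(x_group1, anchor_group1)
    anchor_to_y = linear_link(anchor_group2, y_group2)
    return anchor_to_y(x_to_anchor(cut_x))

# Illustrative check: if each form is already on the same scale as the anchor
# within its group, the cut-score should pass through unchanged.
mapped = chained_linear_cut([10, 12, 14, 16], [10, 12, 14, 16],
                            [20, 22, 24, 26], [20, 22, 24, 26], 13.0)
```

With small or clustered samples, the estimated means and standard deviations feeding each linear link are noisy, which is the source of the equating error the simulations quantify.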
University of Cambridge Local Examinations Syndicate (Cambridge Assessment). The Triangle Building, Shaftesbury Road, Cambridge, United Kingdom CB2 8EA. Tel: +44-1223-553311; e-mail: info@cambridgeassessment.org.uk; Web site: https://www.cambridgeassessment.org.uk/our-research/all-published-resources/research-matters/
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Grant or Contract Numbers: N/A
Author Affiliations: N/A