Davison, Mark L.; Weiss, David J.; Ersan, Ozge; DeWeese, Joseph N.; Biancarosa, Gina; Kennedy, Patrick C. – Grantee Submission, 2021
MOCCA is an online assessment of inferential reading comprehension for students in grades 3 through 6. It can be used to identify good readers and, among struggling readers, to identify those who over-rely on either a Paraphrasing process or an Elaborating process when their comprehension is incorrect. Here a propensity to over-rely on…
Descriptors: Reading Tests, Computer Assisted Testing, Reading Comprehension, Elementary School Students
Bramley, Tom – Cambridge Assessment, 2018
The aim of the research reported here was to estimate the accuracy of grade boundaries (cut-scores) obtained by applying the 'similar items method' described in Bramley & Wilson (2016). In this method, experts identify items on the current version of a test that are sufficiently similar to items on previous versions for them to be…
Descriptors: Accuracy, Cutting Scores, Test Items, Item Analysis
Thacker, Arthur A.; Dickinson, Emily R.; Bynum, Bethany H.; Wen, Yao; Smith, Erin; Sinclair, Andrea L.; Deatz, Richard C.; Wise, Lauress L. – Partnership for Assessment of Readiness for College and Careers, 2015
The Partnership for Assessment of Readiness for College and Careers (PARCC) field tests in the spring of 2014 provided an opportunity to investigate the quality of the items, tasks, and associated stimuli. HumRRO conducted several research studies, summarized in this report. The quality of test items is integral to the "Theory of Action"…
Descriptors: Achievement Tests, Test Items, Common Core State Standards, Difficulty Level
Chen, Haiwen H.; von Davier, Matthias; Yamamoto, Kentaro; Kong, Nan – ETS Research Report Series, 2015
One major issue with large-scale assessments is that respondents may give no response to many items, resulting in less accurate estimates of both the assessed abilities and the item parameters. This report studies how item types affect item-level nonresponse rates and how different methods of treating item-level nonresponses have an…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
Livingston, Samuel A.; Kim, Sooyeon – ETS Research Report Series, 2010
A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design, with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed; each pair was built by assigning items from an existing test taken by 9,000 or more test takers. The…
Descriptors: Equated Scores, Accuracy, Sample Size, Sampling