Publication Date | Count
In 2025 | 2
Since 2024 | 4
Since 2021 (last 5 years) | 11
Since 2016 (last 10 years) | 50
Since 2006 (last 20 years) | 150
Descriptor | Count
Standard Setting (Scoring) | 502
Cutting Scores | 228
Standards | 165
Elementary Secondary Education | 107
Test Items | 92
Evaluation Methods | 90
Academic Standards | 79
Scoring | 75
Minimum Competency Testing | 70
Licensing Examinations (Professions) | 66
Educational Assessment | 64
Location | Count
Canada | 10
Australia | 8
Tennessee | 8
United Kingdom | 7
California | 4
Kansas | 4
Massachusetts | 4
New Jersey | 4
United States | 4
Illinois | 3
Michigan | 3
Oladele, Babatunde – Online Submission, 2017
The aim of the current study is to analyse the 2014 Post UTME scores of candidates at the University of Ibadan towards establishing cut-off scores using two standard-setting methods. Prospective candidates who seek admission to higher institutions are often denied admission through the Post UTME exercise. There is no single recommended…
Descriptors: Foreign Countries, Standard Setting (Scoring), Cutting Scores, College Entrance Examinations
Ozarkan, Hatun Betul; Dogan, Celal Deha – Eurasian Journal of Educational Research, 2020
Purpose: This study aimed to compare the cut scores obtained by the Extended Angoff and Contrasting Groups methods for an achievement test consisting of constructed-response items. Research Methods: This study was based on a survey research design. For data collection, the study group consisted of eight mathematics teachers for…
Descriptors: Standard Setting (Scoring), Responses, Test Items, Cutting Scores
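A minimal sketch, with invented numbers (panel size, item count, point scale, and score distributions are all illustrative, not data from the study), of how the two kinds of cut score compared above can be computed; the contrasting-groups cut is placed at the simple midpoint between the two group means, which is only one of several possible rules:

import numpy as np

rng = np.random.default_rng(0)

# Extended-Angoff-style: each of 8 panelists estimates the score a borderline
# examinee would earn on each of 5 constructed-response items (0-4 points).
ratings = rng.uniform(1.5, 3.0, size=(8, 5))
angoff_cut = ratings.mean(axis=0).sum()          # sum of mean item-level estimates

# Contrasting groups: examinees are classified as competent / not competent;
# one simple rule places the cut midway between the two groups' mean scores.
not_competent = rng.normal(8, 2, 60)
competent = rng.normal(14, 2, 60)
cg_cut = (not_competent.mean() + competent.mean()) / 2

print(round(angoff_cut, 2), round(cg_cut, 2))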
Boyer, Michelle; Dadey, Nathan; Keng, Leslie – National Center for the Improvement of Educational Assessment, 2020
This school year, every state education agency (SEA) is faced with unprecedented, COVID-19-related challenges for the implementation of 2021 statewide summative assessments. Two overarching challenges are how tests will be administered and how scores will be interpreted and used, with many intervening and related challenges. Test…
Descriptors: State Departments of Education, Summative Evaluation, Educational Planning, Educational Strategies
Sinclair, Andrea L., Ed.; Thacker, Arthur, Ed. – Human Resources Research Organization (HumRRO), 2019
California's Commission on Teacher Credentialing (Commission) requires all programs of preliminary multiple and single subject teacher preparation to use a Commission-approved Teaching Performance Assessment (TPA) as one of the program completion requirements for prospective teacher candidates. Three TPA models were approved by the Commission: (1)…
Descriptors: Preservice Teachers, Performance Based Assessment, Models, Credentials
Wyse, Adam E. – Practical Assessment, Research & Evaluation, 2018
One common modification to the Angoff standard-setting method is to have panelists round their ratings to the nearest 0.05 or 0.10 instead of 0.01. Several reasons have been offered as to why it may make sense to have panelists round their ratings to the nearest 0.05 or 0.10. In this article, we examine one reason that has been suggested, which is…
Descriptors: Interrater Reliability, Evaluation Criteria, Scoring Formulas, Achievement Rating
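A minimal sketch, with invented ratings, of the rounding modification described above: each panelist's item rating is rounded to a chosen grain (0.01, 0.05, or 0.10) before the ratings are averaged over panelists and summed over items into a cut score.

import numpy as np

rng = np.random.default_rng(1)
ratings = rng.uniform(0.2, 0.9, size=(10, 40))   # 10 panelists x 40 items

def cut_score(ratings, grain):
    rounded = np.round(ratings / grain) * grain  # round each rating to the nearest grain
    return rounded.mean(axis=0).sum()            # expected raw score of a borderline examinee

for grain in (0.01, 0.05, 0.10):
    print(grain, round(cut_score(ratings, grain), 2))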
Hsiao-Hui Lin; Tzeng, Yuh-Tsuen; Chen, Hsueh-Chih; Huang, Yao-Hsuan – Reading & Writing: Journal of the Reading Association of South Africa, 2020
Background: Science is seldom brought into focus in the development of assessments of students' multiple-text reading comprehension. Objectives: This study tested the sequential mediation model of scientific multi-text reading comprehension (SMTRC) by means of structural equation modelling (SEM), and aimed to advance the…
Descriptors: Science Education, Reading Comprehension, Reading Tests, Construct Validity
Papageorgiou, Spiros; Tannenbaum, Richard J. – Language Assessment Quarterly, 2016
Although there has been substantial work on argument-based approaches to validation as well as standard-setting methodologies, it might not always be clear how standard setting fits into argument-based validity. The purpose of this article is to address this gap in the literature, with a specific focus on topics related to argument-based…
Descriptors: Standard Setting (Scoring), Language Tests, Test Validity, Test Construction
Winter, Phoebe C.; Hansen, Mark; McCoy, Michelle – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2019
In order to accurately assess the English language proficiency of special populations of English learners, student assessment programs must maintain the comparability of standard and modified assessment formats, allowing for equivalent inferences to be made across student classifications. However, given the typically small size of special…
Descriptors: English Language Learners, Language Proficiency, Student Evaluation, Evaluation Methods
Tannenbaum, Richard J.; Kannan, Priya – Educational Assessment, 2015
Angoff-based standard setting is widely used, especially for high-stakes licensure assessments. Nonetheless, some critics have claimed that the judgment task is too cognitively complex for panelists, whereas others have explicitly challenged the consistency in (replicability of) standard-setting outcomes. Evidence of consistency in item judgments…
Descriptors: Standard Setting (Scoring), Reliability, Scores, Licensing Examinations (Professions)
Wyse, Adam E. – Applied Measurement in Education, 2018
This article discusses regression effects that are commonly observed in Angoff ratings where panelists tend to think that hard items are easier than they are and easy items are more difficult than they are in comparison to estimated item difficulties. Analyses of data from two credentialing exams illustrate these regression effects and the…
Descriptors: Regression (Statistics), Test Items, Difficulty Level, Licensing Examinations (Professions)
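A minimal sketch of the regression effect described above, using simulated values rather than the credentialing-exam data: panelist ratings are compressed toward the middle relative to estimated item difficulties, so a regression of ratings on difficulties has a slope below 1.

import numpy as np

rng = np.random.default_rng(2)
p_values = rng.uniform(0.1, 0.95, 50)                             # estimated item difficulties
ratings = 0.5 + 0.6 * (p_values - 0.5) + rng.normal(0, 0.05, 50)  # compressed panelist judgments

slope, intercept = np.polyfit(p_values, ratings, 1)
print(round(slope, 2), round(intercept, 2))   # slope < 1 signals regression toward the mean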
Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim – Assessment & Evaluation in Higher Education, 2015
A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…
Descriptors: Probability, Methods, Standard Setting (Scoring), Scores
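The OBM model itself is not reproduced here; the sketch below only illustrates, with invented numbers and a deliberately simple midpoint rule, the general simulation logic of testing a standard-setting method against data generated from a known model and checking how well it recovers the intended borderline point.

import numpy as np

rng = np.random.default_rng(5)
true_cut = 0.0                                   # borderline ability in the generating model

recovered = []
for _ in range(2500):                            # 2500 simulated data sets, as in the abstract
    ability = rng.normal(0, 1, 200)
    scores = 50 + 10 * ability + rng.normal(0, 5, 200)
    passed = ability >= true_cut                 # classification from the generating model
    # candidate rule: cut halfway between the two groups' mean scores
    recovered.append((scores[passed].mean() + scores[~passed].mean()) / 2)

print(round(np.mean(recovered), 2))              # compare with the score at true_cut (50 here)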
Benton, Tom; Elliott, Gill – Research Papers in Education, 2016
In recent years the use of expert judgement to set and maintain examination standards has been increasingly criticised in favour of approaches based on statistical modelling. This paper reviews existing research on this controversy and attempts to unify the evidence within a framework where expertise is utilised in the form of comparative…
Descriptors: Reliability, Expertise, Mathematical Models, Standard Setting (Scoring)
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
Harrison, George M. – Journal of Educational Measurement, 2015
The credibility of standard-setting cut scores depends in part on two sources of consistency evidence: intrajudge and interjudge consistency. Although intrajudge consistency feedback has often been provided to Angoff judges in practice, more evidence is needed to determine whether it achieves its intended effect. In this randomized experiment with…
Descriptors: Interrater Reliability, Standard Setting (Scoring), Cutting Scores, Feedback (Response)
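A minimal sketch of the two consistency indices named above, using simulated ratings: intrajudge consistency is taken here as each judge's correlation with empirical item difficulties, and interjudge consistency as the average correlation between pairs of judges. These are illustrative operationalisations, not necessarily the ones used in the study.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
difficulty = rng.uniform(0.2, 0.9, 30)
judges = difficulty + rng.normal(0, 0.1, size=(6, 30))      # 6 judges rating 30 items

intra = [np.corrcoef(j, difficulty)[0, 1] for j in judges]
inter = [np.corrcoef(judges[a], judges[b])[0, 1] for a, b in combinations(range(6), 2)]
print(round(np.mean(intra), 2), round(np.mean(inter), 2))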
Clauser, Jerome C.; Margolis, Melissa J.; Clauser, Brian E. – Journal of Educational Measurement, 2014
Evidence of stable standard setting results over panels or occasions is an important part of the validity argument for an established cut score. Unfortunately, due to the high cost of convening multiple panels of content experts, standards often are based on the recommendation from a single panel of judges. This approach implicitly assumes that…
Descriptors: Standard Setting (Scoring), Generalizability Theory, Replication (Evaluation), Cutting Scores
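A minimal sketch, with simulated panels, of why a single-panel cut score carries replication uncertainty: when several independent panels each recommend a cut score, the spread of panel means (summarised here as a simple standard error, not a full generalizability analysis) indicates how much a one-panel cut might vary on replication.

import numpy as np

rng = np.random.default_rng(4)
# 4 independent panels x 10 judges, each judge recommending a raw cut score
panel_cuts = rng.normal(loc=62, scale=3, size=(4, 10))
panel_means = panel_cuts.mean(axis=1)

print(np.round(panel_means, 1))
print(round(panel_means.std(ddof=1) / np.sqrt(len(panel_means)), 2))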