Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 5 |
Since 2006 (last 20 years) | 10 |
Descriptor
Test Items | 14 |
Scoring | 9 |
Standard Setting (Scoring) | 5 |
Cutting Scores | 4 |
Achievement Tests | 3 |
Difficulty Level | 3 |
Error of Measurement | 3 |
Item Analysis | 3 |
Test Construction | 3 |
Test Validity | 3 |
Validity | 3 |
Source
Educational Measurement: Issues and Practice | 14 |
Author
Anderson, Dan | 1 |
Babcock, Ben | 1 |
Blackmore, John | 1 |
Brew, Chris | 1 |
Clauser, Brian E. | 1 |
Clauser, Jerome C. | 1 |
Dorans, Neil J. | 1 |
Frisbie, David A. | 1 |
Gerard, Libby | 1 |
Haladyna, Thomas M. | 1 |
Hein, Serge F. | 1 |
Publication Type
Journal Articles | 14 |
Reports - Research | 6 |
Reports - Evaluative | 5 |
Reports - Descriptive | 3 |
Information Analyses | 1 |
Education Level
Higher Education | 1 |
Postsecondary Education | 1 |
Secondary Education | 1 |
Location
Germany | 1 |
Assessments and Surveys
ACT Assessment | 1 |
Graduate Record Examinations | 1 |
Preliminary Scholastic… | 1 |
SAT (College Admission Test) | 1 |
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage tests (MSTs) is difficult, particularly when the test spans several grade levels, and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
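For orientation, here is a minimal sketch of the general idea behind Angoff-based cut scores and their mapping onto an IRT scale; it is not the specific procedure evaluated in the article (which the abstract does not detail), and it assumes a 2PL model with illustrative function names and parameter values.

```python
import numpy as np

def angoff_cut_score(ratings):
    """Raw Angoff cut score: panelist ratings (probability a borderline
    examinee answers each item correctly) averaged per item, then summed."""
    ratings = np.asarray(ratings, dtype=float)
    return ratings.mean(axis=0).sum()

def tcc(theta, a, b):
    """Test characteristic curve for a 2PL model: expected raw score at theta."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return p.sum()

def map_to_theta(raw_cut, a, b, lo=-4.0, hi=4.0, tol=1e-6):
    """Locate the theta whose expected raw score equals the raw Angoff cut
    score, using bisection on the monotone test characteristic curve."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if tcc(mid, a, b) < raw_cut:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Example: three panelists rate four items; map the raw cut onto the theta scale.
ratings = [[0.6, 0.7, 0.5, 0.8],
           [0.5, 0.6, 0.4, 0.7],
           [0.7, 0.8, 0.6, 0.9]]
a = [1.0, 1.2, 0.8, 1.1]   # item discriminations (illustrative)
b = [-0.5, 0.0, 0.5, 1.0]  # item difficulties (illustrative)
raw_cut = angoff_cut_score(ratings)
print(raw_cut, map_to_theta(raw_cut, a, b))
```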
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M. – Educational Measurement: Issues and Practice, 2020
In test-centered standard-setting methods, borderline performance can be represented by many different profiles of strengths and weaknesses. As a result, asking panelists to estimate item or test performance for a hypothetical group of borderline examinees, or a typical borderline examinee, may be an extremely difficult task and one that can…
Descriptors: Standard Setting (Scoring), Cutting Scores, Testing Problems, Profiles
Wyse, Adam E.; Babcock, Ben – Educational Measurement: Issues and Practice, 2020
A common belief is that the Bookmark method is a cognitively simpler standard-setting method than the modified Angoff method. However, a limited amount of research has investigated panelists' ability to perform the Bookmark method well, and whether some of the challenges panelists face with the Angoff method may also be present in the Bookmark…
Descriptors: Standard Setting (Scoring), Evaluation Methods, Testing Problems, Test Items
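As a point of reference, a minimal sketch of the Bookmark procedure's core computation, under common assumptions (a 2PL IRT model and a response-probability criterion of RP67): items are ordered by the theta at which a borderline examinee would have a 0.67 chance of success, a panelist places a bookmark in the ordered item booklet, and the cut score is the RP location of the bookmarked item. Names and parameter values are illustrative, not taken from the studies above.

```python
import math

def rp_location(a, b, rp=0.67):
    """Theta at which a 2PL item is answered correctly with probability rp."""
    return b + math.log(rp / (1.0 - rp)) / a

def bookmark_cut_score(items, bookmark_page, rp=0.67):
    """Cut score from a Bookmark placement.

    items: list of (a, b) item parameters.
    bookmark_page: 1-based position of the bookmark in the ordered item
    booklet; the cut score is the RP location of the last item a borderline
    examinee is expected to master.
    """
    locations = sorted(rp_location(a, b, rp) for a, b in items)
    return locations[bookmark_page - 1]

# Example: five items, bookmark placed on page 3 of the ordered booklet.
items = [(1.0, -1.0), (0.8, 0.0), (1.2, 0.3), (1.0, 1.0), (0.9, 1.8)]
print(bookmark_cut_score(items, bookmark_page=3))
```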
Liu, Ou Lydia; Brew, Chris; Blackmore, John; Gerard, Libby; Madhok, Jacquie; Linn, Marcia C. – Educational Measurement: Issues and Practice, 2014
Content-based automated scoring has been applied in a variety of science domains. However, many prior applications involved simplified scoring rubrics without considering rubrics representing multiple levels of understanding. This study tested a concept-based scoring tool for content-based scoring, c-rater™, for four science items with rubrics…
Descriptors: Science Tests, Test Items, Scoring, Automation
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
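A minimal sketch of the simplest response-time approach this line of research builds on: flag a response as a likely rapid guess when its time falls below an item-specific threshold, and summarize an examinee's engagement as the proportion of items answered with solution behavior. The thresholds, data, and function names are illustrative assumptions; the article's own methods may differ.

```python
def flag_rapid_guesses(response_times, thresholds):
    """Mark responses faster than the item's time threshold as rapid guesses.

    response_times: seconds spent on each item by one examinee.
    thresholds: item-specific thresholds (seconds) below which a response is
    treated as rapid guessing rather than solution behavior.
    """
    return [rt < th for rt, th in zip(response_times, thresholds)]

def response_time_effort(flags):
    """Proportion of items answered with solution behavior (not flagged)."""
    return 1.0 - sum(flags) / len(flags)

# Example: one examinee's times on five items, with a 5-second threshold per item.
times = [23.4, 3.1, 41.0, 2.2, 17.8]
flags = flag_rapid_guesses(times, thresholds=[5.0] * 5)
print(flags, response_time_effort(flags))
```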
Margolis, Melissa J.; Mee, Janet; Clauser, Brian E.; Winward, Marcia; Clauser, Jerome C. – Educational Measurement: Issues and Practice, 2016
Evidence to support the credibility of standard setting procedures is a critical part of the validity argument for decisions made based on tests that are used for classification. One area in which there has been limited empirical study is the impact of standard setting judge selection on the resulting cut score. One important issue related to…
Descriptors: Academic Standards, Standard Setting (Scoring), Cutting Scores, Credibility
Tiffin-Richards, Simon P.; Pant, Hans Anand; Koller, Olaf – Educational Measurement: Issues and Practice, 2013
Cut-scores were set by expert judges on assessments of reading and listening comprehension of English as a foreign language (EFL), using the bookmark standard-setting method to differentiate proficiency levels defined by the Common European Framework of Reference (CEFR). Assessments contained stratified item samples drawn from extensive item…
Descriptors: Foreign Countries, English (Second Language), Language Tests, Standard Setting (Scoring)
Dorans, Neil J. – Educational Measurement: Issues and Practice, 2012
Views on testing--its purpose and uses and how its data are analyzed--are related to one's perspective on test takers. Test takers can be viewed as learners, examinees, or contestants. I briefly discuss the perspective of test takers as learners. I maintain that much of psychometrics views test takers as examinees. I discuss test takers as a…
Descriptors: Testing, Test Theory, Item Response Theory, Test Reliability
Raymond, Mark R.; Neustel, Sandra; Anderson, Dan – Educational Measurement: Issues and Practice, 2009
Examinees who take high-stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on their initial attempt. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign a different test form to repeat examinees. The use of multiple…
Descriptors: Test Results, Test Items, Testing, Aptitude Tests
Sykes, Robert C.; Ito, Kyoko; Wang, Zhen – Educational Measurement: Issues and Practice, 2008
Student responses to a large number of constructed response items in three Math and three Reading tests were scored on two occasions using three ways of assigning raters: single reader scoring, a different reader for each response (item-specific), and three readers each scoring a rater item block (RIB) containing approximately one-third of a…
Descriptors: Test Items, Mathematics Tests, Reading Tests, Scoring
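Purely for illustration, a minimal sketch of how the third design (rater item blocks) could be set up: the constructed-response items are partitioned into three roughly equal blocks and each rater scores all responses to one block. Item labels, rater names, and helper functions are hypothetical; the study's operational assignment rules are not reproduced here.

```python
def make_rater_item_blocks(item_ids, n_blocks=3):
    """Partition items into n_blocks roughly equal-sized blocks (round-robin)."""
    blocks = [[] for _ in range(n_blocks)]
    for i, item in enumerate(item_ids):
        blocks[i % n_blocks].append(item)
    return blocks

def assign_raters(blocks, rater_ids):
    """Map each rater to the block of items that rater will score."""
    return {rater: block for rater, block in zip(rater_ids, blocks)}

# Example: nine constructed-response items split into three rater item blocks.
items = [f"CR{i:02d}" for i in range(1, 10)]
blocks = make_rater_item_blocks(items, n_blocks=3)
print(assign_raters(blocks, rater_ids=["Rater A", "Rater B", "Rater C"]))
```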

Solano-Flores, Guillermo; Shavelson, Richard J. – Educational Measurement: Issues and Practice, 1997
Conceptual, practical, and logistical issues in the development of science performance assessments (SPAs) are discussed. The conceptual framework identifies task, response format, and scoring system as components, and conceives of SPAs as tasks that attempt to recreate conditions in which scientists work. Developing SPAs is a sophisticated effort…
Descriptors: Elementary Secondary Education, Performance Based Assessment, Science Education, Science Tests

Haladyna, Thomas M. – Educational Measurement: Issues and Practice, 1992
Context-dependent item sets, containing a subset of test items related to a passage or stimulus, are discussed. A brief review of methods for developing item sets reveals their potential for measuring high-level thinking. Theories and technologies for scoring item sets remain largely experimental. Research needs are discussed. (SLD)
Descriptors: Cognitive Tests, Educational Technology, Licensing Examinations (Professions), Problem Solving

Frisbie, David A. – Educational Measurement: Issues and Practice, 1992
Literature related to the multiple true-false (MTF) item format is reviewed. Each answer cluster of an MTF item may contain several true statements, and the correctness of each statement is judged independently. MTF tests appear efficient and reliable, although they are somewhat harder than multiple-choice items for examinees. (SLD)
Descriptors: Achievement Tests, Difficulty Level, Literature Reviews, Multiple Choice Tests
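A minimal sketch of how an MTF cluster is typically scored, consistent with the abstract's description that each statement is judged independently; the statement keys and responses below are invented for illustration.

```python
def score_mtf_cluster(responses, key):
    """Score one MTF cluster: each true/false statement is scored independently,
    so the cluster contributes as many score points as it has statements."""
    return sum(int(resp == correct) for resp, correct in zip(responses, key))

# Example: a four-statement cluster; the examinee judges each statement true/false.
key       = [True, False, True, True]
responses = [True, True,  True, False]
print(score_mtf_cluster(responses, key))  # -> 2 of 4 statements correct
```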

Yen, Wendy M.; And Others – Educational Measurement: Issues and Practice, 1987
This paper discusses how to maintain the integrity of national normative information for achievement tests when the test that is administered has been customized to satisfy local needs and is not a test that has been nationally normed. Alternative procedures for item selection and calibration are examined. (Author/LMO)
Descriptors: Achievement Tests, Elementary Secondary Education, Goodness of Fit, Item Analysis