Publication Date | Count
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 2
Since 2006 (last 20 years) | 3
Descriptor | Count
Scoring | 8
Test Validity | 8
Achievement Tests | 3
Computer Assisted Testing | 3
Test Items | 3
Educational Assessment | 2
Item Analysis | 2
Multiple Choice Tests | 2
Scores | 2
Test Construction | 2
Test Reliability | 2
Source | Count
Educational Measurement:… | 8
Author | Count
Bejar, Isaac I. | 1
Boyer, Michelle | 1
Burkhardt, Amy | 1
Frisbie, David A. | 1
Guion, Robert M. | 1
Lottridge, Sue | 1
Nelson, Larry R. | 1
Quellmalz, Edys S. | 1
Wise, Steven L. | 1
Yen, Wendy M. | 1
Publication Type | Count
Journal Articles | 8
Opinion Papers | 3
Reports - Descriptive | 3
Information Analyses | 2
Reports - Evaluative | 2
Audience | Count
Researchers | 1
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
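The module describes automated scoring as computer algorithms that score open-ended responses by mimicking human raters. As a hedged, toy illustration of that idea (not the module's actual method), the sketch below assigns a new response the score of its most word-similar human-scored response; the rubric, responses, and scores are all hypothetical, and real systems use far richer NLP features.

```python
# Toy automated scorer: mimic human raters by giving each new response
# the score of its most similar human-scored response (Jaccard overlap).

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two responses, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def auto_score(response: str, rated: list[tuple[str, int]]) -> int:
    """Return the human score of the most similar rated response."""
    _, best_score = max(rated, key=lambda r: jaccard(response, r[0]))
    return best_score

# Hypothetical human-scored training responses (0-2 point rubric)
rated = [
    ("plants make food from sunlight using photosynthesis", 2),
    ("plants use sunlight", 1),
    ("plants are green", 0),
]
print(auto_score("photosynthesis lets plants make food from sunlight", rated))  # 2
```

A nearest-neighbor rule keeps the sketch self-contained; production scorers instead train statistical or neural models on large pools of human ratings.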
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
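The abstract above concerns very short response times as a sign of rapid guessing. A minimal sketch of that kind of analysis, assuming a fixed rapid-guess threshold (3 seconds here is an illustrative rule of thumb, not a prescription from the article) and hypothetical timing data:

```python
# Flag responses faster than a threshold as rapid guesses and compute
# response-time effort (RTE): the share of responses given full effort.

THRESHOLD_SECONDS = 3.0

def response_time_effort(times: list[float],
                         threshold: float = THRESHOLD_SECONDS) -> float:
    """Proportion of item responses at or above the rapid-guess threshold."""
    solution_behavior = [t >= threshold for t in times]
    return sum(solution_behavior) / len(solution_behavior)

# Hypothetical per-item response times (seconds) for one test taker
times = [12.4, 1.1, 8.0, 0.9, 15.2, 6.3, 2.7, 20.1]
print(round(response_time_effort(times), 3))  # 5 of 8 responses exceed 3 s -> 0.625
```

In practice thresholds are often set per item (e.g., from the response-time distribution) rather than as one global constant.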
Bejar, Isaac I. – Educational Measurement: Issues and Practice, 2012
The scoring process is critical in the validation of tests that rely on constructed responses. Documenting that readers carry out the scoring in ways consistent with the construct and measurement goals is an important aspect of score validity. In this article, rater cognition is approached as a source of support for a validity argument for scores…
Descriptors: Scores, Inferences, Validity, Scoring

Quellmalz, Edys S. – Educational Measurement: Issues and Practice, 1984
A summary of the writing assessment programs reviewed in this journal is presented. The problems inherent in the programs are outlined. A coordinated research program on major problems in writing assessment is proposed as being beneficial and cost-effective. (DWH)
Descriptors: Essay Tests, Program Evaluation, Scoring, State Programs

Guion, Robert M. – Educational Measurement: Issues and Practice, 1995
This commentary discusses three essential themes in performance assessment and its scoring. First, scores should mean something. Second, performance scores should permit fair and meaningful comparisons. Third, validity-reducing errors should be minimal. Increased attention to these issues in performance assessment may overcome such problems. (SLD)
Descriptors: Educational Assessment, Performance Based Assessment, Scores, Scoring

Nelson, Larry R. – Educational Measurement: Issues and Practice, 1984
The author argues that scoring, reporting, and deriving final grades can be considerably assisted by using a computer. He also contends that the time saved and the resulting computer database will allow instructors to gauge test quality and reflect on the quality of instruction. (BW)
Descriptors: Achievement Tests, Affective Objectives, Computer Assisted Testing, Educational Testing
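As an illustrative sketch of the kind of test-quality check a computer database makes easy (not Nelson's actual program), the snippet below computes two classical item statistics from a 0/1 score matrix: item difficulty (proportion correct) and discrimination (correlation of item score with total score). The response matrix is hypothetical.

```python
# Classical item analysis from an examinee x item matrix of 0/1 scores.
from statistics import mean, pstdev

def item_stats(matrix: list[list[int]]) -> list[tuple[float, float]]:
    """Per item: (difficulty, discrimination) for 0/1 response data."""
    totals = [sum(row) for row in matrix]
    stats = []
    for j in range(len(matrix[0])):
        item = [row[j] for row in matrix]
        p = mean(item)  # difficulty: proportion answering correctly
        sd_i, sd_t = pstdev(item), pstdev(totals)
        if sd_i == 0 or sd_t == 0:
            r = 0.0  # no variance -> discrimination undefined; report 0
        else:
            # point-biserial: correlation of item score with total score
            cov = mean(i * t for i, t in zip(item, totals)) - p * mean(totals)
            r = cov / (sd_i * sd_t)
        stats.append((round(p, 2), round(r, 2)))
    return stats

# Hypothetical responses: 4 examinees x 3 items
matrix = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
]
print(item_stats(matrix))
```

Items with low or negative discrimination are the ones an instructor would revisit, which is the reflection on test quality the abstract describes.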

Frisbie, David A. – Educational Measurement: Issues and Practice, 1992
Literature related to the multiple true-false (MTF) item format is reviewed. Each answer cluster of an MTF item may contain several true statements, and the correctness of each is judged independently. MTF tests appear efficient and reliable, although they are somewhat harder than multiple-choice items for examinees. (SLD)
Descriptors: Achievement Tests, Difficulty Level, Literature Reviews, Multiple Choice Tests
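The scoring rule described above, with each statement in the cluster judged independently, can be sketched in a few lines; the key and responses here are hypothetical.

```python
# Multiple true-false (MTF) scoring: each statement in an item's answer
# cluster is keyed true or false and scored on its own, so one MTF item
# can yield several score points.

def score_mtf(key: list[bool], responses: list[bool]) -> int:
    """Count the statements the examinee judged correctly."""
    return sum(k == r for k, r in zip(key, responses))

# One MTF item whose answer cluster holds four statements
key = [True, False, True, True]
responses = [True, True, True, False]  # examinee's true/false judgments
print(score_mtf(key, responses))  # statements 1 and 3 match the key -> 2
```

Independent per-statement scoring is what makes the format efficient: a single stem produces multiple scored observations.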

Yen, Wendy M.; And Others – Educational Measurement: Issues and Practice, 1987
This paper discusses how to maintain the integrity of national normative information for achievement tests when the test that is administered has been customized to satisfy local needs and is not a test that has been nationally normed. Alternative procedures for item selection and calibration are examined. (Author/LMO)
Descriptors: Achievement Tests, Elementary Secondary Education, Goodness of Fit, Item Analysis
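One standard way to place locally calibrated items on a national scale is mean-sigma linking through common (anchor) items; this is a generic illustration of the calibration problem the paper examines, not necessarily the procedures it evaluates, and all numbers are hypothetical.

```python
# Mean-sigma linking: find slope A and intercept B mapping item
# difficulties from a local calibration onto the national scale,
# using items that appear in both calibrations.
from statistics import mean, pstdev

def mean_sigma_link(local: list[float],
                    national: list[float]) -> tuple[float, float]:
    """Return (A, B) such that national_difficulty ~= A * local + B."""
    a = pstdev(national) / pstdev(local)
    b = mean(national) - a * mean(local)
    return a, b

# Difficulties of the common anchor items under each calibration
local = [-1.0, 0.0, 1.0, 2.0]
national = [-0.8, 0.2, 1.2, 2.2]
a, b = mean_sigma_link(local, national)

new_local_item = 0.5  # a locally calibrated item not in the national pool
print(round(a * new_local_item + b, 2))  # its difficulty on the national scale
```

Once A and B are estimated from the anchors, every customized-test item can be reported on the national metric, preserving the normative interpretation of scores.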