Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 7 |
Descriptor
Scoring | 7 |
Test Items | 7 |
Scores | 4 |
Tests | 4 |
Test Format | 3 |
Testing | 3 |
Computer Assisted Testing | 2 |
Evaluation Methods | 2 |
Item Response Theory | 2 |
Multiple Choice Tests | 2 |
Probability | 2 |
Source
Practical Assessment, Research & Evaluation | 7 |
Author
Buckendahl, Chad W. | 1 |
Davis-Becker, Susan L. | 1 |
Deschênes, Marie-France | 1 |
Dionne, Éric | 1 |
Dorion, Michelle | 1 |
Eckerly, Carol | 1 |
Foley, Brett P. | 1 |
Grondin, Julie | 1 |
Judd, Wallace | 1 |
Lynch, Sarah | 1 |
Rudner, Lawrence M. | 1 |
Publication Type
Journal Articles | 7 |
Reports - Research | 3 |
Reports - Descriptive | 2 |
Reports - Evaluative | 2 |
Location
Canada | 1 |
Deschênes, Marie-France; Dionne, Éric; Dorion, Michelle; Grondin, Julie – Practical Assessment, Research & Evaluation, 2023
The aggregate scoring method for concordance tests requires that item weights be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of the experts constituting the reference panel remains a critical issue in the use of these tests.…
Descriptors: Scoring, Tests, Evaluation Methods, Test Items
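The truncated abstract does not spell out the aggregate-scoring rule, but a common variant (used with script concordance tests) weights each response option by the number of panel experts who chose it, relative to the modal choice, and credits the examinee with the weight of the option they picked. A minimal Python sketch under that assumption; the function names and example panel are illustrative, not the authors' exact procedure:

```python
from collections import Counter

def item_weights(expert_choices):
    """Weight each response option by expert endorsement.

    expert_choices: options picked by the reference panel for one
    item, e.g. ["A", "A", "B", "A", "C"]. Returns {option: weight},
    where the modal option scores 1.0 and every other option scores
    its share of the modal count.
    """
    counts = Counter(expert_choices)
    modal = max(counts.values())
    return {opt: n / modal for opt, n in counts.items()}

def aggregate_score(examinee_answers, panel):
    """Sum the expert-derived weight of each answer given.

    examinee_answers: one chosen option per item.
    panel: one expert-choice list per item.
    Options no expert chose receive a weight of 0.
    """
    return sum(
        item_weights(experts).get(answer, 0.0)
        for answer, experts in zip(examinee_answers, panel)
    )

# Example: two items, five experts each.
panel = [["A", "A", "B", "A", "C"], ["B", "B", "B", "A", "B"]]
print(aggregate_score(["A", "A"], panel))  # 1.0 + 0.25 = 1.25
```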
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
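The truncated abstract points toward score-comparability analyses across administration modes. One elementary check (not necessarily Lynch's method) compares score distributions from randomly assigned computer-based and paper-based groups; a sketch with simulated data, Welch's t-test, and Cohen's d, all values illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical scores from randomly assigned mode groups.
rng = np.random.default_rng(0)
cbt_scores = rng.normal(loc=70, scale=10, size=200)  # computer-based
pbt_scores = rng.normal(loc=72, scale=10, size=200)  # paper-based

# Welch's t-test for a mean difference between modes.
t_stat, p_value = stats.ttest_ind(cbt_scores, pbt_scores, equal_var=False)

# Cohen's d as a mode-effect size (pooled standard deviation).
pooled_sd = np.sqrt((cbt_scores.var(ddof=1) + pbt_scores.var(ddof=1)) / 2)
cohens_d = (cbt_scores.mean() - pbt_scores.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```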
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, because the format changes the amount and order of the content presented, the test-taking experience varies by test taker, thereby introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
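A rough simulation under one common DOMC administration rule (assumed here: options appear one at a time in random order, and the item ends as soon as the examinee endorses an option or rejects the keyed option) shows why the amount and order of content seen differs across test takers. The stopping rule and names are assumptions, not Eckerly et al.'s exact design:

```python
import random

def administer_domc(options, key, accepts, rng=random):
    """Simulate one Discrete Option Multiple Choice (DOMC) item.

    options: the item's answer options; key: the correct one.
    accepts: set of options this examinee would endorse as correct.
    Returns (correct, options_seen).
    """
    order = rng.sample(options, len(options))
    seen = []
    for opt in order:
        seen.append(opt)
        if opt in accepts:           # examinee endorses this option
            return opt == key, seen  # endorsing a distractor fails
        if opt == key:               # keyed option shown but rejected
            return False, seen
    return False, seen               # defensive; the key is always reached

# Two simulated examinees can see different amounts of content.
rng = random.Random(42)
print(administer_domc(["A", "B", "C", "D"], "B", {"B"}, rng=rng))
print(administer_domc(["A", "B", "C", "D"], "B", {"A", "B"}, rng=rng))
```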
Foley, Brett P. – Practical Assessment, Research & Evaluation, 2016
There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…
Descriptors: Guessing (Tests), Multiple Choice Tests, Case Studies, Test Construction
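Foley's case-study figures are not reproduced in the truncated abstract, but the underlying arithmetic is a binomial tail probability: with n items, k options each, and a cut score c, the chance of passing by pure guessing is P(X >= c) for X ~ Binomial(n, 1/k). A short sketch with illustrative item counts and cut scores:

```python
from math import comb

def p_pass_by_guessing(n_items, n_options, cut_score):
    """Probability of reaching the cut score by guessing at random.

    Each item is an MC item with n_options choices, so each guess
    succeeds with probability 1/n_options. Returns
    P(correct >= cut_score) under a binomial model.
    """
    p = 1 / n_options
    return sum(
        comb(n_items, k) * p**k * (1 - p) ** (n_items - k)
        for k in range(cut_score, n_items + 1)
    )

# Illustrative numbers only: a short exam with the same percentage
# cut is far more guessable than a long one.
print(f"{p_pass_by_guessing(20, 4, 10):.4f}")   # 20 items, 50% cut
print(f"{p_pass_by_guessing(100, 4, 50):.8f}")  # 100 items, 50% cut
```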
Buckendahl, Chad W.; Davis-Becker, Susan L. – Practical Assessment, Research & Evaluation, 2012
The consequences associated with the uses and interpretations of scores for many credentialing testing programs have important implications for a range of stakeholders. Within licensure settings specifically, results from examination programs are often one of the final steps in the process of assessing whether individuals will be allowed to enter…
Descriptors: Licensing Examinations (Professions), Test Items, Dentistry, Minimum Competency Testing
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2009
This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
Descriptors: Classification, Scoring, Item Response Theory, Measurement
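The classification rule the abstract describes is a maximum a posteriori decision over mastery states: multiply each state's prior by the conditional probability of the observed response pattern, normalize, and pick the largest posterior. A minimal sketch of that rule; the item parameters and state names are hypothetical:

```python
def mdt_classify(responses, p_correct, priors):
    """Classify an examinee into a mastery state via measurement
    decision theory.

    responses: list of 0/1 item scores.
    p_correct[state][i]: probability that an examinee in `state`
    answers item i correctly (the model's only item parameters).
    priors: {state: prior probability}.
    Returns (best_state, posterior dict).
    """
    posteriors = {}
    for state, prior in priors.items():
        like = prior
        for u, p in zip(responses, p_correct[state]):
            like *= p if u == 1 else (1 - p)
        posteriors[state] = like
    total = sum(posteriors.values())
    posteriors = {s: v / total for s, v in posteriors.items()}
    best = max(posteriors, key=posteriors.get)
    return best, posteriors

# Hypothetical 4-item test with two mastery states.
p_correct = {"master": [0.9, 0.8, 0.85, 0.9],
             "nonmaster": [0.4, 0.3, 0.5, 0.35]}
priors = {"master": 0.5, "nonmaster": 0.5}
print(mdt_classify([1, 1, 0, 1], p_correct, priors))
```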
Judd, Wallace – Practical Assessment, Research & Evaluation, 2009
Over the past twenty years in performance testing a specific item type with distinguishing characteristics has arisen time and time again. It's been invented independently by dozens of test development teams. And yet this item type is not recognized in the research literature. This article is an invitation to investigate the item type, evaluate…
Descriptors: Test Items, Test Format, Evaluation, Item Analysis