Showing 1 to 15 of 25 results
Hendrickson, Amy; Patterson, Brian; Ewing, Maureen – College Board, 2010
The psychometric considerations and challenges associated with including constructed-response items on tests are discussed, along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…
Descriptors: Multiple Choice Tests, Test Format, Test Construction, Test Validity
Peer reviewed
Raffeld, Paul – Journal of Educational Measurement, 1975
Results support the contention that a Guttman-weighted objective test can have psychometric properties superior to those of its unweighted counterpart, provided that omissions either do not occur or are assigned a value equal to the mean of the k item-alternative weights; a scoring sketch follows below. (Author/BJG)
Descriptors: Multiple Choice Tests, Predictive Validity, Test Reliability, Test Validity
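The omission condition above is easy to make concrete. Below is a minimal Python sketch of option-weighted scoring in which an omitted item is assigned the mean of its k option weights, the condition under which Raffeld found weighted scores superior; the weights and responses are invented for illustration and are not from the study.

```python
def weighted_score(responses, option_weights):
    """Sum the weight of each chosen option across items. An omission
    (None) is assigned the mean of that item's k option weights."""
    total = 0.0
    for resp, weights in zip(responses, option_weights):
        if resp is None:                        # omitted item
            total += sum(weights) / len(weights)
        else:                                   # chosen option's weight
            total += weights[resp]
    return total

# Three items, four options each (weights are hypothetical).
weights = [[1.0, 0.3, 0.0, 0.1],
           [0.2, 1.0, 0.4, 0.0],
           [0.0, 0.1, 0.2, 1.0]]
print(weighted_score([0, None, 3], weights))    # 1.0 + 0.4 + 1.0 = 2.4
```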
Peer reviewed
Hendrickson, Gerry F. – Journal of Educational Measurement, 1971
Descriptors: Correlation, Guessing (Tests), Multiple Choice Tests, Sex Differences
Peer reviewed
Krauft, Conrad C.; Beggs, Donald L. – Journal of Experimental Education, 1973
The purpose of the study was to determine whether a subject-weighted (SW) multiple-choice test-taking procedure would yield higher and more reliable scores than the conventional (C) multiple-choice procedure, both overall and at different levels of risk taking. (Author)
Descriptors: Attitudes, Educational Research, Multiple Choice Tests, Questionnaires
Peer reviewed
Collet, Leverne S. – Journal of Educational Measurement, 1971
The purpose of this paper was to provide an empirical test of the hypothesis that elimination scores are more reliable and valid than classical corrected-for-guessing scores or weighted-choice scores. The evidence presented supports the hypothesized superiority of elimination scoring; a sketch of a typical elimination rule follows below. (Author)
Descriptors: Evaluation, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
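The abstract does not spell out the elimination rule, so the following is a minimal sketch assuming the common Coombs-style variant: one point for each distractor the examinee eliminates, and a penalty of -(k - 1) if the keyed answer is eliminated. For contrast, the classical corrected-for-guessing score it was tested against is usually S = R - W/(k - 1).

```python
def elimination_score(eliminated, key, k=4):
    """Score one k-option item under a Coombs-style elimination rule
    (assumed here; Collet's exact variant is not given in the abstract):
    +1 per eliminated distractor, -(k - 1) if the key is eliminated."""
    if key in eliminated:
        return -(k - 1)
    return len(eliminated)

# Item keyed A: eliminating two distractors earns 2 points;
# eliminating the keyed answer costs 3.
print(elimination_score({"B", "C"}, key="A"))   # 2
print(elimination_score({"A", "B"}, key="A"))   # -3
```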
Peer reviewed
Cross, Lawrence H.; Frary, Robert B. – Educational and Psychological Measurement, 1978
The reliability and validity of multiple-choice test scores resulting from empirical choice-weighting of alternatives were examined under two conditions: (1) examinees were told not to guess unless choices could be eliminated; and (2) examinees were told the total score would be the total number correct. Results favored the choice-weighting…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Response Style (Tests)
Peer reviewed
Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure; a brief scoring sketch follows below. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
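As a rough illustration of the AUC procedure, here is a minimal sketch assuming credit decreases linearly with each additional attempt; the abstract does not give Kane and Moloney's actual weighting, and under zero-one scoring only a correct first attempt would earn credit.

```python
def auc_credit(attempts, k=4):
    """Credit for a k-option answer-until-correct item: full credit on
    the first attempt, falling linearly to zero on the k-th (an assumed
    credit function, not necessarily the one analyzed in the paper)."""
    return max(0, k - attempts) / (k - 1)

for attempts in range(1, 5):
    print(attempts, round(auc_credit(attempts), 3))
# 1 1.0 / 2 0.667 / 3 0.333 / 4 0.0
```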
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
Peer reviewed
Reilly, Richard R.; Jackson, Rex – Journal of Educational Measurement, 1973
The present study suggests that although the reliability of an academic aptitude test given under formula-score conditions can be increased substantially through empirical option weighting, much of the increase is due to the keying procedure capitalizing on omitting tendencies that are reliable but not valid. (Author)
Descriptors: Aptitude Tests, Correlation, Factor Analysis, Item Sampling
Peer reviewed
Jacobs, Stanley S. – Journal of Educational Measurement, 1971
Descriptors: Guessing (Tests), Individual Differences, Measurement Techniques, Multiple Choice Tests
Peer reviewed
Sykes, Robert C.; Hou, Liling – Applied Measurement in Education, 2003
Weighting responses to Constructed-Response (CR) items has been proposed as a way to increase the contribution these items make to the test score when there is insufficient testing time to administer additional CR items. The effects of various ways of weighting the items of an IRT-based mixed-format writing examination were investigated.…
Descriptors: Item Response Theory, Weighted Scores, Responses, Scores
Peer reviewed
Panackal, Abraham A.; Heft, Carl S. – Educational and Psychological Measurement, 1978
Two multiple choice forms of two cloze reading tests were developed from responses to the cloze forms by college undergraduates. These tests were investigated using the original keys, empirical keys, and option weighted keys. Reliability and validity data are reported. (Author/JKS)
Descriptors: Cloze Procedure, Higher Education, Multiple Choice Tests, Reading Tests
Peer reviewed
Kansup, Wanlop; Hakstian, A. Ralph – Journal of Educational Measurement, 1975
The effects on reliability and validity of logically weighting incorrect item options in conventional tests, and of different scoring functions in confidence tests, were examined. Ninth graders took conventionally administered Verbal and Mathematical Reasoning tests, scored conventionally and by a procedure assigning degree-of-correctness weights to…
Descriptors: Comparative Analysis, Confidence Testing, Junior High School Students, Multiple Choice Tests
Sabers, Darrell L.; White, Gordon W. – 1971
A procedure for scoring multiple-choice tests by assigning different weights to every option of a test item is investigated. The weighting method used was based on that proposed by Davis, which involves taking the upper and lower 27% of a sample, according to some criterion measure, and using the percentages of these groups marking an item option… A sketch of this weighting scheme follows below.
Descriptors: Computer Oriented Programs, Item Analysis, Measurement Techniques, Multiple Choice Tests
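A minimal sketch of the upper-and-lower-27% idea described above, with each option weighted here by the difference between the proportions of the high- and low-criterion groups choosing it. Davis's published procedure rescales these percentages through a tabled index, so the simple difference below is a simplified stand-in, and the data are synthetic.

```python
import numpy as np

def davis_style_weights(choices, criterion, frac=0.27):
    """Option weights for one item: proportion of the upper criterion
    group choosing each option minus the proportion of the lower group.
    choices: (n,) option indices; criterion: (n,) criterion scores."""
    order = np.argsort(criterion)
    n = max(1, int(frac * len(criterion)))
    low, high = choices[order[:n]], choices[order[-n:]]
    k = int(choices.max()) + 1
    p_high = np.bincount(high, minlength=k) / n
    p_low = np.bincount(low, minlength=k) / n
    return p_high - p_low    # positive = favored by high scorers

rng = np.random.default_rng(0)
criterion = rng.normal(size=200)
# Synthetic item: high scorers tend to choose option 0.
choices = np.where(criterion + rng.normal(size=200) > 0,
                   0, rng.integers(1, 4, 200))
print(davis_style_weights(choices, criterion).round(2))
```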
Donlon, Thomas F. – 1975
This study empirically determined the optimizing weight k to be applied to the wrongs total in scoring rubrics of the general form S = R - kW, where S is the score, R the rights total, k the weight, and W the wrongs total, when reliability is to be maximized. As is well known, the traditional formula score rests on a theoretical framework which is… A sketch of the weight search follows below.
Descriptors: Achievement Tests, Comparative Analysis, Guessing (Tests), Multiple Choice Tests
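To make the weight search concrete, here is a minimal sketch that applies the formula item by item (+1 if right, -k if wrong, 0 if omitted) and scans k for the value maximizing coefficient alpha; both the item-level application and the choice of alpha are assumptions, since the abstract does not state Donlon's estimation details, and the data are synthetic.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Coefficient alpha for an examinees-by-items score matrix."""
    n_items = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_var / total_var)

def best_k(correct, wrong, ks=np.linspace(0, 1, 21)):
    """Scan weights k in S = R - kW and return the k maximizing alpha."""
    alphas = [cronbach_alpha(correct - k * wrong) for k in ks]
    return ks[int(np.argmax(alphas))], max(alphas)

# Synthetic 300-examinee, 40-item test with ~10% omissions.
rng = np.random.default_rng(1)
ability = rng.normal(size=(300, 1))
p = 1 / (1 + np.exp(-(ability + rng.normal(size=(300, 40)))))
correct = (rng.random((300, 40)) < p).astype(float)
omitted = rng.random((300, 40)) < 0.10
wrong = ((correct == 0) & ~omitted).astype(float)
correct *= ~omitted          # omitted items score 0 either way
print(best_k(correct, wrong))
```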