New York State Education Dept., Albany. Div. of Evaluation.
To let teachers tailor their instruction when introducing the SPPED Multiple Choice Cloze to their students, this training manual provides a package of lesson plans and materials developed by the System for Pupil and Program Evaluation and Development (SPPED). The materials include Student Guides for grade 1, grades 2-3,…
Descriptors: Answer Keys, Cloze Procedure, Elementary Secondary Education, Instructional Materials
Dryden, Russell E.; Frisbie, David A. – 1975
The purpose of this study was to compare certain characteristics of multiple-choice (MC) and complex multiple-choice (CMC) achievement tests designed to measure knowledge in medical-surgical nursing. Each of 268 junior and senior nursing students from four midwestern schools responded to one of four test forms. MC items were developed by…
Descriptors: Achievement Tests, Comparative Analysis, Evaluation Methods, Higher Education
Bayroff, A.G.; And Others – 1974
This report describes an automated system for administering, scoring, and recording the results of multiple-choice tests. The system consists of an examinee station, a proctor station, and a central computer; the report describes the equipment and the programming characteristics of the respective components. The system is designed for tests tailored to the…
Descriptors: Ability, Adaptive Testing, Computer Programs, Data Processing
The Effect of Item Choice on Ability Estimation When Using a Simple Logistic Tailored Testing Model.
Reckase, Mark D. – 1975
This paper explores the effects of item choice on ability estimation when using a tailored testing procedure based on the Rasch simple logistic model. Most studies of the simple logistic model imply that ability estimates are totally independent of the items used, regardless of the testing procedure. This paper shows that the ability estimate is…
Descriptors: Ability, Achievement Tests, Adaptive Testing, Individual Differences
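The Rasch simple logistic model underlying the abstract above has a single item parameter. A minimal sketch (the symbols theta for ability and b for item difficulty are the conventional ones, not taken from the abstract):

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model:
    P(theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An examinee whose ability equals the item difficulty answers
# correctly with probability 0.5.
print(rasch_p(0.0, 0.0))  # 0.5
```

Because the model has no discrimination or guessing parameters, item difficulty alone determines how the response probability shifts with ability, which is why item choice in tailored testing interacts with ability estimation as the paper investigates.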
Waller, Michael I. – 1974
In latent trait models the standard procedure for handling the problem caused by guessing on multiple choice tests is to estimate a parameter which is intended to measure the "guessingness" inherent in an item. Birnbaum's three parameter model, which handles guessing in this manner, ignores individual differences in guessing tendency. This paper…
Descriptors: Goodness of Fit, Guessing (Tests), Individual Differences, Item Analysis
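Birnbaum's three-parameter model mentioned above handles guessing with a lower asymptote c that is a property of the item, shared by all examinees. A minimal sketch (parameter names a, b, c follow IRT convention, not the abstract):

```python
import math

def p3pl(theta, a, b, c):
    """Birnbaum's three-parameter logistic model: discrimination a,
    difficulty b, and pseudo-guessing parameter c (the lower asymptote).
    Note c is per-item, so individual differences in guessing
    tendency are ignored, which is the paper's point of departure."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Even a very low-ability examinee scores at least c (here 0.25,
# as on a four-option multiple-choice item).
print(round(p3pl(-10.0, 1.0, 0.0, 0.25), 4))  # 0.25
```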
Gibson, Susanne K. – 1969
To combat difficulties of dictated spelling tests, such as unreliable scoring due to illegible writing and the possibility of clues being provided through the enunciation of words by examiners, a technique was developed for writing experimental spelling tests by computer. Additionally, the diagnostic function of spelling scales was considered…
Descriptors: Computer Oriented Programs, Methods Research, Multiple Choice Tests, Pattern Recognition
Reckase, Mark D. – 1974
An application of the two-parameter logistic (Rasch) model to tailored testing is presented. The model is discussed along with the maximum likelihood estimation of the ability parameters given the response pattern and easiness parameter estimates for the items. The technique has been programmed for use with an interactive computer terminal. Use…
Descriptors: Ability, Adaptive Testing, Computer Assisted Instruction, Difficulty Level
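The maximum-likelihood ability estimation described in the abstract above can be sketched with Newton-Raphson iteration. This sketch assumes known item difficulties (an easiness parameter is simply the negated difficulty) and a mixed response pattern, since the ML estimate is unbounded for all-correct or all-wrong patterns:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ml_ability(responses, difficulties, iters=20):
    """Newton-Raphson maximum-likelihood ability estimate under the
    Rasch model, given 0/1 responses and known item difficulties.
    Assumes a mixed (not all-0 or all-1) response pattern."""
    theta = 0.0
    for _ in range(iters):
        p = [sigmoid(theta - b) for b in difficulties]
        grad = sum(u - pi for u, pi in zip(responses, p))   # dlogL/dtheta
        info = sum(pi * (1.0 - pi) for pi in p)             # Fisher information
        theta += grad / info
    return theta

# Two right, one wrong on items of difficulty -1, 0, +1:
print(round(ml_ability([1, 1, 0], [-1.0, 0.0, 1.0]), 3))
```

At the solution, the expected number of correct responses under the model equals the observed number correct, which is the Rasch likelihood equation for ability.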
Echternacht, Gary J.; And Others – 1971
This handbook presents instructions for implementing a confidence testing program in technical training situations, identification of possible areas of application, techniques for evaluating confidence information, advantages and disadvantages of confidence testing, time considerations, and problem areas. Complete instructions for "Pick-One" and…
Descriptors: Confidence Testing, Educational Diagnosis, Guessing (Tests), Measurement Techniques
Rippey, Robert M. – 1972
This paper examines confidence testing and the reasons for using confidence tests. Different scoring systems are studied to clarify the meaning and significance of the weights that subjects assign on confidence-scored tests. (DLG)
Descriptors: Confidence Testing, Decision Making, Guessing (Tests), Multiple Choice Tests
Hanada, Takashi – 1969
This document is an English-language abstract (approximately 1,500 words) of a thesis answering the arguments against adopting the tests from the national Educational Test Research Institute (ETRI). The author gives an account of the committees on selection methods in his own university and also analyzes responses to an opinion survey conducted on…
Descriptors: Abstracts, Admission Criteria, Aptitude Tests, College Entrance Examinations
Curry, Robert L.; Geis, Lynna
Implicit in most of the recommendations for teaching reading is the inclusion of structural analysis as a part of the instructional program for developing skills in word recognition. This study focused on the development and standardization of a criterion referenced syllabication skills test designed to evaluate the individual proficiency of…
Descriptors: College Students, Criterion Referenced Tests, Higher Education, Multiple Choice Tests
Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
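The Answer-Until-Correct procedure's interaction with guessing can be illustrated with a toy expected-score calculation. The linear scoring rule used here is a hypothetical stand-in for illustration, not the scoring function derived in the paper:

```python
from fractions import Fraction

def expected_auc_score(k):
    """Expected score on a k-option Answer-Until-Correct item for an
    examinee guessing blindly (sampling options without replacement).
    Hypothetical linear rule: (k - attempts) / (k - 1) points, i.e.
    full credit on the first try, zero after exhausting every
    distractor. Under blind guessing each attempt count 1..k is
    equally likely."""
    return sum(Fraction(k - t, k - 1) for t in range(1, k + 1)) / k

# Under this rule a blind guesser expects half credit on a
# four-option item:
print(expected_auc_score(4))  # 1/2
```

Under this linear rule the blind guesser's expected score is 1/2 regardless of the number of options, which shows why the choice of scoring function matters for the reliability analysis the abstract describes.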
Kansup, Wanlop; Hakstian, A. Ralph – Journal of Educational Measurement, 1975
The effects on reliability and validity of logically weighting incorrect item options in conventional tests, and of different scoring functions in confidence tests, were examined. Ninth graders took conventionally administered Verbal and Mathematical Reasoning tests, scored conventionally and by a procedure assigning degree-of-correctness weights to…
Descriptors: Comparative Analysis, Confidence Testing, Junior High School Students, Multiple Choice Tests
Hakstian, A. Ralph; Kansup, Wanlop – Journal of Educational Measurement, 1975
A comparison of reliability and validity was made for three testing procedures: 1) responding conventionally to Verbal Ability and Mathematical Reasoning tests; 2) using a confidence weighting response procedure with the same tests; and 3) using the elimination response method. The experimental testing procedures were not psychometrically superior…
Descriptors: Comparative Analysis, Confidence Testing, Guessing (Tests), Junior High School Students
Medley, Donald M. – Journal of Teacher Education, 1978
Six measurement strategies are suggested for assessing different kinds or aspects of competencies: (1) teaching tests; (2) behavior samples; (3) teaching exercises; (4) classroom interaction simulations; (5) projected problem exercises; and (6) paper-and-pencil exercises. (MJB)
Descriptors: Competency Based Teacher Education, Evaluation Methods, Higher Education, Interaction Process Analysis