Showing 3,256 to 3,270 of 4,790 results
Butzkamm, Wolfgang – Fremdsprachlicher Unterricht, 1971
Supplement 7. (RS)
Descriptors: Achievement Tests, English (Second Language), English Literature, Language Proficiency
Peer reviewed
Harke, Douglas J.; And Others – Science Education, 1972
Reports high correlations between pooled scores on two test formats administered to 170 students in an introductory physics course. Concludes that machine scoring is comparable in quality to manual grading in this repeated-measures design. (CC)
Descriptors: College Science, Data Processing, Educational Research, Evaluation Methods
Peer reviewed
Reiling, Eldon; Taylor, Ryland – Journal of Educational Measurement, 1972
The hypothesis that it is unwise to change answers to multiple-choice questions was tested using multiple regression analysis. The hypothesis was rejected: results showed that gains can be made by changing responses. (Author/CK)
Descriptors: Guessing (Tests), Hypothesis Testing, Measurement Techniques, Multiple Choice Tests
Frary, Robert B.; Zimmerman, Donald W. – Educ Psychol Meas, 1970
Descriptors: Error of Measurement, Guessing (Tests), Multiple Choice Tests, Probability
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1971
Modifications of administration and item arrangement of a conventional test can force a match between item difficulty levels and the ability level of the examinee. Although different examinees take different sets of items, the scoring method provides comparable scores for all. Furthermore, the test is self-scoring. These advantages are obtained…
Descriptors: Academic Ability, Difficulty Level, Measurement Techniques, Models
Clark, John L. D. – Audio-Visual Language Journal, 1971
Descriptors: Advanced Placement Programs, Aptitude Tests, Electronic Equipment, Language Skills
Achenbach, Thomas M. – J Educ Psychol, 1970
The relationships between classroom performance, ability measures, paired-associate tasks, problem solving tasks, and CART scores are investigated for Grade 5 students. The test is standardized on Grades 5-8. (DG)
Descriptors: Achievement Tests, Association Measures, Grade 5, Intelligence Tests
Peer reviewed
Jarvis, Gilbert A. – NALLD Journal, 1970
Descriptors: Language Laboratories, Language Tests, Modern Languages, Multiple Choice Tests
Peer reviewed
Jacobs, Stanley S. – Journal of Educational Measurement, 1971
Descriptors: Guessing (Tests), Individual Differences, Measurement Techniques, Multiple Choice Tests
Peer reviewed
Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
Peer reviewed
Green, Kathy E. – Educational and Psychological Measurement, 1983
This study was concerned with the reliability and validity of subjective judgments about five characteristics of multiple-choice test items from an introductory college-level astronomy test: (1) item difficulty, (2) language complexity, (3) content importance or relevance, (4) response set convergence, and (5) process complexity. (Author)
Descriptors: Achievement Tests, Astronomy, Difficulty Level, Evaluative Thinking
Peer reviewed
Albanese, Mark A. – Evaluation and the Health Professions, 1982
Findings regarding formats and scoring formulas for multiple-choice test items with more than one correct response are presented. Strong cluing effects in the Type K format, which inflate the percentage of correct scores and reduce test reliability, argue for using the Type X format instead. Alternative scoring methods are discussed. (Author/CM)
Descriptors: Health Occupations, Multiple Choice Tests, Professional Education, Response Style (Tests)
Peer reviewed
Gross, Leon J. – Journal of Optometric Education, 1982
A critique of a variety of formats used in combined-response test items (those in which the respondent must choose the correct combination of options: "a and b," "all of the above," etc.) illustrates why this kind of testing is inherently flawed and should not be used in optometry examinations. (MSE)
Descriptors: Higher Education, Multiple Choice Tests, Optometry, Standardized Tests
Peer reviewed
Bodner, George M. – Journal of Chemical Education, 1980
Common words and phrases encountered in the statistical analysis of test results are presented. Included are analysis of the mid-point, distribution of scores, calculation of scaled scores, test reliability, item analysis, and coefficients of reliability. (CS)
Descriptors: College Science, Definitions, Evaluation Methods, Higher Education
Peer reviewed
Whetton, C.; Childs, R. – British Journal of Educational Psychology, 1981
Answer-until-correct (AUC) is a procedure for providing feedback during a multiple-choice test that yields an increased range of scores. The performance of secondary students on a verbal ability test administered with AUC procedures was compared with that of a group given conventional instructions. AUC scores considerably enhanced reliability but not validity.…
Descriptors: Feedback, Multiple Choice Tests, Response Style (Tests), Secondary Education