Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 3 |
| Since 2025 | 190 |
| Since 2022 (last 5 years) | 1069 |
| Since 2017 (last 10 years) | 2891 |
| Since 2007 (last 20 years) | 6176 |
Audience
| Audience | Records |
| --- | --- |
| Teachers | 481 |
| Practitioners | 358 |
| Researchers | 153 |
| Administrators | 122 |
| Policymakers | 51 |
| Students | 44 |
| Parents | 32 |
| Counselors | 25 |
| Community | 15 |
| Media Staff | 5 |
| Support Staff | 3 |
Location
| Location | Records |
| --- | --- |
| Australia | 183 |
| Turkey | 157 |
| California | 134 |
| Canada | 124 |
| New York | 118 |
| United States | 112 |
| Florida | 107 |
| China | 103 |
| Texas | 72 |
| United Kingdom | 72 |
| Japan | 70 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 5 |
| Meets WWC Standards with or without Reservations | 11 |
| Does not meet standards | 8 |
Bracht, Glenn; Hopkins, Kenneth – Educational and Psychological Measurement, 1970
The extent to which essay and objective tests measure the same or different abilities when instruction and content remain constant is investigated. (PR)
Descriptors: Achievement Tests, Measurement Instruments, Measurement Techniques, Objective Tests
Peer reviewed
Haley, Harold B.; And Others – Journal of Medical Education, 1971
Descriptors: Academic Aptitude, Admission (School), Biographical Inventories, Measurement Techniques
Peer reviewed
Hartnett, Rodney T. – Journal of Educational Measurement, 1971
Alternative scoring methods yield essentially the same information, including scale intercorrelations and validity. Reasons for preferring the traditional psychometric scoring technique are offered. (Author/AG)
Descriptors: College Environment, Comparative Analysis, Correlation, Item Analysis
Burnette, Richard R. – College Board Review, 1971
Descriptors: Advanced Placement, Credit Courses, Higher Education, Recruitment
Peer reviewed
Cureton, Edward E. – Educational and Psychological Measurement, 1971
A rebuttal of Frary's 1969 article in Educational and Psychological Measurement. (MS)
Descriptors: Error of Measurement, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
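The abstract does not reproduce the scoring formulas at issue. For context only, the conventional correction-for-guessing (formula) score in this literature is usually written as below; this is the standard textbook expression, not necessarily the specific variant debated by Cureton and Frary:

```latex
% Standard correction-for-guessing (formula scoring), shown for background only.
% R = number right, W = number wrong, k = options per multiple-choice item.
S = R - \frac{W}{k - 1}
```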
Validity and Likability Ratings for Three Scoring Instructions for a Multiple-Choice Vocabulary Test
Peer reviewed
Waters, Carrie Wherry; Waters, Lawrence K. – Educational and Psychological Measurement, 1971
Descriptors: Guessing (Tests), Multiple Choice Tests, Response Style (Tests), Scoring Formulas
Peer reviewed
Pate, Robert H., Jr.; Nichols, William R. – Psychology in the Schools, 1971
A system for evaluating the human figure drawings of children aged 5 to 12 is presented. (Author)
Descriptors: Children, Diagnostic Tests, Measurement, Pictorial Stimuli
Peer reviewed
Allen, Mary J.; And Others – Perceptual and Motor Skills, 1982
Adults took the Rod and Frame, Portable Rod and Frame, and Embedded Figures Tests. Absolute and algebraic frame-effect scores were more reliable and valid than rod-effect algebraic scores. Correlations with the Embedded Figures Test were so low that the interchangeability of these field articulation measures is questionable. (Author/RD)
Descriptors: Adults, Cognitive Style, Correlation, Measurement Techniques
Peer reviewed
Zatz, Joel L. – American Journal of Pharmaceutical Education, 1982
A method for computer grading pharmaceutical calculations exams in which students convert their answers into scientific notation and enter their solutions onto a mark sense form is described. A table is generated and then posted listing student identification numbers, exam grades, and which problems were missed. (Author/MLW)
Descriptors: Computation, Computer Assisted Testing, Computer Programs, Grading
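The article is described only at this level of detail. Purely as an illustration of the idea, a minimal sketch of machine-grading numeric answers expressed in scientific notation and posting a results table might look like the following; all names, tolerances, and data structures here are hypothetical and are not taken from Zatz's program:

```python
# Hypothetical sketch of grading scientific-notation answers and posting results.
# Nothing here reproduces the program described in the article.

def is_correct(mantissa, exponent, key_value, rel_tol=0.01):
    """Compare a student's (mantissa, exponent) answer with the key value."""
    answer = mantissa * 10 ** exponent
    return abs(answer - key_value) <= rel_tol * abs(key_value)

def grade_exam(responses, answer_key):
    """responses: list of (mantissa, exponent) per problem; returns grade and missed items."""
    missed = [i + 1 for i, ((m, e), key) in enumerate(zip(responses, answer_key))
              if not is_correct(m, e, key)]
    grade = 100.0 * (len(answer_key) - len(missed)) / len(answer_key)
    return grade, missed

# Posting table keyed by student ID, as in the description: ID, grade, missed problems.
answer_key = [6.02e23, 1.5e-3, 9.81e0]
students = {"1001": [(6.0, 23), (1.5, -3), (9.8, 0)],
            "1002": [(6.0, 23), (2.0, -3), (9.8, 0)]}
for sid, resp in students.items():
    grade, missed = grade_exam(resp, answer_key)
    print(sid, round(grade, 1), missed)
```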
Peer reviewed
Willson, Victor L. – Educational and Psychological Measurement, 1982
The Serlin-Kaiser procedure is used to complete a principal components solution for scoring weights for all options of a given item. Coefficient alpha is maximized for a given multiple choice test. (Author/GK)
Descriptors: Analysis of Covariance, Factor Analysis, Multiple Choice Tests, Scoring Formulas
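The abstract gives no computational detail. As a loose illustration of the general idea of deriving option scoring weights from a principal components analysis of option-choice indicators, and not a verified implementation of the Serlin-Kaiser procedure, one might sketch:

```python
# Loose illustration: option weights from the first principal component of an
# option-indicator matrix. This is NOT a verified Serlin-Kaiser implementation.
import numpy as np

# responses[p, i] = option index chosen by person p on item i (toy data)
responses = np.array([[0, 1, 2],
                      [0, 0, 2],
                      [1, 1, 0],
                      [0, 2, 2],
                      [1, 0, 1]])
n_options = 3

# Expand to a persons x (items * options) 0/1 indicator matrix.
n_persons, n_items = responses.shape
indicators = np.zeros((n_persons, n_items * n_options))
for i in range(n_items):
    indicators[np.arange(n_persons), i * n_options + responses[:, i]] = 1.0

# The first principal component of the indicator covariance matrix supplies one
# candidate set of option weights (sign is arbitrary); the weighted composite
# gives each person's score.
cov = np.cov(indicators, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
weights = eigvecs[:, -1]            # eigenvector for the largest eigenvalue
scores = indicators @ weights
print(weights.round(2))
print(scores.round(2))
```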
Raffeld, Paul – New Directions for Testing and Measurement, 1980
Students who meet with repeated failure at the beginning of a test are likely to lose motivation to continue to interact with each item. Therefore, it seems better to recommend testing out-of-level for those students scoring at or below the chance score mean. (RL)
Descriptors: Compensatory Education, Elementary Secondary Education, Guessing (Tests), Scoring
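For readers unfamiliar with the term, the chance score mean referred to here is simply the expected number-right score under blind guessing; for a test of n items with k options each it is the standard quantity below (a textbook fact, not a result specific to this article):

```latex
% Expected number-right score under blind guessing:
% n items, each with k equally likely options.
E[X] = \frac{n}{k}, \qquad \text{e.g. } n = 40,\; k = 4 \;\Rightarrow\; E[X] = 10 .
```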
Peer reviewed
Rokosz, Francis M. – Journal of Physical Education, Recreation & Dance, 1981
Point systems in intramural competition do not accurately reflect the participation value of each sport. A system is presented in which point values are automatically adjusted to differences in group participation from sport to sport. This system is based on the set number of participants per sport and on the number of contests played per season.…
Descriptors: Achievement Rating, Competition, Goal Orientation, Intramural Athletics
Peer reviewed
Austin, Joe Dan – Psychometrika, 1981
On distractor-identification tests students mark as many distractors as possible on each test item. A grading scale is developed for this type of testing. The score is optimal in that it yields an unbiased estimate of the student's score as if no guessing had occurred. (Author/JKS)
Descriptors: Guessing (Tests), Item Analysis, Measurement Techniques, Scoring Formulas
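The abstract states the property of the scale without giving it. Purely to fix ideas, a naive partial-credit rule for distractor-identification items might award a fraction of a point per correctly marked distractor, as sketched below; this illustrative rule is not the optimal, unbiased scale derived in the article:

```latex
% Illustrative partial-credit rule for distractor-identification items (not Austin's scale).
% d_i = distractors correctly marked on item i, k = options per item, n = items.
s_i = \frac{d_i}{k - 1}, \qquad S = \sum_{i=1}^{n} s_i .
```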
Peer reviewed
Rakowski, William; And Others – Perceptual and Motor Skills, 1980
Techniques for obtaining time perspective data were examined. Undergraduates responded to a questionnaire containing one of three formats for reporting anticipated future life-events, varying in the structure imposed on respondents. Temporal estimates of life-event occurrence were coded using two procedures, both permitting a near and a far value.…
Descriptors: Adults, Attitude Measures, College Students, Expectation
Peer reviewed
Cross, Lawrence H.; And Others – Journal of Experimental Education, 1980
Use of choice-weighted scores as a basis for assigning grades in college courses was investigated. Reliability and validity indices offer little to recommend either type of choice-weighted scoring over number-right scoring. The potential for choice-weighted scoring to enhance the teaching/testing process is discussed. (Author/GK)
Descriptors: Credit Courses, Grading, Higher Education, Multiple Choice Tests
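The abstract does not specify the weighting scheme. As a purely hypothetical contrast between number-right scoring and one possible choice-weighted rule (the weights below are invented for illustration and are not the scheme the authors evaluated), a sketch might look like:

```python
# Toy contrast between number-right scoring and a hypothetical option-weighted rule.
# The weights below are illustrative only; they are not the scheme studied in the article.

option_weights = {           # per item: points awarded for each chosen option
    1: {"A": 1.0, "B": 0.5, "C": 0.0, "D": 0.0},   # "B" treated as a near-miss distractor
    2: {"A": 0.0, "B": 0.0, "C": 1.0, "D": 0.25},
}
key = {1: "A", 2: "C"}

def number_right(responses):
    """Conventional scoring: one point per keyed answer chosen."""
    return sum(1 for item, choice in responses.items() if choice == key[item])

def option_weighted(responses):
    """Choice-weighted scoring: partial credit depends on which option was chosen."""
    return sum(option_weights[item][choice] for item, choice in responses.items())

responses = {1: "B", 2: "C"}
print(number_right(responses), option_weighted(responses))   # 1 vs 1.5
```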


