| Publication Date | Records |
| --- | --- |
| In 2026 | 3 |
| Since 2025 | 190 |
| Since 2022 (last 5 years) | 1069 |
| Since 2017 (last 10 years) | 2891 |
| Since 2007 (last 20 years) | 6176 |
| Audience | Records |
| --- | --- |
| Teachers | 481 |
| Practitioners | 358 |
| Researchers | 153 |
| Administrators | 122 |
| Policymakers | 51 |
| Students | 44 |
| Parents | 32 |
| Counselors | 25 |
| Community | 15 |
| Media Staff | 5 |
| Support Staff | 3 |
| Location | Records |
| --- | --- |
| Australia | 183 |
| Turkey | 157 |
| California | 134 |
| Canada | 124 |
| New York | 118 |
| United States | 112 |
| Florida | 107 |
| China | 103 |
| Texas | 72 |
| United Kingdom | 72 |
| Japan | 70 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 5 |
| Meets WWC Standards with or without Reservations | 11 |
| Does not meet standards | 8 |
Severy, Lawrence J. – 1974
Issues relevant to the nature of attitudes are discussed. The reader is referred to works indexing a variety of existent attitude scales. The way in which one constructs, administers, scores, interprets, and presents findings of an original attitude measuring device is discussed comprehensively, and yet in a nontechnical fashion for…
Descriptors: Attitude Measures, Attitudes, Scoring, Test Construction
Toronto Univ. (Ontario). Dept. of Geology. – 1966
The MARKTF-M7 computer program, written in FORTRAN IV, scores true/false tests by comparing a control list of T/F values prepared by the instructor with those obtained from the students. The output, primarily for the use of the instructor, consists of a listing of the names of the students with their respective marks prior to the test, the test…
Descriptors: Computer Programs, Data Analysis, Educational Testing, Input Output
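The MARKTF records above (and the two that follow) describe their scoring only at the level of comparing an instructor's control list of true/false values against each student's responses. As a rough illustration of that key-comparison step only, here is a minimal Python sketch; the function name, data, and report format are assumed for demonstration and are not reconstructed from the original FORTRAN IV programs.

```python
# Minimal sketch of key-comparison scoring for a true/false test.
# Names, data, and output format are illustrative assumptions only.

def score_true_false(key, responses):
    """Count matches between the instructor's T/F key and one student's answers."""
    if len(key) != len(responses):
        raise ValueError("key and responses must cover the same items")
    return sum(1 for k, r in zip(key, responses) if k == r)

key = ["T", "F", "T", "T", "F"]                 # instructor's control list
students = {"Smith": ["T", "F", "F", "T", "F"],
            "Jones": ["T", "T", "T", "T", "F"]}

# Instructor-style listing: each student's mark on this test.
for name, answers in students.items():
    print(f"{name}: {score_true_false(key, answers)}/{len(key)}")
```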
Toronto Univ. (Ontario). Dept. of Geology. – 1965
The MARKTF-M3 computer program, written in FORTRAN IV, scores tests (consisting of true-or-false statements about concepts or facts) by comparing the list of true or false values prepared by the instructor with those from the students. The output consists of separate reports to each student advising him of (1) his performance with respect to four…
Descriptors: Computer Programs, Data Analysis, Educational Testing, Geology
Toronto Univ. (Ontario). Dept. of Geology. – 1965
The computer program MARKTF-M6, written in FORTRAN IV, scores tests (consisting of true-or-false statements about concepts or facts) by comparing the list of true or false values prepared by the instructor with those from the students. The output consists of information to the supervisor about the performance of the students, primarily for his…
Descriptors: Computer Programs, Data Analysis, Educational Testing, Input Output
Green, Bert F., Jr. – 1972
The use of Guttman weights in scoring tests is discussed. Scores of 2,500 men on one subtest of the CEEB-SAT-Verbal Test were examined using cross-validated Guttman weights. Several scores were compared, as follows: Scores obtained from cross-validated Guttman weights; Scores obtained by rounding the Guttman weights to one digit, ranging from 0 to…
Descriptors: Comparative Analysis, Reliability, Scoring Formulas, Test Results
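Green's comparison above turns on applying item weights to a response pattern and then checking how little is lost when those weights are rounded to single digits. The sketch below shows only that application step; how Guttman weights are derived is not shown, and the weight values and response pattern are invented for illustration.

```python
# Scoring with fractional item weights versus the same weights rounded
# to one digit. Weights and responses below are hypothetical.

def weighted_score(weights, correct):
    """Sum the weights of the items answered correctly."""
    return sum(w for w, c in zip(weights, correct) if c)

weights = [0.37, 1.92, 2.41, 0.88, 3.05]    # hypothetical item weights
rounded = [round(w) for w in weights]        # one-digit integer weights
correct = [True, False, True, True, False]   # one examinee's item outcomes

print(weighted_score(weights, correct))   # 0.37 + 2.41 + 0.88 = 3.66
print(weighted_score(rounded, correct))   # 0 + 2 + 1 = 3
```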
Terwilliger, James S. – 1966
Various aspects of the marking practices of 39 secondary school teachers from two schools in metropolitan Nashville-Davidson County, Tennessee, were studied. Special marking exercises containing standard data on hypothetical students were used to study the marking practices of teachers under more uniform conditions than exist in the classroom. A…
Descriptors: Grading, Inservice Teacher Education, Rating Scales, Scoring
Ellis, E. N. – 1975
Concern over the reading and writing programs in Vancouver, British Columbia Schools culminated in the establishment in June 1974 of a Task Force on English. In response to the request from the Task Force for a survey of the writing ability of Grade 11 students, a committee of English Department Heads assisted in developing an instrument and the…
Descriptors: Essay Tests, Grade 11, Scoring, Secondary Education
Peer reviewed: Carroll, C. Dennis – Educational and Psychological Measurement, 1976
A computer program for item evaluation, reliability estimation, and test scoring is described. The program contains a variable format procedure allowing flexible input of responses. Achievement tests and affective scales may be analyzed. (Author)
Descriptors: Achievement Tests, Affective Measures, Computer Programs, Item Analysis
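The Carroll (1976) program above is described only in general terms (item evaluation, reliability estimation, test scoring). As a hedged sketch of two standard quantities such programs typically report for dichotomously scored items, the code below computes item difficulty and KR-20 reliability; the response matrix is invented and this is not the published program.

```python
# Item difficulty (proportion correct) and KR-20 reliability for 0/1 data.
# The response matrix is invented; illustration only.

def item_difficulties(matrix):
    """Proportion of examinees answering each item correctly."""
    n = len(matrix)
    return [sum(row[i] for row in matrix) / n for i in range(len(matrix[0]))]

def kr20(matrix):
    """Kuder-Richardson formula 20, using the population variance of totals."""
    k = len(matrix[0])
    totals = [sum(row) for row in matrix]
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / len(totals)
    pq = sum(p * (1 - p) for p in item_difficulties(matrix))
    return (k / (k - 1)) * (1 - pq / var)

responses = [  # rows = examinees, columns = items, 1 = correct
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
]
print(item_difficulties(responses))   # [0.6, 0.6, 0.4, 1.0, 0.6]
print(round(kr20(responses), 3))      # 0.694
```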
Peer reviewed: Hamdan, M. A.; Krutchkoff, R. G. – Journal of Experimental Education, 1975
The separation level of grades on a multiple-choice examination as a quantitative probabilistic criterion for correct classification of students by the examination was introduced by Krutchkoff. (Author)
Descriptors: Educational Research, Knowledge Level, Multiple Choice Tests, Scoring Formulas
Peer reviewed: Slevin, Dennis P. – Group and Organization Studies, 1978
The NASA ranking task and similar ranking activities used to demonstrate the superiority of group thinking are examined. It is argued that the current scores cannot be used to prove the superiority of group-consensus decision making in either training or research settings. (Author)
Descriptors: Decision Making, Groups, Scoring, State of the Art Reviews
Peer reviewed: Austin, Joe Dan – American Mathematical Monthly, 1978
On an answer-until-correct test, the grade on a question is a function of the number of responses needed to find the correct response. This note considers how to assign credit based on that number. (Author/MP)
Descriptors: Elementary Secondary Education, Guessing (Tests), Mathematics, Probability
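Austin's note concerns how much credit to give when an examinee needs several tries on an answer-until-correct item. One commonly cited linear scheme, offered here purely as an illustration and not necessarily the rule derived in the note, gives full credit for a first-try success and none when every option must be tried:

```python
# Illustrative linear credit rule for an answer-until-correct item with
# k >= 2 options: credit = (k - r) / (k - 1) for a correct answer on the
# r-th attempt. This is an example scheme, not the published result.

def linear_credit(num_options, attempts_needed):
    if not 1 <= attempts_needed <= num_options:
        raise ValueError("attempts must lie between 1 and the number of options")
    return (num_options - attempts_needed) / (num_options - 1)

# A 4-option item: 1 try -> 1.0, 2 -> 0.667, 3 -> 0.333, 4 -> 0.0
print([round(linear_credit(4, r), 3) for r in range(1, 5)])
```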
Peer reviewed: Rowley, Glenn L.; Traub, Ross E. – Journal of Educational Measurement, 1977
The consequences of formula scoring versus number-right scoring are examined in relation to the assumptions commonly made about the behavior of examinees in testing situations. The choice between the two is shown to depend on whether reduced error variance or unbiasedness is the goal. (Author/JKS)
Descriptors: Error of Measurement, Scoring Formulas, Statistical Bias, Test Wiseness
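The Rowley and Traub comparison above involves the conventional correction-for-guessing formula, in which wrong answers are penalized while omitted items are not. A small worked sketch follows; the examinee's counts are invented for illustration.

```python
# Number-right scoring versus conventional formula (correction-for-guessing)
# scoring for multiple-choice items with k options each. Counts are invented.

def number_right(right):
    return right

def formula_score(right, wrong, k):
    """Standard correction for guessing: R - W / (k - 1); omits are ignored."""
    return right - wrong / (k - 1)

right, wrong, omitted, k = 32, 12, 6, 5   # 50-item test, 5 options per item
print(number_right(right))                # 32
print(formula_score(right, wrong, k))     # 32 - 12/4 = 29.0
```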
Peer reviewed: Burket, George R. – Journal of Educational Measurement, 1987
This response to the Baglin paper (1986) points out the fallacy in inferring that inappropriate scaling procedures cause apparent discrepancies between medians and means and between means calculated using different units. (LMO)
Descriptors: Norm Referenced Tests, Scaling, Scoring, Statistical Distributions
Peer reviewed: Friedland, David L.; Michael, William B. – Educational and Psychological Measurement, 1987
A sample of 153 male police officers served as subjects in a test validation study with two objectives: (1) to compare reliability estimates of a 16-item objective achievement examination scored by the conventional items-right formula and by four different procedures; and (2) to obtain comparative concurrent validity coefficients of scores arising from…
Descriptors: Achievement Tests, Concurrent Validity, Correlation, Police
Peer reviewed: Webster, G. D.; And Others – Evaluation and the Health Professions, 1988
Whether alternative scoring strategies result in improved measurement properties of patient management problems (PMPs) was studied. Nine scoring systems (proficiency, efficiency, select, omit, data gathering, therapy, absolute, goal-oriented, and empiric expert score) were applied to 16 PMPs used in a certifying examination taken by 4,590…
Descriptors: Certification, Licensing Examinations (Professions), Multiple Choice Tests, Physicians


