Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 186 |
| Since 2022 (last 5 years) | 1065 |
| Since 2017 (last 10 years) | 2887 |
| Since 2007 (last 20 years) | 6172 |
Audience
| Group | Results |
| --- | --- |
| Teachers | 480 |
| Practitioners | 358 |
| Researchers | 152 |
| Administrators | 122 |
| Policymakers | 51 |
| Students | 44 |
| Parents | 32 |
| Counselors | 25 |
| Community | 15 |
| Media Staff | 5 |
| Support Staff | 3 |
Location
| Region | Results |
| --- | --- |
| Australia | 183 |
| Turkey | 157 |
| California | 133 |
| Canada | 124 |
| New York | 118 |
| United States | 112 |
| Florida | 107 |
| China | 103 |
| Texas | 72 |
| United Kingdom | 72 |
| Japan | 70 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 5 |
| Meets WWC Standards with or without Reservations | 11 |
| Does not meet standards | 8 |
Peer reviewed: Collet, Leverne S. – Journal of Educational Measurement, 1971
The purpose of this paper was to provide an empirical test of the hypothesis that elimination scores are more reliable and valid than classical corrected-for-guessing scores or weighted-choice scores. The evidence presented supports the hypothesized superiority of elimination scoring. (Author)
Descriptors: Evaluation, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
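The scoring rules compared in this abstract can be illustrated briefly. The corrected-for-guessing formula is the standard one; the elimination rule shown is one common variant (credit for each incorrect option crossed out, a penalty for eliminating the keyed answer) and is a sketch, not necessarily Collet's exact scheme:

```python
def corrected_for_guessing(right, wrong, k):
    """Classical correction for guessing: S = R - W/(k-1)
    for k-option multiple-choice items."""
    return right - wrong / (k - 1)

def elimination_score(eliminated_wrong, eliminated_key, k):
    """One common elimination-scoring rule (illustrative):
    +1 per incorrect option eliminated, -(k-1) if the keyed
    answer is eliminated."""
    return eliminated_wrong - (k - 1) * eliminated_key

# 30 right, 10 wrong on 5-option items
print(corrected_for_guessing(30, 10, 5))   # 27.5
# eliminated 2 wrong options but also the key on a 4-option item
print(elimination_score(2, 1, 4))          # -1
```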
Peer reviewed: Menacker, Julius; And Others – College and University, 1971
Descriptors: Academic Ability, Academic Achievement, Admission (School), Admission Criteria
Elashoff, Janet D. – American Educational Research Journal, 1969
Research carried out at the Stanford Center for Research and Development in Teaching (Stanford University), pursuant to a contract with the U.S. Office of Education under the provisions of the Cooperative Research Program.
Descriptors: Analysis of Covariance, Analysis of Variance, Correlation, Factor Analysis
Jenkins, Janet – Media in Education and Development, 1983
A description of MAIL (Micro-Assisted Learning), a microcomputer system for distance teaching which corrects mailed-in tests and generates letters commenting on each of the answers, is used to identify criteria which will help determine whether an innovation will be successful. These criteria include accessibility, operational ease, and learner…
Descriptors: Adult Education, Computer Oriented Programs, Distance Education, Foreign Countries
Peer reviewed: Tatsuoka, Kikumi K.; Tatsuoka, Maurice M. – Journal of Educational Measurement, 1983
This study introduces the individual consistency index (ICI), which measures the extent to which patterns of responses to parallel sets of items remain consistent over time. ICI is used as an error diagnostic tool to detect aberrant response patterns resulting from the consistent application of erroneous rules of operation. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Error Patterns, Measurement Techniques
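The idea behind a consistency index of this kind can be sketched as follows. This is a schematic stand-in, not the Tatsuokas' exact ICI formula: it simply measures how often an examinee answers parallel item pairs the same way (both right or both wrong), so a low value flags an aberrant response pattern:

```python
def pattern_consistency(responses_a, responses_b):
    """Fraction of parallel item pairs answered the same way
    (1 = right, 0 = wrong). Schematic illustration of a
    consistency index, not the published ICI."""
    if len(responses_a) != len(responses_b):
        raise ValueError("parallel item sets must be the same length")
    matches = sum(a == b for a, b in zip(responses_a, responses_b))
    return matches / len(responses_a)

# identical outcomes on 3 of 4 parallel pairs
print(pattern_consistency([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.75
```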
Peer reviewed: Chase, Clinton I. – Journal of Educational Measurement, 1983
Proposition analysis was used to equate the text base of two essays with different readability levels. Easier reading essays were given higher scores than difficult reading essays. The results appear to identify another noncontent influence on essay test scores, leaving increasingly less variance for differences in content. (Author/PN)
Descriptors: Content Analysis, Difficulty Level, Essay Tests, Higher Education
Peer reviewed: McGrath, Robert E. V.; Burkhart, Barry R. – Journal of Clinical Psychology, 1983
Assessed whether accounting for variables in the scoring of the Social Readjustment Rating Scale (SRRS) would improve the predictive validity of the inventory. Results from 107 sets of questionnaires showed that income and level of education are significant predictors of the capacity to cope with stress. (JAC)
Descriptors: Adults, Coping, Educational Attainment, Income
van den Brink, Wulfert – Evaluation in Education: International Progress, 1982
Binomial models for domain-referenced testing are compared, emphasizing the assumptions underlying the beta-binomial model. Advantages and disadvantages are discussed. A proposed item sampling model is presented which takes the effect of guessing into account. (Author/CM)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Sampling, Measurement Techniques
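The beta-binomial model discussed in this entry has a closed-form score distribution: the number-correct score on an n-item test is binomial with a success probability drawn from a Beta(a, b) distribution. A minimal sketch of its probability mass function (standard result, computed here with stdlib gamma functions; parameter names are illustrative):

```python
from math import comb, gamma

def beta_fn(a, b):
    """Beta function B(a, b) via the gamma function."""
    return gamma(a) * gamma(b) / gamma(a + b)

def beta_binomial_pmf(x, n, a, b):
    """P(X = x) for a beta-binomial: n items, true score
    proportion drawn from Beta(a, b)."""
    return comb(n, x) * beta_fn(x + a, n - x + b) / beta_fn(a, b)

# With a = b = 1 (uniform prior) every score 0..n is equally likely:
print(beta_binomial_pmf(2, 4, 1, 1))  # 0.2
```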
Peer reviewed: Stewart, Michael J.; Blair, William O. – Perceptual and Motor Skills, 1982
Raters' agreement and the relative consistency of diving judges at a boys' competition were analyzed using intraclass correlations within 16 position x type combinations. Judges' variance was significant for 5 of the 16 combinations. Point estimates were generally greater for consistency than for raters' agreement about scores. (Author/CM)
Descriptors: Analysis of Variance, Competitive Selection, Correlation, Decision Making
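An intraclass correlation of the kind used in this entry can be computed from one-way ANOVA mean squares. A minimal sketch of ICC(1), the one-way random-effects form (the study may have used a different ICC variant; function and variable names are illustrative):

```python
def icc1(ratings):
    """One-way random-effects ICC(1).
    `ratings` is a list of targets, each a list of k scores
    from k raters: ICC = (MSB - MSW) / (MSB + (k-1) * MSW)."""
    n = len(ratings)       # number of targets rated
    k = len(ratings[0])    # raters per target
    grand = sum(sum(r) for r in ratings) / (n * k)
    ms_between = k * sum((sum(r) / k - grand) ** 2 for r in ratings) / (n - 1)
    ms_within = sum((x - sum(r) / k) ** 2
                    for r in ratings for x in r) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# two raters in perfect agreement across three dives
print(icc1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```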
Pool, R. J. – Assessment in Higher Education, 1979
A computer program to process student examination results is outlined. The objectives and basic structure of the program are discussed, but details of the coding of the program are omitted. The way in which the computer can save a great deal of time and effort is addressed. (JMD)
Descriptors: Computer Programs, Display Systems, Guidelines, Higher Education
Peer reviewed: Spencer, Ernest – Scottish Educational Review, 1981
Using data from the SCRE Criterion Test composition papers, the author tests the hypothesis that the bulk of inter-marker unreliability is caused by inter-marker inconsistency--which is not correctable statistically. He suggests that a shift to "consensus" standards will realize greater improvements than statistical standardizing alone.…
Descriptors: Achievement Tests, English Instruction, Essay Tests, Reliability
Atkinson, George F.; Doadt, Edward – Assessment in Higher Education, 1980
Some perceived difficulties with conventional multiple choice tests are mentioned, and a modified form of examination is proposed. It uses a computer program to award partial marks for partially correct answers and full marks for correct answers, and to check for widespread misunderstanding of an item or subject. (MSE)
Descriptors: Achievement Tests, Computer Assisted Testing, Higher Education, Multiple Choice Tests
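The two features described in this entry, partial credit and a check for widespread misunderstanding, can be sketched in a few lines. This is an illustrative rule set under assumed conventions (a per-item map of partially correct options and a flagging threshold), not Atkinson and Doadt's actual program:

```python
def partial_credit(response, key, partial):
    """Full marks for the keyed answer, a listed fraction for
    partially correct options, zero otherwise (illustrative)."""
    if response == key:
        return 1.0
    return partial.get(response, 0.0)

def flag_misunderstood(responses, key, threshold=0.5):
    """Return the modal wrong option when more than `threshold`
    of all examinees chose it -- a sign of widespread
    misunderstanding of the item."""
    wrong = [r for r in responses if r != key]
    if not wrong:
        return None
    modal = max(set(wrong), key=wrong.count)
    return modal if wrong.count(modal) / len(responses) > threshold else None

print(partial_credit("b", "a", {"b": 0.5}))          # 0.5
print(flag_misunderstood(["b", "b", "b", "a"], "a"))  # b
```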
Peer reviewed: Bradley, Fred O.; And Others – Journal of Consulting and Clinical Psychology, 1980
No WISC-R IQ scale is immune to serious scoring errors. Inspection of the standard deviations reveals that the score an examinee receives for a given performance on WISC-R content can easily vary by six to eight IQ points. (Author)
Descriptors: Children, Diagnostic Tests, Elementary Secondary Education, Error of Measurement
Peer reviewed: Pease, Paul L. – Journal of Optometric Education, 1980
An inexpensive, flexible, and practical method for providing students with immediate feedback, not only on tests but also on other forms of instructional materials, is described. Latent image materials needed include: a spirit master, a transfer sheet, a spirit duplicator, and a latent image developer. (MLW)
Descriptors: Allied Health Occupations Education, Feedback, Higher Education, Optometry
Peer reviewed: Munson, J. Michael – Educational and Psychological Measurement, 1980
An interval scaling procedure was used to interpret the Rokeach Value Survey. Success based on student's perceived success was significantly correlated with grade point average. The modified Rokeach discriminated more successful from less successful students at levels significantly beyond chance. (Author/CP)
Descriptors: Academic Achievement, Higher Education, Predictive Validity, Scoring


