Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 61 |
| Since 2022 (last 5 years) | 385 |
| Since 2017 (last 10 years) | 1107 |
| Since 2007 (last 20 years) | 3307 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Program Evaluation | 13556 |
| Evaluation Methods | 8670 |
| Teaching Methods | 3613 |
| Program Effectiveness | 2975 |
| Elementary Secondary Education | 1882 |
| Foreign Countries | 1872 |
| Higher Education | 1638 |
| Models | 1532 |
| Evaluation Criteria | 1530 |
| Program Development | 1381 |
| Program Implementation | 1040 |
Audience
| Audience | Count |
| --- | --- |
| Practitioners | 990 |
| Administrators | 410 |
| Teachers | 354 |
| Researchers | 299 |
| Policymakers | 205 |
| Students | 31 |
| Community | 29 |
| Parents | 21 |
| Media Staff | 19 |
| Counselors | 11 |
| Support Staff | 6 |
Location
| Location | Count |
| --- | --- |
| Canada | 321 |
| California | 217 |
| Australia | 214 |
| United Kingdom | 178 |
| Illinois | 142 |
| United States | 142 |
| Florida | 139 |
| Texas | 130 |
| United Kingdom (England) | 121 |
| New York | 104 |
| North Carolina | 94 |
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
| Rating | Count |
| --- | --- |
| Meets WWC Standards without Reservations | 15 |
| Meets WWC Standards with or without Reservations | 27 |
| Does not meet standards | 17 |
Peer reviewed: Werking, Richard Hume – Library Trends, 1980
Reviews various evaluation techniques used on different campuses to measure the effectiveness of library instruction programs. (FM)
Descriptors: Evaluation Methods, Library Instruction, Problems, Program Evaluation
Peer reviewed: St. Pierre, Robert G. – Evaluation Review, 1980
Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)
Descriptors: Evaluation Methods, Field Studies, Influences, Longitudinal Studies
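As a rough illustration of how the factors listed in the St. Pierre abstract interact, the sketch below runs a conventional two-group power calculation and then inflates recruitment for expected attrition. It is not taken from the article: the effect size, significance level, power, and attrition values are assumptions chosen for demonstration, and the statsmodels power routine stands in for whatever procedure an evaluator would actually use.

```python
# Illustrative sketch (not from the article): how effect size, significance
# level, statistical power, and expected attrition jointly shape the sample
# size needed for a two-group longitudinal evaluation.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.30     # assumed standardized difference (Cohen's d) between groups
alpha = 0.05           # significance level for the statistical test
power = 0.80           # desired probability of detecting the assumed effect
attrition_rate = 0.20  # assumed share of participants lost over the study period

# Analyzable cases required per group at the final measurement point
n_complete = TTestIndPower().solve_power(effect_size=effect_size,
                                         alpha=alpha, power=power)

# Inflate initial recruitment so that enough cases survive attrition
n_recruit = n_complete / (1 - attrition_rate)

print(f"Analyzable cases needed per group: {n_complete:.0f}")
print(f"Recruit per group (allowing {attrition_rate:.0%} attrition): {n_recruit:.0f}")
```

Solving for the analyzable sample first and then dividing by the expected retention rate keeps the attrition adjustment explicit, so it can be revised per site without redoing the power calculation.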
Peer reviewed: Revicki, Dennis A.; And Others – Journal of Curriculum Studies, 1981
The curriculum implementation models reviewed utilize observation techniques, focused interviews, questionnaires, and content analysis. Concludes that determining the relationship between the level of program implementation and program outcomes prevents curriculum from becoming ineffective. (KC)
Descriptors: Curriculum Development, Educational Research, Elementary Secondary Education, Evaluation Methods
Peer reviewed: Sanacore, Joseph – Reading Teacher, 1981
Provides an observation checklist that enables school administrators to discuss with teachers specific aspects of successful remedial reading instruction. (Author/FL)
Descriptors: Check Lists, Elementary Education, Evaluation Methods, Program Evaluation
Peer reviewed: Ory, John C.; And Others – Evaluation and Program Planning, 1978
The development and field testing of a model used to evaluate the vocational education programs in a metropolitan community college system are reported; it can also be implemented on vocational programs at other institutions. (Author/JKS)
Descriptors: Community Colleges, Evaluation Methods, Models, Program Evaluation
Peer reviewed: Middleton, Ernest J.; Cohen, Sheila – Journal of Teacher Education, 1979
A combination of survey techniques such as interviews and open-ended questions is used by the University of Kentucky to evaluate its teacher education programs. (LH)
Descriptors: Educational Research, Evaluation Methods, Interviews, Measurement Techniques
Robinson, Sharon E. – Improving Human Performance Quarterly, 1979
Encourages the use of evaluation research rather than process, impact, or comprehensive evaluation, and notes that meaningful evaluation research can occur when the true experiment is combined with longitudinal designs. (JEG)
Descriptors: Evaluation Methods, Formative Evaluation, Institutional Evaluation, Organizational Objectives
McKillip, Jack – Evaluation Quarterly, 1979
Flexibility in evaluative research design does not necessitate the abandonment of randomly constructed comparison groups. Three designs are reviewed which provide at least the option of randomization while maintaining great flexibility. The strengths and weaknesses of the designs are discussed. (Author)
Descriptors: Analysis of Variance, Control Groups, Evaluation Methods, Program Evaluation
Peer reviewed: Schulberg, Herbert C.; Perloff, Robert – American Psychologist, 1979
If program evaluation is to continue its decade-long advance as a specialized field and make contributions at both the operational and policy-setting levels, evaluators must receive a basic education in the theoretical, substantive, methodological, and organizational elements pertinent to their roles. (Author)
Descriptors: Curriculum Development, Evaluation Methods, Evaluators, Human Services
Peer reviewed: Riley, Roberta; Schaffer, Eugene C. – English Journal, 1976
Descriptors: Elective Courses, English Curriculum, Evaluation Methods, Program Evaluation
Brody, James F. – Education and Training of the Mentally Retarded, 1976
The Assessment Program Tool is a method of evaluating instructional programs in institutions for the retarded. (DB)
Descriptors: Evaluation Methods, Institutionalized Persons, Mental Retardation, Program Evaluation
Peer reviewed: Patton, Michael Quinn – Evaluation Practice, 1996
Areas of evaluation are identified in which the formative/summative distinction appears inadequate: (1) knowledge-generating evaluations aimed at conceptual use; (2) developmental evaluation; and (3) use of evaluation to support intervention or empower participants. An argument is also made for the effectiveness of feedback that cannot be…
Descriptors: Evaluation Methods, Evaluation Utilization, Feedback, Formative Evaluation
Peer reviewed: Chen, Huey-tsyh – Evaluation Practice, 1996
The viewpoints of contributors to the forum are reviewed. Many of the disagreements represent fundamental differences in what constitutes evaluation. All four authors agree that the formative/summative distinction is useful and will continue to be an important concept in program evaluation, although the distinction needs greater clarification.…
Descriptors: Context Effect, Evaluation Methods, Evaluators, Formative Evaluation
Peer reviewed: Riccio, James; Fitzpatrick, Jody L. – Evaluation Practice, 1997
The Greater Avenues for Independence (GAIN) welfare-to-work program in California was evaluated by the Manpower Demonstration Research Corporation (MDRC). The evaluation relied on involvement of stakeholders, examination of program implementation, flexibility, the adaptation of evaluation questions and designs during the study, use of random…
Descriptors: Evaluation Methods, Evaluators, Program Evaluation, Qualitative Research
Peer reviewed: Chen, Huey-tsyh – New Directions for Evaluation, 1997
Illustrative case studies support a contingency approach to mixed-method evaluation in which the evaluation team bases its selection of methods on the information to be provided, the availability of data, and the degree to which the program environment is an open or closed system. (SLD)
Descriptors: Case Studies, Context Effect, Evaluation Methods, Models


