Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 11 |
Descriptor
Evaluation Problems | 23 |
Measurement Techniques | 23 |
Program Evaluation | 23 |
Evaluation Methods | 20 |
Program Effectiveness | 8 |
Educational Assessment | 7 |
Models | 7 |
Measurement | 6 |
Educational Policy | 5 |
Educational Research | 5 |
Elementary Secondary Education | 5 |
Author
Andru, Peter | 1 |
Becker, Heather | 1 |
Benjamin, Lehn M. | 1 |
Botchkarev, Alexei | 1 |
Chouinard, Jill Anne | 1 |
Conrad, Kendon J. | 1 |
Cozzens, Susan E. | 1 |
Daigneault, Pierre-Marc | 1 |
Davey, Tim L. | 1 |
Davis, Rita, Ed. | 1 |
Devos, Geert | 1 |
Education Level
Elementary Secondary Education | 5 |
Higher Education | 4 |
Postsecondary Education | 2 |
Adult Education | 1 |
Audience
Practitioners | 1 |
Teachers | 1 |
Laws, Policies, & Programs
Education Consolidation… | 1 |
Government Performance and… | 1 |
No Child Left Behind Act 2001 | 1 |
Stewart B McKinney Homeless… | 1 |
Assessments and Surveys
General Educational… | 1 |
Merchie, Emmelien; Tuytens, Melissa; Devos, Geert; Vanderlinde, Ruben – Research Papers in Education, 2018
Evaluating teachers' professional development initiatives (PDI) is one of the main challenges for the teacher professionalisation field. Although different studies have focused on the effectiveness of PDI, the reported effects and the evaluative methods used have been found to vary widely. By means of a narrative review, this study provides an…
Descriptors: Program Evaluation, Program Effectiveness, Faculty Development, Teacher Education Programs
Chouinard, Jill Anne – American Journal of Evaluation, 2013
Evaluation occurs within a specific context and is influenced by the economic, political, historical, and social forces that shape that context. The culture of evaluation is thus very much embedded in the culture of accountability that currently prevails in public sector institutions, policies, and programs. As such, our understanding of the…
Descriptors: Accountability, Public Sector, Participatory Research, Context Effect
Benjamin, Lehn M. – American Journal of Evaluation, 2012
Why do we continue to see evidence that nonprofit staff feel that outcome measurement is missing important aspects of their work? Based on an analysis of over 1,000 pages of material in 10 outcome measurement guides and a focused literature review of frontline work in three types of nonprofit organizations, this article shows that existing outcome…
Descriptors: Program Effectiveness, Nonprofit Organizations, Human Services, Community Development
Garcia, David R. – National Education Policy Center, 2013
In this report, the Brown Center on Education Policy at the Brookings Institution presents the results of a self-developed Education Choice and Competition Index (ECCI) along with an interactive application that grades large school districts according to the ECCI. The index is composed of 13 pro-choice criteria. The authors present the ECCI as a…
Descriptors: Evidence, Competition, School Choice, School District Size
Andru, Peter; Botchkarev, Alexei – Journal of MultiDisciplinary Evaluation, 2011
Background: Return on investment (ROI) is one of the most popular evaluation metrics. ROI analysis (when applied correctly) is a powerful tool for evaluating existing information systems and making informed acquisition decisions. However, practical use of the ROI is complicated by a number of uncertainties and controversies. The article…
Descriptors: Outcomes of Education, Information Systems, School Business Officials, Evaluation Methods
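
For readers unfamiliar with the metric discussed in the entry above, the short sketch below shows the conventional net-benefit-over-cost ROI formula. It is a minimal illustration only; the function name and the sample figures are assumptions and are not drawn from Andru and Botchkarev's article.

# A minimal sketch of the conventional return-on-investment calculation:
# ROI = (total benefit - total cost) / total cost. The function name and the
# sample figures below are illustrative assumptions, not taken from the article.
def roi(total_benefit: float, total_cost: float) -> float:
    if total_cost <= 0:
        raise ValueError("total_cost must be positive")
    return (total_benefit - total_cost) / total_cost

# Example: a system costing 200,000 that returns 260,000 in benefits
# yields an ROI of 0.30 (30%).
print(f"ROI: {roi(260_000, 200_000):.0%}")

The uncertainties the authors point to typically enter through how total benefit and total cost are estimated, not through the arithmetic itself.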
Daigneault, Pierre-Marc; Jacob, Steve – American Journal of Evaluation, 2009
While participatory evaluation (PE) constitutes an important trend in the field of evaluation, its ontology has not been systematically analyzed. As a result, the concept of PE is ambiguous and inadequately theorized. Furthermore, no existing instrument accurately measures stakeholder participation. First, this article attempts to overcome these…
Descriptors: Concept Formation, Evaluation Methods, Participation, Test Construction
Rymer, Les – Group of Eight (NJ1), 2011
Current economic conditions and the increasing competition for government funding are leading to an increased focus on the impact of research. Measuring the impact of research is difficult because not all impacts are direct and some can be negative or result from the identification of problems that require a non-research response. The time between…
Descriptors: Foreign Countries, Program Effectiveness, Research Utilization, Measurement
McLeod, Bryce D.; Southam-Gerow, Michael A.; Weisz, John R. – School Psychology Review, 2009
This special series focused on treatment integrity in the child mental health and education field is timely. The articles do a laudable job of reviewing (a) the current status of treatment integrity research and measurement, (b) existing conceptual models of treatment integrity, and (c) the limitations of prior research. Overall, this thoughtful…
Descriptors: Evaluation Research, Children, Intervention, Research Methodology

Wholey, Joseph S. – New Directions for Program Evaluation, 1987
Evaluability assessment is a diagnostic and prescriptive technique that can be used to determine the extent to which different problems inhibit program evaluation. It involves policymakers, managers, and staff in developing program theory, clarifying intended uses of evaluation information, and planning further evaluations that would stimulate…
Descriptors: Evaluation Problems, Evaluation Utilization, Evaluators, Measurement Techniques
Schulte, Ann C.; Easton, Julia E.; Parker, Justin – School Psychology Review, 2009
Documenting treatment integrity is an important issue in research and practice in any discipline concerned with prevention and intervention. However, consensus concerning the dimensions of treatment integrity and how they should be measured has yet to emerge. Advances from three areas in which significant treatment integrity work has taken…
Descriptors: Substance Abuse, Prevention, Outcomes of Treatment, School Psychology
Gresham, Frank M. – School Psychology Review, 2009
The concept of treatment integrity cuts across a diversity of fields involved with providing treatments or interventions to individuals. In medical treatments, the concept of "treatment compliance" or "treatment adherence" is an important and problematic issue. In the field of nutrition, the concept of "dietary adherence" is important for…
Descriptors: Program Implementation, Psychometrics, Definitions, Intellectual Disciplines

Cozzens, Susan E. – Evaluation and Program Planning, 1997
Reviews the state of the art in performance measures and assessment processes in program evaluation before the passage of the Government Performance and Results Act (GPRA), and discusses difficulties in fitting these practices into the GPRA format. Presents a simple logic model for evaluating research programs. (SLD)
Descriptors: Evaluation Methods, Evaluation Problems, Knowledge Level, Measurement Techniques

Scriven, Michael – Studies in Educational Evaluation, 1994
This essay provides some background on, explanation of, and justification of a theoretical framework for all types of evaluation. This framework is used to connect and underpin the approaches of other contributors to this special issue. A core discipline of evaluation is proposed that ties assorted evaluation fields together. (SLD)
Descriptors: Educational Research, Elementary Secondary Education, Evaluation Methods, Evaluation Problems
Kong, Young Hee; Park, In-Jo; Jacobs, Ronald L. – Online Submission, 2006
Investments in web-based training (WBT) have increased, presumably to attain cost savings and performance improvement. However, it has been difficult to assess the cost of WBT. In order to conduct an accurate cost analysis of WBT, three primary issues remain: 1) identifying the missing cost components of WBT, 2) assessing the costs of…
Descriptors: Cost Effectiveness, Internet, Costs, Web Based Instruction
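
As a rough illustration of the kind of cost roll-up the entry above is concerned with, the sketch below sums hypothetical WBT cost components and reports a cost per learner. The component names and figures are assumptions for illustration only and do not reflect the cost model proposed by the authors.

# Hypothetical WBT cost components (all names and values are illustrative assumptions).
wbt_costs = {
    "content_development": 80_000,
    "platform_licensing": 15_000,
    "instructor_and_support_time": 25_000,
    "learner_time": 40_000,  # an easily overlooked, "missing" component
}

total_cost = sum(wbt_costs.values())
learners = 400
print(f"Total WBT cost: {total_cost:,}")
print(f"Cost per learner: {total_cost / learners:,.2f}")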

Conrad, Kendon J.; Miller, Todd Q. – New Directions for Program Evaluation, 1987
A short history of program theory in evaluation is reviewed. Two related problems in program evaluation are discussed and illustrated: (1) how to specify in measurable terms the consensus of theories and values that guides the program; and (2) how to construct a theoretical framework that specifies the use of these measurements. (Author/JAZ)
Descriptors: Evaluation Methods, Evaluation Problems, History, Measurement Techniques