Showing 1 to 15 of 71 results
Peer reviewed
Direct link
Mark, Melvin M. – American Journal of Evaluation, 2022
Premised on the idea that evaluators should be familiar with a range of approaches to program modifications, I review several existing approaches and then describe another, less well-recognized option. In this newer option, evaluators work with others to identify potentially needed adaptations for select program aspects "in advance." In…
Descriptors: Evaluation Research, Evaluation Problems, Evaluation Methods, Models
Peer reviewed
Direct link
Ellington, Roni; Barajas, Clara B.; Drahota, Amy; Meghea, Cristian; Uphold, Heatherlun; Scott, Jamil B.; Lewis, E. Yvonne; Furr-Holden, C. Debra – American Journal of Evaluation, 2022
Over the last few decades, there has been an increase in the number of large federally funded transdisciplinary programs and initiatives. Scholars have identified a need to develop frameworks, methodologies, and tools to evaluate the effectiveness of these large collaborative initiatives, providing precise ways to understand and assess the…
Descriptors: Evaluation Research, Evaluation Problems, Evaluation Methods, Program Evaluation
Peer reviewed
Direct link
Educational Researcher, 2015
The purpose of this statement is to inform those using or considering the use of value-added models (VAM) about their scientific and technical limitations in the evaluation of educators and programs that prepare teachers. The statement briefly reviews the background and current context of using VAM for evaluations, enumerates specific psychometric…
Descriptors: Value Added Models, Teacher Evaluation, Program Evaluation, Teacher Education Programs
Peer reviewed
Direct link
Rog, Debra J. – American Journal of Evaluation, 2015
This article illustrates the synergistic role between practice and theory in evaluation. Using reflective practice, the author reviews her own work as well as the work of other evaluators to illustrate how theory can influence practice and, in turn, how evaluation practice can inform and grow theory, especially evaluation theory. The following…
Descriptors: Theory Practice Relationship, Evaluation Problems, Program Evaluation, Social Science Research
Peer reviewed
Direct link
Chouinard, Jill Anne – American Journal of Evaluation, 2013
Evaluation occurs within a specific context and is influenced by the economic, political, historical, and social forces that shape that context. The culture of evaluation is thus very much embedded in the culture of accountability that currently prevails in public sector institutions, policies, and programs. As such, our understanding of the…
Descriptors: Accountability, Public Sector, Participatory Research, Context Effect
Peer reviewed
Direct link
Kirkhart, Karen E. – New Directions for Evaluation, 2011
Understanding the influence of multisite evaluation requires careful consideration of cultural context. The author illustrates dimensions of influence and culture with excerpts from four National Science Foundation evaluation case studies and summarizes what influence teaches everyone about culture and what culture teaches everyone about…
Descriptors: Evaluation Utilization, Cultural Context, Evaluation Research, Program Evaluation
Peer reviewed
Direct link
Yates, Brian T. – New Directions for Evaluation, 2012
The value of a program can be understood as referring not only to outcomes, but also to how those outcomes compare to the types and amounts of resources expended to produce the outcomes. Major potential mistakes and biases in assessing the worth of resources consumed, as well as the value of outcomes produced, are explored. Most of these occur…
Descriptors: Program Evaluation, Cost Effectiveness, Evaluation Criteria, Evaluation Problems
Center for Mental Health in Schools at UCLA, 2011
This brief (1) defines indicators, (2) places the concept into the context of the various ways indicators can be used in education, (3) explores some specific considerations and concerns that arise in evaluating results, (4) offers a categorization and examples of short-term outcome indicators for school use, and (5) stresses the need for policy…
Descriptors: Educational Policy, Educational Indicators, Evaluation Methods, Educational Assessment
Peer reviewed
PDF on ERIC (full text)
Miner, Jeremy T. – Research Management Review, 2011
After months of waiting, the grant reviews came back: "excellent," "excellent," and "fair." What?! How can this be? Why is the third review so out of line with the first two? On more than one occasion a principal investigator (PI) has been frustrated not only by a negative funding decision but more so by the accompanying reviewer evaluation forms…
Descriptors: Research Administration, Grants, Feedback (Response), Evaluation Criteria
Peer reviewed
Direct link
Scriven, Michael – Journal of MultiDisciplinary Evaluation, 2011
In this paper, the author considers certain aspects of the problem of obtaining unbiased information about the merits of a program or product, whether for purposes of decision making or for accountability. The evaluation of personnel, as well as the evaluation of proposals and evaluations, generally involves a different set of problems than those…
Descriptors: Program Evaluation, Evaluation Methods, Test Bias, Personnel Evaluation
Peer reviewed
Direct link
Daigneault, Pierre-Marc; Jacob, Steve – American Journal of Evaluation, 2009
While participatory evaluation (PE) constitutes an important trend in the field of evaluation, its ontology has not been systematically analyzed. As a result, the concept of PE is ambiguous and inadequately theorized. Furthermore, no existing instrument accurately measures stakeholder participation. First, this article attempts to overcome these…
Descriptors: Concept Formation, Evaluation Methods, Participation, Test Construction
Peer reviewed
Direct link
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Peer reviewed
PDF on ERIC (full text)
Coyle, James P. – Collected Essays on Learning and Teaching, 2011
Evaluating higher education degree programs is an arduous task. This paper suggests innovative strategies for addressing four types of challenges that commonly occur during program evaluation: identifying theoretical models for evaluation, balancing potentially conflicting standards, accommodating faculty differences, and aligning courses.…
Descriptors: Undergraduate Study, College Programs, Program Evaluation, Evaluation Methods
Peer reviewed
Direct link
Ewert, Alan; Sibthorp, Jim – Journal of Experiential Education, 2009
There is an increasing interest in the field of experiential education to move beyond simply documenting the value of experiential education programs and, instead, develop more evidence-based models for experiential education practice (cf., Gass, 2005; Henderson, 2004). Due in part to the diversity of experiential education programs, participants,…
Descriptors: Outcomes of Education, Evidence, Models, Program Evaluation
Peer reviewed
Direct link
Stockard, Jean – Current Issues in Education, 2010
A large body of literature documents the central importance of fidelity of program implementation in creating an internally valid research design and considering such fidelity in judgments of research quality. The What Works Clearinghouse (WWC) provides web-based summary ratings of educational innovations and is the only rating group that is…
Descriptors: Research Design, Educational Innovation, Program Implementation, Program Effectiveness