Search results: 15 journal articles from the Journal of MultiDisciplinary Evaluation, published 2007–2011.
Scriven, Michael – Journal of MultiDisciplinary Evaluation, 2011
In this paper, the author considers certain aspects of the problem of obtaining unbiased information about the merits of a program or product, whether for purposes of decision making or for accountability. The evaluation of personnel, as well as the evaluation of proposals and evaluations, generally involves a different set of problems than those…
Descriptors: Program Evaluation, Evaluation Methods, Test Bias, Personnel Evaluation
Campbell, Donald T. – Journal of MultiDisciplinary Evaluation, 2011
It is a special characteristic of all modern societies that people consciously decide on and plan projects designed to improve their social systems. It is their universal predicament that their projects do not always have their intended effects. It seems inevitable that in most countries this common set of problems, combined with the obvious…
Descriptors: Social Planning, Social Systems, Social Change, Context Effect
Sanders, James R.; Nafziger, Dean N. – Journal of MultiDisciplinary Evaluation, 2011
The purpose of this paper is to provide a basis for judging the adequacy of evaluation plans or, as they are commonly called, evaluation designs. The authors assume that using the procedures suggested in this paper to determine the adequacy of evaluation designs in advance of actually conducting evaluations will lead to better evaluation designs,…
Descriptors: Check Lists, Program Evaluation, Research Design, Evaluation Methods
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Cooksy, Leslie J.; Caracelli, Valerie J. – Journal of MultiDisciplinary Evaluation, 2009
This paper examines the practice of metaevaluation as indicated by the Metaevaluation standard of the Program Evaluation Standards: the evaluation of a specific evaluation to inform stakeholders about that evaluation's strengths and weaknesses. The findings from an analysis of eighteen metaevaluations, including a description of the data…
Descriptors: Program Evaluation, Evaluation Criteria, Standards, Evaluation Research
Walser, Tamara M.; Bridges, Keith; Mattingly, Kate – Journal of MultiDisciplinary Evaluation, 2008
Charter Theatre is a small professional theatre in Washington, DC. Its mission is to develop and produce new plays. Like other organizations, Charter Theatre wants to be accountable. Its members saw early the need for evaluation--a repeatable process to assure the quality of their work--and have infused their development process with evaluation.…
Descriptors: Theater Arts, Evaluation Methods, Evaluation Research, Program Evaluation
Peck, Laura R.; Gorzalski, Lindsey M. – Journal of MultiDisciplinary Evaluation, 2009
Background: Research on evaluation use focuses on putting evaluation recommendations into practice. Prior theoretical research proposes varied frameworks for understanding the use (or lack thereof) of program evaluation results. Purpose: Our purpose is to create and test a single, integrated framework for understanding evaluation use. This article relies…
Descriptors: Evaluation Research, Intervention, Program Evaluation, Content Analysis
Pacheco, Enrique Rebolloso; Fernandez-Ramirez, Baltasar; Andres, Pilar Canton – Journal of MultiDisciplinary Evaluation, 2009
The purposes of metaevaluation go beyond the traditional functions of accountability and enhancement. It helps guide strategic organizational change and legitimizes evaluation systems. Metaevaluation results can also be used to create checklists so that the persons responsible for any evaluation can revise, monitor, and control them by themselves.…
Descriptors: Higher Education, Organizational Change, Program Evaluation, Evaluation Criteria
Stegmann, Tim – Journal of MultiDisciplinary Evaluation, 2009
The Institute for Work, Skills and Training was assigned to evaluate a labor market program aimed at the integration of long-term unemployed individuals aged 50 or older. Integration was to be achieved not only by training and coaching of individuals, but also by building regional networks between labor market stakeholders within a…
Descriptors: Employment Programs, Labor Market, Foreign Countries, Counties
Lowenbein, Oded – Journal of MultiDisciplinary Evaluation, 2008
The United States has a long tradition of evaluating political programs. In the 1930s and 1940s, programs were initiated to reduce unemployment and improve social security as part of the "New Deal." In the late 1960s, somewhat comparable to the U.S. at that time, Germany's new government started its own "New Deal."…
Descriptors: Supply and Demand, Foreign Countries, Federal Government, Educational Policy
Coryn, Chris L. S.; Scriven, Michael – Journal of MultiDisciplinary Evaluation, 2007
The evaluation of government-financed research has become increasingly important in the last few decades in terms of increasing the quality of, and payoff from, the research that is done, reducing the cost of doing it, and lending public credibility to the manner in which research is funded. But there are very large differences throughout the…
Descriptors: Program Effectiveness, Comparative Education, National Programs, Evaluation Research
Karimov, Afar; Borovykh, Alexander; Kuzmin, Alexey; Abdykadyrova, Asel; Efendiev, Djahangir; Greshnova, Ekaterina; Konovalova, Elena; Frants, Inessa; Palivoda, Liubov; Usifli, Seymour; Balakirev, Vladimir – Journal of MultiDisciplinary Evaluation, 2007
This paper provides a general overview of the development of program evaluation in CIS (Commonwealth of Independent States) countries. We start by telling a story that describes how evaluation appeared on the scene, how it developed, and who the key players were in its development. We discuss the issue of demand for and supply of evaluation…
Descriptors: Program Evaluation, Foreign Countries, Regional Programs, Program Development
Bamberger, Michael; White, Howard – Journal of MultiDisciplinary Evaluation, 2007
The purpose of this article is to extend the discussion of issues currently being debated on the need for more rigorous program evaluation in educational and other sectors of research, to the field of international development evaluation, reviewing the different approaches which can be adopted to rigorous evaluation methodology and their…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Research, Convergent Thinking
Della-Piana, Connie Kubo; Della-Piana, Gabriel M. – Journal of MultiDisciplinary Evaluation, 2007
While the current debate in the evaluation community has concentrated on examining and explicating implications of the choice of methods for evaluating federal programs, the authors of this paper address the challenges faced by the government in the selection of funding mechanisms for supporting program evaluation efforts. The choice of funding…
Descriptors: Research and Development, Program Evaluation, Federal Programs, Evaluation Methods
Datta, Lois-ellin – Journal of MultiDisciplinary Evaluation, 2007
The Randomized Control Trials (RCT) design and its quasi-experimental kissing cousin, the Comparison Group Trials (CGT), are golden to some and not even silver to others. At the center of the affection, at the vortex of the discomfort, are beliefs about what it takes to establish causality. These designs are considered primarily when the purpose…
Descriptors: Experimental Groups, Preschool Education, National Programs, Disadvantaged Youth