Zandniapour, Lily; Deterding, Nicole M. – American Journal of Evaluation, 2018
Tiered evidence initiatives are an important federal strategy to incentivize and accelerate the use of rigorous evidence in planning, implementing, and assessing social service investments. The Social Innovation Fund (SIF), a program of the Corporation for National and Community Service, adopted a public-private partnership approach to tiered…
Descriptors: Program Effectiveness, Program Evaluation, Research Needs, Evidence
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Ewert, Alan; Sibthorp, Jim – Journal of Experiential Education, 2009
There is an increasing interest in the field of experiential education to move beyond simply documenting the value of experiential education programs and, instead, develop more evidence-based models for experiential education practice (cf., Gass, 2005; Henderson, 2004). Due in part to the diversity of experiential education programs, participants,…
Descriptors: Outcomes of Education, Evidence, Models, Program Evaluation
Stockard, Jean – Current Issues in Education, 2010
A large body of literature documents the central importance of fidelity of program implementation in creating an internally valid research design and considering such fidelity in judgments of research quality. The What Works Clearinghouse (WWC) provides web-based summary ratings of educational innovations and is the only rating group that is…
Descriptors: Research Design, Educational Innovation, Program Implementation, Program Effectiveness
House, Ernest R. – American Journal of Evaluation, 2008
Drug studies are often cited as the best exemplars of evaluation design. However, many of these studies are seriously biased in favor of positive findings for the drugs evaluated, even to the point where dangerous effects are hidden. In spite of using randomized designs and double blinding, drug companies have found ways of producing the results…
Descriptors: Integrity, Evaluation Methods, Program Evaluation, Experimenter Characteristics
Xu, Zeyu; Nichols, Austin – National Center for Analysis of Longitudinal Data in Education Research, 2010
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…
Descriptors: Test Format, Reading Tests, Norm Referenced Tests, Research Design
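The limited power of clustered randomization that Xu and Nichols describe is conventionally quantified with the design effect (Kish's formula), which inflates the variance of an estimate when whole classrooms or schools are randomized rather than individual students. The sketch below is a generic illustration of that standard formula, not code or values from the study; the cluster size and intraclass correlation (ICC) are hypothetical.

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from randomizing clusters instead of individuals.

    icc is the intraclass correlation: the share of outcome variance
    that lies between clusters (e.g., between classrooms).
    """
    return 1 + (cluster_size - 1) * icc


def effective_sample_size(n: int, cluster_size: int, icc: float) -> float:
    """Sample size an individually randomized trial would need for the same precision."""
    return n / design_effect(cluster_size, icc)


# Hypothetical numbers: 1,000 students in classrooms of 25, ICC = 0.2.
deff = design_effect(cluster_size=25, icc=0.2)          # 1 + 24 * 0.2 = 5.8
n_eff = effective_sample_size(1000, cluster_size=25, icc=0.2)
```

With these illustrative values, 1,000 clustered students yield the precision of only about 172 individually randomized students, which is why clustered designs need far larger samples or covariate adjustment to recover power.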
Shi, Yan; Tsang, Mun C. – Educational Research Review, 2008
This is a critical review of methodological issues in the evaluation of adult literacy education programs in the United States. It addresses the key research question: What are the appropriate methods for evaluating these programs under given circumstances? It identifies 15 evaluation studies that are representative of a range of adult literacy…
Descriptors: Program Effectiveness, Adult Literacy, Adult Education, Educational Research
Bamberger, Michael; White, Howard – Journal of MultiDisciplinary Evaluation, 2007
The purpose of this article is to extend the discussion of issues currently being debated on the need for more rigorous program evaluation in educational and other sectors of research, to the field of international development evaluation, reviewing the different approaches which can be adopted to rigorous evaluation methodology and their…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Research, Convergent Thinking

Glass, Norman – Children & Society, 2001
Describes recent developments in United Kingdom politics that have affected development of evidence-based policy for children. Examines the notion of "what works," leading to a suggestion that evaluators be concerned with "what is worth doing" for children. Considers robustness as a guiding principle for evaluation design and…
Descriptors: Evaluation Problems, Foreign Countries, Politics, Program Evaluation

Baker, Eva L. – 1984
This chapter addresses the problems encountered in the formative evaluation of instructional development projects and the instructional development process. Three types of formative evaluation--component, convergent, and contextual--are distinguished, and the consequences of using the wrong type of evaluation in a particular situation or project…
Descriptors: Data Collection, Evaluation Methods, Evaluation Problems, Evaluation Utilization
Fitz-Gibbon, Carol Taylor; Morris, Lynn Lyons – 1987
The "CSE Program Evaluation Kit" is a series of nine books intended to assist people conducting program evaluations. This volume, the third in the kit, discusses the logic underlying the use of quantitative research designs, including the pretest-posttest design, and supplies step-by-step procedures for setting up and interpreting the…
Descriptors: Analysis of Variance, Evaluation Methods, Evaluation Problems, Experiments
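The pretest-posttest control-group logic that the Fitz-Gibbon and Morris volume walks through can be illustrated with a minimal gain-score comparison: each group's average gain is computed, and the difference in gains estimates the program effect. This is a generic sketch with invented scores, not an example taken from the kit.

```python
# Hypothetical pretest/posttest scores for a treatment and a control group.
treat_pre = [50, 55, 60, 45, 52]
treat_post = [58, 62, 66, 53, 60]
ctrl_pre = [51, 54, 59, 46, 50]
ctrl_post = [53, 57, 61, 48, 52]


def mean(xs):
    return sum(xs) / len(xs)


# Average gain within each group; the control gain absorbs maturation
# and retesting effects common to both groups.
treat_gain = mean(treat_post) - mean(treat_pre)
ctrl_gain = mean(ctrl_post) - mean(ctrl_pre)

# Difference-in-gains estimate of the program effect.
effect = treat_gain - ctrl_gain
```

With these invented numbers the treatment group gains 7.4 points, the control group 2.2, so the estimated effect is 5.2 points; in practice the kit's step-by-step procedures add significance testing (e.g., analysis of variance) on top of this logic.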

Newburn, Tim – Children & Society, 2001
Discusses the history of evaluation research, focusing on current emphasis on "evidence-based policy." Highlights five issues emerging as relevant: evaluation can be done in many ways, serious doubts exist about the usefulness of the term "evaluation," evaluation is held back by paradigm wars, views of evaluation are…
Descriptors: Definitions, Evaluation Methods, Evaluation Problems, Foreign Countries
Yun, John T. – Education and the Public Interest Center, 2008
A new report published by the Manhattan Institute for Education Policy, "The Effect of Special Education Vouchers on Public School Achievement: Evidence from Florida's McKay Scholarship Program," attempts to examine the complex issue of how competition introduced through school vouchers affects student outcomes in public schools. The…
Descriptors: Evidence, Research Design, Public Schools, Academic Achievement

Moskowitz, Joel M. – Evaluation and Program Planning, 1993
Why conclusions of many outcome evaluations do not stand up to scrutiny is discussed, drawing on examples from evaluations of drug abuse prevention programs. Factors that undermine these studies are largely the result of social-structural problems that influence the design and implementation of the research. (SLD)
Descriptors: Bias, Drug Abuse, Evaluation Problems, Institutional Characteristics
Shaul, Marnie S. – 2001
At the request of a Senate subcommittee, this report describes the value of conducting impact evaluations, describes their current use in evaluating selected early childhood education and care programs, and discusses the value of other types of early childhood education and care studies currently promoted and sponsored by the Departments of Health…
Descriptors: Day Care, Evaluation Problems, Outcomes of Education, Preschool Education