Showing 1 to 15 of 23 results
Peer reviewed
Zandniapour, Lily; Deterding, Nicole M. – American Journal of Evaluation, 2018
Tiered evidence initiatives are an important federal strategy to incentivize and accelerate the use of rigorous evidence in planning, implementing, and assessing social service investments. The Social Innovation Fund (SIF), a program of the Corporation for National and Community Service, adopted a public-private partnership approach to tiered…
Descriptors: Program Effectiveness, Program Evaluation, Research Needs, Evidence
Peer reviewed
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Peer reviewed
Ewert, Alan; Sibthorp, Jim – Journal of Experiential Education, 2009
There is an increasing interest in the field of experiential education to move beyond simply documenting the value of experiential education programs and, instead, develop more evidence-based models for experiential education practice (cf., Gass, 2005; Henderson, 2004). Due in part to the diversity of experiential education programs, participants,…
Descriptors: Outcomes of Education, Evidence, Models, Program Evaluation
Peer reviewed
Stockard, Jean – Current Issues in Education, 2010
A large body of literature documents the central importance of fidelity of program implementation in creating an internally valid research design and considering such fidelity in judgments of research quality. The What Works Clearinghouse (WWC) provides web-based summary ratings of educational innovations and is the only rating group that is…
Descriptors: Research Design, Educational Innovation, Program Implementation, Program Effectiveness
Peer reviewed
House, Ernest R. – American Journal of Evaluation, 2008
Drug studies are often cited as the best exemplars of evaluation design. However, many of these studies are seriously biased in favor of positive findings for the drugs evaluated, even to the point where dangerous effects are hidden. In spite of using randomized designs and double blinding, drug companies have found ways of producing the results…
Descriptors: Integrity, Evaluation Methods, Program Evaluation, Experimenter Characteristics
Xu, Zeyu; Nichols, Austin – National Center for Analysis of Longitudinal Data in Education Research, 2010
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…
Descriptors: Test Format, Reading Tests, Norm Referenced Tests, Research Design
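The power penalty that the Xu and Nichols abstract attributes to clustered randomization is commonly summarized by the Kish design effect, DEFF = 1 + (m − 1) × ICC, where m is the cluster size and ICC is the intraclass correlation. A minimal illustrative sketch (the cluster size, sample size, and ICC values below are hypothetical, not taken from the study):

```python
def design_effect(cluster_size, icc):
    """Kish design effect for a cluster-randomized sample:
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size
    and ICC is the intraclass correlation coefficient."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_total, cluster_size, icc):
    """Number of independent observations the clustered sample
    is 'worth' for power calculations: n / DEFF."""
    return n_total / design_effect(cluster_size, icc)

# Hypothetical example: 2000 students in classrooms of 25, ICC = 0.2.
deff = design_effect(25, 0.2)                    # about 5.8
ess = effective_sample_size(2000, 25, 0.2)       # about 345 students
```

Even a modest intraclass correlation shrinks the effective sample dramatically, which is why school- or classroom-level randomization tends to have limited precision relative to student-level randomization.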
Peer reviewed
Shi, Yan; Tsang, Mun C. – Educational Research Review, 2008
This is a critical review of methodological issues in the evaluation of adult literacy education programs in the United States. It addresses the key research question: What are the appropriate methods for evaluating these programs under given circumstances? It identifies 15 evaluation studies that are representative of a range of adult literacy…
Descriptors: Program Effectiveness, Adult Literacy, Adult Education, Educational Research
Peer reviewed
Bamberger, Michael; White, Howard – Journal of MultiDisciplinary Evaluation, 2007
The purpose of this article is to extend the discussion of issues currently being debated on the need for more rigorous program evaluation in educational and other sectors of research, to the field of international development evaluation, reviewing the different approaches which can be adopted to rigorous evaluation methodology and their…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Research, Convergent Thinking
Peer reviewed
Glass, Norman – Children & Society, 2001
Describes recent developments in United Kingdom politics that have affected development of evidence-based policy for children. Examines the notion of "what works," leading to a suggestion that evaluators be concerned with "what is worth doing" for children. Considers robustness as a guiding principle for evaluation design and…
Descriptors: Evaluation Problems, Foreign Countries, Politics, Program Evaluation
PDF pending restoration
Baker, Eva L. – 1984
This chapter addresses the problems encountered in the formative evaluation of instructional development projects and the instructional development process. Three types of formative evaluation--component, convergent, and contextual--are distinguished, and the consequences of using the wrong type of evaluation in a particular situation or project…
Descriptors: Data Collection, Evaluation Methods, Evaluation Problems, Evaluation Utilization
Fitz-Gibbon, Carol Taylor; Morris, Lynn Lyons – 1987
The "CSE Program Evaluation Kit" is a series of nine books intended to assist people conducting program evaluations. This volume, the third in the kit, discusses the logic underlying the use of quantitative research designs, including the pretest-posttest design, and supplies step-by-step procedures for setting up and interpreting the…
Descriptors: Analysis of Variance, Evaluation Methods, Evaluation Problems, Experiments
Peer reviewed
Newburn, Tim – Children & Society, 2001
Discusses the history of evaluation research, focusing on current emphasis on "evidence-based policy." Highlights five issues emerging as relevant: evaluation can be done in many ways, serious doubts exist about the usefulness of the term "evaluation," evaluation is held back by paradigm wars, views of evaluation are…
Descriptors: Definitions, Evaluation Methods, Evaluation Problems, Foreign Countries
Yun, John T. – Education and the Public Interest Center, 2008
A new report published by the Manhattan Institute for Education Policy, "The Effect of Special Education Vouchers on Public School Achievement: Evidence from Florida's McKay Scholarship Program," attempts to examine the complex issue of how competition introduced through school vouchers affects student outcomes in public schools. The…
Descriptors: Evidence, Research Design, Public Schools, Academic Achievement
Peer reviewed
Moskowitz, Joel M. – Evaluation and Program Planning, 1993
Why conclusions of many outcome evaluations do not stand up to scrutiny is discussed, drawing on examples from evaluations of drug abuse prevention programs. Factors that undermine these studies are largely the result of social-structural problems that influence the design and implementation of the research. (SLD)
Descriptors: Bias, Drug Abuse, Evaluation Problems, Institutional Characteristics
Shaul, Marnie S. – 2001
At the request of a Senate subcommittee, this report describes the value of conducting impact evaluations, describes their current use in evaluating selected early childhood education and care programs, and discusses the value of other types of early childhood education and care studies currently promoted and sponsored by the Departments of Health…
Descriptors: Day Care, Evaluation Problems, Outcomes of Education, Preschool Education