Showing 1 to 15 of 136 results
Peer reviewed
Stephen Gorard – Review of Education, 2024
This paper describes, and lays out an argument for, the use of a procedure to help groups of reviewers to judge the quality of prior research reports. It argues why such a procedure is needed, and how other existing approaches are only relevant to some kinds of research, meaning that a review or synthesis cannot successfully combine quality…
Descriptors: Credibility, Research Reports, Evaluation Methods, Research Design
Peer reviewed
Hong, Quan Nha; Fàbregues, Sergi; Bartlett, Gillian; Boardman, Felicity; Cargo, Margaret; Dagenais, Pierre; Gagnon, Marie-Pierre; Griffiths, Frances; Nicolau, Belinda; O'Cathain, Alicia; Rousseau, Marie-Claude; Vedel, Isabelle; Pluye, Pierre – Education for Information, 2018
Introduction: Appraising the quality of studies included in systematic reviews combining qualitative and quantitative evidence is challenging. To address this challenge, a critical appraisal tool was developed: the Mixed Methods Appraisal Tool (MMAT). The aim of this paper is to present the enhancements made to the MMAT. Development: The MMAT was…
Descriptors: Mixed Methods Research, Evaluation Methods, Research Design, Evaluation Criteria
Hughes, Katherine L.; Miller, Trey; Reese, Kelly – Grantee Submission, 2021
This report from the Career and Technical Education (CTE) Research Network Lead team provides final results from an evaluability assessment of CTE programs that feasibly could be evaluated using a rigorous experimental design. Evaluability assessments (also called feasibility studies) are used in education and other fields, such as international…
Descriptors: Program Evaluation, Vocational Education, Evaluation Methods, Educational Research
Peer reviewed
Holbrook, Allyson; Dally, Kerry; Avery, Carol; Lovat, Terry; Fairbairn, Hedy – Journal of Academic Ethics, 2017
There is an expectation that all researchers will act ethically and responsibly in the conduct of research involving humans and animals. While research ethics is mentioned in quality indicators and codes of responsible researcher conduct, it appears to have little profile in doctoral assessment. There seems to be an implicit assumption that…
Descriptors: Graduate Students, Doctoral Degrees, Doctoral Dissertations, Student Research
Peer reviewed
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
Peer reviewed
Wendt, Oliver; Miller, Bridget – Education and Treatment of Children, 2012
Critical appraisal of the research literature is an essential step in informing and implementing evidence-based practice. Quality appraisal tools that assess the methodological quality of experimental studies provide a means to identify the most rigorous research suitable for evidence-based decision-making. In single-subject experimental research,…
Descriptors: Research, Evidence, Research Design, Evaluation Methods
Peer reviewed
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Peer reviewed
Fryer, Marilyn – Creativity Research Journal, 2012
This article explores a number of key issues with regard to the measurement of creativity in the course of conducting psychological research or when applying various evaluation measures. It is argued that, although creativity is a fuzzy concept, it is no more difficult to investigate than other fuzzy concepts people tend to take for granted. At…
Descriptors: Creativity, Educational Research, Psychological Studies, Evaluation Methods
Monahan, Shannon; Kratochwill, Thomas; Lipscomb, Stephen – Society for Research on Educational Effectiveness, 2011
The What Works Clearinghouse (WWC) seeks to provide educators, policymakers, researchers, and the public with a central and trusted source of scientific evidence for what works in education. The WWC was established in 2002 by the U.S. Department of Education's Institute of Education Sciences (IES). It serves as a decision-making resource by…
Descriptors: Expertise, Evidence, Educational Research, Clearinghouses
Peer reviewed
Goldstein, Howard; Lackey, Kimberly C.; Schneider, Naomi J. B. – Exceptional Children, 2014
This review presents a novel framework for evaluating evidence based on a set of parallel criteria that can be applied to both group and single-subject experimental design (SSED) studies. The authors illustrate use of this evaluation system in a systematic review of 67 articles investigating social skills interventions for preschoolers with autism…
Descriptors: Preschool Education, Preschool Children, Intervention, Autism
Peer reviewed
Bustos, Antonio; Arostegui, Jose Luis – Quality of Higher Education, 2012
Universities in Europe have been playing an increasingly important role in the institutional evaluation of political and social systems for the last thirty years. Their major contribution to those processes of accountability has been to add methods and safeguards of evaluative research. In this paper we report an illustration of how evaluative…
Descriptors: Research Administration, Evaluation Criteria, Evaluation Methods, Social Services
Peer reviewed
Sanders, James R.; Nafziger, Dean N. – Journal of MultiDisciplinary Evaluation, 2011
The purpose of this paper is to provide a basis for judging the adequacy of evaluation plans or, as they are commonly called, evaluation designs. The authors assume that using the procedures suggested in this paper to determine the adequacy of evaluation designs in advance of actually conducting evaluations will lead to better evaluation designs,…
Descriptors: Check Lists, Program Evaluation, Research Design, Evaluation Methods
Peer reviewed
Reichardt, Charles S. – American Journal of Evaluation, 2011
I define a treatment effect in terms of a comparison of outcomes and provide a typology of all possible comparisons that can be used to estimate treatment effects, including comparisons that are relatively unknown in both the literature and practice. I then assess the relative merit, worth, and value of all possible comparisons based on the…
Descriptors: Program Effectiveness, Evaluation Methods, Evaluation Criteria, Comparative Analysis
Peer reviewed
Ross, Margaret E.; Narayanan, N. Hari; Hendrix, Theron Dean; Myneni, Lakshman Sundeep – Journal of MultiDisciplinary Evaluation, 2011
Background: The philosophical underpinnings of evaluation guidelines set forth by a funding agency can sometimes seem inconsistent with that of the intervention. Purpose: Our purpose is to introduce questions pertaining to the contrast between the instructional program's underlying philosophical beliefs and assumptions and those underlying our…
Descriptors: Philanthropic Foundations, Grants, Financial Support, Computer Science Education
Peer reviewed
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions