Publication Date
In 2025 | 0
Since 2024 | 3
Since 2021 (last 5 years) | 5
Since 2016 (last 10 years) | 8
Since 2006 (last 20 years) | 41
Descriptor
Evaluation Criteria | 261
Evaluation Methods | 261
Research Methodology | 261
Program Evaluation | 115
Research Design | 45
Data Collection | 44
Educational Research | 43
Models | 41
Educational Assessment | 38
Higher Education | 37
Research Problems | 36
Education Level
Higher Education | 14
Adult Education | 10
Postsecondary Education | 8
Elementary Secondary Education | 7
Elementary Education | 3
Early Childhood Education | 1
Grade 1 | 1
High Schools | 1
Preschool Education | 1
Secondary Education | 1
Audience
Researchers | 28
Practitioners | 27
Administrators | 15
Policymakers | 4
Teachers | 4
Counselors | 1
Location
United States | 3
Africa | 2
Australia | 2
Germany | 2
United Kingdom | 2
United Kingdom (England) | 2
Utah | 2
Wisconsin | 2
Belgium | 1
Botswana | 1
California | 1
Assessments and Surveys
California Test of Basic… | 1
Comprehensive Tests of Basic… | 1
Iowa Tests of Basic Skills | 1
Iowa Tests of Educational… | 1
National Longitudinal Survey… | 1
Program for International… | 1
Tests of Achievement and… | 1
Stephen Gorard – Review of Education, 2024
This paper describes, and argues for, a procedure to help groups of reviewers judge the quality of prior research reports. It explains why such a procedure is needed, and why existing approaches are relevant only to some kinds of research, meaning that a review or synthesis cannot successfully combine quality…
Descriptors: Credibility, Research Reports, Evaluation Methods, Research Design
Lars König; Steffen Zitzmann; Tim Fütterer; Diego G. Campos; Ronny Scherer; Martin Hecht – Research Synthesis Methods, 2024
Several AI-aided screening tools have emerged to tackle the ever-expanding body of literature. These tools employ active learning, in which an algorithm reorders abstracts based on human feedback. However, researchers using these tools face a crucial dilemma: when should they stop screening, given that the proportion of relevant studies is unknown? Although…
Descriptors: Artificial Intelligence, Psychological Studies, Researchers, Screening Tests
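The stopping dilemma in this entry is often handled in practice with simple data-driven heuristics. Below is a minimal sketch of one such heuristic: stop once a fixed window of consecutively screened records contains no relevant hits. The rule and the window size are illustrative assumptions of this note, not the stopping criterion the authors evaluate.

```python
# Illustrative stopping heuristic for AI-aided abstract screening.
# This is NOT the criterion from Konig et al. (2024); it is a common,
# simple rule shown only to make the dilemma concrete.

def should_stop(labels: list[int], window: int = 200) -> bool:
    """labels: 1 = relevant, 0 = irrelevant, in screening order.
    Returns True when the most recent `window` human decisions
    found no relevant records."""
    if len(labels) < window:
        return False  # too few decisions so far to justify stopping
    return sum(labels[-window:]) == 0

# Usage: feed in screening decisions as they accumulate.
decisions = [1, 0, 0, 1] + [0] * 200
print(should_stop(decisions))  # True: the last 200 records were all irrelevant
```

The obvious weakness, and the motivation for more principled criteria, is that a run of irrelevant records says little about how many relevant studies remain unscreened.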
Grit Laudel – Research Evaluation, 2024
Researchers' notions of research quality depend on their field of research. Previous studies have shown that field-specific assessment criteria exist, but they could explain neither why these particular criteria exist rather than others, nor how the criteria are used in specific assessment situations. To give initial answers to these questions, formal assessment…
Descriptors: Researchers, Experimenter Characteristics, Intellectual Disciplines, Quality Circles
Stolpe, Karin; Björklund, Lars; Lundström, Mats; Åström, Maria – Higher Education: The International Journal of Higher Education Research, 2021
Previous research shows a discrepancy between different teachers' assessments of student theses. This may be an even larger problem in the context of teacher education, since teacher trainers come from different disciplines. This study investigates how different assessors prioritise among assessment criteria. Criteria were…
Descriptors: Student Evaluation, Theses, Evaluation Criteria, Evaluation Methods
Wignall, Alice; Kelly, C.; Grace, P. – Pastoral Care in Education, 2022
The prevalence of mental health and emotional well-being difficulties in children is increasing, and schools play a key role in addressing this. Whole-school approaches have been suggested as an effective way of supporting children's mental health and well-being; however, there appears to be no consistent approach to their evaluation, and…
Descriptors: School Activities, Mental Health Programs, Evaluation Criteria, Evaluation Methods
Kenneth R. Jones; Eugenia P. Gwynn; Allison M. Teeter – Journal of Human Sciences & Extension, 2019
This article provides insight into how an adequate approach to selecting methods can establish credible and actionable evidence. The authors offer strategies to effectively support Extension professionals, including program developers and evaluators, in being more deliberate when selecting appropriate qualitative and quantitative methods. In…
Descriptors: Evaluation Methods, Credibility, Evidence, Evaluation Criteria
Chase, Manisha Kaur – Research & Practice in Assessment, 2020
Traditional classroom assessment practice often leaves students out of the conversation, exacerbating the unequal power distribution in the classroom. Viewing classrooms as autonomy-inhibiting is known to influence students' psychosocial wellbeing as well as their academic achievement. This is especially relevant in STEM fields where marginalized…
Descriptors: STEM Education, Pilot Projects, Intervention, Pretests Posttests
Piggot-Irvine, Eileen; Rowe, Wendy; Ferkins, Lesley – Educational Action Research, 2015
The focus of this paper is to share thinking about meta-level evaluation of action research (AR), and to introduce indicator domains for assessing and measuring inputs, outputs and outcomes. Meta-level and multi-site evaluation has been rare in AR beyond project implementation and participant satisfaction. The paper is the first of several…
Descriptors: Action Research, Educational Indicators, Program Evaluation, Evaluation Methods
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized controlled trials (RCTs) for evaluating education interventions, observational methods remain the dominant approach for assessing program effects in most areas of education research. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
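The WSC logic can be stated in a few lines: estimate the same effect with an RCT benchmark and with a non-experimental analysis, then treat the difference as the bias of the observational method. The simulation below is a minimal sketch with made-up data and arbitrary parameters; it illustrates the comparison, not Steiner and Wong's actual design.

```python
# Minimal within-study comparison (WSC) sketch with simulated data.
# True effect = 0.30; non-random selection into treatment depends on
# a covariate x that also drives the outcome, so the naive
# observational contrast is biased relative to the RCT benchmark.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)

# Observational arm: selection into treatment correlated with x.
treated = (x + rng.normal(size=n) > 0).astype(float)
y_obs = 0.30 * treated + 0.50 * x + rng.normal(size=n)
naive = y_obs[treated == 1].mean() - y_obs[treated == 0].mean()

# RCT benchmark arm: treatment assigned independently of x.
t_rct = rng.integers(0, 2, size=n).astype(float)
y_rct = 0.30 * t_rct + 0.50 * x + rng.normal(size=n)
benchmark = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()

print(f"RCT benchmark:         {benchmark:+.3f}")
print(f"Observational (naive): {naive:+.3f}")
print(f"Estimated bias:        {naive - benchmark:+.3f}")
```

In an actual WSC, the interest is in whether design elements such as covariate adjustment or matching shrink that estimated bias toward zero.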
Pane, John F.; Baird, Matthew – RAND Corporation, 2014
The purpose of this document is to describe the methods RAND used to analyze achievement for 23 personalized learning (PL) schools for the 2012-13 through 2013-14 academic years. This work was performed at the request of the Bill & Melinda Gates Foundation (BMGF), as part of a multi-year evaluation contract. The 23 schools were selected from a…
Descriptors: Individualized Instruction, Outcome Measures, Academic Achievement, Achievement Gains
Koehn, Peter H.; Uitto, Juha I. – Higher Education: The International Journal of Higher Education and Educational Planning, 2014
Since the mid-1970s, a series of international declarations recognizing the critical link between environmental sustainability and higher education has been endorsed and signed by universities around the world. While academic initiatives in sustainability are blossoming, higher education lacks a comprehensive evaluation framework that is…
Descriptors: Sustainability, Program Evaluation, Curriculum Evaluation, Educational Research
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Storberg-Walker, Julia – Human Resource Development Review, 2012
This "Instructor's Corner" describes a step forward on the journey to write, review, and publish high-quality qualitative research manuscripts. This article examines two existing perspectives on generating high-quality qualitative manuscripts and then compares and contrasts the different elements of each. First, an overview of Rocco's (2010) eight…
Descriptors: Qualitative Research, Research Methodology, Faculty Publishing, Writing for Publication
Munter, Charles; Wilhelm, Anne Garrison; Cobb, Paul; Cordray, David S. – Journal of Research on Educational Effectiveness, 2014
This article draws on previously employed methods for conducting fidelity studies and applies them to an evaluation of an unprescribed intervention. We document the process of assessing the fidelity of implementation of the Math Recovery first-grade tutoring program, an unprescribed, diagnostic intervention. We describe how we drew on recent…
Descriptors: Intervention, Program Implementation, Mathematics Education, Educational Diagnosis
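Fidelity assessment of this kind typically reduces to scoring observed sessions against a checklist of intended program components. The sketch below shows a generic component-checklist index; the component names and data are hypothetical, not the Math Recovery measures used by Munter et al.

```python
# Generic fidelity-of-implementation index: the proportion of intended
# program components observed per session, averaged across sessions.
# Components and session data are hypothetical illustrations.
from statistics import mean

checklist = ["diagnostic_assessment", "targeted_task", "student_verbalization"]

sessions = [
    {"diagnostic_assessment": True, "targeted_task": True, "student_verbalization": False},
    {"diagnostic_assessment": True, "targeted_task": True, "student_verbalization": True},
]

def session_fidelity(session: dict) -> float:
    """Proportion of intended components observed in one session."""
    return sum(session[c] for c in checklist) / len(checklist)

scores = [session_fidelity(s) for s in sessions]
print(f"mean fidelity across sessions: {mean(scores):.2f}")  # 0.83
```

For an unprescribed, diagnostic intervention such as the one studied here, the hard part is defining the checklist itself, since there is no fixed script to observe against.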
Rhodes, William – Evaluation Review, 2012
Research synthesis of evaluation findings is a multistep process. An investigator identifies a research question, acquires the relevant literature, codes findings from that literature, and analyzes the coded data to estimate the average treatment effect and its distribution in a population of interest. The process of estimating the average…
Descriptors: Social Sciences, Regression (Statistics), Meta Analysis, Models
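The final analysis step Rhodes describes, estimating the average treatment effect and its distribution across studies, is commonly done with a random-effects model. The sketch below applies the standard DerSimonian-Laird estimator to made-up coded effect estimates; it is shown for illustration and is not code from the article.

```python
# Random-effects meta-analysis via the DerSimonian-Laird estimator.
# Effect sizes and variances are fabricated for illustration only.
import numpy as np

effects = np.array([0.21, 0.35, 0.10, 0.42, 0.28])      # coded study effects
variances = np.array([0.010, 0.020, 0.015, 0.030, 0.012])  # their variances

# Fixed-effect (inverse-variance) estimate and heterogeneity statistic Q.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed) ** 2)

# DerSimonian-Laird between-study variance tau^2, truncated at zero.
k = len(effects)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects average treatment effect and its standard error.
w_re = 1.0 / (variances + tau2)
mu = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"average effect = {mu:.3f} (SE {se:.3f}), tau^2 = {tau2:.4f}")
```

The between-study variance tau^2 is what captures "its distribution in a population of interest": when tau^2 is large, a single average effect summarizes the literature poorly.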