Showing 1 to 15 of 562 results
Peer reviewed
Direct link
Akdere, Mesut; Jiang, Yeling; Lobo, Flavio Destri – European Journal of Training and Development, 2022
Purpose: As new technologies such as immersive and augmented platforms emerge, training approaches are also transforming. The virtual reality (VR) platform provides a completely immersive learning experience for simulated training. Despite the increased prevalence of these technologies, the extant literature lags behind in evaluating…
Descriptors: Training, Computer Simulation, Educational Technology, Program Evaluation
Peer reviewed
Direct link
Geuke, Gemma G. M.; Maric, Marija; Miocevic, Milica; Wolters, Lidewij H.; de Haan, Else – New Directions for Child and Adolescent Development, 2019
The major aim of this manuscript is to bring together two important topics that have recently received much attention in child and adolescent research, albeit separately from each other: single-case experimental designs and statistical mediation analysis. Single-case experimental designs (SCEDs) are increasingly recognized as a valuable…
Descriptors: Children, Adolescents, Research, Case Studies
Peer reviewed
Direct link
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between…
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
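For context on the Reichardt (2022) entry above: the counterfactual definition it references is conventionally written in potential-outcomes notation, roughly as follows (a general sketch of the standard formulation, not an equation quoted from the article):

```latex
% Potential-outcomes sketch of the counterfactual definition:
% Y_i(1) is unit i's outcome with the program, Y_i(0) without it;
% only one of the two is ever observed for a given unit.
\tau_i = Y_i(1) - Y_i(0), \qquad
\mathrm{ATE} = \mathbb{E}\left[\, Y(1) - Y(0) \,\right]
```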
Peer reviewed
PDF on ERIC Download full text
Kenneth R. Jones; Eugenia P. Gwynn; Allison M. Teeter – Journal of Human Sciences & Extension, 2019
This article provides insight into how an adequate approach to selecting methods can establish credible and actionable evidence. The authors offer strategies to effectively support Extension professionals, including program developers and evaluators, in being more deliberate when selecting appropriate qualitative and quantitative methods. In…
Descriptors: Evaluation Methods, Credibility, Evidence, Evaluation Criteria
Peer reviewed
Direct link
Knook, Jorie; Eory, Vera; Brander, Matthew; Moran, Dominic – Journal of Agricultural Education and Extension, 2018
Purpose: Participatory extension programmes are widely used to promote change in the agricultural sector, and an important question is how best to measure the effectiveness of such programmes after implementation. This study seeks to understand the current state of practice through a review of ex post evaluations of participatory extension…
Descriptors: Extension Education, Agricultural Occupations, Program Evaluation, Program Effectiveness
Peer reviewed
Direct link
Bell, Stephen H.; Stapleton, David C.; Wood, Michelle; Gubits, Daniel – American Journal of Evaluation, 2023
A randomized experiment that measures the impact of a social policy in a sample of the population reveals whether the policy will work on average with universal application. An experiment that includes only the subset of the population that volunteers for the intervention generates narrower "proof-of-concept" evidence of whether the…
Descriptors: Public Policy, Policy Formation, Federal Programs, Social Services
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Bergsmann, Evelyn; Klug, Julia; Burger, Christoph; Först, Nora; Spiel, Christiane – Assessment & Evaluation in Higher Education, 2018
There is a lively discussion on how to evaluate competence-based higher education in both evaluation and competence research. The instruments used are often limited to course evaluation or specific competences, taking a rather narrow perspective. Furthermore, the instruments often comprise predetermined competences that cannot be adapted to higher…
Descriptors: Questionnaires, Minimum Competency Testing, Screening Tests, Higher Education
Peer reviewed
Direct link
Bloom, Howard S.; Spybrook, Jessaca – Journal of Research on Educational Effectiveness, 2017
Multisite trials, which are being used with increasing frequency in education and evaluation research, provide an exciting opportunity for learning about how the effects of interventions or programs are distributed across sites. In particular, these studies can produce rigorous estimates of a cross-site mean effect of program assignment…
Descriptors: Program Effectiveness, Program Evaluation, Sample Size, Evaluation Research
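As a rough illustration of the multisite-trial quantities the Bloom and Spybrook (2017) abstract describes, the cross-site mean effect and its variation across sites are often estimated with a mixed model along these lines (an assumed sketch with hypothetical column names, not the authors' code):

```python
# Sketch: cross-site mean effect and cross-site effect variation in a
# multisite trial, via a random-intercept, random-slope mixed model.
# Columns 'outcome', 'treat' (0/1), and 'site' are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def cross_site_effects(df: pd.DataFrame):
    model = smf.mixedlm(
        "outcome ~ treat",    # fixed part: cross-site mean effect of assignment
        df,
        groups=df["site"],    # one random-effects block per site
        re_formula="~treat",  # random intercept and random treatment slope
    )
    result = model.fit()
    mean_effect = result.params["treat"]  # cross-site mean effect estimate
    effect_cov = result.cov_re            # random-effect (co)variances across sites
    return mean_effect, effect_cov
```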
Tang, Yun – ProQuest LLC, 2018
Propensity and prognostic score methods are two statistical techniques used to correct for the selection bias in nonexperimental studies. Recently, the joint use of propensity and prognostic scores (i.e., two-score methods) has been proposed to improve the performance of adjustments using propensity or prognostic scores alone for bias reduction.…
Descriptors: Statistical Analysis, Probability, Bias, Program Evaluation
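To make the two scores in the Tang (2018) entry concrete, here is a minimal sketch of how a propensity score and a prognostic score are typically estimated before being used jointly for adjustment (hypothetical column names; not the dissertation's code):

```python
# Sketch: estimating the two scores used in two-score adjustment.
# Propensity score: P(treated | X), fit on all units.
# Prognostic score: predicted outcome under control, fit on controls only.
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

def add_scores(df: pd.DataFrame, covariates: list) -> pd.DataFrame:
    X = df[covariates].to_numpy()
    ps_model = LogisticRegression(max_iter=1000).fit(X, df["treated"])
    df["propensity"] = ps_model.predict_proba(X)[:, 1]

    controls = df[df["treated"] == 0]
    pg_model = LinearRegression().fit(
        controls[covariates].to_numpy(), controls["outcome"]
    )
    df["prognostic"] = pg_model.predict(X)
    # Units can then be matched or stratified on (propensity, prognostic)
    # jointly to reduce selection bias in the nonexperimental comparison.
    return df
```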
Warner-Richter, Mallory; Lowe, Claire; Tout, Kathryn; Epstein, Dale; Li, Weilin – Child Trends, 2016
The Success By 6® (SB6) initiative is designed to support early care and education centers in improving and sustaining quality in Pennsylvania's Keystone STARS Quality Rating and Improvement System (QRIS). The SB6 evaluation report examines implementation and outcomes. The findings have implications for the SB6 continuous quality improvement process…
Descriptors: Success, Research Reports, Child Care Centers, Quality Assurance
Peer reviewed
Direct link
Munter, Charles; Cobb, Paul; Shekell, Calli – American Journal of Evaluation, 2016
We examined the extent to which mathematics program evaluations that have been conducted according to methodologically rigorous standards have attended to the theories underlying the programs being evaluated. Our analysis focused on the 37 reports of K-12 mathematics program evaluations in the last two decades that have met standards for inclusion…
Descriptors: Evaluation Research, Clearinghouses, Standards, Mathematics Education
Peer reviewed
Direct link
Lindsay M. Fallon; Emily R. DeFouw; Sadie C. Cathcart; Talia S. Berkman; Patrick Robinson-Link; Breda V. O'Keeffe; George Sugai – Journal of Behavioral Education, 2022
School discipline disproportionality has long been documented in educational research, primarily impacting Black/African American and non-White Hispanic/Latinx students. In response, federal policymakers have encouraged educators to change their disciplinary practice, emphasizing that more proactive support is critical to promoting students'…
Descriptors: Discipline, Student Behavior, Behavior Modification, Social Development
Peer reviewed
PDF on ERIC Download full text
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Peer reviewed
Direct link
Kulik, James A.; Fletcher, J. D. – Review of Educational Research, 2016
This review describes a meta-analysis of findings from 50 controlled evaluations of intelligent computer tutoring systems. The median effect of intelligent tutoring in the 50 evaluations was to raise test scores 0.66 standard deviations over conventional levels, or from the 50th to the 75th percentile. However, the amount of improvement found in…
Descriptors: Intelligent Tutoring Systems, Meta Analysis, Computer Assisted Instruction, Statistical Analysis
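The percentile conversion in the Kulik and Fletcher (2016) abstract can be verified with a one-line normal-distribution calculation (a standard back-of-envelope check, assuming normally distributed scores):

```python
# Check: a 0.66 SD gain moves the median student from the 50th to
# roughly the 75th percentile of the comparison distribution.
from scipy.stats import norm

print(f"{norm.cdf(0.66):.3f}")  # ~0.745, i.e. about the 75th percentile
```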