Ian Greener – International Journal of Social Research Methodology, 2024
This paper argues for three aspects of tolerance with respect to QCA research: tolerance with respect to different approaches to QCA; producing QCA research with tolerance (work that is resistant to criticism); and for QCA researchers to be clear about the tolerance of the solutions they present -- especially in terms of calibration and truth…
Descriptors: Qualitative Research, Research Methodology, Comparative Analysis, Research Design
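The "tolerance of the solutions" point concerns calibration and consistency reporting in QCA. As one concrete illustration (not drawn from the paper), the standard fuzzy-set consistency score for a sufficiency claim can be computed in a few lines; the membership values below are invented.

```python
# Hedged sketch of a standard fuzzy-set QCA consistency score, one of
# the solution properties the abstract suggests researchers report.
# Membership scores are invented for illustration.
def consistency(x, y):
    """Consistency of 'X is sufficient for Y' for fuzzy memberships."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

cause   = [0.8, 0.6, 0.9, 0.2, 0.7]   # membership in the condition
outcome = [0.9, 0.5, 0.8, 0.4, 0.6]   # membership in the outcome
print(round(consistency(cause, outcome), 3))  # 0.906
```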
Weidlich, Joshua; Gašević, Dragan; Drachsler, Hendrik – Journal of Learning Analytics, 2022
As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible nor desirable. Instead, researchers often rely on observational data, based on which they…
Descriptors: Causal Models, Inferences, Learning Analytics, Comparative Analysis
Victoria S. Fringer; Elijah R. Farley; Kimberly Mandery; Michael Badger; Charlee Johnson; Katarina Hanson; Madeline Zamzow; Zoe Armstrong; Lauren LeBourgeois; Tracy Bibelnieks; Jacob W. Wainman – Journal of Chemical Education, 2022
Inquiry-based laboratories were implemented into a General Chemistry Laboratory sequence, and the impact of these exercises on students' experimental design skills was assessed using a four-part assessment developed for this study. This assessment contained a multiple-choice section, a section asking students to explain their reasoning behind a…
Descriptors: Active Learning, Inquiry, Conventional Instruction, Comparative Analysis
Langan, Dean; Higgins, Julian P. T.; Jackson, Dan; Bowden, Jack; Veroniki, Areti Angeliki; Kontopantelis, Evangelos; Viechtbauer, Wolfgang; Simmonds, Mark – Research Synthesis Methods, 2019
Studies combined in a meta-analysis often have differences in their design and conduct that can lead to heterogeneous results. A random-effects model accounts for these differences in the underlying study effects through a heterogeneity variance parameter. The DerSimonian-Laird method is often used to estimate the heterogeneity variance,…
Descriptors: Simulation, Meta Analysis, Health, Comparative Analysis
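For readers unfamiliar with the estimator named in this abstract, a minimal sketch of the DerSimonian-Laird method-of-moments calculation follows; the function name and the example numbers are illustrative, not drawn from the paper.

```python
# Minimal sketch of the DerSimonian-Laird heterogeneity estimator:
# a method-of-moments estimate of the between-study variance tau^2
# from study effects y and within-study variances v.
import numpy as np

def dersimonian_laird_tau2(y, v):
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                 # fixed-effect weights
    y_bar = np.sum(w * y) / np.sum(w)           # weighted mean effect
    q = np.sum(w * (y - y_bar) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)  # scaling constant
    return max(0.0, (q - (len(y) - 1)) / c)     # truncated at zero

# Example: five studies with invented effects and variances
tau2 = dersimonian_laird_tau2([0.2, 0.5, 0.1, 0.4, 0.6],
                              [0.04, 0.05, 0.03, 0.06, 0.05])
print(round(tau2, 4))
```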
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
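The WSC logic lends itself to a brief numeric illustration: because the randomized experiment and the non-experimental study share the same treatment group, the gap between their two estimates serves as an estimate of the NE method's bias. A hedged sketch with invented data:

```python
# Sketch of the within-study comparison (WSC) logic described above:
# contrast an RCT benchmark with a non-experimental (NE) estimate of
# the same effect. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.30

# RCT arm: randomization removes confounding.
treated = rng.normal(true_effect, 1.0, 500)
control = rng.normal(0.0, 1.0, 500)
rct_est = treated.mean() - control.mean()

# NE arm: a self-selected comparison group with a shifted baseline.
ne_comparison = rng.normal(-0.15, 1.0, 500)   # confounded group
ne_est = treated.mean() - ne_comparison.mean()

print(f"RCT benchmark: {rct_est:.3f}")
print(f"NE estimate:   {ne_est:.3f}")
print(f"Estimated NE bias: {ne_est - rct_est:.3f}")
```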
Abutabenjeh, Sawsan; Jaradat, Raed – Teaching Public Administration, 2018
Research design is a critical topic that is central to research studies in science, social science, and many other disciplines. After identifying the research topic and formulating questions, selecting the appropriate design is perhaps the most important decision a researcher makes. Currently, there is a plethora of literature presenting multiple…
Descriptors: Research Design, Research Methodology, Comparative Analysis, Public Administration
Zimmerman, Kathleen N.; Ledford, Jennifer R.; Severini, Katherine E.; Pustejovsky, James E.; Barton, Erin E.; Lloyd, Blair P. – Grantee Submission, 2018
Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional…
Descriptors: Research Design, Evaluation Methods, Synthesis, Validity
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
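As a rough illustration of why RDD estimates are local to the cutoff, the sketch below fits separate linear trends on each side of a cutoff within a bandwidth and takes the gap in predictions at the cutoff itself; the data, cutoff, and bandwidth are all invented.

```python
# Illustrative sharp regression discontinuity estimate via local
# linear fits on each side of the cutoff. Simulated data only.
import numpy as np

rng = np.random.default_rng(1)
n, cutoff, bandwidth = 2000, 0.0, 0.5
x = rng.uniform(-1, 1, n)                      # running variable
y = 0.8 * x + 0.4 * (x >= cutoff) + rng.normal(0, 0.3, n)

def fit_at_cutoff(mask):
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    return intercept + slope * cutoff          # prediction at cutoff

left  = (x < cutoff)  & (x > cutoff - bandwidth)
right = (x >= cutoff) & (x < cutoff + bandwidth)
print(f"RDD estimate: {fit_at_cutoff(right) - fit_at_cutoff(left):.3f}")
```

The estimate recovers the treatment effect only for units near the cutoff, which is the narrow subpopulation the abstract refers to.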
Zimmerman, Kathleen N.; Pustejovsky, James E.; Ledford, Jennifer R.; Barton, Erin E.; Severini, Katherine E.; Lloyd, Blair P. – Grantee Submission, 2018
Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes--overlap measures (percentage non-overlapping data, improvement rate…
Descriptors: Research Design, Evaluation Methods, Synthesis, Intervention
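One of the overlap measures named here, percentage of non-overlapping data (PND), is simple enough to state in a few lines. This sketch assumes higher scores indicate improvement; the data values are invented.

```python
# Sketch of one overlap measure from the abstract: percentage of
# non-overlapping data (PND) for a single-case design.
def pnd(baseline, treatment):
    """Share of treatment-phase points above the best baseline point."""
    best_baseline = max(baseline)
    above = sum(1 for t in treatment if t > best_baseline)
    return 100.0 * above / len(treatment)

print(pnd(baseline=[2, 3, 4, 3], treatment=[5, 6, 4, 7, 8]))  # 80.0
```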
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick – Journal of Special Education, 2015
Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…
Descriptors: Standards, Research Methodology, Comparative Analysis, Experiments
Wendt, Oliver; Miller, Bridget – Education and Treatment of Children, 2012
Critical appraisal of the research literature is an essential step in informing and implementing evidence-based practice. Quality appraisal tools that assess the methodological quality of experimental studies provide a means to identify the most rigorous research suitable for evidence-based decision-making. In single-subject experimental research,…
Descriptors: Research, Evidence, Research Design, Evaluation Methods
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
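The CITS design builds on the basic segmented-regression ITS model, which can be sketched as follows; the level-shift and slope-change terms below are a common parameterization, not necessarily the authors' specification, and all data are simulated.

```python
# Hedged sketch of a simple interrupted time series (ITS) model:
# segmented regression with a level shift and a slope change at the
# interruption. All parameters are invented.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(48)                        # 48 time points
post = (t >= 24).astype(float)           # interruption at t = 24
y = 10 + 0.2 * t + 3.0 * post + 0.1 * post * (t - 24) \
    + rng.normal(0, 1, 48)

# Design matrix: intercept, trend, level change, post-slope change
X = np.column_stack([np.ones_like(t), t, post, post * (t - 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"level change: {beta[2]:.2f}, slope change: {beta[3]:.3f}")
```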
Robertson, Clare; Ramsay, Craig; Gurung, Tara; Mowatt, Graham; Pickard, Robert; Sharma, Pawana – Research Synthesis Methods, 2014
We describe our experience of using a modified version of the Cochrane risk of bias (RoB) tool for randomised and non-randomised comparative studies. Objectives: (1) To assess time to complete RoB assessment; (2) To assess inter-rater agreement; and (3) To explore the association between RoB and treatment effect size. Methods: Cochrane risk of…
Descriptors: Risk, Randomized Controlled Trials, Research Design, Comparative Analysis
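Objective (2), inter-rater agreement, is often quantified with Cohen's kappa; the paper itself may report a different statistic, but a self-contained sketch with invented ratings gives the idea.

```python
# Sketch of inter-rater agreement via Cohen's kappa for two
# risk-of-bias raters. Ratings are invented for illustration.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

rater1 = ["low", "high", "low", "unclear", "low", "high"]
rater2 = ["low", "high", "unclear", "unclear", "low", "low"]
print(round(cohens_kappa(rater1, rater2), 3))
```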
Skinner, Christopher H.; McCleary, Daniel F.; Skolits, Gary L.; Poncy, Brian C.; Cates, Gary L. – Psychology in the Schools, 2013
The success of Response-to-Intervention (RTI) and similar models of service delivery is dependent on educators being able to apply effective and efficient remedial procedures. In the process of implementing problem-solving RTI models, school psychologists have an opportunity to contribute to and enhance the quality of our remedial-procedure…
Descriptors: Response to Intervention, Models, Problem Solving, School Psychologists