Showing 1 to 15 of 29 results
Peer reviewed
Alexander D. Latham; David A. Klingbeil – Grantee Submission, 2024
The visual analysis of data presented in time-series graphs is common in single-case design (SCD) research and applied practice in school psychology. A growing body of research suggests that visual analysts' ratings are often influenced by construct-irrelevant features including Y-axis truncation and compression of the number of data points per…
Descriptors: Intervention, School Psychologists, Graphs, Evaluation Methods
Peer reviewed
What Works Clearinghouse, 2020
This supplement concerns Appendix E of the "What Works Clearinghouse (WWC) Procedures Handbook, Version 4.1." The supplement extends the range of designs and analyses that can generate effect size and standard error estimates for the WWC. This supplement presents several new standard error formulas for cluster-level assignment studies,…
Descriptors: Educational Research, Evaluation Methods, Effect Size, Research Design
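The supplement's new standard error formulas are not reproduced in this abstract. As a rough illustration of the underlying issue, the sketch below applies the standard design-effect correction for cluster-level assignment (the multiplier `sqrt(1 + (m - 1) * ICC)`); the specific function name and example values are illustrative, not taken from the WWC handbook.

```python
import math

def cluster_adjusted_se(se_ignoring_clustering: float,
                        cluster_size: float,
                        icc: float) -> float:
    """Inflate a naive standard error by the design-effect factor
    sqrt(1 + (m - 1) * ICC), the standard correction when an analysis
    has ignored cluster-level assignment (e.g., classrooms randomized
    but students analyzed as if independent)."""
    design_effect = 1.0 + (cluster_size - 1.0) * icc
    return se_ignoring_clustering * math.sqrt(design_effect)

# Illustrative values: 20 students per classroom, ICC = 0.15
adjusted = cluster_adjusted_se(0.10, cluster_size=20, icc=0.15)
```

With an ICC of 0.15 and 20 students per cluster, the naive standard error is inflated by roughly a factor of two, which is why ignoring clustering can badly overstate precision.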
Lydia Bradford – ProQuest LLC, 2024
In randomized controlled trials (RCTs), the focus has recently shifted to how an intervention yields positive results on its intended outcome. This aligns with the recent push of implementation science in healthcare (Bauer et al., 2015) but goes beyond it. RCTs have moved to evaluating the theoretical framing of the intervention as well as differing…
Descriptors: Hierarchical Linear Modeling, Mediation Theory, Randomized Controlled Trials, Research Design
Peer reviewed
Radley, Keith C.; Dart, Evan H.; Wright, Sarah J. – School Psychology Quarterly, 2018
Research based on single-case designs (SCD) is frequently utilized in educational settings to evaluate the effect of an intervention on student behavior. Visual analysis is the primary method of evaluation of SCD, despite research noting concerns regarding the reliability of the procedure. Recent research suggests that characteristics of the graphic…
Descriptors: Graphs, Evaluation Methods, Data, Intervention
Zimmerman, Kathleen N.; Pustejovsky, James E.; Ledford, Jennifer R.; Barton, Erin E.; Severini, Katherine E.; Lloyd, Blair P. – Grantee Submission, 2018
Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes--overlap measures (percentage non-overlapping data, improvement rate…
Descriptors: Research Design, Evaluation Methods, Synthesis, Intervention
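One of the overlap measures named in this abstract, percentage of non-overlapping data (PND), is simple enough to sketch: it is the percentage of treatment-phase points that exceed the most extreme baseline point. This is a minimal illustration, not the review's own code.

```python
def percentage_nonoverlapping_data(baseline, treatment, expect_increase=True):
    """Percentage of non-overlapping data (PND): the share of
    treatment-phase points beyond the most extreme baseline point.
    For behaviors expected to increase, 'beyond' means above the
    baseline maximum; for expected decreases, below the minimum."""
    if expect_increase:
        threshold = max(baseline)
        beyond = sum(1 for y in treatment if y > threshold)
    else:
        threshold = min(baseline)
        beyond = sum(1 for y in treatment if y < threshold)
    return 100.0 * beyond / len(treatment)

# Baseline tops out at 4; three of four treatment points exceed it.
pnd = percentage_nonoverlapping_data([2, 3, 4], [5, 6, 3, 7])  # 75.0
```

PND's reliance on a single extreme baseline point is one reason reviews such as this one compare it against regression-based alternatives.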
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
Peer reviewed
Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick – Journal of Special Education, 2015
Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…
Descriptors: Standards, Research Methodology, Comparative Analysis, Experiments
Peer reviewed
Robertson, Clare; Ramsay, Craig; Gurung, Tara; Mowatt, Graham; Pickard, Robert; Sharma, Pawana – Research Synthesis Methods, 2014
We describe our experience of using a modified version of the Cochrane risk of bias (RoB) tool for randomised and non-randomised comparative studies. Objectives: (1) To assess time to complete RoB assessment; (2) To assess inter-rater agreement; and (3) To explore the association between RoB and treatment effect size. Methods: Cochrane risk of…
Descriptors: Risk, Randomized Controlled Trials, Research Design, Comparative Analysis
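The abstract's second objective, inter-rater agreement on risk-of-bias ratings, is commonly quantified with Cohen's kappa. The sketch below is a generic kappa implementation for two raters, offered as an illustration of the statistic rather than the authors' actual analysis.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements
    (e.g., low / high / unclear risk-of-bias ratings):
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["low", "low", "high", "high"]
b = ["low", "high", "high", "high"]
kappa = cohens_kappa(a, b)  # 0.5
```

Kappa corrects raw percent agreement for the agreement expected by chance, which matters when one rating category dominates.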
Peer reviewed
Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R. – Research Synthesis Methods, 2013
Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and…
Descriptors: Effect Size, Research Design, Research Methodology, Behavioral Science Research
Peer reviewed
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
Peer reviewed
Losinski, Mickey; Maag, John W.; Katsiyannis, Antonis; Ennis, Robin Parks – Exceptional Children, 2014
Interventions based on the results of functional behavioral assessment (FBA) have been the topic of extensive research and, in certain cases, mandated for students with disabilities under the Individuals With Disabilities Education Act. There exist a wide variety of methods for conducting such assessments, with little consensus in the field. The…
Descriptors: Intervention, Predictor Variables, Program Effectiveness, Educational Quality
Peer reviewed
Goldstein, Howard; Lackey, Kimberly C.; Schneider, Naomi J. B. – Exceptional Children, 2014
This review presents a novel framework for evaluating evidence based on a set of parallel criteria that can be applied to both group and single-subject experimental design (SSED) studies. The authors illustrate use of this evaluation system in a systematic review of 67 articles investigating social skills interventions for preschoolers with autism…
Descriptors: Preschool Education, Preschool Children, Intervention, Autism
Peer reviewed
Wong, Manyee; Cook, Thomas D.; Steiner, Peter M. – Journal of Research on Educational Effectiveness, 2015
Some form of a short interrupted time series (ITS) is often used to evaluate state and national programs. An ITS design with a single treatment group assumes that the pretest functional form can be validly estimated and extrapolated into the postintervention period where it provides a valid counterfactual. This assumption is problematic. Ambiguous…
Descriptors: Evaluation Methods, Time, Federal Legislation, Educational Legislation
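The single-group ITS logic the abstract questions can be made concrete: fit the pretest trend, extrapolate it past the intervention point as the counterfactual, and take observed-minus-extrapolated as the effect. This is a minimal linear-trend sketch (the function names are illustrative); the article's point is precisely that the extrapolated functional form may not be valid.

```python
import statistics

def _ols(x, y):
    """Ordinary least squares slope and intercept for one predictor."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def its_effect(pre, post):
    """Interrupted-time-series effect with a single treated series:
    extrapolate the pre-period linear trend into the post period and
    return the mean observed-minus-counterfactual difference."""
    slope, intercept = _ols(list(range(len(pre))), pre)
    t_post = range(len(pre), len(pre) + len(post))
    counterfactual = [intercept + slope * t for t in t_post]
    return statistics.mean(o - c for o, c in zip(post, counterfactual))

# Pre-trend rises by 1 per period; post values sit 2 above that trend.
effect = its_effect([1, 2, 3, 4], [7, 8])  # 2.0
```

If the true pre-period trend were curvilinear rather than linear, this same computation would misattribute trend curvature to the intervention, which is the ambiguity the authors highlight.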
Peer reviewed
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick – Journal of Experimental Education, 2010
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
Descriptors: Monte Carlo Methods, Effect Size, Simulation, Evaluation Methods
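The randomization-test idea in this abstract can be shown in its simplest form for an AB design (the article's ABAB case extends the same logic to three phase-change points and uses conditional Monte Carlo sampling rather than full enumeration). This is an illustrative sketch, not the authors' procedure.

```python
def ab_randomization_test(series, actual_start, min_phase=3):
    """Randomization test for an AB single-case design. The test
    statistic is the B-minus-A mean difference; the reference
    distribution comes from every admissible intervention start point
    (each phase must have at least min_phase observations)."""
    def stat(start):
        a, b = series[:start], series[start:]
        return sum(b) / len(b) - sum(a) / len(a)

    observed = stat(actual_start)
    admissible = range(min_phase, len(series) - min_phase + 1)
    reference = [stat(s) for s in admissible]
    p_value = sum(d >= observed for d in reference) / len(reference)
    return observed, p_value

# Clear level shift at the actual start point (index 4).
obs, p = ab_randomization_test([1, 1, 1, 1, 5, 5, 5, 5], actual_start=4)
```

With only three admissible start points the smallest attainable p value is 1/3, which illustrates why short series limit the power of randomization tests and why the authors turn to simulation to study their properties.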
Peer reviewed
Feingold, Alan – Psychological Methods, 2009
The use of growth-modeling analysis (GMA)--including hierarchical linear models, latent growth models, and general estimating equations--to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the…
Descriptors: Control Groups, Effect Size, Raw Scores, Models
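Feingold's proposal is to standardize the model-implied end-of-study group difference rather than the slope coefficient itself. A minimal sketch of that idea, assuming the common form d = (slope difference × study duration) / raw-score SD; the abstract does not reproduce the formula, and the article discusses which SD is appropriate.

```python
def gma_effect_size(slope_difference, study_duration, sd_raw):
    """Growth-modeling-analysis effect size in the spirit of Feingold
    (2009): the difference between group trajectories at the end of the
    study (slope difference x duration), standardized by the raw-score
    standard deviation so it is comparable to a classical Cohen's d."""
    return slope_difference * study_duration / sd_raw

# Groups diverge by 0.5 units per wave over 4 waves; raw SD is 2.0.
d = gma_effect_size(slope_difference=0.5, study_duration=4, sd_raw=2.0)  # 1.0
```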