Tincani, Matt; Travers, Jason – Remedial and Special Education, 2018
Demonstration of experimental control is considered a hallmark of high-quality single-case research design (SCRD). Studies that fail to demonstrate experimental control may not be published because researchers are unwilling to submit these papers for publication and journals are unlikely to publish negative results (i.e., the file drawer effect).…
Descriptors: Research Design, Intervention, Special Education, Experimental Groups
What Works Clearinghouse, 2015
The What Works Clearinghouse (WWC) Standards Briefs explain the rules the WWC uses to evaluate the quality of studies for practitioners, researchers, and policymakers. This brief explains what baseline equivalence is and why it matters. As part of the WWC review process for certain types of studies, reviewers assess whether the intervention group…
Descriptors: Control Groups, Participant Characteristics, Matched Groups, Research Methodology
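The baseline-equivalence check the WWC brief describes can be illustrated with a small sketch. This is not the WWC's own tooling, just a plain Python illustration of the idea: compare the intervention and comparison groups on a baseline measure using a standardized (pooled-SD) mean difference. The pretest scores are hypothetical, and the 0.05/0.25 SD decision thresholds follow those used in recent WWC handbooks.

```python
# Sketch of a baseline-equivalence check in the spirit of the WWC brief.
# Hypothetical pretest data; 0.05/0.25 SD thresholds per recent WWC handbooks.
import statistics

def baseline_difference(treatment, comparison):
    """Standardized baseline difference: (mean_t - mean_c) / pooled SD."""
    n_t, n_c = len(treatment), len(comparison)
    var_t = statistics.variance(treatment)  # sample variance (n - 1)
    var_c = statistics.variance(comparison)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(comparison)) / pooled_sd

# Hypothetical pretest scores for the two groups
treat = [52, 55, 49, 61, 57, 50, 54, 58]
comp = [50, 53, 47, 60, 55, 49, 52, 56]

d = baseline_difference(treat, comp)
if abs(d) <= 0.05:
    verdict = "satisfies baseline equivalence"
elif abs(d) <= 0.25:
    verdict = "requires statistical adjustment for the baseline measure"
else:
    verdict = "does not satisfy baseline equivalence"
print(f"standardized baseline difference = {d:.3f}: {verdict}")
```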
Eddy, Sarah L. – CBE - Life Sciences Education, 2018
The "Current Insights" feature is designed to introduce life science educators and researchers to current articles of interest in other social science and education journals. In this installment, I highlight three diverse research studies out of psychology journals that address student study strategies, faculty change, and the influence…
Descriptors: Educational Research, Science Instruction, Periodicals, Journal Articles
Lipsey, Mark W.; Puzio, Kelly; Yun, Cathy; Hebert, Michael A.; Steinka-Fry, Kasia; Cole, Mikel W.; Roberts, Megan; Anthony, Karen S.; Busick, Matthew D. – National Center for Special Education Research, 2012
This paper is directed to researchers who conduct and report education intervention studies. Its purpose is to stimulate and guide them to go a step beyond reporting the statistics that emerge from their analysis of the differences between experimental groups on the respective outcome variables. With what is often very minimal additional effort,…
Descriptors: Intervention, Experimental Groups, Statistical Significance, Researchers
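A minimal sketch of the "step beyond" Lipsey and colleagues recommend, using hypothetical summary statistics: converting a group-mean comparison into a standardized mean difference. Hedges' g (Cohen's d with a small-sample bias correction) is one common choice of effect size metric; the numbers below are invented.

```python
# Convert hypothetical group summary statistics into Hedges' g,
# a standardized mean difference with a small-sample correction.
def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                 / (n_t + n_c - 2)) ** 0.5
    d = (mean_t - mean_c) / pooled_sd
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)  # small-sample bias correction
    return d * correction

# Hypothetical posttest summary: experimental vs. control group
print(f"Hedges' g = {hedges_g(78.2, 74.5, 10.1, 9.8, 60, 58):.2f}")
```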
Dunst, Carl J.; Hamby, Deborah W. – Journal of Intellectual & Developmental Disability, 2012
This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…
Descriptors: Intervals, Developmental Disabilities, Statistical Significance, Effect Size
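Dunst and Hamby's point that NHST and effect size findings can conflict is easy to reproduce with hypothetical numbers: a small study with a sizeable effect can fail to reach p < .05, while a much larger study with a trivial effect can clear it. A sketch (all values invented; requires scipy):

```python
# NHST vs. effect size: two hypothetical studies with opposite verdicts.
from scipy.stats import ttest_ind_from_stats

def summarize(label, mean_t, mean_c, sd, n):
    d = (mean_t - mean_c) / sd  # equal SDs assumed in both groups
    result = ttest_ind_from_stats(mean_t, sd, n, mean_c, sd, n)
    print(f"{label}: d = {d:.2f}, p = {result.pvalue:.3f}")

summarize("Small study, sizeable effect", mean_t=105, mean_c=98, sd=15, n=10)
summarize("Large study, trivial effect ", mean_t=101, mean_c=100, sd=15, n=2000)
# The first prints d ~ 0.47 with p > .05; the second d ~ 0.07 with p < .05.
```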
Deke, John; Constantine, Jill – Society for Research on Educational Effectiveness, 2011
Regression discontinuity designs (RDDs) are considered to be one of the strongest nonexperimental designs available for the purpose of identifying the effects of an intervention. RDD can be used in situations in which assignment to a treatment group is based on a cutoff value on a continuous assignment variable. The impact of the intervention is…
Descriptors: Educational Research, Intervention, Context Effect, Identification
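The RDD logic Deke and Constantine describe can be simulated in a few lines, assuming a hypothetical assignment score and cutoff: units below the cutoff are treated, and the impact is estimated as the jump in outcomes at the cutoff from a regression on a treatment indicator and the centered assignment variable. A sketch (simulated data, numpy only):

```python
# Simulated regression discontinuity: estimate the outcome jump at a cutoff.
import numpy as np

rng = np.random.default_rng(0)
cutoff = 50.0
score = rng.uniform(0, 100, 500)          # continuous assignment variable
treated = (score < cutoff).astype(float)  # assignment rule: below cutoff
outcome = 20 + 0.3 * score + 5.0 * treated + rng.normal(0, 3, 500)  # true impact = 5

# Regress outcome on the treatment indicator, the centered score, and their
# interaction; the coefficient on `treated` estimates the impact at the cutoff.
centered = score - cutoff
X = np.column_stack([np.ones_like(score), treated, centered, treated * centered])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"estimated impact at the cutoff: {beta[1]:.2f}")  # close to 5
```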
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
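One common device in the methods literature this report summarizes (shown here only as an illustrative sketch, not the report's specific guidance) is testing for differential impacts with a treatment-by-subgroup interaction in an impact regression. Simulated data:

```python
# Does the intervention's impact differ across two student subgroups?
# The interaction coefficient estimates the difference in impacts.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
treat = rng.integers(0, 2, n).astype(float)
subgroup = rng.integers(0, 2, n).astype(float)
# Simulated outcome: impact is 2 points in subgroup 0, 6 points in subgroup 1
y = 50 + 2 * treat + 1 * subgroup + 4 * treat * subgroup + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), treat, subgroup, treat * subgroup])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"impact in subgroup 0: {beta[1]:.2f}")             # close to 2
print(f"additional impact in subgroup 1: {beta[3]:.2f}")  # close to 4
```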
Grossman, Jean Baldwin – Public/Private Ventures, 2009
This methodological brief is designed to provide both program operators and researchers with practical advice about how to assess a program's implementation and impact. Adapted from an article that first appeared in "The Handbook of Youth Mentoring" (DuBois and Karcher, ed. 2005), the brief focuses on the evaluation of mentoring programs, but the…
Descriptors: Mentors, Evaluation, Program Implementation, Intervention
Dempster, Harriet L.; Roberts, Jacquie – Child Abuse and Neglect: The International Journal, 1991
An evaluation of one therapeutic service offered to sexually abused children in the United Kingdom exposed the methodological issue of false negatives among the comparison group, suggesting that researchers cannot assume that nonreferred comparison subjects have not actually experienced sexual abuse. (DB)
Descriptors: Child Abuse, Control Groups, Foreign Countries, Intervention
Hodapp, Robert M.; Dykens, Elisabeth M. – American Journal on Mental Retardation, 2001
This article examines the status of behavioral research on genetic mental retardation syndromes and finds that the field continues to struggle with three methodological issues: (1) how to think about control or contrast groups, (2) the interplay of behavioral phenotypes with development and other within-group variations, and (3) the efficacy of…
Descriptors: Adults, Behavior Patterns, Behavioral Science Research, Children
Coalition for Evidence-Based Policy, 2005
This is a checklist of key items to get right when conducting a randomized controlled trial to evaluate an educational program or practice ("intervention"). It is intended as a practical resource for researchers and sponsors of research, describing items that are often critical to the success of a randomized controlled trial. A significant…
Descriptors: Educational Research, Program Evaluation, Intervention, Scientific Methodology
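One item checklists like this typically emphasize is valid, auditable random assignment. A minimal sketch, assuming simple (unstratified) randomization and hypothetical participant IDs; real trials often use blocked or stratified designs instead:

```python
# Simple random assignment with a fixed seed so the allocation is auditable.
import random

def assign(participant_ids, seed=20240101):
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

groups = assign([f"P{i:03d}" for i in range(1, 21)])  # hypothetical IDs
print(groups["treatment"])
print(groups["control"])
```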