Showing all 10 results
Peer reviewed
Shen, Zuchao; Curran, F. Chris; You, You; Splett, Joni Williams; Zhang, Huibin – Educational Evaluation and Policy Analysis, 2023
Programs that improve teaching effectiveness represent a core strategy to improve student educational outcomes and close student achievement gaps. This article compiles empirical values of intraclass correlations for designing effective and efficient experimental studies evaluating the effects of these programs. The Early Childhood Longitudinal…
Descriptors: Children, Longitudinal Studies, Surveys, Teacher Empowerment
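The entry above concerns empirical intraclass correlations (ICCs) for planning cluster-randomized evaluations. As a hypothetical illustration (not drawn from the article), the standard design-effect formula shows how an ICC inflates the sample size a clustered design requires; all numbers below are made up:

```python
def design_effect(icc: float, cluster_size: int) -> float:
    """Variance inflation from clustering: 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n_total: int, icc: float, cluster_size: int) -> float:
    """Total N deflated by the design effect."""
    return n_total / design_effect(icc, cluster_size)

# Illustrative values: 40 schools of 25 students each, ICC = 0.20
deff = design_effect(0.20, 25)                # 1 + 24 * 0.20 = 5.8
ess = effective_sample_size(1000, 0.20, 25)   # 1000 / 5.8, roughly 172
```

Even a moderate ICC, as the sketch shows, can shrink the effective sample size severely, which is why compiled ICC values matter for study planning.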
Mark W. Lipsey; Christina Weiland; Hirokazu Yoshikawa; Sandra Jo Wilson; Kerry G. Hofer – Educational Evaluation and Policy Analysis, 2015
Much of the currently available evidence on the causal effects of public prekindergarten programs on school readiness outcomes comes from studies that use a regression-discontinuity design (RDD) with the age cutoff to enter a program in a given year as the basis for assignment to treatment and control conditions. Because the RDD has high internal…
Descriptors: Preschool Education, Preschool Children, School Readiness, School Entrance Age
Peer reviewed
Song, Mengli; Herman, Rebecca – Educational Evaluation and Policy Analysis, 2010
Drawing on our five years of experience developing WWC evidence standards and reviewing studies against those standards as well as current literature on the design of impact studies, we highlight in this paper some of the most critical issues and common pitfalls in designing and conducting impact studies in education, and provide practical…
Descriptors: Clearinghouses, Program Evaluation, Program Effectiveness, Research Methodology
Peer reviewed
Zhu, Pei; Jacob, Robin; Bloom, Howard; Xu, Zeyu – Educational Evaluation and Policy Analysis, 2012
This paper provides practical guidance for researchers who are designing and analyzing studies that randomize schools--which comprise three levels of clustering (students in classrooms in schools)--to measure intervention effects on student academic outcomes when information on the middle level (classrooms) is missing. This situation arises…
Descriptors: Educational Research, Educational Researchers, Research Methodology, Multivariate Analysis
Peer reviewed
Spybrook, Jessaca; Raudenbush, Stephen W. – Educational Evaluation and Policy Analysis, 2009
This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…
Descriptors: Research Design, Research Methodology, Effect Size, Correlation
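The minimum detectable effect size (MDES) assessed in the entry above can be sketched for a simple two-level cluster-randomized design. This is a generic textbook approximation, not the authors' own computation, and the multiplier and parameter values below are assumptions:

```python
import math

def mdes_two_level(n_clusters: int, cluster_size: int, icc: float,
                   p_treat: float = 0.5, multiplier: float = 2.8) -> float:
    """Approximate MDES for a two-level cluster-randomized trial.

    A multiplier near 2.8 corresponds to alpha = .05 (two-tailed) and
    power = .80 with ample degrees of freedom.
    """
    j, n, rho, p = n_clusters, cluster_size, icc, p_treat
    var_term = rho / (p * (1 - p) * j) + (1 - rho) / (p * (1 - p) * j * n)
    return multiplier * math.sqrt(var_term)

# Illustrative values: 40 schools, 60 students each, ICC = .15,
# balanced assignment to conditions
mdes = mdes_two_level(40, 60, 0.15)
```

Doubling the number of clusters lowers the MDES far more than doubling the cluster size does, which is the kind of precision issue the power analyses in the article examine.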
Peer reviewed
Allen, Chiharu S.; Chen, Qi; Willson, Victor L.; Hughes, Jan N. – Educational Evaluation and Policy Analysis, 2009
The present meta-analysis examines the effect of grade retention on academic outcomes and investigates systemic sources of variability in effect sizes. Using multilevel modeling (MLM), the authors investigate characteristics of 207 effect sizes across 22 studies published between 1990 and 2007 at two levels: the study (between) and individual…
Descriptors: Research Design, Grade Repetition, Academic Achievement, Effect Size
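The pooling step underlying a meta-analysis like the one above can be illustrated generically. The numbers below are hypothetical, and this fixed-effect inverse-variance average is a simplification of the multilevel model the authors actually use:

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic average: weight each study's effect
    size by the inverse of its sampling variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical standardized mean differences (e.g., retained vs.
# promoted students) and their sampling variances
effects = [-0.30, -0.45, -0.10]
variances = [0.01, 0.04, 0.02]
pooled = pooled_effect(effects, variances)
```

Multilevel modeling extends this idea by letting effect sizes nested within the same study share variance components, rather than treating all 207 estimates as independent.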
Peer reviewed
Lindvall, C. Mauritz; Nitko, Anthony J. – Educational Evaluation and Policy Analysis, 1981
A design for evaluation studies of educational programs should support valid and defensible inferences. The goals of the evaluation identify the major components of those inferences and the specific validity concerns. Design problems may be resolved by creatively using features of specific evaluations to construct unique conditions that permit valid…
Descriptors: Educational Assessment, Program Evaluation, Research Design, Research Methodology
Peer reviewed
Achilles, Charles M. – Educational Evaluation and Policy Analysis, 1982
Problems that face evaluators, particularly in field-based evaluations, are discussed. The author argues that open communication, access to data, and an ability to be accepted are paramount; knowledge of randomness, robustness, and homoscedasticity may be secondary. (Author/PN)
Descriptors: Data Collection, Evaluation Methods, Evaluation Needs, Interpersonal Communication
Peer reviewed
Singer, Judith D.; Willett, John B. – Educational Evaluation and Policy Analysis, 1996
Methodological features are suggested for a possible new national longitudinal study of teachers' careers. Six key principles of research design are presented and used to assert that the new study must be truly longitudinal with measurements on at least six occasions over at least 12 years. (SLD)
Descriptors: Careers, Elementary Secondary Education, Longitudinal Studies, National Surveys
Peer reviewed
Moss, Brian G.; Yeaton, William H. – Educational Evaluation and Policy Analysis, 2006
Utilizing the regression-discontinuity research design, this article explores the effectiveness of a developmental English program in a large, multicampus community college. Routinely collected data were extracted from existing records of a cohort of first-time college students followed for approximately 6 years (N = 1,473). Results are consistent…
Descriptors: Research Design, Program Evaluation, Developmental Studies Programs, Policy Formation
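As a sketch of the regression-discontinuity logic used in this last study: students scoring below a placement cutoff are assigned to the program, and the effect is estimated as the gap between regression lines fitted on either side of the cutoff. Everything below (data, cutoff, bandwidth, effect size) is simulated for illustration, not taken from the article:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for a simple line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b  # intercept, slope

def rdd_estimate(scores, outcomes, cutoff, bandwidth):
    """Sharp-RDD effect: the two local linear fits, each evaluated
    at the cutoff, and their difference (treated side minus control)."""
    lo = [(x, y) for x, y in zip(scores, outcomes)
          if cutoff - bandwidth <= x < cutoff]
    hi = [(x, y) for x, y in zip(scores, outcomes)
          if cutoff <= x <= cutoff + bandwidth]
    a_lo, b_lo = fit_line([x for x, _ in lo], [y for _, y in lo])
    a_hi, b_hi = fit_line([x for x, _ in hi], [y for _, y in hi])
    return (a_lo + b_lo * cutoff) - (a_hi + b_hi * cutoff)

# Simulated placement scores: assignment to the program for scores
# below 50, with a true effect of +0.5 on the outcome
random.seed(0)
scores = [random.uniform(0, 100) for _ in range(2000)]
outcomes = [0.01 * s + (0.5 if s < 50 else 0.0) + random.gauss(0, 0.2)
            for s in scores]
effect = rdd_estimate(scores, outcomes, cutoff=50, bandwidth=15)
```

The design's internal validity rests on the assumption that, absent the program, outcomes would vary smoothly across the cutoff, so any jump at the threshold is attributable to treatment.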