Showing 1 to 15 of 16 results
Peer reviewed
Direct link
Shen, Zuchao; Curran, F. Chris; You, You; Splett, Joni Williams; Zhang, Huibin – Educational Evaluation and Policy Analysis, 2023
Programs that improve teaching effectiveness represent a core strategy to improve student educational outcomes and close student achievement gaps. This article compiles empirical values of intraclass correlations for designing effective and efficient experimental studies evaluating the effects of these programs. The Early Childhood Longitudinal…
Descriptors: Children, Longitudinal Studies, Surveys, Teacher Empowerment
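For readers unfamiliar with how intraclass correlations enter study design, the sketch below shows the standard Kish design-effect calculation that such empirical ICC values feed; the ICC and class-size numbers are hypothetical placeholders, not estimates reported in the article.

```python
# Minimal sketch: how an assumed intraclass correlation (ICC)
# inflates variance and shrinks effective sample size in a
# cluster-randomized design. Values are hypothetical, not the
# article's empirical estimates.

def design_effect(icc: float, cluster_size: float) -> float:
    """Kish design effect: variance inflation due to clustering."""
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n: int, icc: float, cluster_size: float) -> float:
    """Sample size after discounting within-cluster correlation."""
    return total_n / design_effect(icc, cluster_size)

# Example: 60 classrooms of 20 students, assumed ICC = 0.15
print(design_effect(0.15, 20))         # 3.85
print(effective_n(60 * 20, 0.15, 20))  # ~311.7 of 1,200 nominal
```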
Peer reviewed
Direct link
Shager, Hilary M.; Schindler, Holly S.; Magnuson, Katherine A.; Duncan, Greg J.; Yoshikawa, Hirokazu; Hart, Cassandra M. D. – Educational Evaluation and Policy Analysis, 2013
This study explores the extent to which differences in research design explain variation in Head Start program impacts. We employ meta-analytic techniques to predict effect sizes for cognitive and achievement outcomes as a function of the type and rigor of research design, quality and type of outcome measure, activity level of control group, and…
Descriptors: Meta Analysis, Preschool Education, Disadvantaged Youth, Outcome Measures
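Predicting effect sizes from design features, as this abstract describes, is at its core a weighted regression. The sketch below is a minimal illustration with fabricated placeholder numbers and a hypothetical design-rigor indicator, not the authors' model or data.

```python
# Illustrative meta-regression sketch (not the authors' code or data):
# predict study effect sizes from a design feature, weighting each
# study by the inverse of its sampling variance. All values are
# fabricated placeholders.
import numpy as np
import statsmodels.api as sm

effect_sizes = np.array([0.25, 0.10, 0.40, 0.05, 0.30])  # hypothetical d's
variances    = np.array([0.02, 0.01, 0.05, 0.01, 0.03])  # sampling variances
randomized   = np.array([1, 0, 1, 0, 1])                 # 1 = randomized design

X = sm.add_constant(randomized)  # intercept + design-rigor indicator
model = sm.WLS(effect_sizes, X, weights=1 / variances).fit()
print(model.params)  # slope: how design rigor shifts estimated effects
```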
Peer reviewed
Direct link
Song, Mengli; Herman, Rebecca – Educational Evaluation and Policy Analysis, 2010
Drawing on our five years of experience developing WWC evidence standards and reviewing studies against those standards as well as current literature on the design of impact studies, we highlight in this paper some of the most critical issues and common pitfalls in designing and conducting impact studies in education, and provide practical…
Descriptors: Clearinghouses, Program Evaluation, Program Effectiveness, Research Methodology
Peer reviewed
Direct link
Spybrook, Jessaca; Raudenbush, Stephen W. – Educational Evaluation and Policy Analysis, 2009
This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…
Descriptors: Research Design, Research Methodology, Effect Size, Correlation
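The minimum detectable effect size (MDES) the abstract refers to has a standard form for a balanced two-level cluster-randomized trial. The sketch below implements that textbook formula under assumed design parameters; it is not the authors' exact procedure.

```python
# Minimal sketch of the textbook MDES formula for a balanced
# two-level cluster-randomized trial with equal allocation of
# clusters to treatment and control. Parameters are assumptions.
from scipy.stats import t

def mdes(n_clusters: int, cluster_size: int, icc: float,
         alpha: float = 0.05, power: float = 0.80) -> float:
    df = n_clusters - 2
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    # Standard error of the impact estimate in effect-size units:
    se = 2 * ((icc + (1 - icc) / cluster_size) / n_clusters) ** 0.5
    return multiplier * se

# Example: 40 schools of 50 students, assumed ICC = 0.20
print(round(mdes(40, 50, 0.20), 3))  # ~0.42
```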
Peer reviewed
Lindvall, C. Mauritz; Nitko, Anthony J. – Educational Evaluation and Policy Analysis, 1981
A design for evaluation studies of educational programs should provide valid and defensible inferences. Goals of evaluation design are the identification of the major components of inferences and of specific validity concerns. Design problems may be resolved by creatively utilizing features of specific evaluations in designing unique conditions that permit valid…
Descriptors: Educational Assessment, Program Evaluation, Research Design, Research Methodology
Peer reviewed
Direct link
James-Burdumy, Susanne; Dynarski, Mark; Deke, John – Educational Evaluation and Policy Analysis, 2007
This article presents evidence from a national evaluation of the effects of 21st Century Community Learning Center after-school programs. The study was conducted in 12 school districts and 26 after-school centers, at which 2,308 elementary school students who were interested in attending a center were randomly assigned either to the treatment or…
Descriptors: Control Groups, Elementary School Students, After School Programs, Community Centers
Peer reviewed
Fetterman, David M. – Educational Evaluation and Policy Analysis, 1982
The design and conduct of a national evaluation study is discussed, demonstrating that a control group may not provide the no-cause baseline information expected. Resolution of this problem requires reexamination of paradigms, research practices, and policies, as well as the underlying real world constraints and views that generate them. (PN)
Descriptors: Dropout Research, Educational Research, Ethics, Ethnography
Peer reviewed
Magidson, Jay; Sorbom, Dag – Educational Evaluation and Policy Analysis, 1982
The LISREL V computer program is applied to a weak quasi-experimental design involving the Head Start program, in a multiple-analysis attempt to ensure that differences between nonequivalent control groups do not confound interpretation of a posteriori differences. (PN)
Descriptors: Achievement Gains, Early Childhood Education, Mathematical Models, Program Evaluation
Peer reviewed
Bolland, John M.; Bolland, Kathleen A. – Educational Evaluation and Policy Analysis, 1984
While policy analysis is currently the most popular research strategy for educational administration, it is more effective when used in conjunction with program evaluation. The relationship between these methods is illustrated by examining alternative research designs for a hypothetical school drug abuse program. (BS)
Descriptors: Drug Education, Educational Policy, Elementary Secondary Education, Evaluation Methods
Peer reviewed
St. Pierre, Robert G. – Educational Evaluation and Policy Analysis, 1979
A model is described which illustrates two general uses of multiple analyses to evaluate quasi-experiments: obtaining estimates of the treatment effect, and checking the validity of the experiment. (MH)
Descriptors: Data Analysis, Evaluation Methods, Literature Reviews, Models
Peer reviewed
Lee, Barbara – Educational Evaluation and Policy Analysis, 1985
Different evaluation models were applied to data from a high school career education program to investigate problems in statistical conclusion validity and program effectiveness judgments. If potential threats to internal validity are analyzed and protection strategies are developed, more confidence in unplanned ex post facto design using a…
Descriptors: Career Education, Comparative Analysis, Evaluation Methods, High Schools
Peer reviewed
Gersten, Russell – Educational Evaluation and Policy Analysis, 1984
Conclusions about site-specific determinants of educational outcomes from the controversial longitudinal evaluation of the Follow Through programs (1976) are challenged again by reexamining the Direct Instruction model data. Reanalysis confirms earlier critiques and supports more recent research findings on the relationship between types of…
Descriptors: Educational Environment, Elementary Education, Evaluation Methods, Mathematics Achievement
Peer reviewed
Direct link
Moss, Brian G.; Yeaton, William H. – Educational Evaluation and Policy Analysis, 2006
Utilizing the regression-discontinuity research design, this article explores the effectiveness of a developmental English program in a large, multicampus community college. Routinely collected data were extracted from existing records of a cohort of first-time college students followed for approximately 6 years (N = 1,473). Results are consistent…
Descriptors: Research Design, Program Evaluation, Developmental Studies Programs, Policy Formation
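A sharp regression-discontinuity estimate of the kind this abstract describes compares regression fits on either side of an assignment cutoff. The sketch below uses simulated data and hypothetical variable names, not the article's student records; only the design logic is standard.

```python
# Illustrative sharp regression-discontinuity sketch (simulated data):
# students below a placement-test cutoff receive the developmental
# program; the estimated effect is the jump in the outcome at the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
score = rng.uniform(-20, 20, 1_473)   # placement score, centered at cutoff 0
treated = (score < 0).astype(float)   # assignment: below cutoff -> program
outcome = 0.02 * score + 0.4 * treated + rng.normal(0, 1, score.size)

# Local linear regression within a bandwidth, slopes allowed to
# differ on each side of the cutoff:
bw = 10
m = np.abs(score) <= bw
X = np.column_stack([treated, score, treated * score])[m]
fit = sm.OLS(outcome[m], sm.add_constant(X)).fit()
print(fit.params[1])  # estimated discontinuity (program effect) at the cutoff
```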
Peer reviewed
Linn, Robert L. – Educational Evaluation and Policy Analysis, 1979
The internal validity of the RMC models, especially Model A, is examined. Concern centers on limiting evaluation to cognitive outcomes, using constant percentile as the no-treatment expectation, and using norms for one test to establish the expected no-treatment performance level for another test. (MH)
Descriptors: Achievement Gains, Compensatory Education, Elementary Education, Evaluation Methods
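The norm-referenced "Model A" logic under critique can be stated in a few lines: the no-treatment expectation is that a student holds the same national percentile rank at posttest as at pretest, and the estimated effect is the gain in normal-curve-equivalent (NCE) units. A minimal sketch with hypothetical percentiles:

```python
# Sketch of the RMC Title I "Model A" no-treatment expectation the
# article critiques: percentile rank is assumed constant absent
# treatment, so the effect estimate is the observed NCE gain.
# Percentile values below are hypothetical.
from scipy.stats import norm

def nce(percentile: float) -> float:
    """Normal curve equivalent: mean 50, standard deviation 21.06."""
    return 50 + 21.06 * norm.ppf(percentile / 100)

pretest_pct, posttest_pct = 30, 38  # hypothetical national percentiles
expected = nce(pretest_pct)         # Model A: percentile assumed constant
observed = nce(posttest_pct)
print(observed - expected)          # estimated treatment effect in NCEs
```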
Peer reviewed
Wisler, Carl E.; Anderson, Janice K. – Educational Evaluation and Policy Analysis, 1979
Seven considerations in designing the Title I Evaluation and Reporting System are reviewed: satisfying state, local, and federal needs; summative evaluation; differences in Title I subprograms; local use of different achievement tests; comparative analysis data; use of approved evaluation designs; and use of a common reporting metric. (MH)
Descriptors: Achievement Tests, Compensatory Education, Elementary Secondary Education, Evaluation Methods