Showing 1 to 15 of 38 results
Peer reviewed
Sørlie, Mari-Anne; Ogden, Terje – International Journal of School & Educational Psychology, 2014
This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…
Descriptors: Validity, Research Design, Experiments, Prevention
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Cheung, Alan C. K.; Slavin, Robert E. – Center for Research and Reform in Education, 2011
The use of educational technology in K-12 classrooms has been gaining tremendous momentum across the country since the 1990s. Many school districts have been investing heavily in various types of technology, such as computers, mobile devices, internet access, and interactive whiteboards. Almost all public schools have access to the internet and…
Descriptors: Evidence, Elementary Secondary Education, Mathematics Achievement, Program Effectiveness
Ding, Weili; Lehrer, Steven F. – National Bureau of Economic Research, 2009
This paper introduces an empirical strategy to estimate dynamic treatment effects in randomized trials that provide treatment in multiple stages and in which various noncompliance problems arise, such as attrition and selective transitions between treatment and control groups. Our approach is applied to the highly influential four-year randomized…
Descriptors: Control Groups, Class Size, Small Classes, Grade 1
Cheung, Alan C. K.; Slavin, Robert E. – Center for Research and Reform in Education, 2011
The present review examines research on the effects of technology use on reading achievement in K-12 classrooms. Unlike previous reviews, this review applies consistent inclusion standards to focus on studies that met high methodological standards. In addition, methodological and substantive features of the studies are investigated to examine the…
Descriptors: Foreign Countries, Evidence, Elementary Secondary Education, Reading Achievement
Peer reviewed
Kutash, Krista; Banks, Steve; Duchnowski, Albert; Lynn, Nancy – Evaluation and Program Planning, 2007
Evaluating school-based mental health services for children and youth with emotional disturbance (ED) has been a challenge for researchers. One particular challenge is the study design of using the student as the statistical unit of analysis, which in certain cases may lead to a violation of the "independence of error" assumption.…
Descriptors: Emotional Disturbances, Mental Health Programs, Caregivers, Mental Health
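The "independence of error" problem Kutash et al. describe arises because students nested in the same school tend to have correlated outcomes, so analyzing each student as an independent observation understates standard errors. A minimal sketch of the standard design-effect calculation that quantifies this inflation (the formula is a textbook result; the cluster size and ICC values below are illustrative, not from the study):

```python
# Design effect for clustered data: deff = 1 + (m - 1) * icc, where m is the
# number of students per cluster (school) and icc is the intraclass
# correlation. Dividing the nominal sample size by deff gives the effective
# number of independent observations.

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from analyzing clustered data as if independent."""
    return 1 + (cluster_size - 1) * icc

# Illustrative values: 25 students per school, a modest ICC of 0.10.
deff = design_effect(25, 0.10)
effective_n = 500 / deff  # 500 students act like far fewer independent cases
```

Even a small within-school correlation shrinks the effective sample substantially, which is why cluster-aware designs or analyses are recommended.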
Goldman, Jerry – Evaluation Quarterly, 1977
This note suggests a solution to the problem of achieving randomization in experimental settings where units deemed eligible for treatment "trickle in," that is, appear at any time. The solution permits replication of the experiment in order to test for time-dependent effects. (Author/CTM)
Descriptors: Program Evaluation, Research Design, Research Problems, Sampling
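Goldman's abstract describes randomizing units that "trickle in" over time while preserving balance and allowing replication across time periods. One common scheme with these properties is permuted-block randomization; the sketch below assumes that mechanism for illustration (the note itself may propose a different solution):

```python
import random

# Permuted-block randomization for units arriving one at a time: each block
# of 4 arrivals receives exactly 2 treatment ("T") and 2 control ("C") slots
# in random order, so assignment stays balanced at every point in time and
# successive blocks can serve as replications for testing time-dependent
# effects.

def block_randomizer(block_size=4, seed=None):
    rng = random.Random(seed)
    block = []
    while True:
        if not block:
            block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
            rng.shuffle(block)
        yield block.pop()

# Assign the first 12 arrivals (three complete blocks).
gen = block_randomizer(seed=1)
assignments = [next(gen) for _ in range(12)]
```

Because every completed block is exactly balanced, an analyst can compare treatment effects across early and late blocks to probe for time-dependent effects.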
Peer reviewed
Nagel, Stuart S. – Evaluation Review, 1984
Introspective interviewing can often determine the magnitude of relations more meaningfully than statistical analysis. Deduction from empirically validated premises avoids many research design problems. Guesswork can be combined with sensitivity analysis to determine the effects of guesses and missing information on conclusions. (Author/DWH)
Descriptors: Deduction, Evaluation Methods, Intuition, Policy Formation
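Nagel's point about combining guesswork with sensitivity analysis has a simple mechanical form: vary the guessed quantity across a plausible range and check whether the conclusion flips. A minimal sketch with made-up values (the decision rule and numbers are illustrative, not from the article):

```python
# Sensitivity analysis over a guessed parameter: an evaluation's conclusion
# (here, "net benefit is positive") is recomputed across low, best-guess, and
# high values for an unknown effect size. If the conclusion holds across the
# whole range, the guess does not drive the result.

def net_benefit(effect_size, benefit_per_unit=1000.0, cost=400.0):
    return effect_size * benefit_per_unit - cost

guesses = [0.3, 0.5, 0.8]  # low, best-guess, high values for the unknown
conclusions = [net_benefit(g) > 0 for g in guesses]
robust = all(conclusions)  # True only if the conclusion survives every guess
```

Here the low-end guess flips the sign, so the conclusion is sensitive to the missing information and the guess itself would need to be defended.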
Peer reviewed
Luker, William A.; And Others – Journal of Economic Education, 1984
Based on an independent analysis of the data used to evaluate the Developmental Economic Education Program, questions are raised about the methodology and conclusions reached by Walstad and Soper in an article published in the Winter 1982 issue of the Journal. The original study is also defended in two replies. (Author/RM)
Descriptors: Economics Education, Program Evaluation, Research Design, Research Methodology
Peer reviewed
Alemi, Farrokh – Evaluation Review, 1987
Trade-offs are implicit in choosing a subjective or objective method for evaluating social programs. The differences between Bayesian and traditional statistics, decision and cost-benefit analysis, and anthropological and traditional case systems illustrate trade-offs in choosing methods because of limited resources. (SLD)
Descriptors: Bayesian Statistics, Case Studies, Evaluation Methods, Program Evaluation
Yun, John T. – Education and the Public Interest Center, 2008
A new report published by the Manhattan Institute for Education Policy, "The Effect of Special Education Vouchers on Public School Achievement: Evidence from Florida's McKay Scholarship Program," attempts to examine the complex issue of how competition introduced through school vouchers affects student outcomes in public schools. The…
Descriptors: Evidence, Research Design, Public Schools, Academic Achievement
PDF pending restoration
Campbell, Donald T. – 1976
Program impact methodology, usually referred to as evaluation research, is described as it is developing in the United States. Several problems face the field of evaluation research. First, those issues grouped as "meta-scientific" include: (1) the distinction between qualitative and quantitative studies; (2) the separation of implementation and…
Descriptors: Evaluation Methods, Evaluation Problems, Evaluation Research, Program Effectiveness
Conklin, Jonathan E.; Burstein, Leigh – 1979
Educational outcomes are affected by student level, classroom level, and school level characteristics. The fact that educational data are multilevel in nature poses serious analysis questions. Though strong arguments can be made for focusing on a single level of analysis, such studies have several basic limitations: the choice of analytic level…
Descriptors: Analysis of Covariance, Correlation, Data Analysis, Mathematical Models
Goldsamt, Milton R.; And Others – 1983
Third in a series, the monograph summarizes the key evaluation issues, design approaches, and statistical techniques used in conducting the 1980-1983 impact evaluation of Indian Education Act Title IV Part A programs. The monograph describes the major problems in evaluating the program to determine the degree of its positive contribution to…
Descriptors: American Indian Education, Data Analysis, Data Collection, Evaluation Methods
Battelle Memorial Inst., Columbus, OH. Columbus Labs. – 1974
A proposal to monitor and summarize the experience that will be gained during the course of the ATS-F Education Satellite Communications Demonstration is described. The goals of the demonstration and the context in which it must be evaluated are discussed in an introduction. Subsequent sections deal with specific analytic approaches proposed, the…
Descriptors: Communications, Communications Satellites, Evaluation Methods, Program Evaluation