Showing 1 to 15 of 45 results
Peer reviewed
Malouf, David B.; Taymans, Juliana M. – Educational Researcher, 2016
An analysis was conducted of the What Works Clearinghouse (WWC) research evidence base on the effectiveness of replicable education interventions. Most interventions were found to have little or no support from technically adequate research studies, and intervention effect sizes were of questionable magnitude to meet education policy goals. These…
Descriptors: Evidence Based Practice, Program Effectiveness, Intervention, Effect Size
Peer reviewed
Kekahio, Wendy; Baker, Myriam – Regional Educational Laboratory Pacific, 2013
Using data strategically to guide decisions and actions can have a positive effect on education practices and processes. This facilitation guide shows education data teams how to move beyond simply reporting data to applying data to direct strategic action. Using guiding questions, suggested activities, and activity forms, this guide provides…
Descriptors: Research Utilization, Data Analysis, Strategic Planning, Decision Making
Peer reviewed
Bers, Trudy – New Directions for Institutional Research, 2012
Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…
Descriptors: Benchmarking, Community Colleges, Accountability, Surveys
Rymer, Les – Group of Eight (NJ1), 2011
Current economic conditions and the increasing competition for government funding are leading to an increased focus on the impact of research. Measuring the impact of research is difficult because not all impacts are direct and some can be negative or result from the identification of problems that require a non-research response. The time between…
Descriptors: Foreign Countries, Program Effectiveness, Research Utilization, Measurement
Grant, Jonathan; Brutscher, Philipp-Bastian; Kirk, Susan Ella; Butler, Linda; Wooding, Steven – RAND Corporation, 2010
In February 2009, the Higher Education Funding Council for England (HEFCE) commissioned RAND Europe to review approaches to evaluating the impact of research as part of their wider work programme to develop new arrangements for the assessment and funding of research--referred to as the Research Excellence Framework (REF). The objectives were 1) to…
Descriptors: Higher Education, Research Utilization, Scoring, Foreign Countries
Peer reviewed
Blaich, Charles F.; Wise, Kathleen S. – New Directions for Institutional Research, 2010
Most assessment arguments are about measurement. When is it better to use direct versus indirect measures of student learning? Is one standardized test of critical thinking better than another? Is applying rubrics to student work better than using standardized tests? How valid are self-reported measures of learning? Although these arguments are…
Descriptors: Standardized Tests, Academic Achievement, Program Effectiveness, Liberal Arts
Grisham-Brown, Jennifer; Pretti-Frontczak, Kristie – Brookes Publishing Company, 2011
To ensure the best possible outcomes for young children with and without disabilities, early childhood educators must enter the classroom ready to conduct all types of early childhood assessment--including determining if children need additional services, planning and monitoring instruction, and determining program effectiveness. They'll get the…
Descriptors: Play, Textbooks, Program Evaluation, Early Childhood Education
Peer reviewed
Conley, David T. – Educational Leadership, 1987
Presents common attributes of effective evaluation systems, which are drawn from studies in Arizona, Michigan, and Vermont. Attributes include participant acceptance of validity, participant understanding of system mechanics and rationale, properly trained evaluators, a variety of evaluation methods, and evaluation as a district priority. (CJH)
Descriptors: Elementary Secondary Education, Evaluation Criteria, Evaluation Methods, Instructional Improvement
McLaughlin, Donald H. – 1977
This report focuses on the results of approximately twenty central studies of compensatory education completed before 1977 and presents the major results of those studies as they relate to important policy questions for Title I of the Elementary and Secondary Education Act. The results are presented first as they relate to the major tasks of Title…
Descriptors: Compensatory Education, Educational Policy, Evaluation, Evaluation Methods
Peer reviewed
Lipsey, Mark W. – New Directions for Evaluation, 1997
It is argued that, although thousands of evaluations of social interventions have been conducted, little has been done to cumulate those results to guide intervention architects. Building social intervention theory and meta-analyses are suggested as ways to unify this knowledge and make it useful. (SLD)
Descriptors: Evaluation Methods, Intervention, Knowledge Level, Meta Analysis
Peer reviewed
Pepper, Kaye; Hare, Dwight – Studies in Educational Evaluation, 1999
Developed an evaluation based on R. Stake's Countenance Model of Program Evaluation (1967) to evaluate the Senior Block Field Experience Program at Mississippi State University, a teacher education program. Results identify strengths and weaknesses in all components of the program. (SLD)
Descriptors: Education Majors, Educational Research, Evaluation Methods, Higher Education
Peer reviewed
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has increased desire for sound evaluations of these interventions; and at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention
Intercultural Development Research Association, San Antonio, TX. – 1977
This analysis reveals critical weaknesses surrounding the theoretical basis for the AIR (American Institutes for Research) evaluation design. It poses questions concerning the evaluation methodology. Specifically, it identifies major discrepancies in the identification of the target population, the selection of comparable control groups, test…
Descriptors: Bilingual Education, Bilingual Students, Bilingualism, Evaluation Methods
Hawkins, Evelyn K.; And Others – 1996
The Evaluation of Worker Profiling and Reemployment Services (WPRS) systems was designed to provide the U.S. Department of Labor information on how states are designing, implementing, and operating their worker profiling and reemployment services systems for dislocated workers and to compare the effectiveness of different state approaches to…
Descriptors: Adults, Dislocated Workers, Employment Programs, Evaluation Methods
Peer reviewed
Rossi, Peter H.; Berk, Richard A. – Human Organization, 1981
Provides a detailed introduction to the variety of purposes for which evaluation research may be used and to the range of methods currently employed in the practice of that field. Uses specific examples, where appropriate, to provide concrete illustrations of both the goals of evaluation researchers and the methods used. (Author)
Descriptors: Accountability, Evaluation Methods, Methods Research, Policy Formation