Showing 1 to 15 of 25 results
Peer reviewed
Qian-Khoo, Joanne Xiaolei; Hiruy, Kiros; Hutton, Rebecca Willow-Anne; Barraket, Jo – American Journal of Evaluation, 2022
Impact evaluation and measurement are highly complex and can pose challenges for both social impact providers and funders. Measuring the impact of social interventions requires the continuous exploration and improvement of evaluation approaches and tools. This article explores the available evidence on meta-evaluation--the "evaluation of…
Descriptors: Meta Analysis, Evaluation Methods, Measurement, Program Effectiveness
Peer reviewed
Ellington, Roni; Barajas, Clara B.; Drahota, Amy; Meghea, Cristian; Uphold, Heatherlun; Scott, Jamil B.; Lewis, E. Yvonne; Furr-Holden, C. Debra – American Journal of Evaluation, 2022
Over the last few decades, there has been an increase in the number of large federally funded transdisciplinary programs and initiatives. Scholars have identified a need to develop frameworks, methodologies, and tools to evaluate the effectiveness of these large collaborative initiatives, providing precise ways to understand and assess the…
Descriptors: Evaluation Research, Evaluation Problems, Evaluation Methods, Program Evaluation
Peer reviewed
Robinson, Lauren; Dudensing, Rebekka; Granovsky, Nancy L. – Journal of Extension, 2016
Program evaluation often suffers due to time constraints, imperfect instruments, incomplete data, and the need to report standardized metrics. This article about the evaluation process for the Wi$eUp financial education program showcases the difficulties inherent in evaluation and suggests best practices for assessing program effectiveness. We…
Descriptors: Evaluation Methods, Evaluation Research, Error of Measurement, Money Management
Peer reviewed
Stockard, Jean – Current Issues in Education, 2010
A large body of literature documents the central importance of fidelity of program implementation in creating an internally valid research design and considering such fidelity in judgments of research quality. The What Works Clearinghouse (WWC) provides web-based summary ratings of educational innovations and is the only rating group that is…
Descriptors: Research Design, Educational Innovation, Program Implementation, Program Effectiveness
Peer reviewed
PDF on ERIC
Shirbagi, Naser – Quality of Higher Education, 2011
The main purpose of this research is to examine the effectiveness of Student Evaluation of Teaching (SET) from the perspectives of a sample of university teachers and students. The study adopts an exploratory descriptive design. Participants in this research were 300 teachers and 600 graduate students from 3 Iranian higher education institutions. A 30-item format…
Descriptors: Higher Education, Student Evaluation of Teacher Performance, Faculty Evaluation, Likert Scales
Peer reviewed
Wu, Margaret – Educational Measurement: Issues and Practice, 2010
In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…
Descriptors: Testing Programs, Educational Assessment, Measures (Individuals), Program Effectiveness
Peer reviewed
Clauser, Brian E.; Mee, Janet; Baldwin, Su G.; Margolis, Melissa J.; Dillon, Gerard F. – Journal of Educational Measurement, 2009
Although the Angoff procedure is among the most widely used standard setting procedures for tests comprising multiple-choice items, research has shown that subject matter experts have considerable difficulty accurately making the required judgments in the absence of examinee performance data. Some authors have viewed the need to provide…
Descriptors: Standard Setting (Scoring), Program Effectiveness, Expertise, Health Personnel
Peer reviewed
Rudd, Andy; Johnson, R. Burke – Studies in Educational Evaluation, 2008
As a result of the federal No Child Left Behind Act (NCLB) of 2002, the field of education has seen a heavy emphasis on the use of "scientifically based research" for designing and testing the effectiveness of new and existing educational programs. According to NCLB, when addressing basic cause and effect questions scientifically based…
Descriptors: Quasiexperimental Design, Scientific Research, Educational Research, Federal Legislation
Xu, Zeyu; Nichols, Austin – National Center for Analysis of Longitudinal Data in Education Research, 2010
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…
Descriptors: Test Format, Reading Tests, Norm Referenced Tests, Research Design
Peer reviewed
Shi, Yan; Tsang, Mun C. – Educational Research Review, 2008
This is a critical review of methodological issues in the evaluation of adult literacy education programs in the United States. It addresses the key research question: What are the appropriate methods for evaluating these programs under given circumstances? It identifies 15 evaluation studies that are representative of a range of adult literacy…
Descriptors: Program Effectiveness, Adult Literacy, Adult Education, Educational Research
Peer reviewed
McLeod, Bryce D.; Southam-Gerow, Michael A.; Weisz, John R. – School Psychology Review, 2009
This special series focused on treatment integrity in the child mental health and education field is timely. The articles do a laudable job of reviewing (a) the current status of treatment integrity research and measurement, (b) existing conceptual models of treatment integrity, and (c) the limitations of prior research. Overall, this thoughtful…
Descriptors: Evaluation Research, Children, Intervention, Research Methodology
Peer reviewed
Cummings, Rhoda; Maddux, Cleborne D.; Richmond, Aaron – Assessment & Evaluation in Higher Education, 2008
Increasingly, institutions of higher education are required to evaluate student progress and programme effectiveness through implementation of performance assessment practices. Faculty members frequently resist performance assessment because of concerns that assessment activities will increase workloads, reduce time for scholarly activities,…
Descriptors: Educational Psychology, Performance Based Assessment, Performance Tests, Program Effectiveness
Tarr, James E.; Ross, Daniel J.; McNaught, Melissa D.; Chavez, Oscar; Grouws, Douglas A.; Reys, Robert E.; Sears, Ruthmae; Taylan, R. Didem – Online Submission, 2010
The Comparing Options in Secondary Mathematics: Investigating Curriculum (COSMIC) project is a longitudinal study of student learning from two types of mathematics curricula: integrated and subject-specific. Previous large-scale research studies such as the National Assessment of Educational Progress (NAEP) indicate that numerous variables are…
Descriptors: Mathematics Education, Teacher Characteristics, Mathematics Achievement, Program Effectiveness
Patrinos, Harry Anthony; Fasih, Tazeen; Barrera, Felipe; Garcia-Moreno, Vicente A.; Bentaouet-Kattan, Raja; Baksh, Shaista; Wickramasekera, Inosha – World Bank Publications, 2007
Impact evaluations of school-based management (SBM) programs, or any other kind of program, are important because they can demonstrate whether or not the program has accomplished its objectives. Furthermore, these evaluations can identify ways to improve the design of the program. These evaluations can also make successful interventions…
Descriptors: School Based Management, Program Effectiveness, Program Evaluation, Evaluation Research
Huberty, Carl J.; Klein, Gerald A. – 1992
The thesis of this paper is that, in evaluating most innovative educational projects that are conducted in schools, tenets of formal experimental design and associated inferential data analysis methods should be given limited emphasis. The basis of this thesis lies in the problems and difficulties that undermine the design and implementation of…
Descriptors: Data Collection, Elementary Secondary Education, Evaluation Criteria, Evaluation Methods