Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 19
Descriptor
Error of Measurement: 50
Program Evaluation: 50
Research Methodology: 14
Evaluation Methods: 12
Statistical Analysis: 11
Program Effectiveness: 10
Control Groups: 9
Research Design: 9
Experimental Groups: 7
Regression (Statistics): 7
Correlation: 6
Author
Coffman, William E.: 2
Mohr, L. B.: 2
Raudenbush, Stephen W.: 2
Reardon, Sean F.: 2
Schaeffer, Gary A.: 2
Andru, Peter: 1
Axelson, Julein M.: 1
Ayala-Nunes, Lara: 1
Bamezai, Anil: 1
Barker, Pierce: 1
Bloom, Howard S.: 1
Education Level
Elementary Secondary Education: 6
Higher Education: 5
Postsecondary Education: 4
Early Childhood Education: 1
Elementary Education: 1
Grade 3: 1
Grade 5: 1
Grade 9: 1
High Schools: 1
Middle Schools: 1
Preschool Education: 1
Audience
Researchers: 2
Practitioners: 1
Laws, Policies, & Programs
Aid to Families with…: 1
Elementary and Secondary…: 1
Assessments and Surveys
Metropolitan Achievement Tests: 1
So, Julia Wai-Yin – Assessment Update, 2023
In this article, Julia So discusses the purpose of program assessment, four common missteps of program assessment and reporting, and how to prevent them. The four common missteps of program assessment and reporting she has observed are: (1) unclear or ambiguous program goals; (2) measurement error of program goals and outcomes; (3) incorrect unit…
Descriptors: Program Evaluation, Community Colleges, Evaluation Methods, Objectives
Stella Y. Kim; Carl Westine; Tong Wu; Derek Maher – Journal of College Student Retention: Research, Theory & Practice, 2024
The primary purpose of this study is to validate a student engagement measure for its use in evaluation of a learning assistant (LA) program. A series of psychometric evaluations was conducted for both the original Higher Education Student Engagement Scale (HESES) and its adapted version, designed for use in gauging the effectiveness of…
Descriptors: Learner Engagement, Teaching Assistants, Test Validity, Test Reliability
Rebecca Walcott; Isabelle Cohen; Denise Ferris – Evaluation Review, 2024
When and how to survey potential respondents is often determined by budgetary and external constraints, but choice of survey modality may have enormous implications for data quality. Different survey modalities may be differentially susceptible to measurement error attributable to interviewer assignment, known as interviewer effects. In this…
Descriptors: Surveys, Research Methodology, Error of Measurement, Interviews
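As a point of reference for the interviewer effects discussed in the abstract above, their magnitude is commonly summarized by an intra-interviewer correlation: the share of response variance attributable to interviewers rather than respondents. The notation below is a generic sketch, not taken from this article.

% Generic variance-components notation (illustrative, not the authors' own):
% \sigma^{2}_{\text{interviewer}} is the between-interviewer variance component,
% \sigma^{2}_{\text{residual}} is the remaining response variance.
\[
  \rho_{\text{int}}
  = \frac{\sigma^{2}_{\text{interviewer}}}
         {\sigma^{2}_{\text{interviewer}} + \sigma^{2}_{\text{residual}}}
\]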
Litwok, Daniel; Peck, Laura R. – American Journal of Evaluation, 2019
In experimental evaluations of policy interventions, the so-called Bloom adjustment is commonly used to estimate the impact of the treatment on the treated. It does so by rescaling the estimated impact of the intention to treat--that is, the overall treatment-control group difference in outcomes for the entire experimental sample--by the…
Descriptors: Computation, Outcomes of Treatment, Program Evaluation, Scaling
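For orientation, the Bloom adjustment mentioned in the abstract above conventionally rescales the intention-to-treat estimate by the treatment take-up rate, under the assumption that non-participants are unaffected by assignment. The symbols below are illustrative rather than the authors' own notation.

% Illustrative notation: \widehat{ITT} is the estimated treatment-control
% difference in outcomes for the full randomized sample; p is the share of the
% treatment group that actually received the intervention (take-up rate).
\[
  \widehat{TOT} = \frac{\widehat{ITT}}{p}, \qquad 0 < p \le 1
\]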
Robinson, Lauren; Dudensing, Rebekka; Granovsky, Nancy L. – Journal of Extension, 2016
Program evaluation often suffers due to time constraints, imperfect instruments, incomplete data, and the need to report standardized metrics. This article about the evaluation process for the Wi$eUp financial education program showcases the difficulties inherent in evaluation and suggests best practices for assessing program effectiveness. We…
Descriptors: Evaluation Methods, Evaluation Research, Error of Measurement, Money Management
Ayala-Nunes, Lara; Jiménez, Lucía; Hidalgo, Victoria; Dekovic, Maja; Jesus, Saul – Research on Social Work Practice, 2018
Objective: The measurement of Family Feedback on Child Welfare Services (FF-CWS) is gaining prominence as an efficacy indicator and is coherent with concerns about family-centered practice and empowerment. The aim of this study was to develop and validate an instrument that would overcome the scarcity of psychometrically sound measures in this…
Descriptors: Feedback (Response), Error of Measurement, Validity, Child Welfare
Yates, Brian T. – New Directions for Evaluation, 2012
The value of a program can be understood as referring not only to outcomes, but also to how those outcomes compare to the types and amounts of resources expended to produce the outcomes. Major potential mistakes and biases in assessing the worth of resources consumed, as well as the value of outcomes produced, are explored. Most of these occur…
Descriptors: Program Evaluation, Cost Effectiveness, Evaluation Criteria, Evaluation Problems
Friedman-Krauss, Allison H.; Connors, Maia C.; Morris, Pamela A. – Society for Research on Educational Effectiveness, 2013
As a result of the 1998 reauthorization of Head Start, the Department of Health and Human Services conducted a national evaluation of the Head Start program. The goal of Head Start is to improve the school readiness skills of low-income children in the United States. There is a substantial body of experimental and correlational research that has…
Descriptors: Early Intervention, Preschool Education, School Readiness, Low Income Groups
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Raudenbush, Stephen W.; Reardon, Sean F.; Nomi, Takako – Journal of Research on Educational Effectiveness, 2012
Multisite trials can clarify the average impact of a new program and the heterogeneity of impacts across sites. Unfortunately, in many applications, compliance with treatment assignment is imperfect. For these applications, we propose an instrumental variable (IV) model with person-specific and site-specific random coefficients. Site-specific IV…
Descriptors: Program Evaluation, Statistical Analysis, Hierarchical Linear Modeling, Computation
Andru, Peter; Botchkarev, Alexei – Journal of MultiDisciplinary Evaluation, 2011
Background: Return on investment (ROI) is one of the most popular evaluation metrics. ROI analysis (when applied correctly) is a powerful tool for evaluating existing information systems and making informed decisions on acquisitions. However, practical use of the ROI is complicated by a number of uncertainties and controversies. The article…
Descriptors: Outcomes of Education, Information Systems, School Business Officials, Evaluation Methods
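For readers scanning this entry, the ROI metric it refers to is conventionally net gain over cost for a single period; the expression below is a generic illustration, not the authors' formulation.

% G denotes the total gain (benefit) attributed to the investment and C its
% total cost; both are assumed to be measured in the same monetary units.
\[
  \mathrm{ROI} = \frac{G - C}{C} \times 100\%
\]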
Pullin, Andrew S.; Knight, Teri M. – New Directions for Evaluation, 2009
To use environmental program evaluation to increase effectiveness, predictive power, and resource allocation efficiency, evaluators need good data. Data require sufficient credibility in terms of fitness for purpose and quality to develop the necessary evidence base. The authors examine elements of data credibility using experience from critical…
Descriptors: Data, Credibility, Conservation (Environment), Program Evaluation
Rosch, David M.; Schwartz, Leslie M. – Journal of Leadership Education, 2009
As more institutions of higher education engage in the practice of leadership education, the effective assessment of these efforts lags behind due to a variety of factors. Without an intentional assessment plan, leadership educators are liable to make one or more of several common errors in assessing their programs and activities. This article…
Descriptors: Leadership Training, Administrator Education, College Outcomes Assessment, Program Evaluation
Raudenbush, Stephen W.; Sadoff, Sally – Journal of Research on Educational Effectiveness, 2008
A dramatic shift in research priorities has recently produced a large number of ambitious randomized trials in K-12 education. In most cases, the aim is to improve student academic learning by improving classroom instruction. Embedded in these studies are theories about how the quality of classroom instruction must improve if these interventions are to…
Descriptors: Elementary Secondary Education, Error of Measurement, Statistical Inference, Program Evaluation
Costrell, Robert M. – School Choice Demonstration Project, 2009
In February 2008, the School Choice Demonstration Project (SCDP) issued its first report on the fiscal impact of the Milwaukee Parental Choice Program (MPCP) on taxpayers in Milwaukee and the state of Wisconsin. There are two reasons to update the 2008 report. First, the figures will naturally change with the continuing growth of the voucher…
Descriptors: Funding Formulas, School Choice, Demonstration Programs, Educational Vouchers