Showing all 14 results
Peer reviewed
Direct link
Rebecca Walcott; Isabelle Cohen; Denise Ferris – Evaluation Review, 2024
When and how to survey potential respondents is often determined by budgetary and external constraints, but choice of survey modality may have enormous implications for data quality. Different survey modalities may be differentially susceptible to measurement error attributable to interviewer assignment, known as interviewer effects. In this…
Descriptors: Surveys, Research Methodology, Error of Measurement, Interviews
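The abstract does not say which estimator the authors use, but interviewer effects are commonly summarized as the share of response variance attributable to interviewer assignment (an intraclass correlation). A minimal sketch on simulated data, assuming a balanced one-way design; all names and values below are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n_interviewers, n_per = 20, 30                               # 20 interviewers, 30 respondents each
interviewer_effect = rng.normal(0, 0.5, n_interviewers)      # between-interviewer variance = 0.25
y = interviewer_effect[:, None] + rng.normal(0, 1.0, (n_interviewers, n_per))

# One-way ANOVA estimator of the intraclass correlation (the "interviewer effect")
group_means = y.mean(axis=1)
msb = n_per * np.var(group_means, ddof=1)                    # between-interviewer mean square
msw = np.mean(np.var(y, axis=1, ddof=1))                     # within-interviewer mean square
icc = (msb - msw) / (msb + (n_per - 1) * msw)
print(f"estimated interviewer ICC: {icc:.3f}")               # true value here is 0.25 / 1.25 = 0.20
```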
Peer reviewed
PDF on ERIC Download full text
Friedman-Krauss, Allison H.; Connors, Maia C.; Morris, Pamela A. – Society for Research on Educational Effectiveness, 2013
As a result of the 1998 reauthorization of Head Start, the Department of Health and Human Services conducted a national evaluation of the Head Start program. The goal of Head Start is to improve the school readiness skills of low-income children in the United States. There is a substantial body of experimental and correlational research that has…
Descriptors: Early Intervention, Preschool Education, School Readiness, Low Income Groups
Peer reviewed
Direct link
Andru, Peter; Botchkarev, Alexei – Journal of MultiDisciplinary Evaluation, 2011
Background: Return on investment (ROI) is one of the most popular evaluation metrics. ROI analysis (when applied correctly) is a powerful tool for evaluating existing information systems and making informed decisions about acquisitions. However, practical use of ROI is complicated by a number of uncertainties and controversies. The article…
Descriptors: Outcomes of Education, Information Systems, School Business Officials, Evaluation Methods
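The article's specific uncertainties are not reproduced in the abstract, but the metric it examines is simple to state; a minimal sketch of the standard ROI calculation, with hypothetical figures:

```python
def roi(benefit: float, cost: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (benefit - cost) / cost

# Hypothetical information-system acquisition: $180k in benefits on a $150k cost
print(f"ROI = {roi(180_000, 150_000):.1%}")   # 20.0%
```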
Rosenthal, James A. – Springer, 2011
Written by a social worker for social work students, this is a nuts and bolts guide to statistics that presents complex calculations and concepts in clear, easy-to-understand language. It includes numerous examples, data sets, and issues that students will encounter in social work practice. The first section introduces basic concepts and terms to…
Descriptors: Statistics, Data Interpretation, Social Work, Social Science Research
Maynard, Rebecca; Dong, Nianbo – Society for Research on Educational Effectiveness, 2009
This study empirically investigates the effectiveness of the Distributed Leadership Teacher Training (DLT) program in improving students' academic achievement. In addition, it both tests the assumption that the year 1 impacts are stable across calendar years and examines the importance of properly accounting for the fact that the standard error of the…
Descriptors: Urban Schools, Middle School Students, Elementary School Students, Sample Size
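The abstract is cut off before the standard-error issue is stated, but in multi-site school studies the usual concern is that clustering of students within schools inflates standard errors by the design effect. A small illustrative calculation, with hypothetical values rather than the study's:

```python
import math

def design_effect(cluster_size: float, icc: float) -> float:
    """Kish design effect for equal-sized clusters: 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

se_srs = 0.05                                       # standard error assuming simple random sampling
deff = design_effect(cluster_size=25, icc=0.10)     # e.g. 25 students per school, ICC = 0.10
print(f"clustered SE ≈ {se_srs * math.sqrt(deff):.3f}  (DEFF = {deff:.2f})")
```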
Reardon, Sean F. – Education and the Public Interest Center, 2009
"How New York City's Charter Schools Affect Achievement" estimates the effects on student achievement of attending a New York City charter school rather than a traditional public school and investigates the characteristics of charter schools associated with the most positive effects on achievement. Because the report relies on an…
Descriptors: Charter Schools, Academic Achievement, Achievement Gains, Achievement Rating
Peer reviewed
Mohr, L. B. – Evaluation and Program Planning, 2000
Suggests that there is a tendency in social science and program evaluation to adhere to some methodological practices by force of custom rather than because of their reasoned applicability. These ideas include regression artifacts, random measurement error, and change or gain scores. (Author/SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
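Mohr's first example, the regression artifact, is easy to reproduce: when units are selected because of extreme scores on a noisy measure, their follow-up scores drift back toward the mean even with no intervention at all. A minimal simulation sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(50, 10, 10_000)
pretest  = true_score + rng.normal(0, 5, 10_000)   # observed score = true score + random measurement error
posttest = true_score + rng.normal(0, 5, 10_000)   # no treatment effect whatsoever

selected = pretest < 40                            # "program" enrolls the lowest pretest scorers
gain = posttest[selected].mean() - pretest[selected].mean()
print(f"apparent gain with no treatment: {gain:+.2f} points")   # positive purely from regression to the mean
```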
Peer reviewed
Reichardt, Charles S. – Evaluation and Program Planning, 2000
Agrees with L. Mohr that researchers are too quick to assume that measurement error is random, but disagrees with the ideas that regression toward the mean has been a distraction and that change-score analysis should be avoided in favor of regression analysis. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
Peer reviewed
Mohr, L. B. – Evaluation and Program Planning, 2000
Responds to C. S. Reichardt's discussion of regression artifacts, random measurement error, and change scores. Emphasizes that attention to regression artifacts in program evaluation is almost bound to be problematic and proposes some arguments in support of this position. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
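The substance of this exchange, change-score analysis versus regression (ANCOVA-style) adjustment on the pretest, can be displayed side by side. A minimal sketch on simulated data with a known treatment effect; it is meant only to show the two estimators, not to settle the debate:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000
true_pre = rng.normal(0, 1, n)
treat = rng.binomial(1, 0.5, n)                      # randomized treatment, true effect = 0.5
pre  = true_pre + rng.normal(0, 0.5, n)              # pretest with random measurement error
post = true_pre + 0.5 * treat + rng.normal(0, 0.5, n)

# Change-score estimator: regress (post - pre) on treatment
X1 = np.column_stack([np.ones(n), treat])
b_change, *_ = np.linalg.lstsq(X1, post - pre, rcond=None)

# Regression (ANCOVA) estimator: regress post on treatment and the pretest
X2 = np.column_stack([np.ones(n), treat, pre])
b_ancova, *_ = np.linalg.lstsq(X2, post, rcond=None)

print(f"change-score estimate: {b_change[1]:.3f}")
print(f"ANCOVA estimate:       {b_ancova[1]:.3f}")
```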
Peer reviewed
Sexton, Thomas R.; And Others – New Directions for Program Evaluation, 1986
Recent methodological advances are described that enable the analyst to extract additional information from the data envelopment analysis (DEA) methodology, including goal programming to develop cross-efficiencies, cluster analysis, analysis of variance, and pooled cross section time-series analysis. Some shortcomings of DEA are discussed. (LMO)
Descriptors: Efficiency, Error of Measurement, Evaluation Methods, Evaluation Problems
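The cross-efficiency and goal-programming extensions described here build on the basic DEA linear program. As a minimal sketch only, here is the standard CCR (multiplier-form) efficiency score for each decision-making unit, solved with SciPy on made-up data; the article's own refinements are not implemented:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 units, 2 inputs, 1 output (rows = units)
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 2.0]])   # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.5]])                       # outputs

def ccr_efficiency(k: int) -> float:
    """CCR efficiency of unit k: max u'y_k  s.t.  v'x_k = 1,  u'y_j - v'x_j <= 0 for all j,  u, v >= 0."""
    n_units, n_in = X.shape
    n_out = Y.shape[1]
    c = np.concatenate([-Y[k], np.zeros(n_in)])                   # minimize -u'y_k
    A_ub = np.hstack([Y, -X])                                     # u'y_j - v'x_j <= 0 for every unit j
    b_ub = np.zeros(n_units)
    A_eq = np.concatenate([np.zeros(n_out), X[k]]).reshape(1, -1)  # normalization v'x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])  # variables default to >= 0
    return -res.fun

for k in range(len(X)):
    print(f"unit {k}: efficiency = {ccr_efficiency(k):.3f}")
```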
Peer reviewed
Maher, W. A.; And Others – Environmental Monitoring and Assessment, 1994
Presents a general framework for designing sampling programs that are cost effective and keep errors within known and acceptable limits. (LZ)
Descriptors: Cost Effectiveness, Environmental Education, Environmental Research, Error of Measurement
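The framework itself is not reproduced in the abstract, but the usual starting point for keeping sampling error within a stated limit is a precision-based sample-size calculation. A small sketch with hypothetical monitoring values:

```python
import math

def sample_size(sd: float, margin: float, z: float = 1.96) -> int:
    """Samples needed so the 95% CI half-width stays within `margin`: n = (z * sd / margin)^2."""
    return math.ceil((z * sd / margin) ** 2)

# e.g. a contaminant concentration with sd ~ 4 ug/L and a target margin of error of 1 ug/L
print(sample_size(sd=4.0, margin=1.0))   # 62 samples
```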
Peer reviewed
Lennox, Richard D.; Dennis, Michael L. – Evaluation and Program Planning, 1994
Potential methods are explored for removing or otherwise controlling random measurement error, assessment artifacts, irrelevant variation in outcome measures, and confounding sources of covariation in a structural equations model. Using examples with measures of quality of life and functioning, the authors consider these methods for field…
Descriptors: Error of Measurement, Field Studies, Measurement Techniques, Models
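One of the simplest devices behind the structural-equation approach the authors describe is the classical correction for attenuation, which recovers the correlation between latent variables from observed correlations and reliabilities. A minimal sketch; the reliabilities and correlation below are hypothetical, not the authors' data:

```python
def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for random measurement error: r_xy / sqrt(rel_x * rel_y)."""
    return r_observed / (rel_x * rel_y) ** 0.5

# Observed correlation of 0.42 between quality-of-life and functioning scales with reliabilities 0.80 and 0.70
print(f"disattenuated correlation: {disattenuate(0.42, 0.80, 0.70):.2f}")
```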
Rothman, M. L.; And Others – 1982
A practical application of generalizability theory, demonstrating how the variance components contribute to understanding and interpreting the data collected to evaluate a program, is described. The evaluation concerned 120 learning modules developed for the Dental Auxiliary Education Project. The goals of the project were to design, implement,…
Descriptors: Correlation, Data Collection, Dental Schools, Educational Research
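The report applies generalizability theory, which partitions score variance into components for persons, items (here, modules), and residual. A minimal sketch of the variance-component estimates for a fully crossed persons-by-items design, on simulated data rather than the project's:

```python
import numpy as np

rng = np.random.default_rng(3)
n_p, n_i = 50, 12                                     # 50 persons crossed with 12 items
scores = (rng.normal(0, 1.0, (n_p, 1))                # person effect (variance 1.00)
          + rng.normal(0, 0.5, (1, n_i))              # item effect (variance 0.25)
          + rng.normal(0, 0.8, (n_p, n_i)))           # residual (variance 0.64)

grand = scores.mean()
ms_p = n_i * np.var(scores.mean(axis=1), ddof=1)      # persons mean square
ms_i = n_p * np.var(scores.mean(axis=0), ddof=1)      # items mean square
resid = scores - scores.mean(axis=1, keepdims=True) - scores.mean(axis=0, keepdims=True) + grand
ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_i - 1))

var_person = (ms_p - ms_res) / n_i
var_item   = (ms_i - ms_res) / n_p
print(f"person: {var_person:.2f}  item: {var_item:.2f}  residual: {ms_res:.2f}")
```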
Bloom, Howard S.; Michalopoulos, Charles; Hill, Carolyn J.; Lei, Ying – 2002
A study explored which nonexperimental comparison group methods provide the most accurate estimates of the impacts of mandatory welfare-to-work programs and whether the best methods work well enough to substitute for random assignment experiments. Findings were compared for nonexperimental comparison groups and statistical adjustment procedures…
Descriptors: Adult Education, Comparative Analysis, Control Groups, Error of Measurement
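The study's own estimators are not listed in the truncated abstract. Purely as an illustration of one widely used nonexperimental adjustment, here is a sketch of nearest-neighbor propensity-score matching on simulated data (not the welfare-to-work data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 3_000
x = rng.normal(0, 1, (n, 2))                               # observed covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))     # selection into treatment depends on covariates
treat = rng.binomial(1, p_treat)
y = 2.0 * treat + x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 1, n)   # true effect = 2.0

# Naive comparison is biased because treated units differ on x
print(f"naive difference in means: {y[treat == 1].mean() - y[treat == 0].mean():.2f}")

# Propensity-score matching: nearest control unit for each treated unit
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
treated, controls = np.where(treat == 1)[0], np.where(treat == 0)[0]
matches = controls[np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)]
print(f"matched estimate of effect: {(y[treated] - y[matches]).mean():.2f}")
```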