Andru, Peter; Botchkarev, Alexei – Journal of MultiDisciplinary Evaluation, 2011
Background: Return on investment (ROI) is one of the most popular evaluation metrics. ROI analysis, when applied correctly, is a powerful tool for evaluating existing information systems and for making informed acquisition decisions. However, practical use of ROI is complicated by a number of uncertainties and controversies. The article…
Descriptors: Outcomes of Education, Information Systems, School Business Officials, Evaluation Methods
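The abstract does not reproduce a formula, but the usual definition is ROI = (gain - cost) / cost. A minimal sketch with hypothetical figures (nothing here is taken from the article):

# Illustrative ROI calculation (hypothetical figures, not from the article).
def roi(gain_from_investment: float, cost_of_investment: float) -> float:
    """Return on investment as a fraction: (gain - cost) / cost."""
    return (gain_from_investment - cost_of_investment) / cost_of_investment

# A hypothetical information-system acquisition: $120,000 in benefits
# against an $80,000 total cost gives an ROI of 0.5 (i.e., 50%).
print(f"ROI = {roi(120_000, 80_000):.0%}")  # ROI = 50%
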
Pullin, Andrew S.; Knight, Teri M. – New Directions for Evaluation, 2009
To use environmental program evaluation to increase effectiveness, predictive power, and resource allocation efficiency, evaluators need good data. Data require sufficient credibility in terms of fitness for purpose and quality to develop the necessary evidence base. The authors examine elements of data credibility using experience from critical…
Descriptors: Data, Credibility, Conservation (Environment), Program Evaluation
Reardon, Sean F. – Education and the Public Interest Center, 2009
"How New York City's Charter Schools Affect Achievement" estimates the effects on student achievement of attending a New York City charter school rather than a traditional public school and investigates the characteristics of charter schools associated with the most positive effects on achievement. Because the report relies on an…
Descriptors: Charter Schools, Academic Achievement, Achievement Gains, Achievement Rating
Khoo, Siek-Toon; Muthen, Bengt – 1997
The aim of this paper is to explore methods for evaluating the effects of randomized interventions in a longitudinal design. The focus is on modeling the possibly nonlinear relationship between treatment effect and baseline, and on evaluating the treatment effect with this nonlinear relationship taken into account. A control/treatment growth…
Descriptors: Error of Measurement, Evaluation Methods, Interaction, Longitudinal Studies
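For readers unfamiliar with the setup, here is a minimal sketch of the general idea as a moderated regression with simulated data; this is not the authors' Khoo-Muthen growth model, and every figure below is invented:

# Let the treatment effect vary nonlinearly with the baseline score by
# including treatment-by-baseline and treatment-by-baseline-squared terms.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
base = rng.normal(50, 10, n)                 # baseline score
treat = rng.integers(0, 2, n)                # randomized 0/1 assignment
# Simulated outcome: the program helps mid-baseline cases most (nonlinear).
post = base + treat * (8 - 0.01 * (base - 50) ** 2) + rng.normal(0, 5, n)

df = pd.DataFrame({"post": post, "base": base, "treat": treat})
fit = smf.ols("post ~ treat * (base + I(base ** 2))", data=df).fit()
print(fit.summary().tables[1])               # treat:base terms capture the moderation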

Bamezai, Anil – Evaluation Review, 1995
Some of the threats to internal validity that arise when evaluating the impact of water conservation programs during a drought are illustrated. These include differential response to the drought, self-selection bias, and measurement error. How to deal with these problems when high-quality disaggregate data are available is discussed. (SLD)
Descriptors: Conservation (Environment), Drought, Error of Measurement, Evaluation Methods
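One standard remedy when household-level (disaggregate) data are available is a difference-in-differences comparison of participants and non-participants; the sketch below illustrates that general idea with invented data and is not necessarily Bamezai's method:

# A difference-in-differences sketch on simulated household-level data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
participant = rng.integers(0, 2, n)          # enrolled in the program?
pre = rng.normal(100, 15, n)                 # pre-drought water use
# Everyone cuts use during the drought; participants cut an extra 10 units.
post = pre - 20 - 10 * participant + rng.normal(0, 5, n)

df = pd.DataFrame({"participant": participant, "change": post - pre})
did = (df.loc[df.participant == 1, "change"].mean()
       - df.loc[df.participant == 0, "change"].mean())
print(f"difference-in-differences estimate: {did:.1f}")  # about -10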

Sexton, Thomas R.; And Others – New Directions for Program Evaluation, 1986
Recent methodological advances are described that enable the analyst to extract additional information from the data envelopment analysis (DEA) methodology, including goal programming to develop cross-efficiencies, cluster analysis, analysis of variance, and pooled cross section time-series analysis. Some shortcomings of DEA are discussed. (LMO)
Descriptors: Efficiency, Error of Measurement, Evaluation Methods, Evaluation Problems
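A minimal sketch of the basic DEA linear program (the CCR multiplier form) using scipy, with hypothetical inputs and outputs; the cross-efficiency and clustering extensions the abstract mentions build on this core:

# DEA efficiency for each decision-making unit (DMU); illustration only,
# not the authors' code.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [4.0], [8.0]])          # hypothetical inputs
Y = np.array([[1.0], [3.0], [5.0]])          # hypothetical outputs

def dea_efficiency(k: int) -> float:
    """Efficiency of DMU k: max u.y_k s.t. v.x_k = 1, u.y_j - v.x_j <= 0."""
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[k], np.zeros(n_in)])      # maximize u.y_k
    A_ub = np.hstack([Y, -X])                        # u.y_j - v.x_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = np.concatenate([np.zeros(n_out), X[k]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return -res.fun

for k in range(len(X)):
    print(f"DMU {k}: efficiency = {dea_efficiency(k):.2f}")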

Schaeffer, Gary A.; And Others – Evaluation Review, 1986
The reliability of criterion-referenced tests (CRTs) used in health program evaluation can be conceptualized in different ways. Formulas are presented for estimating appropriate standard error of measurement (SEM) for CRTs. The SEM can be used in computing confidence intervals for domain score estimates and for a cut-score. (Author/LMO)
Descriptors: Accountability, Criterion Referenced Tests, Cutting Scores, Error of Measurement
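A sketch of one standard binomial-error formulation of the SEM for a domain proportion-correct score, with a confidence interval that can be compared against a cut-score; the article's exact formulas may differ, and the figures below are hypothetical:

# Binomial-model SEM for a criterion-referenced test score (illustrative).
import math

def crt_sem(p_hat: float, n_items: int) -> float:
    """SEM of a domain proportion-correct score under the binomial model."""
    return math.sqrt(p_hat * (1 - p_hat) / (n_items - 1))

p_hat, n_items = 0.75, 40          # hypothetical examinee: 30 of 40 correct
sem = crt_sem(p_hat, n_items)
lo, hi = p_hat - 1.96 * sem, p_hat + 1.96 * sem
print(f"domain score {p_hat:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# Compare the interval with a cut-score (say 0.70) to judge mastery decisions.
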
Christie, Samuel G.; Conniff, William A. – 1981
Stockton Unified School District's successful strategy of using a locally developed, non-normed achievement test to implement the norm-referenced model of the Title I Evaluation and Reporting System (Model A2) is described. Documented are the procedures involved in the development of a curriculum guide and test items, and the administration of…
Descriptors: Achievement Tests, Elementary Secondary Education, Error of Measurement, Norm Referenced Tests

Lennox, Richard D.; Dennis, Michael L. – Evaluation and Program Planning, 1994
Potential methods are explored for removing or otherwise controlling random measurement error, assessment artifacts, irrelevant variation in outcome measures, and confounding sources of covariation in a structural equations model. Using examples with measures of quality of life and functioning, the authors consider these methods for field…
Descriptors: Error of Measurement, Field Studies, Measurement Techniques, Models
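A much simpler relative of the latent-variable corrections the authors discuss is the classical correction for attenuation; the sketch below is purely illustrative, with invented reliabilities:

# Correction for attenuation: estimate the true-score correlation from an
# observed correlation and the two measures' reliabilities.
import math

def disattenuated_r(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Estimated true-score correlation given observed r and reliabilities."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical: observed r = .42 between quality-of-life and functioning
# scales with reliabilities .80 and .70 implies a true correlation near .56.
print(f"{disattenuated_r(0.42, 0.80, 0.70):.2f}")
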
Schaeffer, Gary A.; And Others – 1984
The reliability of criterion-referenced tests, which are often used to evaluate health education programs, may be conceptualized in different ways. Classical conceptualizations of test reliability have limited usefulness when applied to health-related criterion-referenced tests. When a cutting score is set, test reliability can be represented as…
Descriptors: Correlation, Criterion Referenced Tests, Cutting Scores, Elementary Secondary Education
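A sketch of reliability as decision consistency at a cutting score, i.e., the proportion of examinees classified the same way on two parallel forms; this illustrates the general concept with simulated data, not the paper's derivation:

# Decision-consistency estimate from two simulated parallel forms.
import numpy as np

rng = np.random.default_rng(2)
true_p = rng.beta(8, 3, 1000)                     # examinees' domain scores
form_a = rng.binomial(40, true_p) / 40            # two 40-item parallel forms
form_b = rng.binomial(40, true_p) / 40
cut = 0.70
p0 = np.mean((form_a >= cut) == (form_b >= cut))  # agreement coefficient
print(f"decision-consistency estimate p0 = {p0:.2f}")
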
Schumacker, Randall E. – 1992
The regression-discontinuity approach to evaluating educational programs is reviewed, and regression-discontinuity post-program mean differences under various conditions are discussed. The regression-discontinuity design is used to determine whether post-program differences exist between an experimental program and a control group. The difference…
Descriptors: Comparative Analysis, Computer Simulation, Control Groups, Cutting Scores
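A minimal regression-discontinuity sketch in the spirit of the review: assignment to the program is determined by a pretest cut score, and the post-program difference is estimated at the cut. The simulation and its parameters are invented, not Schumacker's:

# Regression-discontinuity simulation with a pretest-based assignment rule.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
pretest = rng.normal(50, 10, n)
program = (pretest < 45).astype(int)       # below the cut -> experimental program
post = 20 + 0.8 * pretest + 5.0 * program + rng.normal(0, 4, n)

# Center the pretest at the cut so the program coefficient is the effect there.
df = pd.DataFrame({"post": post, "pre_c": pretest - 45, "program": program})
fit = smf.ols("post ~ program + pre_c + program:pre_c", data=df).fit()
print(f"estimated program effect at the cut: {fit.params['program']:.2f}")  # ~5
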
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar – Online Submission, 2005
The statistical technique "zero-one linear programming," which has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information, and test specifications) in educational measurement, was deemed a suitable method for creating multiple sets of matched samples to be…
Descriptors: Program Evaluation, Programming, Mathematical Applications, Measurement Techniques
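A sketch of matched-sample construction posed as a 0-1 assignment problem, solved here with scipy's Hungarian-algorithm routine; this illustrates the general technique, not the authors' exact zero-one linear program:

# Match each treated student to the closest comparison candidate by
# minimizing total covariate distance (an assignment-problem formulation).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
treated = rng.normal(0.0, 1.0, (20, 3))    # 20 program students, 3 covariates
pool = rng.normal(0.2, 1.0, (100, 3))      # 100 candidate comparison students

cost = np.linalg.norm(treated[:, None, :] - pool[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)   # each treated gets one best match
print("matched comparison indices:", cols)
print(f"mean matched distance: {cost[rows, cols].mean():.2f}")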

Press, S. James; Tanur, Judith M. – Evaluation Review, 1991
The relevance of the intersection of sociology, statistics, and public policy to quality control in three family assistance programs (food stamps, Aid to Families with Dependent Children [AFDC], and Medicaid) is reviewed, drawing on a National Academy of Sciences study of methods for improving quality control systems. (SLD)
Descriptors: Error of Measurement, Estimation (Mathematics), Federal Aid, Federal Programs
Bloom, Howard S.; Michalopoulos, Charles; Hill, Carolyn J.; Lei, Ying – 2002
A study explored which nonexperimental comparison group methods provide the most accurate estimates of the impacts of mandatory welfare-to-work programs and whether the best methods work well enough to substitute for random assignment experiments. Findings were compared for nonexperimental comparison groups and statistical adjustment procedures…
Descriptors: Adult Education, Comparative Analysis, Control Groups, Error of Measurement
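A toy illustration of the study's yardstick, the bias of a nonexperimental estimate relative to an experimental benchmark; the data and selection mechanism below are invented:

# Compare an experimental impact estimate with a nonexperimental one built
# from a self-selected comparison group; the gap between them is the bias.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
skill = rng.normal(0, 1, n)                          # unobserved earnings driver

# Experimental benchmark: randomized in/out of the welfare-to-work program.
assign = rng.integers(0, 2, n)
earnings = 100 + 20 * skill + 10 * assign + rng.normal(0, 15, n)
experimental = earnings[assign == 1].mean() - earnings[assign == 0].mean()

# Nonexperimental comparison: a group that self-selected out (higher skill).
comparison = 100 + 20 * (skill + 0.3) + rng.normal(0, 15, n)
nonexperimental = earnings[assign == 1].mean() - comparison.mean()

print(f"experimental effect: {experimental:.1f}")     # ~10
print(f"nonexperimental effect: {nonexperimental:.1f}, "
      f"bias: {nonexperimental - experimental:.1f}")  # bias ~ -6
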
California Univ., Los Angeles. Center for the Study of Evaluation. – 1985
This document contains three papers developed by the Center for the Study of Evaluation's (CSE's) Research into Practice Project. The first paper, "A Process for Designing and Implementing a Dual Purpose Evaluation System," by Pamela Aschbacher and James Burry, provides a model for evaluating programs for two purposes simultaneously: (1) program…
Descriptors: Accountability, Criterion Referenced Tests, Educational Assessment, Educational Technology