Showing all 7 results
Peer reviewed
So, Julia Wai-Yin – Assessment Update, 2023
In this article, Julia So discusses the purpose of program assessment, four common missteps of program assessment and reporting, and how to prevent them. The four common missteps of program assessment and reporting she has observed are: (1) unclear or ambiguous program goals; (2) measurement error of program goals and outcomes; (3) incorrect unit…
Descriptors: Program Evaluation, Community Colleges, Evaluation Methods, Objectives
Peer reviewed
Yates, Brian T. – New Directions for Evaluation, 2012
The value of a program can be understood as referring not only to outcomes, but also to how those outcomes compare to the types and amounts of resources expended to produce the outcomes. Major potential mistakes and biases in assessing the worth of resources consumed, as well as the value of outcomes produced, are explored. Most of these occur…
Descriptors: Program Evaluation, Cost Effectiveness, Evaluation Criteria, Evaluation Problems
Peer reviewed
Rosch, David M.; Schwartz, Leslie M. – Journal of Leadership Education, 2009
As more institutions of higher education engage in the practice of leadership education, the effective assessment of these efforts lags behind due to a variety of factors. Without an intentional assessment plan, leadership educators are liable to make one or more of several common errors in assessing their programs and activities. This article…
Descriptors: Leadership Training, Administrator Education, College Outcomes Assessment, Program Evaluation
Newman, Isadore; Fraas, John W. – 1998
Educational researchers often use multiple statistical tests in their research studies and program evaluations. When multiple statistical tests are conducted, the chance that Type I errors may be committed increases. Thus, the researchers are faced with the task of adjusting the alpha levels for their individual statistical tests in order to keep…
Descriptors: Decision Making, Educational Research, Error of Measurement, Program Evaluation
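The inflation of Type I error that this entry describes, and the alpha adjustment it refers to, can be sketched numerically. The figures below are illustrative assumptions (k = 10 independent tests at alpha = 0.05), and the Bonferroni correction is shown as one common adjustment, not necessarily the one the authors propose:

```python
# Illustration (hypothetical parameters) of the multiple-comparisons
# problem: with k independent tests at per-test alpha = 0.05, the
# chance of at least one Type I error grows well beyond 0.05.
k = 10          # number of statistical tests (assumed)
alpha = 0.05    # per-test significance level (assumed)

# Family-wise error rate: probability of at least one false positive
familywise_error = 1 - (1 - alpha) ** k
print(f"Unadjusted family-wise error: {familywise_error:.3f}")  # 0.401

# Bonferroni adjustment: divide alpha by the number of tests,
# which keeps the family-wise error rate at or below 0.05.
adjusted_alpha = alpha / k
adjusted_familywise = 1 - (1 - adjusted_alpha) ** k
print(f"Adjusted per-test alpha: {adjusted_alpha:.4f}")         # 0.0050
print(f"Adjusted family-wise error: {adjusted_familywise:.3f}") # 0.049
```

With ten tests, the unadjusted chance of a spurious "significant" finding is roughly 40 percent, which is the trade-off the abstract says researchers must weigh when choosing adjusted alpha levels.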
Peer reviewed
Mohr, L. B. – Evaluation and Program Planning, 2000
Suggests that there is a tendency in social science and program evaluation to adhere to some methodological practices by force of custom rather than because of their reasoned applicability. These ideas include regression artifacts, random measurement error, and change or gain scores. (Author/SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
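The regression artifact at issue in this exchange can be reproduced in a minimal simulation. All parameters below (score means, error variances, the bottom-quartile selection rule) are hypothetical and chosen only to show the mechanism: units selected for extreme pretest scores drift back toward the mean at posttest from random measurement error alone, with no program in place:

```python
# Minimal simulation (assumed parameters) of regression toward the
# mean: select low pretest scorers, then observe their posttest mean
# move back toward the population mean with no treatment at all.
import random

random.seed(0)
N = 100_000
true_mean = 50.0

# Each unit has a stable true score plus independent measurement
# error at pretest and at posttest.
true_scores = [random.gauss(true_mean, 10) for _ in range(N)]
pretest = [t + random.gauss(0, 5) for t in true_scores]
posttest = [t + random.gauss(0, 5) for t in true_scores]

# Select the lowest-scoring quarter at pretest, as a remedial
# program might when choosing participants.
cutoff = sorted(pretest)[N // 4]
selected = [i for i in range(N) if pretest[i] <= cutoff]

pre_avg = sum(pretest[i] for i in selected) / len(selected)
post_avg = sum(posttest[i] for i in selected) / len(selected)

# The selected group's posttest mean sits closer to 50 than its
# pretest mean did, mimicking a program effect.
print(f"Pretest mean of selected group:  {pre_avg:.1f}")
print(f"Posttest mean of selected group: {post_avg:.1f}")
```

The apparent "gain" between pretest and posttest here is pure artifact, which is the kind of custom-driven misreading the abstract cautions against.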
Peer reviewed
Reichardt, Charles S. – Evaluation and Program Planning, 2000
Agrees with L. Mohr that researchers are too quick to assume that measurement error is random, but disagrees with the idea that regression toward the mean has been a distraction and with the notion that change-score analysis should be avoided in favor of regression analysis. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
Peer reviewed
Mohr, L. B. – Evaluation and Program Planning, 2000
Responds to C. S. Reichardt's discussion of regression artifacts, random measurement error, and change scores. Emphasizes that attention to regression artifacts in program evaluation is almost bound to be problematic and proposes some arguments in support of this position. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology