Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 5
Since 2006 (last 20 years): 9
Descriptor
Credibility: 10
Evaluation Methods: 5
Evaluators: 5
Program Evaluation: 4
Ethics: 3
Stakeholders: 3
Adults: 2
Advocacy: 2
Data: 2
Data Analysis: 2
Models: 2
Source
American Journal of Evaluation: 10
Publication Type
Journal Articles: 10
Reports - Research: 5
Reports - Descriptive: 3
Book/Product Reviews: 1
Opinion Papers: 1
Education Level
Elementary Secondary Education: 1
Location
California: 1
What Is and What Should Be Needs Assessment Scales: Factors Affecting the Trustworthiness of Results
Altschuld, James W.; Hung, Hsin-Ling; Lee, Yi-Fang – American Journal of Evaluation, 2022
Surveys are frequently employed in needs assessment to collect information about gaps (the needs) in "what is" and "what should be" conditions. Double-scale Likert-type instruments are routinely used for this purpose. Although in accord with the discrepancy definition of need, the quality of such measures is being questioned to…
Descriptors: Needs Assessment, Surveys, Likert Scales, Credibility
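To make the discrepancy definition of need referenced in this abstract concrete, the sketch below computes item-level need scores as the gap between "what should be" and "what is" ratings from a double-scale Likert-type instrument. The items, ratings, and field names are invented for illustration and are not drawn from the article.

```python
# Hypothetical sketch: discrepancy ("what should be" minus "what is") need scores
# from a double-scale Likert-type survey. Items and ratings are invented.
import statistics

# Each respondent rates every item twice on a 1-5 scale.
responses = [
    {"item": "Data literacy training", "is": 2, "should_be": 5},
    {"item": "Data literacy training", "is": 3, "should_be": 4},
    {"item": "Mentoring access",       "is": 4, "should_be": 4},
    {"item": "Mentoring access",       "is": 3, "should_be": 5},
]

# Group ratings by item.
by_item = {}
for r in responses:
    by_item.setdefault(r["item"], {"is": [], "should_be": []})
    by_item[r["item"]]["is"].append(r["is"])
    by_item[r["item"]]["should_be"].append(r["should_be"])

# Need = mean("what should be") - mean("what is"); larger gaps signal larger needs.
for item, ratings in by_item.items():
    gap = statistics.mean(ratings["should_be"]) - statistics.mean(ratings["is"])
    print(f"{item}: need score = {gap:.2f}")
```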
Teasdale, Rebecca M.; McNeilly, Jennifer R.; Garzón, Maria Isabel Ramírez; Novak, Judit; Greene, Jennifer C. – American Journal of Evaluation, 2023
This study challenges persistent misrepresentations of evaluation as a value-neutral inquiry process by presenting an empirical study that deepens understanding of evaluators' values and how they "show up" in evaluation practice. Through semistructured interviews and inductive analysis, we examined the values advanced by a sample of…
Descriptors: Evaluators, Values, Evaluation, Ethics
Abraham, Traci H.; Finley, Erin P.; Drummond, Karen L.; Haro, Elizabeth K.; Hamilton, Alison B.; Townsend, James C.; Littman, Alyson J.; Hudson, Teresa – American Journal of Evaluation, 2021
This article outlines a three-phase, team-based approach used to analyze qualitative data from a nationwide needs assessment of access to Veterans Health Administration services for rural-dwelling veterans. The method described here was used to develop the trustworthiness of findings from analysis of a large qualitative data set, without the use…
Descriptors: Qualitative Research, Credibility, Data, Data Analysis
Jones, Natalie D.; Azzam, Tarek; Wanzer, Dana Linnell; Skousen, Darrel; Knight, Ciara; Sabarre, Nina – American Journal of Evaluation, 2020
One of the most widely used communication tools in evaluation is the logic model. Despite its extensive use, there has been little research into the visualization aspect of the logic model. To assess the impact that design modifications would have on its effectiveness, we applied established visualization principles to revise a program model.…
Descriptors: Logical Thinking, Models, Visualization, Accuracy
Peck, Laura R. – American Journal of Evaluation, 2015
Several analytic strategies exist for opening up the "black box" to reveal more about what drives policy and program impacts. This article focuses on one of these strategies: the Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES). ASPES uses exogenous baseline data to identify endogenously-defined subgroups, keeping the…
Descriptors: Program Evaluation, Credibility, Prediction, Sample Size
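As a rough sketch of the symmetric-prediction idea behind ASPES, the code below fits a membership model on baseline covariates in the treatment arm (where subgroup membership, such as program completion, is observed) and applies the same model to both arms before comparing outcomes within predicted subgroups. The simulated data, the logistic model, and all variable names are assumptions for illustration, not the article's implementation.

```python
# Hypothetical sketch of symmetric subgroup prediction: membership is observed
# only in the treatment arm, so a model fit there is applied identically to
# both arms. Data, model choice, and names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)                 # 1 = treatment, 0 = control
baseline = rng.normal(size=(n, 3))            # exogenous baseline covariates
outcome = baseline[:, 0] + 0.5 * treat + rng.normal(size=n)

# Endogenous membership (e.g., completed the program) is defined only where treat == 1.
completed = (baseline[:, 0] + rng.normal(size=n) > 0) & (treat == 1)

# Fit the membership model on the treatment arm only...
model = LogisticRegression().fit(baseline[treat == 1], completed[treat == 1])
# ...then apply the same model symmetrically to everyone.
predicted_member = model.predict(baseline).astype(bool)

# Compare treatment-control outcome differences within each predicted subgroup.
for label, mask in [("predicted completers", predicted_member),
                    ("predicted non-completers", ~predicted_member)]:
    diff = outcome[mask & (treat == 1)].mean() - outcome[mask & (treat == 0)].mean()
    print(f"{label}: impact estimate = {diff:.2f}")
```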
Mason, Sarah; Azzam, Tarek – American Journal of Evaluation, 2019
The connection between evaluation practice and its ultimate goal--social betterment--is indirect. With little or no direct control over social programs and policies, many evaluators rely on the actions of stakeholders to bridge the gap between evaluation practice and its purpose. Consequently, communicating with influence becomes key. The present…
Descriptors: Attitude Change, Influences, Reader Response, Evaluators
Cook, James R. – American Journal of Evaluation, 2015
Program evaluation is generally viewed as a set of mechanisms for collecting and using information to learn about projects, policies and programs, to understand their effects as well as the manner in which they are implemented. AEA has espoused principles for evaluation that place emphasis on competent, honest inquiry that respects the security,…
Descriptors: Program Evaluation, Social Justice, Community Change, Change Strategies
Christie, Christina A. – American Journal of Evaluation, 2008
This article presents an interview with Eric Barela, a K-12 school district internal evaluator who conducted the Title I Best Practices study for the Los Angeles Unified School District (LAUSD), Research and Planning Division (formerly known as the Program Evaluation and Research Branch). In this interview, the author focuses not only on the…
Descriptors: Evaluators, Program Evaluation, Elementary Secondary Education, School Districts
Coryn, Chris L. S.; Hattie, John A.; Scriven, Michael; Hartmann, David J. – American Journal of Evaluation, 2007
This research describes, classifies, and comparatively evaluates national models and mechanisms used to evaluate research and allocate research funding in 16 countries. Although these models and mechanisms vary widely in terms of how research is evaluated and financed, nearly all share the common characteristic of relating funding to some measure…
Descriptors: Ethics, Evaluation Methods, Comparative Analysis, Resource Allocation

House, Ernest R.; Howe, Kenneth R. – American Journal of Evaluation, 1998
Chelimsky, former head of the Program Evaluation and Methodology Division of the General Accounting Office, suggested that advocacy by evaluators destroys their credibility. Evaluators should, this author argues, be advocates for democracy and the public interest, with the question being how explicitly and how defensibly. (SLD)
Descriptors: Advocacy, Credibility, Democracy, Ethics