Showing 1 to 15 of 25 results
Peer reviewed
Tucker, Susan; Stevahn, Laurie; King, Jean A. – American Journal of Evaluation, 2023
This article compares the purposes and content of the four foundational documents of the American Evaluation Association (AEA): the Program Evaluation Standards, the AEA Public Statement on Cultural Competence in Evaluation, the AEA Evaluator Competencies, and the AEA Guiding Principles. This reflection on alignment is an early effort in the third…
Descriptors: Professionalism, Comparative Analysis, Professional Associations, Program Evaluation
Peer reviewed
Miller, Robin Lin; King, Jean A.; Mark, Melvin M.; Caracelli, Valerie – American Journal of Evaluation, 2016
Over the past 14 years, AEA's Oral History Project Team (Robin Lin Miller, Jean A. King, Valerie Caracelli, and Melvin M. Mark) has conducted interviews with individuals who have made signal contributions to evaluation theory and practice, tracing their professional development and contextualizing their work within the social and political…
Descriptors: Oral History, Faculty Development, Psychometrics, Program Evaluation
Peer reviewed
Hakkola, Leah; King, Jean A. – Innovative Higher Education, 2016
In this article we describe the Graduate Review and Improvement Process (GRIP), an innovative evaluation process that makes student input central, now beginning its fifth year of implementation at the University of Minnesota. We begin by contrasting GRIP with traditional graduate program review, and we then explain the conceptual underpinnings of…
Descriptors: Educational Innovation, Evaluation Methods, Student Evaluation, Graduate Study
Peer reviewed
Lawrenz, Frances; King, Jean A.; Ooms, Ann – New Directions for Evaluation, 2011
A cross-case analysis of four National Science Foundation (NSF) case studies identified both unique details and common themes related to promoting the use and influence of multisite evaluations. The analysis provided evidence of diverse evaluation use by stakeholders and suggested that people taking part in the multisite evaluations perceived…
Descriptors: Role Perception, Participation, Case Studies, Program Evaluation
Peer reviewed
King, Jean A. – American Journal of Evaluation, 2010
The author is increasingly sensitive to the Venn diagram of overlapping expertise in which program evaluation intersects a content area such as public health, urban education, or--in the case of Katherine Hay--international development. Many evaluators practice in these areas of overlap as experts in the evaluation of a specific domain. Given her…
Descriptors: Organizational Development, Developing Nations, Intellectual Disciplines, Evaluators
Peer reviewed
Johnson, Kelli; Greenseid, Lija O.; Toal, Stacie A.; King, Jean A.; Lawrenz, Frances; Volkov, Boris – American Journal of Evaluation, 2009
This paper reviews empirical research on the use of evaluation from 1986 to 2005 using Cousins and Leithwood's 1986 framework for categorizing empirical studies of evaluation use conducted since that time. The literature review located 41 empirical studies of evaluation use conducted between 1986 and 2005 that met minimum quality standards. The…
Descriptors: Evaluators, Program Evaluation, Classification, Stakeholders
Peer reviewed
King, Jean A. – American Journal of Evaluation, 2008
This excerpt from the opening plenary asks evaluators to consider two questions regarding learning and evaluation: (a) How do evaluators know if, how, when, and what people are learning during an evaluation? and (b) In what ways can evaluation be a learning experience? To answer the first question, evaluators can apply the commonplaces of…
Descriptors: Evaluators, Program Evaluation, Learning Experience, Program Development
Peer reviewed
King, Jean A.; Rohmer-Hirt, Johnna A. – New Directions for Evaluation, 2011
From the 1980s to the present, educational accountability in the United States has grown dramatically. Such accountability in U.S. school districts, although driven primarily by external demands, has internal manifestations as well. The chapter traces the historical development of internal evaluation in American school districts, then highlights…
Descriptors: Public Schools, Testing Programs, Standardized Tests, School Districts
Peer reviewed
King, Jean A.; Cousins, J. Bradley; Whitmore, Elizabeth – New Directions for Evaluation, 2007
This chapter begins with a commentary by King, a longtime admirer of Cousins and Whitmore, in which she discusses why their 1998 article on participatory evaluation is considered an important contribution to the field. Participatory evaluation was not a new idea in 1998. By the mid-1990s, articles, chapters, and books that described evaluations…
Descriptors: Evaluation Methods, Evaluators, Action Research, Participatory Research
Peer reviewed
Toal, Stacie A.; King, Jean A.; Johnson, Kelli; Lawrenz, Frances – Evaluation and Program Planning, 2009
As the number of large federal programs increases, so, too, does the need for a more complete understanding of how to conduct evaluations of such complex programs. The research literature has documented the benefits of stakeholder participation in smaller-scale program evaluations. However, given the scope and diversity of projects in multi-site…
Descriptors: Evaluators, Program Evaluation, Federal Programs, Stakeholders
Peer reviewed
King, Jean A. – New Directions for Evaluation, 2007
This article discusses how to make process use an independent variable in evaluation practice: the purposeful means of building an organization's capacity to conduct and use evaluations in the long run. The goal of evaluation capacity building (ECB) is to strengthen and sustain effective program evaluation practices through a number of activities:…
Descriptors: Evaluators, Program Evaluation, Predictor Variables, Evaluation Methods
King, Jean A.; Thompson, Bruce – 1981
This 23-item bibliography includes a glossary of key terms and summarizes major works on the use of program evaluation information. Summarized works include books, papers presented at professional meetings, and doctoral dissertations. The summaries are organized around a framework of themes: studies of evaluation utilization, practical discussions…
Descriptors: Annotated Bibliographies, Information Utilization, Program Evaluation, Theories
Thompson, Bruce; King, Jean A. – 1981
The literature on program evaluation contains numerous suggestions that evaluative information is frequently underutilized or inappropriately utilized by administrators. This paper reviews the literature on utilization with a view toward identifying workable strategies for optimizing appropriate use. Specific recommendations to evaluators include…
Descriptors: Change Strategies, Evaluators, Information Utilization, Program Evaluation
Peer reviewed
King, Jean A.; Thompson, Bruce – NASSP Bulletin, 1983
A survey of principals and superintendents from variously sized districts revealed that program evaluations were generally found useful (particularly by superintendents) but were judged by many principals to be incapable of measuring some important program effects. Administrators tended to interact infrequently with evaluators, placing primary…
Descriptors: Administrator Attitudes, Elementary Secondary Education, Information Utilization, Principals
Peer reviewed
Alkin, Marvin C.; Christie, Christina A.; Greene, Jennifer C.; Henry, Gary T.; Donaldson, Stewart I.; King, Jean A. – New Directions for Evaluation, 2005
The editors give each of the theorists a chance to respond to questions posed about the context of the situation in relation to their own experience in the field, exploring how the exercise had an impact on their evaluation designs.
Descriptors: Program Evaluation, Evaluation Methods, Context Effect, Evaluators