Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 13 |
Source
American Journal of Evaluation | 7 |
New Directions for Evaluation | 6 |
Evaluation and Program… | 2 |
Journal of Research in… | 2 |
Canadian Journal of Program… | 1 |
Journal of Vocational… | 1 |
Science Education | 1 |
Author
Lawrenz, Frances | 20 |
Huffman, Douglas | 9 |
King, Jean A. | 3 |
Thomas, Kelli | 3 |
Johnson, Kelli | 2 |
Appleton, James J. | 1 |
Bequette, Marjorie | 1 |
Beyer, Marta | 1 |
Cardiel, Christopher L. B. | 1 |
Causey, Lauren | 1 |
Clarkson, Lesa | 1 |
Publication Type
Journal Articles | 20 |
Reports - Evaluative | 8 |
Reports - Descriptive | 7 |
Reports - Research | 5 |
Information Analyses | 1 |
Education Level
Higher Education | 4 |
Elementary Secondary Education | 3 |
Adult Education | 2 |
Postsecondary Education | 1 |
Audience
Researchers | 1 |
Location
Minnesota | 1 |
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 1 |
Grack Nelson, Amy; King, Jean A.; Lawrenz, Frances; Reich, Christine; Bequette, Marjorie; Pattison, Scott; Kunz Kollmann, Elizabeth; Illes, Molly; Cohn, Sarah; Iacovelli, Stephanie; Cardiel, Christopher L. B.; Ostgaard, Gayra; Goss, Juli; Beyer, Marta; Causey, Lauren; Sinkey, Anne; Francisco, Melanie – American Journal of Evaluation, 2019
While evaluation capacity building (ECB) may hold promise for fostering evaluation, little is known about how it is operationalized within a network. This article presents initial findings from a National Science Foundation-funded research project (Complex Adaptive Systems as a Model for Network Evaluations) that used concepts from complex…
Descriptors: Capacity Building, Networks, Evaluation Methods, Systems Approach
Lawrenz, Frances; Thao, Mao; Johnson, Kelli – Evaluation and Program Planning, 2012
Site visits are used extensively in a variety of settings within the evaluation community. They are especially common in making summative value decisions about the quality and worth of research programs/centers. However, there has been little empirical research and guidance about how to appropriately conduct evaluative site visits of research…
Descriptors: Research and Development Centers, Universities, Federal Programs, Evaluation
Lawrenz, Frances; King, Jean A.; Ooms, Ann – New Directions for Evaluation, 2011
A cross-case analysis of four National Science Foundation (NSF) case studies identified both unique details and common themes related to promoting the use and influence of multisite evaluations. The analysis provided evidence of diverse evaluation use by stakeholders and suggested that people taking part in the multisite evaluations perceived…
Descriptors: Role Perception, Participation, Case Studies, Program Evaluation
Greenseid, Lija O.; Lawrenz, Frances – New Directions for Evaluation, 2011
A team at the University of Minnesota conducted the Collaboratives for Excellence in Teacher Preparation (CETP) core evaluation between 1999 and 2004. The purpose of the CETP core evaluation was to achieve consensus among CETP project leaders and project evaluators on evaluation questions; to develop, pilot, and field test evaluation instruments…
Descriptors: Educational Research, Teacher Education, Program Evaluation, Evaluators
Hanssen, Carl E.; Lawrenz, Frances; Dunet, Diane O. – American Journal of Evaluation, 2008
Meta-evaluations reported in the literature, although rare, often have focused on retrospective assessment of completed evaluations. Conducting a meta-evaluation concurrently with the evaluation modifies this approach. This method provides the opportunity for the meta-evaluators to advise the evaluators and provides the basis for a summative…
Descriptors: Evaluation Methods, Evaluators, Standards, Summative Evaluation
Toal, Stacie A.; King, Jean A.; Johnson, Kelli; Lawrenz, Frances – Evaluation and Program Planning, 2009
As the number of large federal programs increases, so, too, does the need for a more complete understanding of how to conduct evaluations of such complex programs. The research literature has documented the benefits of stakeholder participation in smaller-scale program evaluations. However, given the scope and diversity of projects in multi-site…
Descriptors: Evaluators, Program Evaluation, Federal Programs, Stakeholders
Lawrenz, Frances; Thomas, Kelli; Huffman, Douglas; Clarkson, Lesa Covington – Canadian Journal of Program Evaluation, 2008
The purpose of this article is to describe evaluation capacity building using an immersion approach in two schools: one with an administrator-led process and one with a teacher-led process. The descriptions delineate conceptual, developmental, and sustainability aspects of capacity building through the perspectives of the teachers, principals, and…
Descriptors: Evaluation Methods, Teaching Methods, Administrators, Evaluation
Huffman, Douglas; Thomas, Kelli; Lawrenz, Frances – American Journal of Evaluation, 2008
The purpose of this article is to describe a new collaborative immersion approach for developing evaluation capacity that was used in kindergarten through Grade 12 (K-12) schools and to place this new approach on a continuum of existing capacity-building methods. The continuum extends from individualistic training-oriented methods to collaborative…
Descriptors: Elementary Secondary Education, Evaluation Methods, Accountability, Cooperation

Lawrenz, Frances; Huffman, Douglas – American Journal of Evaluation, 2002
Proposes a new metaphorical framework for understanding and using mixed methods evaluation. Uses the archipelago as a metaphor for resolving the challenges presented by a mixed methods approach, thus allowing the simultaneous consideration of different methods and stances. (SLD)
Descriptors: Evaluation Methods, Metaphors, Models
Lawrenz, Frances; Gullickson, Arlen; Toal, Stacie – American Journal of Evaluation, 2007
Use of evaluation findings is a valued outcome for most evaluators. However, to optimize use, the findings need to be disseminated to potential users in formats that facilitate use of the information. This reflective case narrative uses a national evaluation of a multisite National Science Foundation (NSF) program as the setting for describing the…
Descriptors: Evaluators, Audiences, Strategic Planning, Information Dissemination
Lawrenz, Frances; Huffman, Douglas; McGinnis, J. Randy – New Directions for Evaluation, 2007
In this article, the authors examine evaluation process use in a large multilevel, multisite core evaluation. This case highlights the nature of the evaluation processes of the Collaboratives for Excellence in Teacher Preparation (CETP) core evaluation in terms of four categories: (1) social processing; (2) engagement; (3) involvement; and (4)…
Descriptors: Evaluators, Evaluation Methods, Measurement Techniques, Evaluation Research
Lawrenz, Frances; Keiser, Nanette; Lavoie, Bethann – American Journal of Evaluation, 2003
Site visits are a commonly employed, but little discussed, evaluation procedure. Our purpose is to review the state of the art regarding site visits, as well as to catalyze a discussion of site visits centering on the question of whether or not existing practices constitute a set of methods or a methodology. We define evaluative site visits and…
Descriptors: Evaluation Methods, Program Effectiveness, Research Methodology, School Visitation

Lin, Huann Shyang; Lawrenz, Frances – Science Education, 1999
Time-series design is useful for monitoring student learning and assessing teaching effectiveness. Reports that time-series data reveal a sharp drift of the learning curve in the treatment stage and show high correlations with established tests and discrimination between high and low achievers. Contains 32 references. (Author/WRM)
Descriptors: Chemistry, Evaluation Methods, Evaluation Problems, Evaluation Research

Lawrenz, Frances; Huffman, Douglas; Welch, Wayne – Journal of Research in Science Teaching, 2000
Compares the costs of four assessment formats: (1) multiple choice; (2) open ended; (3) laboratory station; and (4) full investigation. Tracks the amount of time spent preparing the devices, developing scoring consistency for the devices, and scoring the devices as they were developed. Compares times as if 1,000 students completed each assessment.…
Descriptors: Academic Achievement, Cost Effectiveness, Evaluation Methods, High Schools
Lawrenz, Frances; Huffman, Douglas – New Directions for Evaluation, 2006
Nationally, there is continuing debate about appropriate methods for conducting educational evaluations. The U.S. Department of Education has placed a priority on "scientifically" based evaluation methods and has advocated a "gold standard" of randomized controlled experimentation. The priority suggests that randomized control methods are best,…
Descriptors: Quasiexperimental Design, Information Technology, Educational Research, Evaluation Methods