Showing 1 to 15 of 49 results
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon – American Journal of Evaluation, 2018
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Descriptors: Bayesian Statistics, Evaluation Methods, Statistical Analysis, Hypothesis Testing
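As a concrete illustration of the abstract's definition of a prior (a sketch of standard Bayesian updating, not code from the article), the following Python computes a conjugate Beta-Binomial update; the Beta(2, 2) prior and the observed counts are assumptions chosen purely for illustration.

```python
# Minimal Bayesian updating sketch: a Beta(2, 2) prior on a success
# probability p encodes a mild belief, held apart from the data, that p
# is near 0.5. Conjugacy makes the posterior another Beta distribution.

alpha_prior, beta_prior = 2.0, 2.0   # prior pseudo-counts, fixed before any data
successes, failures = 14, 6          # hypothetical observed data

# Beta prior + Binomial likelihood => Beta posterior (add the counts).
alpha_post = alpha_prior + successes
beta_post = beta_prior + failures

prior_mean = alpha_prior / (alpha_prior + beta_prior)
post_mean = alpha_post / (alpha_post + beta_post)

print(f"prior mean of p:     {prior_mean:.3f}")   # 0.500
print(f"posterior mean of p: {post_mean:.3f}")    # 16/24 = 0.667
```

In the abstract's sense, a defensible prior is one whose pseudo-counts a skeptical stakeholder could accept before seeing any data.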
Peer reviewed
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) evaluates research studies that look at the effectiveness of education programs, products, practices, and policies, which the WWC calls "interventions." Many studies of education interventions make claims about impacts on students' outcomes. Some studies have designs that enable readers to make causal…
Descriptors: Program Design, Program Development, Program Effectiveness, Program Evaluation
Alkin, Marvin C. – Guilford Publications, 2010
Written in a refreshing conversational style, this text thoroughly prepares students, program administrators, and new evaluators to conduct evaluations or to use them in their work. The book's question-driven focus and clear discussions about the importance of fostering evaluation use by building collaborative relationships with stakeholders set…
Descriptors: Evaluators, Evaluation Methods, Stakeholders, Data Analysis
Peer reviewed
Brandon, Paul R.; Young, Donald B.; Pottenger, Francis M.; Taum, Alice K. – International Journal of Science and Mathematics Education, 2009
Instruments for evaluating the implementation of inquiry science in K-12 classrooms are necessary if evaluators and researchers are to know the extent to which programs are implemented as intended and the extent to which inquiry science teaching accounts for student learning. For evaluators and researchers to be confident about the quality of…
Descriptors: Evaluators, Elementary Secondary Education, Knowledge Base for Teaching, Measures (Individuals)
Peer reviewed
Dekle, Dawn J.; Leung, Denis H. Y.; Zhu, Min – Psychological Methods, 2008
Across many areas of psychology, concordance is commonly used to measure the (intragroup) agreement in ranking a number of items by a group of judges. Sometimes, however, the judges come from multiple groups, and in those situations, the interest is to measure the concordance between groups, under the assumption that there is some within-group…
Descriptors: Item Response Theory, Statistical Analysis, Psychological Studies, Evaluators
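The between-group measure is the article's own contribution; as background only, here is a sketch of the standard within-group statistic it builds on, Kendall's coefficient of concordance W, computed in Python for a made-up matrix of ranks (judges in rows, items in columns).

```python
# Kendall's W measures intragroup agreement when m judges each rank the
# same n items: W = 1 for identical rankings, W near 0 for no agreement.
# The rank matrix below is invented for illustration.

ranks = [
    [1, 2, 3, 4, 5],   # judge 1's ranking of the 5 items
    [1, 3, 2, 4, 5],   # judge 2
    [2, 1, 3, 5, 4],   # judge 3
]

m = len(ranks)      # number of judges
n = len(ranks[0])   # number of items

# Total rank each item received across judges.
totals = [sum(row[j] for row in ranks) for j in range(n)]

mean_total = m * (n + 1) / 2                      # expected total under no agreement
s = sum((t - mean_total) ** 2 for t in totals)    # squared deviations

# Kendall's W without a tie correction: 12S / (m^2 (n^3 - n)).
w = 12 * s / (m ** 2 * (n ** 3 - n))
print(f"Kendall's W = {w:.3f}")                   # 0.844: strong agreement
```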
Peer reviewed
Cooksy, Leslie J. – American Journal of Evaluation, 2007
All evaluators face the challenge of striving to adhere to the highest possible standards of ethical conduct. Translating the AEA's Guiding Principles and the Joint Committee's Program Evaluation Standards into everyday practice, however, can be a complex, uncertain, and frustrating endeavor. Moreover, acting in an ethical fashion can require…
Descriptors: Program Evaluation, Evaluators, Ethics, Evaluation Methods
Peer reviewed
Lawrenz, Frances; Gullickson, Arlen; Toal, Stacie – American Journal of Evaluation, 2007
Use of evaluation findings is a valued outcome for most evaluators. However, to optimize use, the findings need to be disseminated to potential users in formats that facilitate use of the information. This reflective case narrative uses a national evaluation of a multisite National Science Foundation (NSF) program as the setting for describing the…
Descriptors: Evaluators, Audiences, Strategic Planning, Information Dissemination
Aas, Gro Hanne; Askling, Berit; Dittrich, Karl; Froestad, Wenche; Haug, Peder; Lycke, Kirsten Hofgaard; Moitus, Sirpa; Pyykko, Riitta; Sorskar, Anne Karine – ENQA (European Association for Quality Assurance in Higher Education), 2009
This report is a product of a European Association for Quality Assurance in Higher Education (ENQA) workshop, "Assessing educational quality: Knowledge production and the role of experts," hosted by the Norwegian Agency for Quality Assurance in Education (NOKUT) in Oslo in February 2008. The workshop gathered representatives from higher…
Descriptors: Higher Education, Educational Quality, Quality Control, Workshops
Coalition for Evidence-Based Policy, 2007
The purpose of this Guide is to advise researchers, policymakers, and others on when it is possible to conduct a high-quality randomized controlled trial in education at reduced cost. Well-designed randomized controlled trials are recognized as the gold standard for evaluating the effectiveness of an intervention (i.e., program or practice) in…
Descriptors: Costs, Scores, Data, Research Design
Cuthbert, Marlene – 1984
Because human beings are different and are necessarily subjective, and because evaluation always involves the human factor, many problems arise. The problems may be compounded in third world settings because such settings have been studied less and are less well understood, and because evaluators are often from outside the setting and therefore bring…
Descriptors: Developing Nations, Evaluation Methods, Evaluators, Foreign Countries
Patton, Michael Quinn – 1985
This paper reviews what has been learned about evaluation utilization during the past 20 years. Evaluation utilization is discussed in terms of what is used, who uses evaluation, when evaluation is used, how evaluation is used, where evaluation is used, and why evaluation is used. It is suggested that the personal factor - the interests and…
Descriptors: Evaluation, Evaluation Methods, Evaluation Needs, Evaluation Utilization
Fillos, Rita M.; Manger, Katherine M. – 1984
Brief case studies of three projects are presented to illustrate the steps which open "Wonderful Programs" to evaluation research. The evaluator is provided with the perspective needed to believe the evaluation is worth doing. The project staff is then redirected to the need for a new type of evidence. An ongoing review of the commitment…
Descriptors: Communication Problems, Evaluation Methods, Evaluation Needs, Evaluators
Sherwood-Fabre, Liese – 1986
This paper examines the concepts of program monitoring and program evaluation in the literature, and offers working definitions based on two dimensions of measurement: focus (what questions are addressed) and timing (how often the measures are taken). Focus can be on inputs to the program or outcomes from it; timing can be one-shot or continuous.…
Descriptors: Evaluation Methods, Evaluators, Formative Evaluation, Program Administration
Peer reviewed
Lincoln, Yvonna Sessions – Evaluation Practice, 1991
The various arts and sciences that comprise the field of program evaluation are discussed. It is argued that emphasis on rigor and expressive content has left other aspects of evaluation unexplored. Educational evaluators need to consider what programs mean and how they contribute to understanding. (SLD)
Descriptors: Evaluation Methods, Evaluators, Program Effectiveness, Program Evaluation
Hoffman, Lee McGraw; And Others – 1984
Examples from the evaluation of a program in which data collection systems were developed jointly by the program's staff and evaluators are described. The Louisiana SPUR (Special Plan Upgrading Reading) Project was evaluated by the Louisiana Department of Education Bureau of Evaluation. SPUR involves 63 of the state's 66 public school systems and…
Descriptors: Data Collection, Databases, Elementary Education, Evaluation Methods