Showing 1 to 15 of 47 results
Peer reviewed
Lars König; Steffen Zitzmann; Tim Fütterer; Diego G. Campos; Ronny Scherer; Martin Hecht – Research Synthesis Methods, 2024
Several AI-aided screening tools have emerged to tackle the ever-expanding body of literature. These tools employ active learning, where algorithms sort abstracts based on human feedback. However, researchers using these tools face a crucial dilemma: When should they stop screening without knowing the proportion of relevant studies? Although…
Descriptors: Artificial Intelligence, Psychological Studies, Researchers, Screening Tests
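The screening workflow this abstract describes can be sketched as a simple loop: a model scores abstracts, a human labels them in score order, and screening stops after a run of consecutive irrelevant records. Everything below (the scores, the ranking, and the consecutive-miss stopping rule) is an illustrative assumption, not the method evaluated in the article.

```python
def screen(records, is_relevant, stop_after_misses=5):
    """Label records from highest to lowest model score; stop once
    `stop_after_misses` consecutive records are judged irrelevant."""
    found, misses = [], 0
    for record_id, _score in sorted(records, key=lambda r: r[1], reverse=True):
        if is_relevant(record_id):   # the human reviewer's judgment
            found.append(record_id)
            misses = 0
        else:
            misses += 1
            if misses >= stop_after_misses:
                break                # stopping rule fires
    return found

# Toy corpus: (id, model relevance score); ids 0-2 are truly relevant.
records = [(0, 0.9), (1, 0.8), (2, 0.7), (3, 0.2), (4, 0.1),
           (5, 0.05), (6, 0.04), (7, 0.03), (8, 0.02)]
found = screen(records, lambda i: i in {0, 1, 2})
print(found)  # all three relevant records are found before the stop rule fires
```

The dilemma the abstract raises is visible even in this sketch: if a relevant record had been ranked below the run of misses, the stopping rule would have silently excluded it, and the reviewer has no way to know the true proportion of relevant studies remaining.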
Peer reviewed
Grit Laudel – Research Evaluation, 2024
Researchers' notions of research quality depend on their field of research. Previous studies have shown that field-specific assessment criteria exist but could explain neither why these specific criteria and not others exist, nor how criteria are used in specific assessment situations. To give initial answers to these questions, formal assessment…
Descriptors: Researchers, Experimenter Characteristics, Intellectual Disciplines, Quality Circles
Peer reviewed
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Peer reviewed
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
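The design effect the abstract refers to can be illustrated with Kish's standard approximation for cluster sampling. The formula is well established; the numbers below (cluster size 25, intraclass correlation 0.2, nominal n = 2000) are hypothetical and not drawn from the article.

```python
def design_effect(cluster_size, icc):
    """Kish approximation: DEFF = 1 + (m - 1) * rho for average
    cluster size m and intraclass correlation rho."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """Nominal sample size deflated by the design effect."""
    return n / design_effect(cluster_size, icc)

deff = design_effect(25, 0.2)                 # 1 + 24 * 0.2 = 5.8
n_eff = effective_sample_size(2000, 25, 0.2)
print(deff, round(n_eff))  # effective n is about 345, far below the nominal 2000
```

Ignoring a design effect of this size means treating roughly 345 independent observations as if they were 2000, which is exactly the underestimated sampling error the abstract warns about.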
Peer reviewed
Shukla, Archana; Chaudhary, Banshi D. – Education and Information Technologies, 2014
The quality of evaluation of essay-type answer books involving multiple evaluators, for courses with large enrollments, is likely to be affected by heterogeneity in the experience, expertise, and maturity of the evaluators. In this paper, we present a strategy to detect anomalies in the evaluation of essay-type answers by multiple evaluators based…
Descriptors: Essays, Grading, Educational Strategies, Educational Quality
Peer reviewed
PDF on ERIC
Msila, Vuyisile; Setlhako, Angeline – Universal Journal of Educational Research, 2013
Carol Weiss did much to enhance the role of evaluation in her writings. Her work shows evaluators what affects their roles as they evaluate programs. Furthermore, her theory of change spells out the complexities involved in program evaluation. There are various processes involved in the evaluation of programs. The paper looks at some of the…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Research, Research Methodology
Peer reviewed
Cooksy, Leslie J.; Mark, Melvin M. – American Journal of Evaluation, 2012
Attention to evaluation quality is commonplace, even if sometimes implicit. Drawing on her 2010 Presidential Address to the American Evaluation Association, Leslie Cooksy suggests that evaluation quality depends, at least in part, on the intersection of three factors: (a) evaluator competency, (b) aspects of the evaluation environment or context,…
Descriptors: Competence, Context Effect, Educational Resources, Educational Quality
Peer reviewed
Azzam, Tarek – American Journal of Evaluation, 2011
This study addresses the central question "How do evaluators' background characteristics relate to their evaluation design choices?" Evaluators were provided with a fictitious description of a school-based program and asked to design an evaluation of that program. Relevant background characteristics such as level of experience,…
Descriptors: Evaluators, Program Evaluation, Evaluation Utilization, Evaluation Methods
Peer reviewed
PDF on ERIC
Noland, Carey M. – Journal of Research Practice, 2012
When conducting research on sensitive topics, it is challenging to use new methods of data collection given the apprehensions of Institutional Review Boards (IRBs). This is especially worrying because sensitive topics of research often require novel approaches. In this article a brief personal history of navigating the IRB process for conducting…
Descriptors: Communication Research, Sexuality, Social Science Research, Evaluation Methods
Peer reviewed
Scriven, Michael – Journal of MultiDisciplinary Evaluation, 2011
In this paper, the author considers certain aspects of the problem of obtaining unbiased information about the merits of a program or product, whether for purposes of decision making or for accountability. The evaluation of personnel, as well as the evaluation of proposals and evaluations, generally involves a different set of problems than those…
Descriptors: Program Evaluation, Evaluation Methods, Test Bias, Personnel Evaluation
Peer reviewed
Azzam, Tarek – American Journal of Evaluation, 2010
A simulation study was conducted in an attempt to examine how evaluators modify their evaluation design in response to differing stakeholder groups. In this study, evaluators were provided with a fictitious description of a school-based program. They were then asked to design an evaluation of the program. After the evaluation design decisions were…
Descriptors: Feedback (Response), Evaluators, Program Evaluation, Simulation
Peer reviewed
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Peer reviewed
Hofer, Kerry G. – Contemporary Issues in Early Childhood, 2010
This project involved examining the most widely used instrument designed to evaluate the quality of early learning environments, the Early Childhood Environment Rating Scale-Revised Edition (ECERS-R). There are many aspects related to the way that the ECERS-R is used in practice that can vary from one observation to the next. The method in which…
Descriptors: Rating Scales, Measurement Techniques, Program Validation, Experimenter Characteristics
Peer reviewed
Eddy, Rebecca M.; Berry, Tiffany – New Directions for Evaluation, 2008
The field of evaluation faces a number of serious challenges in light of No Child Left Behind legislation, among them feasibility, resources, and blurring lines among research, evaluation, and assessment. At the same time, these challenges open the door for opportunities in evaluation. Now more than ever, the expertise of evaluators is needed and…
Descriptors: Evaluators, Federal Legislation, Evaluation Methods, Evaluation Research
Peer reviewed
Toal, Stacie A.; King, Jean A.; Johnson, Kelli; Lawrenz, Frances – Evaluation and Program Planning, 2009
As the number of large federal programs increases, so, too, does the need for a more complete understanding of how to conduct evaluations of such complex programs. The research literature has documented the benefits of stakeholder participation in smaller-scale program evaluations. However, given the scope and diversity of projects in multi-site…
Descriptors: Evaluators, Program Evaluation, Federal Programs, Stakeholders