Showing 1 to 15 of 66 results
Peer reviewed
Edmonstone, John – Action Learning: Research and Practice, 2015
The paper examines the benefits claimed for action learning at individual, organisational and inter-organisational levels. It goes on to identify both generic difficulties in evaluating development programmes and action learning specifically. The distinction between formative and summative evaluation is considered and a summative evaluation…
Descriptors: Experiential Learning, Adult Education, Program Evaluation, Program Effectiveness
Peer reviewed
Stempler, Amy F.; Polger, Mark Aaron – Public Services Quarterly, 2013
Signage represents more than directions or policies; it is informational, promotional, and sets the tone of the environment. To be effective, signage must be consistent, concise, and free of jargon and punitive language. An efficient assessment of signage should include a complete inventory of existing signage, including an analysis of the types…
Descriptors: Best Practices, Signs, Guidelines, Library Policy
Peer reviewed
Chelimsky, Eleanor – American Journal of Evaluation, 2013
In this paper, the author argues that evaluation theory and practice interact insufficiently today, even though early evaluation theorists expected them to be closely intertwined. She views this limited connection as the result of differing interests on the part of theorists and practitioners, differing frequencies of dissemination, and differing…
Descriptors: Evaluation Methods, Educational Practices, Educational Theories, Theory Practice Relationship
Peer reviewed
Gegenfurtner, Andreas; Siewiorek, Anna; Lehtinen, Erno; Saljo, Roger – Vocations and Learning, 2013
Understanding how best to assess expertise, the situational variations of expertise, and the distinctive qualities of expertise that arise from particular workplace experiences presents an important challenge. Certainly, at this time, there is much interest in identifying standard occupational measures and competences, which are not well aligned…
Descriptors: Workplace Learning, Research Methodology, Expertise, Educational Practices
Peer reviewed
PDF on ERIC
Beltrán-Palanques, Vicente – Research-publishing.net, 2016
Assessing pragmatic knowledge in the instructed setting is seen as a complex but necessary task, which requires the design of appropriate research methodologies to examine pragmatic performance. This study discusses the use of two different research methodologies, namely those of Discourse Completion Tests/Tasks (DCTs) and verbal reports. Research…
Descriptors: Pragmatics, Second Language Learning, Second Language Instruction, Trust (Psychology)
Peer reviewed
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Peer reviewed
Storberg-Walker, Julia – Human Resource Development Review, 2012
This "Instructor's Corner" describes a step forward on the journey to write, review, and publish high-quality qualitative research manuscripts. This article examines two existing perspectives on generating high-quality qualitative manuscripts and then compares and contrasts the different elements of each. First, an overview of Rocco's (2010) eight…
Descriptors: Qualitative Research, Research Methodology, Faculty Publishing, Writing for Publication
Peer reviewed
Rhodes, William – Evaluation Review, 2012
Research synthesis of evaluation findings is a multistep process. An investigator identifies a research question, acquires the relevant literature, codes findings from that literature, and analyzes the coded data to estimate the average treatment effect and its distribution in a population of interest. The process of estimating the average…
Descriptors: Social Sciences, Regression (Statistics), Meta Analysis, Models
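The pooling step the Rhodes abstract refers to is often an inverse-variance weighted average of the coded effects. A minimal Python sketch of that step, using made-up effect sizes and variances purely for illustration (nothing here is taken from the article itself):

    import math

    # Hypothetical coded findings from four studies (illustrative values only)
    effects   = [0.30, 0.12, 0.45, 0.22]   # effect size estimate per study
    variances = [0.02, 0.05, 0.04, 0.03]   # sampling variance of each estimate

    # Fixed-effect model: weight each estimate by the inverse of its variance
    weights = [1.0 / v for v in variances]
    pooled  = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se      = math.sqrt(1.0 / sum(weights))  # standard error of the pooled effect

    print(f"pooled effect = {pooled:.3f}, "
          f"95% CI = [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")

A random-effects model would additionally estimate between-study variance, which is closer to the "distribution in a population of interest" that the abstract mentions.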
Peer reviewed
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff – Career and Technical Education Research, 2012
Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…
Descriptors: Vocational Education, Effect Size, Intervals, Self Esteem
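As a concrete illustration of the confidence intervals and effect sizes the abstract describes, the following sketch computes Cohen's d and a 95% CI for a mean difference between two groups. The scores are invented, and the t critical value (about 2.145 for 14 degrees of freedom) is hard-coded to keep the example dependency-free:

    import math
    import statistics

    # Invented scores for two groups (illustration only)
    group_a = [72, 75, 81, 68, 77, 74, 79, 70]
    group_b = [65, 70, 72, 61, 69, 73, 66, 68]

    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b   = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b       = len(group_a), len(group_b)

    # Pooled standard deviation and Cohen's d (standardized mean difference)
    sd_pooled = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    d = (mean_a - mean_b) / sd_pooled

    # 95% confidence interval for the raw mean difference
    se_diff = sd_pooled * math.sqrt(1 / n_a + 1 / n_b)
    t_crit  = 2.145  # two-tailed critical value, df = 14
    diff    = mean_a - mean_b
    print(f"Cohen's d = {d:.2f}, 95% CI for the difference = "
          f"({diff - t_crit * se_diff:.2f}, {diff + t_crit * se_diff:.2f})")

The interval conveys the precision the abstract refers to: a wide interval signals an imprecise estimate even when the point effect size looks large.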
Peer reviewed
Cooksy, Leslie J.; Mark, Melvin M. – American Journal of Evaluation, 2012
Attention to evaluation quality is commonplace, even if sometimes implicit. Drawing on her 2010 Presidential Address to the American Evaluation Association, Leslie Cooksy suggests that evaluation quality depends, at least in part, on the intersection of three factors: (a) evaluator competency, (b) aspects of the evaluation environment or context,…
Descriptors: Competence, Context Effect, Educational Resources, Educational Quality
Peer reviewed
Salthouse, Timothy A. – Psychological Bulletin, 2011
The commentaries on my article contain a number of points with which I disagree but also several with which I agree. For example, I continue to believe that the existence of many cases in which between-person variability does not increase with age indicates that greater variance with increased age is not inevitable among healthy individuals up to…
Descriptors: Cognitive Processes, Inferences, Research Methodology, Data Analysis
Peer reviewed
Gofine, Miriam – Writing Center Journal, 2012
Today, nearly forty years after writing centers first began to proliferate, it is worthwhile to reflect on the themes that emerge from empirical material surrounding writing center assessments, for reflecting on these themes may help administrators to refine current assessment practices and scholars to redirect their research. The goal of this…
Descriptors: Laboratories, Educational Research, Research Methodology, Research Needs
Peer reviewed
Marsden, Emma; Torgerson, Carole J. – Oxford Review of Education, 2012
This article provides two illustrations of some of the factors that can influence findings from pre- and post-test research designs in evaluation studies, including regression to the mean (RTM), maturation, history and test effects. The first illustration involves a re-analysis of data from a study by Marsden (2004), in which pre-test scores are…
Descriptors: Quasiexperimental Design, Research Methodology, Research Design, Pretests Posttests
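Regression to the mean, the first of the threats the abstract lists, is easy to demonstrate with a short simulation: a group selected for low pre-test scores shows a higher post-test mean even though no treatment is applied, because part of each extreme score is measurement noise. The numbers below are simulated and have no connection to the Marsden (2004) data:

    import random

    random.seed(0)
    true_ability = [random.gauss(50, 10) for _ in range(10_000)]
    pre_test  = [a + random.gauss(0, 5) for a in true_ability]  # noisy measurement
    post_test = [a + random.gauss(0, 5) for a in true_ability]  # second noisy measurement

    # Select the lowest-scoring third on the pre-test, as an intake screen might
    cutoff   = sorted(pre_test)[len(pre_test) // 3]
    selected = [i for i, score in enumerate(pre_test) if score <= cutoff]

    mean_pre  = sum(pre_test[i] for i in selected) / len(selected)
    mean_post = sum(post_test[i] for i in selected) / len(selected)
    print(f"selected group: pre-test mean = {mean_pre:.1f}, post-test mean = {mean_post:.1f}")
    # The post-test mean drifts back toward the population mean of 50 with no intervention.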
Peer reviewed
Coryn, Chris L. S.; Noakes, Lindsay A.; Westine, Carl D.; Schroter, Daniela C. – American Journal of Evaluation, 2011
Although the general conceptual basis appeared far earlier, theory-driven evaluation came to prominence only a few decades ago with the appearance of Chen's 1990 book "Theory-Driven Evaluations." Since that time, the approach has attracted many supporters as well as detractors. In this paper, 45 cases of theory-driven evaluations, published over a…
Descriptors: Evidence, Program Evaluation, Educational Practices, Literature Reviews
Peer reviewed
Guerci, Marco; Vinante, Marco – Journal of European Industrial Training, 2011
Purpose: In recent years, the literature on program evaluation has examined multi-stakeholder evaluation, but training evaluation models and practices have not generally taken this problem into account. The aim of this paper is to fill this gap. Design/methodology/approach: This study identifies intersections between methodologies and approaches…
Descriptors: Evaluation Needs, Program Evaluation, Foreign Countries, Evaluation Methods