Showing 1 to 15 of 76 results
Peer reviewed
Steenbergen-Hu, Saiying; Olszewski-Kubilius, Paula – Journal of Advanced Academics, 2016
The article by Davis, Engberg, Epple, Sieg, and Zimmer (2010) represents one of the recent research efforts from economists in evaluating the impact of gifted programs. It can serve as a worked example of the implementation of the regression discontinuity (RD) design method in gifted education research. In this commentary, we first illustrate the…
Descriptors: Special Education, Gifted, Identification, Program Evaluation
Peer reviewed
Gersten, Russell; Jayanthi, Madhavi; Dimino, Joseph – Exceptional Children, 2017
The report of the national response to intervention (RTI) evaluation study, conducted during 2011-2012, was released in November 2015. Anyone who has read the lengthy report can attest to its complexity and the design used in the study. Both these factors can influence the interpretation of the results from this evaluation. In this commentary, we…
Descriptors: Response to Intervention, National Programs, Program Effectiveness, Program Evaluation
Peer reviewed
Greene, Jennifer C.; Lipsey, Mark W.; Schwandt, Thomas A.; Smith, Nick L.; Tharp, Roland G. – New Directions for Evaluation, 2007
Productive dialogue is informed best by multiple and diverse voices. Five seasoned evaluators, representing a range of evaluation perspectives, offer their views in two- to three-page discussant contributions. These individuals were asked to reflect and comment on the previous chapters in the spirit of critical review as a key source of evidence…
Descriptors: Evaluators, Evaluation Methods, Research Design, Inquiry
Peer reviewed
Lam, Tony C. M. – American Journal of Evaluation, 2009
D'Eon et al. concluded that change in performance self-assessment means from before to after a workshop can detect workshop success in their and other situations. In this commentary, their recommendation is refuted by showing that (a) self-assessments with balanced over- and underestimations are still biased and should not be used to evaluate…
Descriptors: Workshops, Success, Self Evaluation (Individuals), Test Bias
Peer reviewed
Green, Judith L.; Skukauskaite, Audra – Educational Researcher, 2008
This response to Slavin's article (2008) explores the issues of transparency, representation, and warranting of claims in Slavin's descriptions of the work of others and his suggestions for program evaluation syntheses. Through contrastive analyses between Slavin's representations of the program evaluation synthesis efforts of five organizations…
Descriptors: Constructivism (Learning), Program Evaluation, Critical Reading, Synthesis
Peer reviewed
Parloff, Morris B. – Journal of Consulting and Clinical Psychology, 1986
Describes problems in implementing placebo controls in psychotherapy including the difficulty of ensuring that therapists and their patients will view both the experimental and placebo treatments as comparably credible. Considers six research issues stemming from the definitional requirement that the placebo control for the critical and specific…
Descriptors: Control Groups, Program Evaluation, Psychotherapy, Research Design
Peer reviewed
Seligman, Clive; Hutton, R. Bruce – Journal of Social Issues, 1981
Explores the problems involved in conducting program evaluation in the field of energy conservation. Highlights similarities between evaluation research and basic research. Examines stages of evaluation research within the context of energy conservation programs and argues that experimental evaluations are desirable and feasible. (Author/MK)
Descriptors: Energy Conservation, Evaluation Methods, Evaluation Needs, Program Evaluation
Schwandt, Thomas A. – 1981
The terms "rigor" and "relevance" most often surface in discussions of methodological adequacy. Assessing epistemological relevance is equivalent to answering the question, "Is this particular research question worth asking at all?" Epistemological rigor refers to the properties of a "researchable" problem.…
Descriptors: Definitions, Epistemology, Ethnography, Evaluation Methods
Peer reviewed
Sherrill, Sam – Evaluation and Program Planning: An International Journal, 1984
It is argued that outcome evaluation should include efforts to identify and measure unintended outcomes. A systems perspective is presented which treats governmental actions as disequilibrating intrusions into reacting systems. Both intended and unintended outcomes of such intrusions can be valued monetarily, in terms of human rights, or both.…
Descriptors: Evaluation Methods, Government Role, Intervention, Program Evaluation
Peer reviewed
Basham, Robert B. – Journal of Consulting and Clinical Psychology, 1986
An examination of the scientific basis of both control group designs and comparative designs in outcome research reveals that comparative designs generally have fewer threats to validity and provide a more efficient means of control for nonspecific treatment factors, suggesting that comparative studies warrant a larger role in the study of…
Descriptors: Comparative Analysis, Control Groups, Program Evaluation, Psychotherapy
Robinson, Sharon E. – Improving Human Performance Quarterly, 1979
Encourages the use of evaluation research rather than process, impact, or comprehensive evaluation; and notes that meaningful evaluation research can occur when the true experiment is combined with longitudinal designs. (JEG)
Descriptors: Evaluation Methods, Formative Evaluation, Institutional Evaluation, Organizational Objectives
Cook, Thomas D. – 1981
National Institute of Education (NIE) research priorities indicate the use of panel measures only during the time children are in Follow Through programs and further seem to indicate that only implementation variables should be measured. This paper raises questions about the desirability of the narrow focus implied by the NIE priorities, the…
Descriptors: Early Childhood Education, Program Evaluation, Research Design, Research Methodology
Bergstrom, Joan M.; Reis, Janet – 1979
This paper provides guidelines for formally evaluating extended-day programs. Extended-day programs are defined as those attended before and after school by children between the ages of 5 and 14. A seven-step evaluation process, in which the practitioner responsible for program administration plays a key role, is outlined and discussed. (Author/RH)
Descriptors: Administrator Responsibility, After School Day Care, Extended School Day, Program Evaluation
Peer reviewed
Heilman, John G. – Evaluation Review, 1980
The choice between experimental research or process-oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation
Peer reviewed
Glass, Norman – Children & Society, 2001
Describes recent developments in United Kingdom politics that have affected development of evidence-based policy for children. Examines the notion of "what works," leading to a suggestion that evaluators be concerned with "what is worth doing" for children. Considers robustness as a guiding principle for evaluation design and…
Descriptors: Evaluation Problems, Foreign Countries, Politics, Program Evaluation