Showing 1 to 15 of 19 results
Peer reviewed
Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
In this response, we first show that Simpson's proposed analysis answers a different and less interesting question than ours. We then justify the choice of prior for our Bayes factor calculations and demonstrate that the substantive conclusions of our article are not substantially affected by varying this choice.
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
Peer reviewed
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue that a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
Peer reviewed
Slavin, Robert E. – Educational Researcher, 1999
Comments on S. Pogrow's suggestion that educational programs should be judged by the degree to which schools using them achieve "surprising scores." Emphasizes the importance of the use of control groups in program evaluation. (SLD)
Descriptors: Control Groups, Educational Research, Evaluation Methods, Experimental Groups
Peer reviewed
Rist, Ray C. – Educational Researcher, 1980
Federal support for answers to the question "What is really going on out there?" has increased interest in the use of ethnography as a research method for evaluation of educational programs. Pros and cons of the use of ethnography for educational research are discussed. (PR)
Descriptors: Educational Research, Ethnography, Evaluation Methods, Field Studies
Peer reviewed
Pogrow, Stanley – Educational Researcher, 1998
Explores issues in evaluating exemplary programs, including the measurement of the degree to which the program is exemplary or promising. Suggests that comparison (control) group analysis can mislead researchers about the effectiveness of a program and suggests the use of gain scores to identify exemplary programs. (SLD)
Descriptors: Achievement Gains, Control Groups, Evaluation Methods, Experimental Groups
Peer reviewed
Evans, John W. – Educational Researcher, 1974
Although important progress has been made in educational evaluation during the past decade, new problems believed to be more serious threaten the progress that has been made. These problems are said to be a mixture of logistics and politics and are in large part an outgrowth of the increasing pluralism of American society. (Author/AM)
Descriptors: Educational Assessment, Evaluation Methods, Evaluation Needs, Measurement Techniques
Peer reviewed
Boruch, Robert F.; Cordray, David S. – Educational Researcher, 1981
Answers criticisms by Mary Anne Bunda, Ernest House, and Mary Kennedy. Discusses, among other issues, legal authorization for the use of higher quality evaluation designs and the use of evaluation research results. (GC)
Descriptors: Compliance (Legal), Elementary Secondary Education, Evaluation Methods, Federal Programs
Peer reviewed
Stuart, Elizabeth A. – Educational Researcher, 2007
Education researchers, practitioners, and policymakers alike are committed to identifying interventions that teach students more effectively. Increased emphasis on evaluation and accountability has heightened the desire for sound evaluations of these interventions; at the same time, school-level data have become increasingly available. This article…
Descriptors: Research Methodology, Computation, Causal Models, Intervention
Peer reviewed
Popham, W. James; Carlson, Dale – Educational Researcher, 1977
Six deficits of the model are discussed: disparity in proponent prowess, fallible arbiters, excessive confidence in the model's potency, difficulties in framing the proposition in a manner amenable to adversary resolution, susceptibility to use as "a cat's-paw for biased decision makers," and excessive costs. (Author/JM)
Descriptors: Accountability, Case Studies, Educational Programs, Evaluation Criteria
Peer reviewed
Carter, Launor – Educational Researcher, 1977
Presents a case study of how the review and clearance procedures are operating for evaluation instruments used in collecting information in connection with studies for the U.S. Office of Education. The review procedures are defective: they consume too much time, they are very costly, and the results are unproductive. (Author/JM)
Descriptors: Administrative Policy, Educational Programs, Evaluation Methods, Evaluation Needs
Peer reviewed
Bunda, Mary Anne – Educational Researcher, 1981
Two issues in the Holtzman Project bear examination and reconsideration: (1) the recommendation for the use of field experiments as an exclusive "authorized" evaluation design, and (2) the focus on the needs of the federal client and the apparent lack of concern for clients at the state and local levels. (GC)
Descriptors: Elementary Secondary Education, Evaluation Methods, Federal Programs, Information Utilization
Peer reviewed
Boruch, Robert F.; And Others – Educational Researcher, 1981
Summarizes a Northwestern University report reviewing evaluation practices for federally supported educational programs at the national, state, and local levels. Considers: (1) evaluation purposes and methods; (2) evaluators' capabilities; (3) use of evaluation results; and (4) ways that evaluation procedures/practices can be…
Descriptors: Elementary Secondary Education, Evaluation Methods, Evaluators, Federal Legislation
Peer reviewed
Dickinson, David K. – Educational Researcher, 2003
Responds to a critique of this author's article on consideration of purpose and intended use when making evaluations of assessments, commenting on three areas from that critique that warrant further discussion: considering the uses of assessment tools; global quality versus content specificity; and creating a toolkit of measures for examining…
Descriptors: Academic Standards, Early Childhood Education, Emergent Literacy, Evaluation Methods
Peer reviewed
House, Ernest R. – Educational Researcher, 1990
Discusses the following aspects of educational evaluation: (1) structural changes; (2) conceptual changes; (3) mixed methods and unraveling consensus; (4) utilization of findings; (5) the role of values; and (6) the role of politics. Discusses how evaluation moved from monolithic to pluralistic, reflecting the change from consensus to pluralism in…
Descriptors: Concept Formation, Cultural Pluralism, Evaluation Methods, Evaluation Problems
Peer reviewed
Lambert, Richard G. – Educational Researcher, 2003
Notes specific concerns with Dickinson's (2002) assertion that the processes used to measure the quality of out-of-home care for young children have begun to lag behind recent developments in the field and that there is a lag in the National Association for the Education of Young Children's accreditation standards. Questions the usefulness of…
Descriptors: Academic Standards, Early Childhood Education, Emergent Literacy, Evaluation Methods