Lortie-Forgues, Hugues; Inglis, Matthew – Educational Researcher, 2019
In this response, we first show that Simpson's proposed analysis answers a different and less interesting question than ours. We then justify the choice of prior for our Bayes factors calculations, but we also demonstrate that the substantive conclusions of our article are not substantially affected by varying this choice.
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation

Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue that a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation

Price, Janet; Vincent, Pauline – Nursing Outlook, 1976
Descriptors: Evaluation Criteria, Evaluation Methods, Guidelines, Hypothesis Testing

Sharp, D. E. Ann; And Others – 1985
Hypothesis generation and testing is outlined as an additional domain for program evaluators. Program evaluation involves a thorough analysis of the processes that contribute to change (or a lack of change) among program recipients. This process of change is analyzed in two ways: (1) treating programs as naturally occurring field studies; and (2)…
Descriptors: Adults, Data Analysis, Data Interpretation, Evaluators

Moffitt, Robert – Evaluation Review, 1991
Statistical methods for program evaluation with nonexperimental data are reviewed with emphasis on circumstances in which nonexperimental data are valid. Three solutions are proposed for problems of selection bias, and implications for evaluation design and data collection and analysis are discussed. (SLD)
Descriptors: Bias, Cohort Analysis, Equations (Mathematics), Estimation (Mathematics)

Lee, Ann M.; Holley, Freda M. – 1975
The first author set out to design and secure funding for a hypothesis-based program in a public school setting. The natural history of what happened to that study as it proceeded from design, to funding, to actual implementation, to final reporting serves as the case history of two idealistic evaluators' wildest nightmares. (Author)
Descriptors: Compensatory Education, Elementary Secondary Education, Evaluation Methods, Federal Programs

Echternacht, Gary; Swinton, Spencer – 1979
Title I evaluations using the RMC Model C design depend for their interpretation on the assumption that, in the absence of treatment, the regression of posttest on pretest is linear across the cut score; but there are many instances in which nonlinearities may occur. If one applies the analysis of covariance (Model C analysis), large errors may…
Descriptors: Achievement Gains, Analysis of Covariance, Educational Assessment, Elementary Secondary Education