Showing 1 to 15 of 33 results
Peer reviewed
Merrall, Elizabeth L. C.; Dhami, Mandeep K.; Bird, Sheila M. – Evaluation Review, 2010
The determinants of sentencing are of much interest in criminal justice and legal research. Understanding the determinants of sentencing decisions is important for ensuring transparent, consistent, and justifiable sentencing practice that adheres to the goals of sentencing, such as the punishment, rehabilitation, deterrence, and incapacitation of…
Descriptors: Research Design, Research Methodology, Court Litigation, Social Justice
Peer reviewed
Cook, Thomas J. – Evaluation Review, 1983
Comments on the delayed treatment design are offered as an amendment to the Heath et al. discussion by pointing out two important assumptions of their recommended design and tracing through the implications of those assumptions for the interpretation of treatment effects. (Author/PN)
Descriptors: Age, Evaluation Methods, Power (Statistics), Research Design
Peer reviewed
Sawyer, Darwin O.; Maney, Ann – Evaluation Review, 1981
The impact of a legal reform on child abuse reporting in the District of Columbia is evaluated. Results suggest that reporting legislation may be useful in reform efforts by extending the requirements to new groups and precipitating efforts to implement legal requirements of those already required to report. (Author/GK)
Descriptors: Child Abuse, Evaluation Methods, Legal Responsibility, Legislation
Peer reviewed
Varnell, Sherri P.; Murray, David M.; Baker, William L. – Evaluation Review, 2001
Studied the analytic problems associated with a design in which one identifiable group is allocated to each treatment condition and members of these groups are measured to assess the intervention. Results from a simulation study underscore the analytic problems associated with these quasi-experimental or group-randomized designs. (SLD)
Descriptors: Data Analysis, Evaluation Methods, Groups, Intervention
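The analytic problem Varnell et al. describe arises because members of an intact group share variance, so observations are not independent. A minimal sketch of the standard variance-inflation ("design effect") adjustment for intraclass correlation; the formula is the textbook one, not code from the article, and the numbers are invented:

```python
def design_effect(cluster_size, icc):
    """Variance inflation from assigning intact groups: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(total_n, cluster_size, icc):
    """Number of independent observations the clustered sample is worth."""
    return total_n / design_effect(cluster_size, icc)

# 20 groups of 50 members with a modest ICC of 0.05:
# 1,000 measured individuals carry the information of only ~290 independent ones.
print(round(effective_sample_size(1000, 50, 0.05)))  # → 290
```

Even a small ICC sharply reduces effective sample size when clusters are large, which is why analyzing group-randomized data as if individuals were randomized overstates precision.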
Peer reviewed
Bloom, Howard S. – Evaluation Review, 2002
Introduces a new approach for measuring the impact of whole-school reforms. The approach, based on "short" interrupted time-series analysis, is explained, its statistical procedures are outlined, and its use in the evaluation of a major whole-school reform, Accelerated Schools, is described (H. Bloom and others, 2001). (SLD)
Descriptors: Educational Change, Elementary Education, Evaluation Methods, Research Design
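The "short" interrupted time-series logic can be illustrated with a toy calculation: fit a trend to the few pre-reform years of a school's outcome, project it forward, and read the deviation of the observed post-reform value from the projection as the impact estimate. This is a sketch of the general idea, not Bloom's procedure, and the scores below are invented:

```python
def fit_linear_trend(xs, ys):
    """Ordinary least-squares intercept and slope for a short baseline series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - slope * mx, slope

# Three pre-reform years of mean test scores (hypothetical data)
years, scores = [1, 2, 3], [50.0, 51.5, 53.0]
intercept, slope = fit_linear_trend(years, scores)

projected_year4 = intercept + slope * 4   # what the baseline trend predicts absent reform
observed_year4 = 57.0                     # score actually observed after the reform
impact_estimate = observed_year4 - projected_year4
print(impact_estimate)  # → 2.5
```

With so few baseline points the projection is fragile, which is why the article's statistical procedures for quantifying uncertainty in short series matter.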
Peer reviewed
Hedrick, Terry E.; Shipman, Stephanie L. – Evaluation Review, 1988
Changes made in 1981 to the Aid to Families with Dependent Children (AFDC) program under the Omnibus Budget Reconciliation Act were evaluated. Multiple quasi-experimental designs (interrupted time series, non-equivalent comparison groups, and simple pre-post designs) used to address evaluation questions illustrate the issues faced by evaluators in…
Descriptors: Evaluation Methods, Program Evaluation, Quasiexperimental Design, Research Design
Peer reviewed
Heath, Linda; And Others – Evaluation Review, 1982
Program evaluators face the problem of finding ways to maximize the internal validity and inferential power of research designs while still assessing the long-term effects of social programs. A multimethodological research strategy combining a delayed control group true experiment with a multiple time series and switching replications design…
Descriptors: Control Groups, Evaluation Methods, Intervention, Program Evaluation
Peer reviewed
Bloom, Howard S. – Evaluation Review, 1995
A simple way to assess the statistical power of experimental designs, based on the concept of a minimum detectable effect, is described. How to compute minimum detectable effects and how to apply the method to the assessment of alternative experimental designs are illustrated. (SLD)
Descriptors: Estimation (Mathematics), Evaluation Methods, Experiments, Power (Statistics)
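Under the usual normal approximation, a minimum detectable effect is the standard error of the impact estimate scaled by a multiplier that depends on the significance level and desired power (about 2.8 for a two-tailed 5% test at 80% power). A sketch of that computation; this is the standard formula, not code from the article, and the balanced two-arm setup is an assumed example:

```python
import math
from statistics import NormalDist

def minimum_detectable_effect(se, alpha=0.05, power=0.80, two_tailed=True):
    """Smallest true effect detectable at the given significance level and power
    (normal approximation): MDE = (z_alpha + z_power) * SE."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2) if two_tailed else z.inv_cdf(1 - alpha)
    z_power = z.inv_cdf(power)
    return (z_alpha + z_power) * se

def se_of_difference(sigma, n_per_arm):
    """Standard error of a difference in means for a balanced two-arm experiment."""
    return sigma * math.sqrt(2 / n_per_arm)

# With 100 units per arm and outcomes in standard-deviation units,
# the smallest reliably detectable effect is about 0.4 SD.
print(round(minimum_detectable_effect(se_of_difference(1.0, 100)), 2))  # → 0.4
```

Comparing designs then reduces to comparing their standard errors: whichever design yields the smaller SE for the same cost has the smaller minimum detectable effect.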
Peer reviewed
Chelimsky, Eleanor – Evaluation Review, 1985
Four aspects of the relationship between auditing and evaluation in their approaches to program assessment are examined: (1) their different origins; (2) the definitions and purposes of both, and the questions they seek to answer; (3) contrasting viewpoints and emphases of auditors and evaluators; and (4) commonalities of interest and potential…
Descriptors: Accountability, Accounting, Data Analysis, Evaluation Methods
Peer reviewed
Mandell, Marvin B.; Bretschneider, Stuart I. – Evaluation Review, 1984
The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
Descriptors: Data Analysis, Evaluation Methods, Graphs, Intervention
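The intuition behind using smoothing to identify the intervention component can be sketched simply: smooth the baseline series, carry its last smoothed value forward as a forecast, and inspect the post-intervention residuals; large, roughly constant residuals point to a step (level-shift) specification rather than a gradual or transient one. This is a minimal illustration of that diagnostic logic, not the authors' procedure, with invented monthly counts:

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: each value is an exponentially
    weighted average of the observations up to that point."""
    smoothed = [series[0]]
    for obs in series[1:]:
        smoothed.append(alpha * obs + (1 - alpha) * smoothed[-1])
    return smoothed

# Hypothetical monthly counts with an abrupt level shift at the intervention
pre = [20, 22, 19, 21, 20, 21]     # baseline phase
post = [29, 31, 30, 28, 30, 29]    # post-intervention phase

forecast = exponential_smoothing(pre)[-1]     # carry-forward forecast from baseline
residuals = [x - forecast for x in post]
mean_shift = sum(residuals) / len(residuals)

# Residuals hover around +9 with no decay: evidence for a step intervention term.
print(round(mean_shift, 1))  # → 9.0
```

This plays the same exploratory role for the intervention term that autocorrelation plots play for the noise model: it suggests a functional form before formal estimation.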
Peer reviewed
St. Pierre, Robert G. – Evaluation Review, 1980
Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)
Descriptors: Evaluation Methods, Field Studies, Influences, Longitudinal Studies
Peer reviewed
Graham, John W.; And Others – Evaluation Review, 1984
A method is presented that allows multivariate comparability while making only minimal restrictions on randomization. This procedure is demonstrated in the context of assigning 63 aggregated units (schools) to 28 experimental and control conditions. Good comparability of groups for all primary main effects and interactions was verified for 15…
Descriptors: Drug Abuse, Evaluation Methods, Factor Analysis, Multivariate Analysis
Peer reviewed
Chen, Huey-Tsyh; Rossi, Peter H. – Evaluation Review, 1983
The use of theoretical models in impact assessment can heighten the power of experimental designs and compensate for some deficiencies of quasi-experimental designs. Theoretical models of implementation processes are examined; it is argued that these processes are a major obstacle to fully effective programs. (Author/CM)
Descriptors: Evaluation Criteria, Evaluation Methods, Models, Program Evaluation
Peer reviewed
Heilman, John G. – Evaluation Review, 1980
The choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation
Peer reviewed
Willemain, Thomas R.; Hartunian, Nelson S. – Evaluation Review, 1982
Two methods for dividing an interrupted time-series study between baseline and experimental phases when study resources are limited are compared. In fixed designs, the baseline duration is predetermined. In flexible designs the baseline duration is contingent on remaining resources and the match of results to prior expectations of the evaluator.…
Descriptors: Data Collection, Evaluation Methods, Evaluators, Research Design