Showing all 9 results
Peer reviewed
Mark, Melvin M. – New Directions for Program Evaluation, 1986
Several validity typologies are reviewed and then integrated. The integrated framework is used to contrast the positions of Campbell and Cronbach. The current practice and logic of quasi-experimentation are critiqued, and expansions beyond the primary focus of dominant validity typologies are suggested. (BS)
Descriptors: Evaluative Thinking, Generalization, Program Evaluation, Quasiexperimental Design
Peer reviewed
Cordray, David S. – New Directions for Program Evaluation, 1986
The role of human judgment in the development and synthesis of evidence has not been adequately developed or acknowledged within quasi-experimental analysis. Corrective solutions need to confront the fact that causal analysis within complex environments will require a more active assessment that entails reasoning and statistical modeling.…
Descriptors: Evaluative Thinking, Models, Program Effectiveness, Program Evaluation
Peer reviewed
Shadish, William R., Jr.; And Others – New Directions for Program Evaluation, 1986
Since no defensible option for performing a task within quasi-experimentation is usually unbiased, it is desirable to select several options that reflect biases in different directions. The benefits of applying a critical multiplism approach to causal hypotheses, group nonequivalence, and units of analysis in quasi-experimentation are discussed.…
Descriptors: Bias, Matched Groups, Program Evaluation, Quasiexperimental Design
Peer reviewed
Campbell, Donald T. – New Directions for Program Evaluation, 1986
Confusion about the meaning of validity in quasi-experimental research can be addressed by carefully relabeling types of validity. Internal validity can more aptly be termed "local molar causal validity." More tentatively, the "principle of proximal similarity" can be substituted for the concept of external validity. (Author)
Descriptors: Definitions, Quasiexperimental Design, Sampling, Social Science Research
Peer reviewed
Reichardt, Charles; Gollob, Harry – New Directions for Program Evaluation, 1986
Causal models often omit variables that should be included, use variables that are measured fallibly, and ignore time lags. Such practices can lead to severely biased estimates of effects. The discussion explains these biases and shows how to take them into account. (Author)
Descriptors: Effect Size, Error of Measurement, High Schools, Mathematical Models
Peer reviewed
Lipsey, Mark W. – New Directions for Program Evaluation, 1993
Explores the role of theory in strengthening causal interpretations in nonexperimental research. Evaluators must conduct theory-driven research, concentrating on "small theory": a focus on explaining the processes specific to the program being evaluated. Theory-guided treatment research must be programmatic and…
Descriptors: Causal Models, Effect Size, Evaluators, Generalization
Peer reviewed
McCleary, Richard; Riggs, James E. – New Directions for Program Evaluation, 1982
Time series analysis is applied to assess the temporary and permanent impact of the 1975 Australian Family Law Act on the number of divorces. The application and construct validity of the model are examined. (Author/PN)
Descriptors: Court Litigation, Demography, Divorce, Evaluation Methods
Peer reviewed
Conrad, Kendon J., Ed. – New Directions for Program Evaluation, 1994
The nine articles in this theme issue stem from a project on alcohol and drug abuse that involved 14 projects, 10 of which began as randomized clinical trials. The papers describe implementation problems associated with experimentation in field research and the resulting focus on ensuring internal validity. (SLD)
Descriptors: Alcohol Abuse, Drug Abuse, Evaluation Methods, Experiments
Peer reviewed
Bickman, Leonard – New Directions for Program Evaluation, 1985
This chapter describes the implementation of, and lessons learned from, three field experiments in education. All three evaluation designs used randomized assignment. Results showed that, even under very adverse and unstable conditions, randomized designs can be maintained. (LMO)
Descriptors: Attrition (Research Studies), Educational Assessment, Elementary Secondary Education, Evaluation Methods