Showing all 7 results
Peer reviewed
Mark, Melvin M. – New Directions for Program Evaluation, 1986
Several validity typologies are reviewed and then integrated. The integrated framework is used to contrast the positions of Campbell and Cronbach. The current practice and logic of quasi-experimentation are critiqued, and expansions beyond the primary focus of dominant validity typologies are suggested. (BS)
Descriptors: Evaluative Thinking, Generalization, Program Evaluation, Quasiexperimental Design
Peer reviewed
Cordray, David S. – New Directions for Program Evaluation, 1986
The role of human judgment in the development and synthesis of evidence has not been adequately developed or acknowledged within quasi-experimental analysis. Corrective solutions need to confront the fact that causal analysis within complex environments will require a more active assessment that entails reasoning and statistical modeling.…
Descriptors: Evaluative Thinking, Models, Program Effectiveness, Program Evaluation
Peer reviewed
Shadish, William R., Jr.; And Others – New Directions for Program Evaluation, 1986
Because, in most cases, no defensible option for performing a task within quasi-experimentation is unbiased, it is desirable to select several options that reflect biases in different directions. The benefits of applying a critical multiplism approach to causal hypotheses, group nonequivalence, and units of analysis in quasi-experimentation are discussed.…
Descriptors: Bias, Matched Groups, Program Evaluation, Quasiexperimental Design
Peer reviewed
Campbell, Donald T. – New Directions for Program Evaluation, 1986
Confusion about the meaning of validity in quasi-experimental research can be addressed by carefully relabeling types of validity. Internal validity can more aptly be termed "local molar causal validity." More tentatively, the "principle of proximal similarity" can be substituted for the concept of external validity. (Author)
Descriptors: Definitions, Quasiexperimental Design, Sampling, Social Science Research
Peer reviewed
Rindskopf, David – New Directions for Program Evaluation, 1986
Modeling the process by which participants are selected into groups, rather than adjusting for preexisting group differences, provides the basis for several new approaches to the analysis of data from nonrandomized studies. Econometric approaches, the propensity scores approach, and the relative assignment variable approach to the modeling of…
Descriptors: Effect Size, Experimental Groups, Intelligence Quotient, Mathematical Models
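To make the selection-modeling idea in the Rindskopf abstract concrete, here is a minimal, hypothetical sketch (not the article's own analysis): the probability of selection into the treatment group is modeled from an observed covariate, and the resulting propensity scores are used to reweight the comparison. Data are simulated; the covariate, effect size, and use of scikit-learn are assumptions for illustration only.

```python
# Hypothetical sketch of modeling selection into groups (propensity scores)
# rather than simply comparing nonequivalent groups. Simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Covariate that drives both selection into treatment and the outcome.
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.8 * x))            # true (unknown) selection mechanism
t = rng.binomial(1, p_treat)                     # treatment indicator
y = 2.0 * t + 1.5 * x + rng.normal(size=n)       # outcome; true effect = 2.0

# Naive group comparison is biased because x differs across groups.
naive = y[t == 1].mean() - y[t == 0].mean()

# Model the selection process (the propensity score) from the covariate.
ps = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]

# Inverse-probability-weighted estimate of the average treatment effect.
ipw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))

print(f"naive difference: {naive:.2f}   IPW estimate: {ipw:.2f}   true effect: 2.00")
```

Under these assumptions the naive difference overstates the effect, while the weighted estimate recovers a value close to 2.0, illustrating why modeling the selection process can improve on simple adjustment for preexisting differences.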
Peer reviewed
Reichardt, Charles; Gollob, Harry – New Directions for Program Evaluation, 1986
Causal models often omit variables that should be included, use variables that are measured fallibly, and ignore time lags. Such practices can lead to severely biased estimates of effects. The discussion explains these biases and shows how to take them into account. (Author)
Descriptors: Effect Size, Error of Measurement, High Schools, Mathematical Models
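The bias described in the Reichardt and Gollob abstract can be shown with a small simulation. The sketch below is hypothetical and covers only the fallible-measurement case: adjusting for a noisily measured confounder leaves residual confounding, so the estimated treatment effect stays biased. The variable names, coefficients, and noise levels are assumptions for illustration.

```python
# Hypothetical illustration: regression adjustment with a fallibly measured
# covariate does not fully remove confounding, biasing the treatment estimate.
import numpy as np

rng = np.random.default_rng(1)
n = 20000

x_true = rng.normal(size=n)                       # true confounder
t = x_true + rng.normal(size=n)                   # treatment correlated with confounder
y = 1.0 * t + 2.0 * x_true + rng.normal(size=n)   # outcome; true treatment effect = 1.0

def ols_slope_for_t(t, covariate, y):
    """OLS of y on [1, t, covariate]; return the coefficient on t."""
    X = np.column_stack([np.ones_like(t), t, covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

x_noisy = x_true + rng.normal(scale=1.0, size=n)  # fallible measure of the confounder

print("adjusting for true confounder :", round(ols_slope_for_t(t, x_true, y), 2))   # ~1.0
print("adjusting for noisy measure   :", round(ols_slope_for_t(t, x_noisy, y), 2))  # ~1.67, biased
```

The same logic extends to omitted variables and ignored time lags: whatever the adjustment variable fails to capture is absorbed into the treatment coefficient.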
Peer reviewed
Bryant, Fred B.; Wortman, Paul M. – New Directions for Program Evaluation, 1984
Methods for selecting relevant and appropriate quasi-experimental studies for inclusion in research synthesis using the threats-to-validity approach are presented. Effects of including and excluding studies are evaluated. (BS)
Descriptors: Evaluation Criteria, Meta Analysis, Quasiexperimental Design, Research Methodology