Showing 1 to 15 of 20 results
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
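A rough illustration of the kind of power calculation such variance expressions feed into: the sketch below is a generic two-group, two-period DID power formula under a normal approximation, not Schochet's panel estimators, and every input (effect size, outcome SD, pre-post correlation, group size) is hypothetical.

```python
# Sketch: power for a simple two-group, two-period difference-in-differences
# contrast, assuming equal group sizes and a known outcome SD. A generic
# normal-approximation calculation, not the article's closed-form panel
# expressions; all inputs are hypothetical.
from scipy.stats import norm

def did_power(effect, sd, n_per_group, rho_pre_post=0.5, alpha=0.05):
    """Approximate power for the DID contrast (post - pre, treatment - control)."""
    # Variance of one unit's pre-post change, given the pre/post correlation.
    var_change = 2 * sd**2 * (1 - rho_pre_post)
    # Standard error of the difference in mean changes across the two groups.
    se_did = (2 * var_change / n_per_group) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(effect) / se_did - z_crit)

print(did_power(effect=0.25, sd=1.0, n_per_group=150))
```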
Peer reviewed
Dodge, Nadine; Chapman, Ralph – International Journal of Social Research Methodology, 2018
Electronically assisted survey techniques offer several advantages over traditional survey techniques. However, they can also potentially introduce biases, such as coverage biases and measurement error. The current study compares the relative merits of two survey distribution and completion modes: email recruitment with internet completion; and…
Descriptors: Online Surveys, Handheld Devices, Bias, Electronic Mail
Peer reviewed
Gorard, Stephen – International Journal of Research & Method in Education, 2013
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test-only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Descriptors: Pretests Posttests, Research Design, Comparative Analysis, Data Analysis
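For readers who want the arithmetic behind the error-propagation point, here is a minimal sketch assuming independent measurement errors on the two occasions; the numbers are hypothetical, not taken from the paper.

```python
# Sketch of simple error propagation for a pre/post gain score, assuming the
# measurement errors on the two occasions are independent. Illustrative
# numbers only.
import math

se_pre, se_post = 3.0, 3.0                    # hypothetical standard errors of measurement
se_gain = math.sqrt(se_pre**2 + se_post**2)   # independent errors add in quadrature
print(f"SEM of the gain score: {se_gain:.2f}")  # ~4.24, larger than either test alone

# The same rule applied to a post-test-only comparison of two group means:
se_treat_mean, se_control_mean = 1.2, 1.1     # hypothetical SEs of the group means
se_diff = math.sqrt(se_treat_mean**2 + se_control_mean**2)
print(f"SE of the group difference: {se_diff:.2f}")
```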
Peer reviewed
PDF on ERIC
Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin – Society for Research on Educational Effectiveness, 2015
Regression discontinuity design (RD) has been widely used to produce reliable causal estimates. Researchers have validated the accuracy of the RD design using within-study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). Within-study comparisons examine the validity of a quasi-experiment by comparing its…
Descriptors: Pretests Posttests, Statistical Bias, Accuracy, Regression (Statistics)
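As a point of reference for the design being validated, a minimal sharp-RD estimate can be sketched as a local linear regression on either side of the cutoff; the data, cutoff, and bandwidth below are invented for illustration, and a real analysis would use a data-driven bandwidth and robust inference.

```python
# Sketch of a sharp regression discontinuity estimate via local linear
# regression within a hand-picked bandwidth, on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
running = rng.uniform(-1, 1, 2000)            # running variable, cutoff at 0
treated = (running >= 0).astype(float)
outcome = 0.5 * running + 0.3 * treated + rng.normal(0, 0.5, 2000)

bandwidth = 0.25
keep = np.abs(running) <= bandwidth
X = np.column_stack([treated, running, treated * running])[keep]
model = sm.OLS(outcome[keep], sm.add_constant(X)).fit()
print("RD estimate of the effect at the cutoff:", model.params[1])
```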
Peer reviewed
PDF on ERIC
Rindskopf, David – Society for Research on Educational Effectiveness, 2013
Single case designs (SCDs) generally consist of a small number of short time series in two or more phases. The analysis of SCDs statistically fits in the framework of a multilevel model, or hierarchical model. The usual analysis does not take into account the uncertainty in the estimation of the random effects. This not only has an effect on the…
Descriptors: Research Design, Bayesian Statistics, Computation, Data
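A minimal sketch of the kind of Bayesian multilevel model the paper has in mind, written here with PyMC on fully synthetic single-case data: each case's phase effect is drawn from a common distribution, so uncertainty in the random effects is carried through to the posterior rather than plugged in.

```python
# Sketch: Bayesian multilevel model for single-case data (cases x phases),
# fit with PyMC on synthetic data. The random case-level effects are sampled,
# not fixed, so their estimation uncertainty propagates.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_cases, n_obs = 4, 20
case_idx = np.repeat(np.arange(n_cases), n_obs)
phase = np.tile(np.r_[np.zeros(10), np.ones(10)], n_cases)   # 0 = baseline, 1 = treatment
true_effects = rng.normal(2.0, 0.5, n_cases)
y = 5 + true_effects[case_idx] * phase + rng.normal(0, 1, n_cases * n_obs)

with pm.Model():
    mu_effect = pm.Normal("mu_effect", 0, 5)        # average treatment effect
    sd_effect = pm.HalfNormal("sd_effect", 2)       # between-case variation
    case_effect = pm.Normal("case_effect", mu_effect, sd_effect, shape=n_cases)
    intercept = pm.Normal("intercept", 0, 10)
    sigma = pm.HalfNormal("sigma", 2)
    pm.Normal("y", intercept + case_effect[case_idx] * phase, sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)
```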
Peer reviewed
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis
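For context on why nesting matters, a small sketch of the design effect that clustering introduces, with hypothetical values for the intraclass correlation and cluster size (not figures from the article).

```python
# Sketch: design effect and effective sample size for a cluster-randomized
# design. The ICC, cluster size, and total N are hypothetical.
icc = 0.15            # share of outcome variance between clusters
cluster_size = 25     # students per classroom
n_total = 2000        # total students across all clusters

design_effect = 1 + (cluster_size - 1) * icc        # variance inflation factor
effective_n = n_total / design_effect
print(f"Design effect: {design_effect:.2f}")         # 4.60
print(f"Effective sample size: {effective_n:.0f}")   # ~435 of the 2000 students
```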
Peer reviewed
Dong, Nianbo – American Journal of Evaluation, 2015
Researchers have become increasingly interested in the main and interaction effects of two variables (A and B, e.g., two treatment variables or one treatment variable and one moderator) on program outcomes. A challenge for estimating main and interaction effects is to eliminate selection bias across A-by-B groups. I introduce Rubin's causal model to…
Descriptors: Probability, Statistical Analysis, Research Design, Causal Models
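One common way to operationalize this idea is inverse-probability weighting with a multinomial propensity model over the four A-by-B cells; the sketch below applies that generic approach to synthetic data and is not necessarily the estimator developed in the article.

```python
# Sketch: inverse-probability weighting across the four A-by-B cells, then
# main and interaction effects as weighted cell-mean contrasts. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=(n, 2))                              # observed confounders
a = (rng.random(n) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)
b = (rng.random(n) < 1 / (1 + np.exp(-x[:, 1]))).astype(int)
cell = 2 * a + b                                         # 0..3 = (A,B) cells
y = x.sum(axis=1) + 1.0 * a + 0.5 * b + 0.8 * a * b + rng.normal(size=n)

probs = LogisticRegression(max_iter=1000).fit(x, cell).predict_proba(x)
w = 1.0 / probs[np.arange(n), cell]                      # IPW weights

cell_mean = np.array([np.average(y[cell == c], weights=w[cell == c]) for c in range(4)])
m00, m01, m10, m11 = cell_mean                           # (A,B) = (0,0),(0,1),(1,0),(1,1)
print("Main effect of A:", ((m10 - m00) + (m11 - m01)) / 2)
print("Main effect of B:", ((m01 - m00) + (m11 - m10)) / 2)
print("A-by-B interaction:", (m11 - m10) - (m01 - m00))
```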
Peer reviewed
Wing, Coady; Cook, Thomas D. – Journal of Policy Analysis and Management, 2013
The sharp regression discontinuity design (RDD) has three key weaknesses compared to the randomized clinical trial (RCT). It has lower statistical power, it is more dependent on statistical modeling assumptions, and its treatment effect estimates are limited to the narrow subpopulation of cases immediately around the cutoff, which is rarely of…
Descriptors: Regression (Statistics), Research Design, Statistical Analysis, Research Problems
Peer reviewed
Geiser, Christian; Lockhart, Ginger – Psychological Methods, 2012
Latent state-trait (LST) analysis is frequently applied in psychological research to determine the degree to which observed scores reflect stable person-specific effects, effects of situations and/or person-situation interactions, and random measurement error. Most LST applications use multiple repeatedly measured observed variables as indicators…
Descriptors: Psychological Studies, Simulation, Measurement, Error of Measurement
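To make the decomposition concrete, here is a small sketch of the variance components and the coefficients LST analysis reports (consistency, occasion specificity, reliability), using made-up values rather than anything estimated in the paper.

```python
# Sketch of the latent state-trait variance decomposition for one observed
# indicator: Var(Y) = trait variance + occasion-specific variance + error.
# The variance components below are made up for illustration.
var_trait = 0.50       # stable person-specific variance
var_occasion = 0.25    # situation / person-situation interaction variance
var_error = 0.25       # random measurement error

var_total = var_trait + var_occasion + var_error
consistency = var_trait / var_total             # share due to stable traits
occ_specificity = var_occasion / var_total      # share due to situations
reliability = consistency + occ_specificity     # 1 minus the error share
print(consistency, occ_specificity, reliability)   # 0.5, 0.25, 0.75
```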
Peer reviewed
Fan, Xitao; Nowell, Dana L. – Gifted Child Quarterly, 2011
This methodological brief introduces the readers to the propensity score matching method, which can be used for enhancing the validity of causal inferences in research situations involving nonexperimental design or observational research, or in situations where the benefits of an experimental design are not fully realized because of reasons beyond…
Descriptors: Research Design, Educational Research, Statistical Analysis, Inferences
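A minimal sketch of the basic workflow the brief describes, using a logistic propensity model and one-to-one nearest-neighbor matching on synthetic data; real applications would add caliper restrictions and balance diagnostics.

```python
# Sketch: propensity score matching with a logistic propensity model and
# 1:1 nearest-neighbor matching (with replacement, no caliper). Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=(n, 3))                               # observed covariates
treat = (rng.random(n) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)
y = x[:, 0] + 0.5 * x[:, 1] + 2.0 * treat + rng.normal(size=n)

pscore = LogisticRegression(max_iter=1000).fit(x, treat).predict_proba(x)[:, 1]

treated, control = np.where(treat == 1)[0], np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(pscore[control].reshape(-1, 1))
_, idx = nn.kneighbors(pscore[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

att = np.mean(y[treated] - y[matched_control])            # effect on the treated
print(f"Matched estimate of the treatment effect: {att:.2f}")
```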
Peer reviewed
Schochet, Peter Z. – Evaluation Review, 2009
In social policy evaluations, the multiple testing problem occurs due to the many hypothesis tests that are typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Types I and II errors. The framework involves specifying…
Descriptors: Policy, Evaluation, Testing Problems, Hypothesis Testing
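A small sketch of the adjustment step such a framework ends with, applied to hypothetical p-values from several outcome-by-subgroup contrasts; the Benjamini-Hochberg and Bonferroni procedures shown are standard options, not necessarily the ones the article recommends.

```python
# Sketch: adjusting a family of hypothetical p-values from multiple
# outcome-by-subgroup tests. Benjamini-Hochberg controls the false discovery
# rate; Bonferroni controls the familywise error rate more conservatively.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.030, 0.045, 0.200, 0.410]   # hypothetical

for method in ("fdr_bh", "bonferroni"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in p_adj], reject.tolist())
```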
Hedley, R. Alan – 1981
The author conducted a cross-national analysis of sociological research reported in leading journals at two points in time over a ten-year period to determine whether sociologists' ability to produce valid social generalizations had improved significantly over the recent past. The official journals of the United States (American Sociological Review),…
Descriptors: Comparative Analysis, Error of Measurement, Generalization, Reliability
Peer reviewed
Berger, Martijn P. F. – Applied Psychological Measurement, 1991
A generalized variance criterion is proposed to measure efficiency in item-response-theory (IRT) models. Heuristic arguments are given to formulate the efficiency of a design in terms of an asymptotic generalized variance criterion. Efficiencies of designs for one-, two-, and three-parameter models are compared. (SLD)
Descriptors: Comparative Analysis, Efficiency, Equations (Mathematics), Error of Measurement
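A rough numerical sketch of comparing two calibration designs by a determinant-based (generalized) variance criterion for a single two-parameter item; the item parameters and candidate ability distributions are invented, and the point is only to illustrate the type of comparison, not to reproduce the article's results.

```python
# Sketch: compare two examinee ability designs for calibrating one 2PL item
# by the determinant of the asymptotic covariance matrix of (a, b), i.e. a
# generalized variance criterion. Item parameters and designs are invented.
import numpy as np

def item_information(theta, a=1.2, b=0.0):
    """Fisher information matrix for (a, b) contributed by one examinee."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    w = p * (1 - p)
    grad = np.array([theta - b, -a])          # d(logit)/da, d(logit)/db
    return w * np.outer(grad, grad)

def generalized_variance(thetas):
    info = sum(item_information(t) for t in thetas)
    return np.linalg.det(np.linalg.inv(info))

design_narrow = np.random.default_rng(4).normal(0.0, 0.5, 500)   # abilities near b
design_wide = np.random.default_rng(4).normal(0.0, 1.5, 500)     # more spread out

for name, design in [("narrow", design_narrow), ("wide", design_wide)]:
    print(name, generalized_variance(design))   # smaller = more efficient design
```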
Zeng, Lingjia – 1991
Large sample standard errors of linear equating for the single-group design are derived without making the normality assumption. Two general methods based on the delta method of M. Kendall and A. Stuart (1977) are described. One method uses the exact partial derivatives, and the other uses numerical derivatives. Simulation using the beta-binomial…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Equations (Mathematics)
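A minimal sketch of the numerical-derivative variant of the delta method described here, applied to a linear (mean-sigma) equating function; the moment estimates and their covariance matrix are placeholders, since in practice that covariance would itself be estimated from the single-group data without a normality assumption.

```python
# Sketch: delta-method standard error of a linearly equated score using
# numerical derivatives. The moment estimates and covariance are placeholders.
import numpy as np

def equate(x, params):
    """Linear (mean-sigma) equating: l(x) = mu_y + (sd_y / sd_x) * (x - mu_x)."""
    mu_x, sd_x, mu_y, sd_y = params
    return mu_y + (sd_y / sd_x) * (x - mu_x)

def delta_se(x, params, cov, eps=1e-5):
    """SE of equate(x, params) from a numerical gradient and the delta method."""
    grad = np.zeros(len(params))
    for i in range(len(params)):
        bumped = np.array(params, dtype=float)
        bumped[i] += eps
        grad[i] = (equate(x, bumped) - equate(x, params)) / eps
    return float(np.sqrt(grad @ cov @ grad))

params = [20.0, 5.0, 22.0, 6.0]             # mu_x, sd_x, mu_y, sd_y (placeholders)
cov = np.diag([0.05, 0.03, 0.05, 0.03])     # placeholder covariance of the estimates
print(delta_se(x=25.0, params=params, cov=cov))
```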
Helberg, Clay – 1996
Abuses and misuses of statistics are frequent. This digest attempts to warn against these in three broad classes of pitfalls: sources of bias, errors of methodology, and misinterpretation of results. Sources of bias are conditions or circumstances that affect the external validity of statistical results. In order for a researcher to make…
Descriptors: Causal Models, Comparative Analysis, Data Analysis, Error of Measurement