Showing 1 to 15 of 16 results
Peer reviewed
Wendy Chan; Jimin Oh; Chen Li; Jiexuan Huang; Yeran Tong – Society for Research on Educational Effectiveness, 2023
Background: The generalizability of a study's results continues to be at the forefront of concerns in evaluation research in education (Tipton & Olsen, 2018). Over the past decade, statisticians have developed methods, mainly based on propensity scores, to improve generalizations in the absence of random sampling (Stuart et al., 2011; Tipton,…
Descriptors: Generalizability Theory, Probability, Scores, Sampling
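The propensity-score methods this abstract refers to can be illustrated with a short sketch. The following is a hypothetical, minimal example of reweighting a trial sample toward a target population; the data, column names, and logistic-regression choice are assumptions for illustration, not the authors' estimator.

    # Sketch: reweighting a trial sample toward a target population
    # using participation propensity scores (illustrative only).
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Hypothetical data: covariates for trial units (S=1) and population units (S=0).
    trial = pd.DataFrame({"x1": rng.normal(0.5, 1, 200), "x2": rng.normal(0, 1, 200), "S": 1})
    pop = pd.DataFrame({"x1": rng.normal(0.0, 1, 2000), "x2": rng.normal(0, 1, 2000), "S": 0})
    both = pd.concat([trial, pop], ignore_index=True)

    # Model the probability of being in the trial given covariates.
    ps_model = LogisticRegression().fit(both[["x1", "x2"]], both["S"])
    p = ps_model.predict_proba(trial[["x1", "x2"]])[:, 1]

    # Inverse-odds weights make the trial resemble the population on x1, x2.
    w = (1 - p) / p
    print("weighted trial mean of x1:", np.average(trial["x1"], weights=w))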
Peer reviewed
Diego Cortes; Dirk Hastedt; Sabine Meinck – Large-scale Assessments in Education, 2025
This paper informs users of data collected in international large-scale assessments (ILSA) by presenting arguments underlining the importance of considering two design features employed in these studies. We examine a common misconception stating that the uncertainty arising from the assessment design is negligible compared with that arising from the…
Descriptors: Sampling, Research Design, Educational Assessment, Statistical Inference
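The design-based uncertainty the authors discuss is typically estimated with replicate weights. Below is a minimal sketch of jackknife-style variance estimation of the kind ILSA studies use; the replicate weights, variance multiplier, and data here are stand-ins, since each study ships its own replicate weights and variance factor.

    # Sketch: sampling variance of a weighted mean via replicate weights
    # (illustrative; real ILSA studies provide the weights and factor).
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.normal(500, 100, 400)              # hypothetical achievement scores
    w = rng.uniform(50, 150, 400)              # hypothetical final student weights
    rep_w = w[:, None] * rng.uniform(0.5, 1.5, (400, 80))  # stand-in replicate weights

    est = np.average(y, weights=w)
    rep_est = np.array([np.average(y, weights=rep_w[:, r]) for r in range(80)])
    # Variance from the dispersion of replicate estimates; the multiplier
    # differs by study design, so 1.0 here is only a placeholder.
    var = np.sum((rep_est - est) ** 2) * 1.0
    print(f"estimate = {est:.1f}, jackknife SE = {np.sqrt(var):.2f}")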
Yanli Xie – ProQuest LLC, 2022
The purpose of this dissertation is to develop principles and strategies for and identify limitations of multisite cluster randomized trials in the context of partially and fully nested designs. In the first study, I develop principles of estimation, sampling variability, and inference for studies that leverage multisite designs within the context…
Descriptors: Randomized Controlled Trials, Research Design, Computation, Sampling
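As context for the multisite designs studied here, a two-level model with a site-varying treatment effect can be fit as sketched below; the simulated data and statsmodels formula are illustrative assumptions, not the dissertation's estimators.

    # Sketch: a multisite model with random site intercepts and a
    # site-varying treatment effect (simulated data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    sites, n_per = 30, 40
    site = np.repeat(np.arange(sites), n_per)
    treat = rng.integers(0, 2, sites * n_per)          # randomization within sites
    u0 = rng.normal(0, 0.3, sites)[site]               # random site intercepts
    u1 = rng.normal(0, 0.2, sites)[site]               # random site treatment effects
    y = 0.5 + u0 + (0.4 + u1) * treat + rng.normal(0, 1, sites * n_per)
    df = pd.DataFrame({"y": y, "treat": treat, "site": site})

    # Random intercept and random treatment slope by site.
    fit = smf.mixedlm("y ~ treat", df, groups=df["site"], re_formula="~treat").fit()
    print(fit.summary())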
Makela, Susanna; Si, Yajuan; Gelman, Andrew – Grantee Submission, 2018
Cluster sampling is common in survey practice, and the corresponding inference has been predominantly design-based. We develop a Bayesian framework for cluster sampling and account for the design effect in the outcome modeling. We consider a two-stage cluster sampling design where the clusters are first selected with probability proportional to…
Descriptors: Bayesian Statistics, Statistical Inference, Sampling, Probability
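The first stage of the design the authors model, selecting clusters with probability proportional to size (PPS), can be sketched as follows; the weighted draw below only approximates strict PPS and the sizes are hypothetical.

    # Sketch: two-stage sampling with cluster selection probability
    # proportional to size (approximate PPS; illustrative only).
    import numpy as np

    rng = np.random.default_rng(3)
    sizes = rng.integers(20, 500, 100)                 # hypothetical cluster sizes
    p = sizes / sizes.sum()

    # Stage 1: draw 10 clusters; weighted draws without replacement
    # approximate, but do not exactly implement, strict PPS.
    clusters = rng.choice(100, size=10, replace=False, p=p)

    # Stage 2: draw a fixed number of units within each sampled cluster.
    samples = {c: rng.choice(sizes[c], size=15, replace=False) for c in clusters}
    # Design weights: inverse of the (approximate) product of the two
    # selection probabilities.
    weights = {c: 1 / (10 * p[c] * (15 / sizes[c])) for c in clusters}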
Natesan, Prathiba; Hedges, Larry V. – Grantee Submission, 2016
Although immediacy is one of the necessary criteria to show strong evidence of a causal relation in SCDs, no inferential statistical tool is currently used to demonstrate it. We propose a Bayesian unknown change-point model to investigate and quantify immediacy in SCD analysis. Unlike visual analysis that considers only 3-5 observations in…
Descriptors: Bayesian Statistics, Statistical Inference, Research Design, Models
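To make the idea of an unknown change point concrete, here is a toy grid-approximation posterior over a single change point; the normal likelihood, fixed scale, and flat prior are simplifying assumptions, and the authors' Bayesian model is richer than this.

    # Sketch: posterior over an unknown change point in a short series
    # via a discrete grid (toy normal-likelihood version).
    import numpy as np
    from scipy import stats

    y = np.array([2.1, 1.8, 2.3, 2.0, 1.9, 4.2, 4.5, 4.1, 4.4, 4.3])  # toy SCD data

    def log_lik(seg):
        # Plug in the segment mean; fixed sd = 1 keeps the toy model simple.
        return stats.norm.logpdf(seg, loc=seg.mean(), scale=1.0).sum()

    # Candidate change points between observations 2..n-2, flat prior.
    cps = np.arange(2, len(y) - 1)
    ll = np.array([log_lik(y[:k]) + log_lik(y[k:]) for k in cps])
    post = np.exp(ll - ll.max()); post /= post.sum()
    print(dict(zip(cps.tolist(), np.round(post, 3))))

A sharp posterior spike at the session where the level shifts is what "immediacy" looks like in this toy version.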
Peer reviewed
Gu, Fei; Preacher, Kristopher J.; Ferrer, Emilio – Journal of Educational and Behavioral Statistics, 2014
Mediation is a causal process that evolves over time. Thus, a study of mediation requires data collected throughout the process. However, most applications of mediation analysis use cross-sectional rather than longitudinal data. Another implicit assumption commonly made in longitudinal designs for mediation analysis is that the same mediation…
Descriptors: Statistical Analysis, Models, Research Design, Case Studies
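The point that mediation unfolds over time can be illustrated with a lagged product-of-coefficients sketch; the simulated data, OLS choice, and lag structure are assumptions for illustration, not the article's models.

    # Sketch: a lagged mediation estimate (X at t1 -> M at t2 -> Y at t3),
    # illustrating why the process calls for longitudinal data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 500
    x1 = rng.normal(size=n)                        # X measured at time 1
    m2 = 0.5 * x1 + rng.normal(size=n)             # M measured at time 2
    y3 = 0.4 * m2 + 0.1 * x1 + rng.normal(size=n)  # Y measured at time 3

    a = sm.OLS(m2, sm.add_constant(x1)).fit().params[1]  # X -> M path
    b = sm.OLS(y3, sm.add_constant(np.column_stack([m2, x1]))).fit().params[1]  # M -> Y path
    print("indirect effect a*b ~", round(a * b, 3))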
Peer reviewed
Stapleton, Laura M.; Pituch, Keenan A.; Dion, Eric – Journal of Experimental Education, 2015
This article presents 3 standardized effect size measures to use when sharing results of an analysis of mediation of treatment effects for cluster-randomized trials. The authors discuss 3 examples of mediation analysis (upper-level mediation, cross-level mediation, and cross-level mediation with a contextual effect) with demonstration of the…
Descriptors: Effect Size, Measurement Techniques, Statistical Analysis, Research Design
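One generic way to standardize an indirect effect in a cluster-randomized trial is to divide the product of paths by the total outcome standard deviation; the numbers below are hypothetical and the article defines its three measures precisely, so this is only a sketch of the general idea.

    # Sketch: standardizing an indirect effect a*b by the total outcome SD
    # (a generic convention, not the article's exact measures).
    import numpy as np

    a, b = 0.30, 0.25                    # hypothetical mediation paths
    var_within, var_between = 0.8, 0.2   # hypothetical variance components
    sd_total = np.sqrt(var_within + var_between)
    print("standardized indirect effect:", a * b / sd_total)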
Peer reviewed
Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim – Journal of Experimental Education, 2014
A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…
Descriptors: Effect Size, Statistical Bias, Sample Size, Regression (Statistics)
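The small-sample bias the authors address is the same phenomenon targeted by Hedges' classic correction for standardized mean differences, sketched below as context; their four corrections are specific to single-subject data and are not reproduced here.

    # Sketch: small-sample bias correction of a standardized mean
    # difference via Hedges' approximate factor J = 1 - 3/(4*df - 1).
    def hedges_g(d, df):
        """Shrink d toward zero to offset its small-sample upward bias."""
        return d * (1 - 3 / (4 * df - 1))

    print(hedges_g(d=0.8, df=8))  # with few measurement occasions, g < d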
Peer reviewed
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J. – New Directions for Evaluation, 2013
The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…
Descriptors: Sampling, Mixed Methods Research, Evaluators, Q Methodology
Peer reviewed; PDF full text available on ERIC
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T. – Qualitative Report, 2007
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
Descriptors: Social Science Research, Qualitative Research, Methods Research, Sample Size
Kish, Leslie – 1989
A brief, practical overview of "design effects" (DEFFs) is presented for users of the results of sample surveys. The overview is intended to help such users to determine how and when to use DEFFs and to compute them correctly. DEFFs are needed only for inferential statistics, not for descriptive statistics. When the selections for…
Descriptors: Computer Software, Error of Measurement, Mathematical Models, Research Design
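Kish's design effect has a standard textbook form, sketched below with hypothetical numbers: DEFF = 1 + (b - 1) * roh for average cluster size b and intraclass correlation roh, with the effective sample size n / DEFF used in place of n for inferential statistics.

    # Sketch: Kish's design effect and the resulting effective sample size.
    def deff(avg_cluster_size, roh):
        """DEFF = 1 + (b - 1) * roh, with roh the intraclass correlation."""
        return 1 + (avg_cluster_size - 1) * roh

    def effective_n(n, d):
        """Inferential statistics should use n / DEFF, not n."""
        return n / d

    d = deff(avg_cluster_size=30, roh=0.05)  # e.g., 30 students per school
    print(d, effective_n(3000, d))           # DEFF = 2.45 -> n_eff ~ 1224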
Peer reviewed
Suen, Hoi K. – Topics in Early Childhood Special Education, 1992
This commentary on EC 603 695 argues that significance testing is a necessary but insufficient condition for positivistic research, that judgment-based assessment and single-subject research are not substitutes for significance testing, and that sampling fluctuation should be considered as one of numerous epistemological concerns in any…
Descriptors: Evaluation Methods, Evaluative Thinking, Research Design, Research Methodology
Peer reviewed
Da Prato, Robert A. – Topics in Early Childhood Special Education, 1992
This paper argues that judgment-based assessment of data from multiply replicated single-subject or small-N studies should replace normative-based (p < .05) assessment of large-N research in the clinical sciences, and asserts that inferential statistics should be abandoned as a method of evaluating clinical research data. (Author/JDD)
Descriptors: Evaluation Methods, Evaluative Thinking, Norms, Research Design
Peer reviewed
Thomas, Scott L.; Heck, Ronald H.; Bauer, Karen W. – New Directions for Institutional Research, 2005
Institutional researchers frequently use national datasets such as those provided by the National Center for Education Statistics (NCES). The authors of this chapter explore the adjustments required when analyzing NCES data collected using complex sample designs. (Contains 8 tables.)
Descriptors: Institutional Research, National Surveys, Sampling, Data Analysis
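One core adjustment for complex-sample data like NCES surveys is to use the final weights and to take clustering into account when estimating variances. Below is a minimal sketch using a with-replacement PSU approximation; the data, weights, and PSU structure are hypothetical.

    # Sketch: weighted mean with a PSU-level linearization variance
    # (with-replacement PSU approximation; illustrative only).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(5)
    df = pd.DataFrame({
        "psu": np.repeat(np.arange(50), 20),  # hypothetical PSUs
        "w": rng.uniform(50, 200, 1000),      # hypothetical final weights
        "y": rng.normal(50, 10, 1000),
    })
    ybar = np.average(df["y"], weights=df["w"])

    # PSU totals of weighted residuals; variance from their dispersion.
    z = df.assign(r=df["w"] * (df["y"] - ybar)).groupby("psu")["r"].sum()
    n_psu = z.size
    var = n_psu / (n_psu - 1) * np.sum(z**2) / df["w"].sum() ** 2
    print(f"mean = {ybar:.2f}, design-based SE = {np.sqrt(var):.3f}")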
Mislevy, Robert J. – 1985
A method for drawing inferences from complex samples is based on Rubin's approach to missing data in survey research. Standard procedures for drawing such inferences do not apply when the variables of interest are not observed directly, but must be inferred from secondary random variables which depend on the variables of interest stochastically.…
Descriptors: Algorithms, Data Interpretation, Estimation (Mathematics), Latent Trait Theory
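Rubin's approach referenced here underlies the familiar multiple-imputation combining rules, sketched below with hypothetical plausible-value estimates: average the point estimates, then total variance = within-imputation variance plus (1 + 1/M) times between-imputation variance.

    # Sketch: Rubin's rules for combining estimates across multiple
    # imputations / plausible values (generic formulas, hypothetical data).
    import numpy as np

    est = np.array([502.1, 500.8, 503.0, 501.5, 502.4])  # per-imputation estimates
    var = np.array([4.0, 4.2, 3.9, 4.1, 4.0])            # their sampling variances
    M = len(est)

    qbar = est.mean()               # combined point estimate
    ubar = var.mean()               # within-imputation variance
    b = est.var(ddof=1)             # between-imputation variance
    total = ubar + (1 + 1 / M) * b  # Rubin's total variance
    print(f"estimate = {qbar:.1f}, SE = {np.sqrt(total):.2f}")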