Showing 1 to 15 of 94 results
Peer reviewed
Xin Guo; Qiang Fu – Sociological Methods & Research, 2024
Grouped and right-censored (GRC) counts have been used in a wide range of attitudinal and behavioural surveys yet they cannot be readily analyzed or assessed by conventional statistical models. This study develops a unified regression framework for the design and optimality of GRC counts in surveys. To process infinitely many grouping schemes for…
Descriptors: Attitude Measures, Surveys, Research Design, Research Methodology
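The abstract's unified regression framework is behind the truncation, but the basic likelihood idea for grouped and right-censored counts can be sketched. The following is a minimal illustration, assuming a simple Poisson model and hypothetical category cut points (0, 1, 2, "3 or more"), not the authors' actual framework:

```python
import math

def grc_poisson_loglik(lam, counts, cuts=(0, 1, 2, 3)):
    """Log-likelihood of grouped and right-censored (GRC) counts
    under a Poisson(lam) model.

    `cuts` gives the lower bound of each response category; the last
    category is right-censored ("cuts[-1] or more").  `counts` holds
    the number of respondents observed in each category.
    """
    def pmf(k):
        return math.exp(-lam) * lam ** k / math.factorial(k)

    loglik = 0.0
    for i, n in enumerate(counts):
        if i < len(cuts) - 1:
            # grouped range [cuts[i], cuts[i+1]): sum the exact probabilities
            p = sum(pmf(k) for k in range(cuts[i], cuts[i + 1]))
        else:
            # right-censored tail: P(X >= cuts[-1])
            p = 1.0 - sum(pmf(k) for k in range(cuts[-1]))
        loglik += n * math.log(p)
    return loglik

# hypothetical survey: 10, 20, 15, and 5 respondents in categories 0, 1, 2, 3+
print(grc_poisson_loglik(1.0, [10, 20, 15, 5]))
```

Maximizing this function over `lam` recovers a rate estimate even though exact counts above the censoring point are never observed.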
Peer reviewed
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns that lead to covariances between variables that are not jointly observed and create difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
Peer reviewed
Shiyu Zhang; James Wagner – Sociological Methods & Research, 2024
Adaptive survey design refers to using targeted procedures to recruit different sampled cases. This technique strives to reduce bias and variance of survey estimates by trying to recruit a larger and more balanced set of respondents. However, it is not well understood how adaptive design can improve data and survey estimates beyond the…
Descriptors: Surveys, Research Design, Response Rates (Questionnaires), Demography
Peer reviewed
Duane Knudson – Measurement in Physical Education and Exercise Science, 2025
Small sample sizes contribute to several problems in research and knowledge advancement. This conceptual replication study confirmed and extended the inflation of type II errors and confidence intervals in correlation analyses of small sample sizes common in kinesiology/exercise science. Current population data (N = 18, 230, & 464) on four…
Descriptors: Kinesiology, Exercise, Biomechanics, Movement Education
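The interval-inflation point can be illustrated with the standard Fisher z confidence interval for a correlation, whose width grows sharply as n shrinks. This is a generic textbook construction, not the study's replication procedure, and the sample sizes below are chosen purely for illustration:

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """95% CI for a Pearson correlation via the Fisher z-transform."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)   # standard error in z-space
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)

for n in (10, 50, 464):
    lo, hi = fisher_ci(0.3, n)
    print(f"n={n:4d}  CI=({lo:+.2f}, {hi:+.2f})  width={hi - lo:.2f}")
```

At n = 10 the interval for r = 0.3 spans zero, so a genuinely nonzero correlation would be declared nonsignificant (a Type II error); at large n the same r is estimated precisely.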
Peer reviewed
Kaltsonoudi, Kalliope; Tsigilis, Nikolaos; Karteroliotis, Konstantinos – Measurement in Physical Education and Exercise Science, 2022
Common method variance refers to the amount of uncontrolled systematic error leading to biased estimates of scale reliability and validity, and to spurious covariance shared among variables, due to a common method and/or common source employed in survey-based research. As the extended use of self-report questionnaires is inevitable, numerous studies…
Descriptors: Athletics, Research, Research Methodology, Error of Measurement
Peer reviewed
PDF on ERIC
Jeffrey Matayoshi; Shamya Karumbaiah – Journal of Educational Data Mining, 2024
Various areas of educational research are interested in the transitions between different states--or events--in sequential data, with the goal of understanding the significance of these transitions; one notable example is affect dynamics, which aims to identify important transitions between affective states. Unfortunately, several works have…
Descriptors: Models, Statistical Bias, Data Analysis, Simulation
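As an illustration of the kind of transition analysis at issue (not the paper's bias analysis or any correction it proposes), empirical first-order transition probabilities between hypothetical affective-state labels can be estimated from a sequence like this:

```python
from collections import Counter

def transition_probs(seq):
    """Empirical first-order transition probabilities P(next | current)
    estimated from a sequence of state labels."""
    pairs = Counter(zip(seq, seq[1:]))   # counts of adjacent (from, to) pairs
    totals = Counter(seq[:-1])           # times each state is left
    return {(a, b): c / totals[a] for (a, b), c in pairs.items()}

# hypothetical affect sequence from one student session
states = ["engaged", "confused", "engaged", "bored", "engaged", "confused"]
probs = transition_probs(states)
print(probs[("engaged", "confused")])  # 2 of the 3 transitions out of "engaged"
```

Metrics such as L are built on top of probabilities like these, which is why estimation bias in the underlying transition counts propagates into the final statistic.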
Wendy Chan; Larry Vernon Hedges – Journal of Educational and Behavioral Statistics, 2022
Multisite field experiments using the (generalized) randomized block design that assign treatments to individuals within sites are common in education and the social sciences. Under this design, there are two possible estimands of interest and they differ based on whether sites or blocks have fixed or random effects. When the average treatment…
Descriptors: Research Design, Educational Research, Statistical Analysis, Statistical Inference
Kush, Joseph M.; Konold, Timothy R.; Bradshaw, Catherine P. – Educational and Psychological Measurement, 2022
Multilevel structural equation modeling (MSEM) allows researchers to model latent factor structures at multiple levels simultaneously by decomposing within- and between-group variation. Yet the extent to which the sampling ratio (i.e., proportion of cases sampled from each group) influences the results of MSEM models remains unknown. This article…
Descriptors: Structural Equation Models, Factor Structure, Statistical Bias, Error of Measurement
Peer reviewed
Jamshidi, Laleh; Declercq, Lies; Fernández-Castilla, Belén; Ferron, John M.; Moeyaert, Mariola; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2021
Previous research found bias in the estimate of the overall fixed effects and variance components using multilevel meta-analyses of standardized single-case data. Therefore, we evaluate two adjustments in an attempt to reduce the bias and improve the statistical properties of the parameter estimates. The results confirm the existence of bias when…
Descriptors: Statistical Bias, Multivariate Analysis, Meta Analysis, Research Design
Qinyun Lin; Amy K. Nuttall; Qian Zhang; Kenneth A. Frank – Grantee Submission, 2023
Empirical studies often demonstrate multiple causal mechanisms potentially involving simultaneous or causally related mediators. However, researchers often use simple mediation models to understand the processes because they do not or cannot measure other theoretically relevant mediators. In such cases, another potentially relevant but unobserved…
Descriptors: Causal Models, Mediation Theory, Error of Measurement, Statistical Inference
Ella Patrona; John Ferron; Arnold Olszewski; Elizabeth Kelley; Howard Goldstein – Journal of Speech, Language, and Hearing Research, 2022
Purpose: Systematic reviews of literature are routinely conducted to identify practices that are effective in addressing educational and clinical problems. One complication, however, is how best to combine data from both group experimental design (GED) studies and single-case experimental design (SCED) studies. Percent of Goal Obtained (PoGO) has…
Descriptors: Preschool Children, Vocabulary Development, Intervention, Error of Measurement
Peer reviewed
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
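The closed-form variance expressions themselves are behind the truncation, but the final step of any such power analysis, converting a minimum detectable effect and an estimator's standard error into statistical power, follows the standard normal approximation for a two-sided Wald test. A minimal generic sketch (not Schochet's DID/CITS formulas, which supply the SE):

```python
from statistics import NormalDist

def power(effect, se, alpha=0.05):
    """Two-sided power of a Wald test, given the true effect size and
    the standard error of the impact estimator."""
    z = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, e.g. 1.96
    snr = effect / se                        # effect in SE units
    return (1 - NormalDist().cdf(z - snr)) + NormalDist().cdf(-z - snr)

print(round(power(0.25, 0.10), 3))  # ~0.705 for an effect 2.5 SEs in size
```

The design features discussed in the abstract (treatment timing, panel structure) enter through `se`; once the variance expression yields the SE, power follows mechanically.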
Peer reviewed
Angela Johnson; Elizabeth Barker; Marcos Viveros Cespedes – Educational Measurement: Issues and Practice, 2024
Educators and researchers strive to build policies and practices on data and evidence, especially on academic achievement scores. When assessment scores are inaccurate for specific student populations or when scores are inappropriately used, even data-driven decisions will be misinformed. To maximize the impact of the research-practice-policy…
Descriptors: Equal Education, Inclusion, Evaluation Methods, Error of Measurement
Peer reviewed
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
Peer reviewed
Cartwright, Nancy – Educational Research and Evaluation, 2019
Across the evidence-based policy and practice (EBPP) community, including education, randomised controlled trials (RCTs) rank as the most "rigorous" evidence for causal conclusions. This paper argues that this is misleading. Only narrow conclusions about study populations can be warranted with the kind of "rigour" that RCTs…
Descriptors: Evidence Based Practice, Educational Policy, Randomized Controlled Trials, Error of Measurement