Publication Date
In 2025: 0
Since 2024: 13
Since 2021 (last 5 years): 31
Since 2016 (last 10 years): 58
Since 2006 (last 20 years): 69
Source
Grantee Submission: 69
Author
Chun Wang: 5
Ke-Hai Yuan: 5
Zhang, Zhiyong: 5
Avi Feller: 4
Cai, Li: 4
Wang, Chun: 4
Gongjun Xu: 3
Schoen, Robert C.: 3
Yang, Xiaotong: 3
Yuan, Ke-Hai: 3
Zhang, Xue: 3
Publication Type
Reports - Research: 63
Journal Articles: 23
Speeches/Meeting Papers: 4
Reports - Descriptive: 3
Reports - Evaluative: 3
Information Analyses: 1
Audience
Researchers: 1
Location
Florida: 2
Colorado (Denver): 1
Illinois (Chicago): 1
Malawi: 1
New York (New York): 1
North Carolina (Charlotte): 1
Tennessee (Memphis): 1
Texas (Dallas): 1
Virginia: 1
Washington: 1
Jiaying Xiao; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Accurate item parameters and standard errors (SEs) are crucial for many multidimensional item response theory (MIRT) applications. A recent study proposed the Gaussian Variational Expectation Maximization (GVEM) algorithm to improve computational efficiency and estimation accuracy (Cho et al., 2021). However, the SE estimation procedure has yet to…
Descriptors: Error of Measurement, Models, Evaluation Methods, Item Analysis
Xin Qiao; Akihito Kamata; Cornelis Potgieter – Grantee Submission, 2023
Oral reading fluency (ORF) assessments are commonly used to screen at-risk readers and to evaluate the effectiveness of interventions as curriculum-based measurements. As with other assessments, equating ORF scores becomes necessary when we want to compare ORF scores from different test forms. Recently, Kara et al. (2023) proposed a model-based…
Descriptors: Error of Measurement, Oral Reading, Reading Fluency, Equated Scores
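As an illustration of the equating step described above, the sketch below applies classical linear (mean-sigma) equating to synthetic scores from two hypothetical forms. It is not the model-based approach of Kara et al. (2023); all values are made up for illustration.

```python
# A minimal sketch of linear (mean-sigma) equating between two test forms,
# with synthetic data; illustrative only, not the model-based method above.
import numpy as np

rng = np.random.default_rng(0)
form_x = rng.normal(loc=92, scale=18, size=500)   # hypothetical ORF scores, Form X
form_y = rng.normal(loc=100, scale=20, size=500)  # hypothetical ORF scores, Form Y

# Linear equating puts Form X scores on the Form Y scale by matching
# the first two moments: y* = (sd_y / sd_x) * (x - mean_x) + mean_y.
slope = form_y.std(ddof=1) / form_x.std(ddof=1)
intercept = form_y.mean() - slope * form_x.mean()
equated = slope * form_x + intercept

print(f"equating line: y = {slope:.3f} * x + {intercept:.2f}")
print(f"equated Form X mean/sd: {equated.mean():.1f} / {equated.std(ddof=1):.1f}")
```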
Ethan R. Van Norman; David A. Klingbeil; Adelle K. Sturgell – Grantee Submission, 2024
Single-case experimental designs (SCEDs) have been used with increasing frequency to identify evidence-based interventions in education. The purpose of this study was to explore how several procedural characteristics, including within-phase variability (i.e., measurement error), number of baseline observations, and number of intervention…
Descriptors: Research Design, Case Studies, Effect Size, Error of Measurement
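A study of this kind can be approximated with a small simulation: generate baseline and intervention phases for a single case with within-phase variability (measurement error) and compute a between-phase effect size. The phase lengths, error level, and effect size metric below are illustrative assumptions, not the study's actual design.

```python
# A hedged sketch: simulate single-case baseline/intervention phases with
# measurement error and summarize the effect; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate_case(n_baseline=5, n_intervention=10, true_shift=1.0, error_sd=0.5):
    baseline = rng.normal(0.0, error_sd, n_baseline)
    intervention = rng.normal(true_shift, error_sd, n_intervention)
    # Standardized mean difference, scaled by the baseline SD.
    return (intervention.mean() - baseline.mean()) / baseline.std(ddof=1)

effects = [simulate_case() for _ in range(1000)]
print(f"mean estimated effect: {np.mean(effects):.2f}  SD across replications: {np.std(effects):.2f}")
```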
Ashley L. Watts; Ashley L. Greene; Wes Bonifay; Eiko L. Fried – Grantee Submission, 2023
The p-factor is a construct that is thought to explain and maybe even cause variation in all forms of psychopathology. Since its 'discovery' in 2012, hundreds of studies have been dedicated to the extraction and validation of statistical instantiations of the p-factor, called general factors of psychopathology. In this Perspective, we outline five…
Descriptors: Causal Models, Psychopathology, Goodness of Fit, Validity
Weicong Lyu; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Data harmonization is an emerging approach to strategically combining data from multiple independent studies, making it possible to address new research questions that no single contributing study can answer. A fundamental psychometric challenge for data harmonization is to create commensurate measures for the constructs of interest across…
Descriptors: Data Analysis, Test Items, Psychometrics, Item Response Theory
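One classical building block for making measures commensurate across studies is common-item (anchor) linking of item parameters. The sketch below applies mean/sigma linking to made-up difficulty values from two studies; it is a generic illustration, not the harmonization framework the paper develops.

```python
# A minimal mean/sigma linking sketch on anchor-item difficulties
# (hypothetical numbers); not the paper's harmonization method.
import numpy as np

# Difficulties of the same anchor items as estimated in two studies;
# study 2 is assumed to use a shifted, stretched latent scale.
b_study1 = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_study2 = np.array([-0.9, 0.1, 0.7, 1.6, 2.5])

# Find A, B such that study-2 difficulties map onto study 1's scale: b1 ≈ A * b2 + B.
A = b_study1.std(ddof=1) / b_study2.std(ddof=1)
B = b_study1.mean() - A * b_study2.mean()
print(f"scale transformation: b1 = {A:.3f} * b2 + {B:.3f}")
print("study-2 difficulties on the study-1 scale:", np.round(A * b_study2 + B, 2))
```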
Oscar Clivio; Avi Feller; Chris Holmes – Grantee Submission, 2024
Reweighting a distribution to minimize a distance to a target distribution is a powerful and flexible strategy for estimating a wide range of causal effects, but can be challenging in practice because optimal weights typically depend on knowledge of the underlying data generating process. In this paper, we focus on design-based weights, which do…
Descriptors: Evaluation Methods, Causal Models, Error of Measurement, Guidelines
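A generic version of reweighting a sample toward a target distribution can be sketched with an entropy-balancing-style calibration: choose weights proportional to exp(X @ lam) and solve for lam so the weighted covariate means hit the target. This is a textbook-style illustration with synthetic data, not the design-based estimator the paper develops.

```python
# Entropy-balancing-style reweighting sketch (synthetic data, illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))          # sample covariates
target_means = np.array([0.5, -0.25])  # hypothetical target covariate means

def dual(lam):
    # Dual objective: log-sum-exp of the tilts minus the target-moment term.
    return np.log(np.exp(X @ lam).sum()) - target_means @ lam

res = minimize(dual, x0=np.zeros(2), method="BFGS")
w = np.exp(X @ res.x)
w /= w.sum()

print("weighted covariate means:", X.T @ w)  # should be close to target_means
```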
Dan Soriano; Eli Ben-Michael; Peter Bickel; Avi Feller; Samuel D. Pimentel – Grantee Submission, 2023
Assessing sensitivity to unmeasured confounding is an important step in observational studies, which typically estimate effects under the assumption that all confounders are measured. In this paper, we develop a sensitivity analysis framework for balancing weights estimators, an increasingly popular approach that solves an optimization problem to…
Descriptors: Statistical Analysis, Computation, Mathematical Formulas, Monte Carlo Methods
Cornelis Potgieter; Xin Qiao; Akihito Kamata; Yusuf Kara – Grantee Submission, 2024
As part of the effort to develop an improved oral reading fluency (ORF) assessment system, Kara et al. (2020) estimated the ORF scores based on a latent variable psychometric model of accuracy and speed for ORF data via a fully Bayesian approach. This study further investigates likelihood-based estimators for the model-derived ORF scores,…
Descriptors: Oral Reading, Reading Fluency, Scores, Psychometrics
Ke-Hai Yuan; Yongfei Fang – Grantee Submission, 2023
Observational data typically contain measurement errors. Covariance-based structural equation modelling (CB-SEM) is capable of modelling measurement errors and yields consistent parameter estimates. In contrast, methods of regression analysis using weighted composites as well as a partial least squares approach to SEM facilitate the prediction and…
Descriptors: Structural Equation Models, Regression (Statistics), Weighted Scores, Comparative Analysis
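The core measurement-error contrast can be shown with a short simulation: regressing on an error-contaminated composite attenuates the slope, while a reliability correction (the reliability is known here by construction) recovers the structural coefficient that a latent-variable analysis targets. The generating values below are illustrative assumptions.

```python
# Attenuation of a regression slope under measurement error, and its
# correction with a known reliability; synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
true_slope = 0.6
xi = rng.normal(size=n)                     # latent predictor
y = true_slope * xi + rng.normal(scale=0.8, size=n)
x_obs = xi + rng.normal(scale=0.7, size=n)  # observed composite with error

reliability = 1.0 / (1.0 + 0.7 ** 2)        # var(xi) / var(x_obs), known by construction

naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
corrected = naive / reliability

print(f"true slope           : {true_slope:.2f}")
print(f"naive composite      : {naive:.2f}")   # attenuated toward zero
print(f"reliability-corrected: {corrected:.2f}")
```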
Ke-Hai Yuan; Ling Ling; Zhiyong Zhang – Grantee Submission, 2024
Data in social and behavioral sciences typically contain measurement errors and do not have predefined metrics. Structural equation modeling (SEM) is widely used for the analysis of such data, where the scales of the manifest and latent variables are often subjective. This article studies how the model, parameter estimates, their standard errors…
Descriptors: Structural Equation Models, Computation, Social Science Research, Error of Measurement
Rank-Normalization, Folding, and Localization: An Improved R-hat for Assessing Convergence of MCMC
Aki Vehtari; Andrew Gelman; Daniel Simpson; Bob Carpenter; Paul-Christian Burkner – Grantee Submission, 2021
Markov chain Monte Carlo is a key computational tool in Bayesian statistics, but it can be challenging to monitor the convergence of an iterative stochastic algorithm. In this paper we show that the convergence diagnostic R-hat of Gelman and Rubin (1992) has serious flaws. Traditional R-hat will fail to correctly diagnose convergence failures…
Descriptors: Markov Processes, Monte Carlo Methods, Bayesian Statistics, Efficiency
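A rank-normalized split R-hat in this spirit can be sketched as follows: pool and rank all draws, map the ranks to normal scores, split each chain in half, and compute the classical between/within-variance ratio on the transformed draws. This is an illustrative re-implementation (reference implementations exist in ArviZ and the posterior R package) and it omits the folding step used for tail diagnostics.

```python
# Illustrative rank-normalized split R-hat; not the authors' reference code.
import numpy as np
from scipy import stats

def rank_normalized_split_rhat(draws):
    """draws: array of shape (n_chains, n_iterations) for one parameter."""
    chains, iters = draws.shape
    # Rank all draws jointly, then map ranks to normal scores.
    ranks = stats.rankdata(draws, method="average").reshape(chains, iters)
    z = stats.norm.ppf((ranks - 0.5) / draws.size)
    # Split each chain in half to catch within-chain trends.
    half = iters // 2
    z = z[:, : 2 * half].reshape(chains * 2, half)
    within = z.var(axis=1, ddof=1).mean()
    between = half * z.mean(axis=1).var(ddof=1)
    var_plus = (half - 1) / half * within + between / half
    return np.sqrt(var_plus / within)

rng = np.random.default_rng(4)
good = rng.normal(size=(4, 1000))          # well-mixed chains
bad = good + np.arange(4)[:, None]         # chains stuck at different levels
print(f"R-hat (well mixed): {rank_normalized_split_rhat(good):.3f}")
print(f"R-hat (stuck)     : {rank_normalized_split_rhat(bad):.3f}")
```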
Avi Feller; Maia C. Connors; Christina Weiland; John Q. Easton; Stacy B. Ehrlich; John Francis; Sarah E. Kabourek; Diana Leyva; Anna Shapiro; Gloria Yeomans-Maldonado – Grantee Submission, 2024
One part of COVID-19's staggering impact on education has been to suspend or fundamentally alter ongoing education research projects. This article addresses how to analyze the simple but fundamental example of a multi-cohort study in which student assessment data for the final cohort are missing because schools were closed, learning was virtual,…
Descriptors: COVID-19, Pandemics, Kindergarten, Preschool Children
Qinyun Lin; Amy K. Nuttall; Qian Zhang; Kenneth A. Frank – Grantee Submission, 2023
Empirical studies often demonstrate multiple causal mechanisms potentially involving simultaneous or causally related mediators. However, researchers often use simple mediation models to understand the processes because they do not or cannot measure other theoretically relevant mediators. In such cases, another potentially relevant but unobserved…
Descriptors: Causal Models, Mediation Theory, Error of Measurement, Statistical Inference
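The bias from an omitted, correlated mediator can be seen in a short simulation: two mediators share an unmeasured common cause, only one is modeled, and the estimated indirect effect through the modeled mediator overshoots its true value. All coefficients below are illustrative assumptions.

```python
# Omitted-mediator bias in a simple mediation model; synthetic data only.
import numpy as np

rng = np.random.default_rng(5)
n = 20000
x = rng.normal(size=n)
u = rng.normal(size=n)                            # unmeasured common cause of the mediators
m1 = 0.5 * x + 0.7 * u + rng.normal(scale=0.7, size=n)
m2 = 0.4 * x + 0.7 * u + rng.normal(scale=0.7, size=n)
y = 0.3 * m1 + 0.4 * m2 + 0.1 * x + rng.normal(size=n)

def ols_slopes(predictors, outcome):
    design = np.column_stack([np.ones(len(outcome))] + predictors)
    return np.linalg.lstsq(design, outcome, rcond=None)[0][1:]

# Simple mediation model that ignores m2.
a = ols_slopes([x], m1)[0]        # x -> m1 path
b = ols_slopes([m1, x], y)[0]     # m1 -> y path, adjusting for x only
print(f"estimated indirect effect via m1: {a * b:.3f} (true value: {0.5 * 0.3:.2f})")
```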
Josh Leung-Gagné; Sean F. Reardon – Grantee Submission, 2023
Recent studies have shown that U.S. Census- and American Community Survey (ACS)-based estimates of income segregation are subject to upward finite sampling bias (Logan et al. 2018; Logan et al. 2020; Reardon et al. 2018). We identify two additional sources of bias that are larger and opposite in sign to finite sampling bias: measurement…
Descriptors: Income, Low Income Groups, Social Bias, Statistical Bias
Eli Ben-Michael; Avi Feller; Erin Hartman – Grantee Submission, 2023
In the November 2016 U.S. presidential election, many state-level public opinion polls, particularly in the Upper Midwest, incorrectly predicted the winning candidate. One leading explanation for this polling miss is that the precipitous decline in traditional polling response rates led to greater reliance on statistical methods to adjust for the…
Descriptors: Public Opinion, National Surveys, Elections, Political Campaigns
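A minimal version of the kind of nonresponse adjustment at issue is post-stratification: weight respondents so the sample composition on an adjustment variable matches known population shares. The single education variable and all rates below are made-up numbers, not an analysis of 2016 polling data.

```python
# Post-stratification sketch with a single, hypothetical adjustment variable.
import numpy as np

rng = np.random.default_rng(6)

pop_share = np.array([0.65, 0.35])   # hypothetical shares: non-college, college
resp_prob = np.array([0.40, 0.80])   # college graduates respond more often
support = np.array([0.42, 0.58])     # support for candidate A by stratum

# Simulate who is contacted, who responds, and how respondents vote.
n_contacted = 20000
group = rng.choice(2, size=n_contacted, p=pop_share)
responds = rng.random(n_contacted) < resp_prob[group]
vote = rng.random(n_contacted) < support[group]
g, v = group[responds], vote[responds].astype(float)

raw = v.mean()
# Post-stratification: weight each respondent by population share / sample share.
sample_share = np.bincount(g, minlength=2) / len(g)
w = (pop_share / sample_share)[g]
weighted = np.average(v, weights=w)

print(f"raw poll estimate : {raw:.3f}")
print(f"post-stratified   : {weighted:.3f}")
print(f"population truth  : {(pop_share * support).sum():.3f}")
```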