Showing all 12 results
Vehtari, Aki; Gelman, Andrew; Sivula, Tuomas; Jylänki, Pasi; Tran, Dustin; Sahai, Swupnil; Blomstedt, Paul; Cunningham, John P.; Schiminovich, David; Robert, Christian P. – Grantee Submission, 2020
A common divide-and-conquer approach for Bayesian computation with big data is to partition the data, perform local inference for each piece separately, and combine the results to obtain a global posterior approximation. While being conceptually and computationally appealing, this method involves the problematic need to also split the prior for…
Descriptors: Bayesian Statistics, Algorithms, Computation, Generalization
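The split-prior issue the abstract mentions can be seen in a conjugate-Gaussian toy sketch of the generic divide-and-conquer recipe (not the paper's expectation-propagation algorithm; the model, prior, and all numbers here are illustrative): each shard uses the prior raised to the power 1/K, and the Gaussian subposteriors are recombined by adding precisions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=120)   # full data set, unit noise sd
s0 = 2.0                             # prior sd for the unknown mean
K = 4                                # number of shards

def subposterior(shard):
    # split the prior: N(0, s0^2)^(1/K) has precision 1 / (K * s0^2)
    prec = 1.0 / (K * s0**2) + len(shard)
    mean = shard.sum() / prec
    return mean, prec

parts = [subposterior(s) for s in np.array_split(y, K)]

# recombine: a product of Gaussians adds precisions and
# precision-weights the means
prec = sum(p for _, p in parts)
mean = sum(m * p for m, p in parts) / prec

# full-data posterior, for comparison
prec_full = 1.0 / s0**2 + len(y)
mean_full = y.sum() / prec_full
```

Because this toy model is conjugate, the recombined posterior matches the full-data posterior exactly; with non-Gaussian subposteriors the combination is only approximate, which is what motivates the paper's more careful treatment.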
Yao, Yuling; Vehtari, Aki; Gelman, Andrew – Grantee Submission, 2022
When working with multimodal Bayesian posterior distributions, Markov chain Monte Carlo (MCMC) algorithms have difficulty moving between modes, and default variational or mode-based approximate inferences will understate posterior uncertainty. And, even if the most important modes can be found, it is difficult to evaluate their relative weights in…
Descriptors: Bayesian Statistics, Computation, Markov Processes, Monte Carlo Methods
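A minimal sketch of weighting modes by held-out predictive performance, a stacking-style criterion (the two Gaussian "modes" and all numbers are invented for illustration; this is not the paper's full procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=200)        # held-out data

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# predictive densities from two hypothetical posterior modes
p_good = normal_pdf(y, 0.0, 1.0)          # mode near the truth
p_bad = normal_pdf(y, 5.0, 1.0)           # spurious far-away mode

# stacking-style weight: maximize the held-out log score of the mixture
grid = np.linspace(0.0, 1.0, 1001)
scores = [np.mean(np.log(w * p_good + (1.0 - w) * p_bad + 1e-300))
          for w in grid]
w_best = float(grid[int(np.argmax(scores))])
```

Here the search should put nearly all weight on the mode near the truth, because the spurious mode assigns negligible density to the held-out data.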
Weber, Sebastian; Gelman, Andrew; Lee, Daniel; Betancourt, Michael; Vehtari, Aki; Racine-Poon, Amy – Grantee Submission, 2018
Throughout the different phases of a drug development program, randomized trials are used to establish the tolerability, safety and efficacy of a candidate drug. At each stage one aims to optimize the design of future studies by extrapolation from the available evidence at the time. This includes collected trial data and relevant external data.…
Descriptors: Bayesian Statistics, Data Analysis, Drug Therapy, Pharmacology
Makela, Susanna; Si, Yajuan; Gelman, Andrew – Grantee Submission, 2018
Cluster sampling is common in survey practice, and the corresponding inference has been predominantly design-based. We develop a Bayesian framework for cluster sampling and account for the design effect in the outcome modeling. We consider a two-stage cluster sampling design where the clusters are first selected with probability proportional to…
Descriptors: Bayesian Statistics, Statistical Inference, Sampling, Probability
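The first stage of such a design, drawing clusters with probability proportional to size (PPS), can be sketched as follows (the cluster sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = np.array([10, 20, 30, 40])   # cluster sizes (measure of size)
p = sizes / sizes.sum()              # stage-1 PPS selection probabilities

counts = np.zeros(len(sizes))
for _ in range(10000):
    j = rng.choice(len(sizes), p=p)  # stage 1: pick a cluster PPS
    counts[j] += 1                   # (stage 2 would subsample units in j)
freq = counts / 10000
```

Over repeated draws, the empirical selection frequencies track the size-proportional probabilities, which is the design feature the outcome model must account for.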
Heidemanns, Merlin; Gelman, Andrew; Morris, G. Elliott – Grantee Submission, 2020
During modern general election cycles, information to forecast the electoral outcome is plentiful. So-called fundamentals like economic growth provide information early in the cycle. Trial-heat polls become informative closer to Election Day. Our model builds on Linzer (2013) and is implemented in Stan (Stan Development Team, 2020). We improve on the estimation…
Descriptors: Evaluation, Bayesian Statistics, Elections, Presidents
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; Lee, Daniel; Goodrich, Ben; Betancourt, Michael; Brubaker, Marcus A.; Guo, Jiqiang; Li, Peter; Riddell, Allen – Grantee Submission, 2017
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the…
Descriptors: Programming Languages, Probability, Bayesian Statistics, Monte Carlo Methods
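The core idea, a log probability function over parameters conditioned on data, explored by MCMC, can be sketched in plain Python with a random-walk Metropolis sampler (Stan itself uses the no-U-turn sampler; the model, prior, and step size here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=50)    # data with known unit noise sd

def log_prob(mu):
    # log posterior up to a constant: N(0, 10^2) prior, N(mu, 1) likelihood
    return -0.5 * mu**2 / 100.0 - 0.5 * np.sum((y - mu) ** 2)

# random-walk Metropolis over the log probability function
mu, lp, draws = 0.0, log_prob(0.0), []
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.4)
    lp_prop = log_prob(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    draws.append(mu)
post_mean = float(np.mean(draws[5000:]))

# closed-form posterior mean for this conjugate model
exact = y.sum() / (1.0 / 100.0 + len(y))
```

For this conjugate model the MCMC estimate can be checked against the closed-form posterior mean, which is a useful sanity check before moving to models without analytic answers.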
Peer reviewed
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent – Journal of Educational and Behavioral Statistics, 2015
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (Sigma) of group-level varying coefficients are often degenerate. One can do better, even from…
Descriptors: Regression (Statistics), Hierarchical Linear Modeling, Bayesian Statistics, Statistical Inference
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent – Grantee Submission, 2015
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (Sigma) of group-level varying coefficients are often degenerate. One can do better, even…
Descriptors: Regression (Statistics), Hierarchical Linear Modeling, Bayesian Statistics, Statistical Inference
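The degeneracy, and the effect of a boundary-avoiding prior, can be reproduced in a one-dimensional toy case (a one-way model with known sampling sd; the gamma(2, rate -> 0) penalty, which adds log(tau) to the objective, follows the general idea, but the data and numbers are illustrative):

```python
import numpy as np

# observed group means; their spread is smaller than the sampling sd s,
# so the ML estimate of the between-group sd tau collapses to zero
ybar = np.array([0.1, -0.2, 0.15, -0.05, 0.0])
s = 1.0

def profile_loglik(tau):
    v = tau**2 + s**2
    mu = ybar.mean()                 # MLE of the grand mean (equal variances)
    return -0.5 * np.sum(np.log(v) + (ybar - mu) ** 2 / v)

taus = np.linspace(1e-6, 3.0, 3001)
ll = np.array([profile_loglik(t) for t in taus])
tau_ml = float(taus[np.argmax(ll)])            # boundary estimate, ~0

# gamma(2, rate -> 0) prior on tau contributes log(tau), which
# rules out a mode at zero
tau_pen = float(taus[np.argmax(ll + np.log(taus))])
```

The likelihood peaks at the tau = 0 boundary, while the penalized objective has its mode at a strictly positive value, which is the qualitative behavior the boundary-avoiding prior is designed to produce.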
Chung, Yeojin; Rabe-Hesketh, Sophia; Gelman, Andrew; Dorie, Vincent; Liu, Jingchen – Society for Research on Educational Effectiveness, 2012
Hierarchical or multilevel linear models are widely used for longitudinal or cross-sectional data on students nested in classes and schools, and are particularly important for estimating treatment effects in cluster-randomized trials, multi-site trials, and meta-analyses. The models can allow for variation in treatment effects, as well as…
Descriptors: Statistical Analysis, Models, Computation, Maximum Likelihood Statistics
Peer reviewed
Gelman, Andrew; Hill, Jennifer; Yajima, Masanao – Journal of Research on Educational Effectiveness, 2012
Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian…
Descriptors: Intervals, Comparative Analysis, Inferences, Error Patterns
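The hierarchical alternative to multiple-comparison corrections can be sketched with partial pooling: noisy per-group estimates are shrunk toward the common mean, with hyperparameters assumed known here for simplicity (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
J, tau, se = 200, 0.5, 1.0
theta = rng.normal(0.0, tau, size=J)        # true group effects
y = theta + rng.normal(0.0, se, size=J)     # noisy per-group estimates

# hierarchical posterior mean with known hyperparameters:
# shrink each raw estimate toward the common mean (here zero)
B = tau**2 / (tau**2 + se**2)
theta_hat = B * y

mse_raw = float(np.mean((y - theta) ** 2))
mse_shrunk = float(np.mean((theta_hat - theta) ** 2))
```

Shrinkage lowers estimation error and pulls weak effects toward zero, which is the intuition behind the claim that separate per-comparison corrections can become unnecessary in a hierarchical framework.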
Peer reviewed
Leenen, Iwin; Van Mechelen, Iven; Gelman, Andrew; De Knop, Stijn – Psychometrika, 2008
Hierarchical classes models are models for N-way N-mode data that represent the association among the N modes and simultaneously yield, for each mode, a hierarchical classification of its elements. In this paper we present a stochastic extension of the hierarchical classes model for two-way two-mode binary data. In line with the original…
Descriptors: Simulation, Bayesian Statistics, Models, Classification
Peer reviewed
Meulders, Michel; De Boeck, Paul; Van Mechelen, Iven; Gelman, Andrew; Maris, Eric – Journal of Educational and Behavioral Statistics, 2001
Presents a fully Bayesian analysis for the Probability Matrix Decomposition (PMD) model using the Gibbs sampler. Identifies the advantages of this approach and illustrates the approach by applying the PMD model to opinions of respondents from different countries concerning the possibility of contracting AIDS in a specific situation. (SLD)
Descriptors: Bayesian Statistics, Matrices, Probability, Psychometrics
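A Gibbs sampler in general alternates draws from each parameter's full conditional distribution; a minimal conjugate example (a normal mean and variance, not the PMD model itself; priors and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(2.0, 1.5, size=100)
n, ybar = len(y), y.mean()

# Gibbs sampler: alternate draws from the two full conditionals
mu, sig2, draws = 0.0, 1.0, []
for it in range(4000):
    # mu | sig2, y with a flat prior on mu: N(ybar, sig2 / n)
    mu = rng.normal(ybar, np.sqrt(sig2 / n))
    # sig2 | mu, y with prior p(sig2) ~ 1/sig2: inverse-gamma(n/2, ss/2)
    ss = np.sum((y - mu) ** 2)
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / ss)
    if it >= 1000:
        draws.append(mu)
mu_hat = float(np.mean(draws))
```

After discarding burn-in, the posterior-mean estimate of mu should sit close to the sample mean, as the flat-prior conditional structure implies.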