Showing 1 to 15 of 16 results
Peer reviewed
Huang, Hening – Research Synthesis Methods, 2023
Many statistical methods (estimators) are available for estimating the consensus value (or average effect) and heterogeneity variance in interlaboratory studies or meta-analyses. These estimators are all valid because they are developed from or supported by certain statistical principles. However, no estimator is perfect; each must have error or…
Descriptors: Statistical Analysis, Computation, Measurement Techniques, Meta Analysis
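The abstract does not name particular estimators; purely as an illustration of one widely used pair, the sketch below computes the DerSimonian-Laird heterogeneity variance and the corresponding random-effects consensus value. The choice of estimator and all identifiers are assumptions for illustration, not taken from the article.

import numpy as np

def dersimonian_laird(y, v):
    """Illustrative estimator pair: DerSimonian-Laird heterogeneity variance
    and the matching random-effects consensus value (average effect).
    y: laboratory/study estimates, v: their within-study variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)
    w = 1.0 / v                                # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)           # fixed-effect pooled mean
    Q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)         # heterogeneity variance, truncated at 0
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)       # consensus value
    se = np.sqrt(1.0 / np.sum(w_re))           # its standard error
    return mu, tau2, se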
Peer reviewed
Erik-Jan van Kesteren; Daniel L. Oberski – Structural Equation Modeling: A Multidisciplinary Journal, 2022
Structural equation modeling (SEM) is being applied to ever more complex data types and questions, often requiring extensions such as regularization or novel fitting functions. To extend SEM, researchers currently need to completely reformulate SEM and its optimization algorithm, a challenging and time-consuming task. In this paper, we introduce…
Descriptors: Structural Equation Models, Computation, Graphs, Algorithms
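The abstract stops before describing the authors' framework, so the following is only a rough sketch of the general idea of writing an SEM fit function as an ordinary function and handing it to a general-purpose optimizer. The one-factor model, the ML discrepancy, and every identifier here are illustrative assumptions, not the authors' computation-graph implementation.

import numpy as np
from scipy.optimize import minimize

def implied_cov(params, p):
    # one-factor model: Sigma(theta) = lambda lambda' + diag(psi); illustrative only
    lam, log_psi = params[:p], params[p:]
    return np.outer(lam, lam) + np.diag(np.exp(log_psi))

def ml_fit(params, S):
    # standard maximum-likelihood discrepancy, written as an ordinary function of params
    p = S.shape[0]
    Sigma = implied_cov(params, p)
    return (np.linalg.slogdet(Sigma)[1]
            + np.trace(S @ np.linalg.inv(Sigma))
            - np.linalg.slogdet(S)[1] - p)

def fit_one_factor(S):
    # any other fit function (regularized, robust, ...) could be dropped in here
    p = S.shape[0]
    start = np.concatenate([np.full(p, 0.7), np.zeros(p)])
    res = minimize(ml_fit, start, args=(S,), method="L-BFGS-B")
    return res.x, res.fun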
Peer reviewed
Mulder, J.; Raftery, A. E. – Sociological Methods & Research, 2022
The Schwarz or Bayesian information criterion (BIC) is one of the most widely used tools for model comparison in social science research. The BIC, however, is not suitable for evaluating models with order constraints on the parameters of interest. This article explores two extensions of the BIC for evaluating order-constrained models, one where a…
Descriptors: Models, Social Science Research, Programming Languages, Bayesian Statistics
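For reference, the standard (unconstrained) BIC that the article extends is

\mathrm{BIC} = -2 \ln L(\hat{\theta}) + k \ln n,

where L(\hat{\theta}) is the maximized likelihood, k the number of free parameters, and n the sample size; the extensions discussed in the abstract adapt this criterion to hypotheses that place order constraints on the parameters of interest.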
Peer reviewed
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
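The optimized sampler itself is not described in the abstract; as a minimal sketch of the underlying idea (sampling the ability posterior by MCMC given the responses observed so far), here is a random-walk Metropolis example under a 2PL model with a standard normal prior and item parameters treated as known. The joint handling of item-parameter error that the article emphasizes is not reproduced, and all names are mine.

import numpy as np

def sample_theta(resp, a, b, n_draws=2000, step=0.5, rng=None):
    """Random-walk Metropolis sketch of the ability posterior for one examinee."""
    if rng is None:
        rng = np.random.default_rng()
    resp, a, b = map(np.asarray, (resp, a, b))

    def log_post(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))        # 2PL response probabilities
        return np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p)) - 0.5 * theta ** 2

    theta, draws = 0.0, []
    for _ in range(n_draws):
        prop = theta + step * rng.standard_normal()        # propose a new ability value
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                                   # accept with MH probability
        draws.append(theta)
    return np.asarray(draws)                               # posterior draws for scoring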
Peer reviewed
Yang, Ji Seung; Hansen, Mark; Cai, Li – Educational and Psychological Measurement, 2012
Traditional estimators of item response theory scale scores ignore uncertainty carried over from the item calibration process, which can lead to incorrect estimates of the standard errors of measurement (SEMs). Here, the authors review a variety of approaches that have been applied to this problem and compare them on the basis of their statistical…
Descriptors: Item Response Theory, Scores, Statistical Analysis, Comparative Analysis
Peer reviewed
Hoshino, Takahiro; Shigemasu, Kazuo – Applied Psychological Measurement, 2008
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Descriptors: Monte Carlo Methods, Markov Processes, Factor Analysis, Computation
Peer reviewed
PDF on ERIC Download full text
Rolstad, Kellie; Mahoney, Kate; Glass, Gene V. – Journal of Educational Research & Policy Studies, 2008
In light of a recent revelation that Gersten (1985) included erroneous information on one of two programs for English Language Learners (ELLs), the authors recalculate the results of their earlier meta-analysis of program effectiveness studies for ELLs, in which Gersten's studies had behaved as outliers (Rolstad, Mahoney & Glass, 2005). The correction…
Descriptors: Bilingual Education, Second Language Learning, Program Effectiveness, Effect Size
De Ayala, R. J.; And Others – 1995
Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood and maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Descriptors: Adaptive Testing, Bayesian Statistics, Error of Measurement, Estimation (Mathematics)
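For readers unfamiliar with the estimator, a minimal quadrature sketch of EAP scoring under a 2PL model and standard normal prior follows; the model, grid, and function names are illustrative assumptions rather than anything taken from the report.

import numpy as np

def eap_estimate(resp, a, b, n_quad=61):
    """EAP ability estimate and posterior SD by numerical quadrature."""
    resp, a, b = map(np.asarray, (resp, a, b))
    theta = np.linspace(-4.0, 4.0, n_quad)                 # quadrature points
    prior = np.exp(-0.5 * theta ** 2)                      # N(0,1) density, unnormalized
    z = a[None, :] * (theta[:, None] - b[None, :])
    p = 1.0 / (1.0 + np.exp(-z))                           # 2PL response probabilities
    like = np.prod(np.where(resp == 1, p, 1.0 - p), axis=1)
    post = like * prior                                    # unnormalized posterior
    eap = np.sum(theta * post) / np.sum(post)              # posterior mean = EAP estimate
    psd = np.sqrt(np.sum((theta - eap) ** 2 * post) / np.sum(post))
    return eap, psd

Because the estimate exists for every response pattern (including all-correct and all-incorrect), this construction illustrates the first advantage listed in the abstract.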
Wingersky, Marilyn S. – 1989
In a variable-length adaptive test with a stopping rule that relied on the asymptotic standard error of measurement of the examinee's estimated true score, M. S. Stocking (1987) discovered that it was sufficient to know the examinee's true score and the number of items administered to predict with some accuracy whether an examinee's true score was…
Descriptors: Adaptive Testing, Bayesian Statistics, Error of Measurement, Estimation (Mathematics)
Fox, Jean-Paul; Glas, Cees A. W. – 1998
A two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that this offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and…
Descriptors: Ability, Bayesian Statistics, Difficulty Level, Error of Measurement
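In generic two-level notation (not necessarily the authors' exact specification), the idea is to replace an observed test score with a latent IRT ability \theta_{ij} and regress it at both levels:

\theta_{ij} = \beta_{0j} + \beta_{1j} x_{ij} + e_{ij}, \qquad e_{ij} \sim N(0, \sigma^2),
\beta_{0j} = \gamma_{00} + \gamma_{01} w_j + u_{0j}, \qquad u_{0j} \sim N(0, \tau^2),

with \theta_{ij} itself linked to the item responses of person i in group j through an IRT measurement model, which is what allows item difficulty and ability to be separated.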
Jo, See-Heyon – 1995
The question of how to analyze unbalanced hierarchical data generated from structural equation models has been a common problem for researchers and analysts. Among difficulties plaguing statistical modeling are estimation bias due to measurement error and the estimation of the effects of the individual's hierarchical social milieu. This paper…
Descriptors: Algorithms, Bayesian Statistics, Equations (Mathematics), Error of Measurement
van der Linden, Wim J. – 1996
R. J. Owen (1975) proposed an approximate empirical Bayes procedure for item selection in adaptive testing. The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach, but…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computation
Peer reviewed
Kim, Seock-Ho; And Others – Applied Psychological Measurement, 1994
Type I error rates of F. M. Lord's chi square test for differential item functioning were investigated using Monte Carlo simulations with marginal maximum likelihood estimation and marginal Bayesian estimation algorithms. Lord's chi square did not provide useful Type I error control for the three-parameter logistic model at these sample sizes.…
Descriptors: Algorithms, Bayesian Statistics, Chi Square, Error of Measurement
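For context, Lord's chi square compares an item's parameter estimates obtained separately in the reference and focal groups; in its usual form

\chi^2 = (\hat{v}_1 - \hat{v}_2)^{\prime} (\hat{\Sigma}_1 + \hat{\Sigma}_2)^{-1} (\hat{v}_1 - \hat{v}_2),

where \hat{v}_g holds the item parameter estimates in group g and \hat{\Sigma}_g their estimated covariance matrix, which is why the choice of estimation algorithm can affect the Type I error rates studied here.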
Peer reviewed
De Ayala, R. J. – Educational and Psychological Measurement, 1992
Effects of dimensionality on ability estimation of an adaptive test were examined using generated data in Bayesian computerized adaptive testing (CAT) simulations. Generally, increasing interdimensional difficulty association produced a slight decrease in test length and an increase in accuracy of ability estimation as assessed by root mean square…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
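The accuracy criterion referred to here, root mean square error of the ability estimates, is the usual

\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (\hat{\theta}_i - \theta_i)^2},

computed across the N simulated examinees' true and estimated abilities.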
Peer reviewed
Li, Yuan H.; Lissitz, Robert W. – Journal of Educational Measurement, 2004
The analytically derived asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be approximated by a mathematical function without examinees' responses to test items, and the empirically determined SEs of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Test Items, Computation, Item Response Theory, Error of Measurement
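As background, the analytic SEs in question come from the inverse of the expected information matrix for an item's parameters,

SE(\hat{\xi}_k) \approx \sqrt{\big[ I(\hat{\xi})^{-1} \big]_{kk}},

and because the expected information can be evaluated from the item parameters and an assumed ability distribution, it requires no examinee responses; this is the approximation the study compares against empirically determined SEs.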
Pages: 1 | 2