Showing 1 to 15 of 32 results
Batley, Prathiba Natesan; Hedges, Larry V. – Grantee Submission, 2021
Although statistical practices to evaluate intervention effects in single-case experimental designs (SCEDs) have gained prominence in recent times, models have yet to incorporate and investigate all of their analytic complexities. Most of these statistical models incorporate slopes and autocorrelations, both of which contribute to trend in the data. The question that arises is…
Descriptors: Bayesian Statistics, Models, Accuracy, Computation
Peer reviewed
Direct link
Poom, Leo; af Wåhlberg, Anders – Research Synthesis Methods, 2022
In meta-analysis, effect sizes often need to be converted into a common metric. For this purpose, conversion formulas have been constructed; some are exact, while others are approximations whose accuracy has not yet been systematically tested. We performed Monte Carlo simulations where samples with pre-specified population correlations between the…
Descriptors: Meta Analysis, Effect Size, Mathematical Formulas, Monte Carlo Methods
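The exact conversions this line of work builds on are simple to state. As an illustration (these are the standard equal-group-size formulas, not necessarily the specific conversions tested in the paper), a correlation r and Cohen's d interconvert as follows:

```python
import math

def r_to_d(r):
    """Convert a (point-biserial) correlation r to Cohen's d;
    exact under the assumption of equal group sizes."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    """Inverse conversion, under the same equal-group-size assumption."""
    return d / math.sqrt(d ** 2 + 4)

print(r_to_d(0.5))           # ≈ 1.1547
print(d_to_r(r_to_d(0.5)))   # round-trips back to 0.5
```

Approximate conversions (e.g., via unequal group sizes or dichotomized variables) introduce the kind of error a simulation like this one is designed to quantify.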
Peer reviewed
PDF on ERIC Download full text
Baris Pekmezci, Fulya; Sengul Avsar, Asiye – International Journal of Assessment Tools in Education, 2021
There is a great deal of research about item response theory (IRT) conducted by simulations. Item and ability parameters are estimated with varying numbers of replications under different test conditions. However, it is not clear what the appropriate number of replications should be. The aim of the current study is to develop guidelines for the…
Descriptors: Item Response Theory, Computation, Accuracy, Monte Carlo Methods
Peer reviewed
Direct link
Green, Samuel; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2018
Parallel analysis (PA) assesses the number of factors in exploratory factor analysis. Traditionally, PA compares the eigenvalues for a sample correlation matrix with the eigenvalues for correlation matrices for 100 comparison datasets generated such that the variables are independent, but this approach uses the wrong reference distribution. The…
Descriptors: Factor Analysis, Accuracy, Statistical Distributions, Comparative Analysis
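The traditional procedure the abstract refers to can be sketched directly (a minimal version of Horn's parallel analysis with independent-normal comparison data; the 95th-percentile variant and parameter names here are illustrative):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, quantile=0.95, seed=0):
    """Traditional parallel analysis sketch: retain components whose sample
    eigenvalue exceeds the chosen quantile of eigenvalues from random
    independent-normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the sample correlation matrix, largest first.
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    thresholds = np.quantile(sim_eigs, quantile, axis=0)
    return int(np.sum(sample_eigs > thresholds))
```

The abstract's criticism is aimed at exactly this comparison: independent-variables datasets give the wrong reference distribution for eigenvalues beyond the first.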
Peer reviewed
Direct link
Finch, W. Holmes; Shim, Sungok Serena – Educational and Psychological Measurement, 2018
Collection and analysis of longitudinal data is an important tool in understanding growth and development over time in a whole range of human endeavors. Ideally, researchers working in the longitudinal framework are able to collect data at more than two points in time, as this will provide them with the potential for a deeper understanding of the…
Descriptors: Comparative Analysis, Computation, Time, Change
Peer reviewed
Direct link
Guzmán, Sebastián G. – Assessment & Evaluation in Higher Education, 2018
Group projects are widely used in higher education, but they can be problematic if all group members are given the same grade for a project to which they might not have contributed equally. Most scholars recommend addressing these problems by awarding individual grades, computing some kind of individual weighting factor (IWF) from peer and…
Descriptors: Monte Carlo Methods, Grades (Scholastic), Grading, Group Activities
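A minimal sketch of one common IWF scheme (the scaling rule and the cap at 100 are hypothetical; schemes in the literature differ): each member's grade is the group grade scaled by their mean peer rating relative to the group's overall mean rating.

```python
def iwf_grades(group_grade, peer_ratings):
    """Hypothetical individual-weighting-factor (IWF) scheme.

    peer_ratings: one list of ratings received per group member.
    Each member's grade is the group grade scaled by their mean rating
    relative to the group mean, capped at 100."""
    means = [sum(r) / len(r) for r in peer_ratings]
    overall = sum(means) / len(means)
    return [min(100.0, group_grade * m / overall) for m in means]

# A member rated below the group average receives less than the group grade.
print(iwf_grades(80, [[5, 5, 4], [3, 3, 2], [4, 4, 4]]))
```

The Monte Carlo question such a paper raises is how reliably schemes like this recover true individual contributions from noisy peer ratings.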
Yildiz, Mustafa – ProQuest LLC, 2017
Student misconceptions have been studied for decades from a curricular/instructional perspective and from the assessment/test level perspective. Numerous misconception assessment tools have been developed in order to measure students' misconceptions relative to the correct content. Often, these tools are used to make a variety of educational…
Descriptors: Misconceptions, Students, Item Response Theory, Models
Potgieter, Cornelis; Kamata, Akihito; Kara, Yusuf – Grantee Submission, 2017
This study proposes a two-part model that includes components for reading accuracy and reading speed. The speed component is a log-normal factor model, for which speed data are measured by reading time for each sentence being assessed. The accuracy component is a binomial-count factor model, where the accuracy data are measured by the number of…
Descriptors: Reading Rate, Oral Reading, Accuracy, Models
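A minimal data-generating sketch of such a two-part structure, with illustrative parameter values (not those estimated in the study): sentence-level log reading times follow a normal model (hence log-normal times), and words-correct counts follow a binomial model, both driven here by a single latent trait for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_sentences = 200, 10
words = rng.integers(8, 15, size=n_sentences)   # words per sentence
theta = rng.standard_normal(n_persons)          # latent trait (shared across parts for brevity)

# Speed part: log reading time = sentence intercept - person effect + noise,
# so observed times are log-normal.
log_time = 1.5 - 0.5 * theta[:, None] + 0.3 * rng.standard_normal((n_persons, n_sentences))
times = np.exp(log_time)

# Accuracy part: words read correctly out of `words` is binomial with a
# logistic success probability.
p_correct = 1.0 / (1.0 + np.exp(-(1.0 + 0.8 * theta[:, None])))
correct = rng.binomial(words, p_correct)
```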
Peer reviewed
Direct link
McCoach, D. Betsy; Rifenbark, Graham G.; Newton, Sarah D.; Li, Xiaoran; Kooken, Janice; Yomtov, Dani; Gambino, Anthony J.; Bellara, Aarti – Journal of Educational and Behavioral Statistics, 2018
This study compared five common multilevel software packages via Monte Carlo simulation: HLM 7, Mplus 7.4, R (lme4 V1.1-12), Stata 14.1, and SAS 9.4 to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. Simulated data…
Descriptors: Hierarchical Linear Modeling, Computer Software, Comparative Analysis, Monte Carlo Methods
Peer reviewed
Direct link
Lee, Wooyeol; Cho, Sun-Joo – Applied Measurement in Education, 2017
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
Descriptors: Item Response Theory, Test Items, Bias, Computation
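The recovery criteria named in the abstract (bias, RMSE, and confidence-interval coverage) are straightforward to compute across Monte Carlo replications; a generic sketch (the function name and the 95% z-cutoff are illustrative):

```python
import numpy as np

def recovery_stats(estimates, ses, true_value, z=1.96):
    """Parameter-recovery summaries across replications: bias,
    root mean-square error, and 95% confidence-interval coverage."""
    estimates = np.asarray(estimates, dtype=float)
    ses = np.asarray(ses, dtype=float)
    bias = float(np.mean(estimates - true_value))
    rmse = float(np.sqrt(np.mean((estimates - true_value) ** 2)))
    covered = (estimates - z * ses <= true_value) & (true_value <= estimates + z * ses)
    return bias, rmse, float(np.mean(covered))
```

With correctly specified standard errors, coverage should hover near 0.95; IPD that biases item parameters will drag coverage below that target.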
Peer reviewed
PDF on ERIC Download full text
Martin-Fernandez, Manuel; Revuelta, Javier – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two established algorithms in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Descriptors: Bayesian Statistics, Item Response Theory, Models, Comparative Analysis
Peer reviewed
PDF on ERIC Download full text
Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane – Practical Assessment, Research & Evaluation, 2016
A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
Descriptors: Comparative Analysis, Correlation, Statistical Bias, Maximum Likelihood Statistics
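Thorndike's Case II correction (for direct range restriction on the predictor) is the classic formula in this setting; a sketch, where `sd_ratio` is the unrestricted-to-restricted standard-deviation ratio of the predictor:

```python
import math

def thorndike_case2(r_restricted, sd_ratio):
    """Thorndike's Case II correction for direct range restriction on the
    predictor. sd_ratio = unrestricted SD / restricted SD of the predictor;
    a ratio of 1 (no restriction) leaves the correlation unchanged."""
    u = sd_ratio
    return r_restricted * u / math.sqrt(1 + r_restricted ** 2 * (u ** 2 - 1))

print(thorndike_case2(0.3, 2.0))   # ≈ 0.532
```

The alternative the abstract alludes to is to treat restricted criterion scores as missing data and estimate the correlation by maximum likelihood rather than applying a closed-form correction.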
Peer reviewed
Direct link
Finch, W. Holmes; French, Brian F. – Journal of Experimental Education, 2014
Latent class analysis is an analytic technique often used in educational and psychological research to identify meaningful groups of individuals within a larger heterogeneous population based on a set of variables. This technique is flexible, encompassing not only a static set of variables but also longitudinal data in the form of growth mixture…
Descriptors: Nonparametric Statistics, Multivariate Analysis, Monte Carlo Methods, Computation
Peer reviewed
Direct link
Kenny, David A.; Kaniskan, Burcu; McCoach, D. Betsy – Sociological Methods & Research, 2015
Given that the root mean square error of approximation (RMSEA) is currently one of the most popular measures of goodness-of-model fit within structural equation modeling (SEM), it is important to know how well the RMSEA performs in models with small degrees of freedom ("df"). Unfortunately, most previous work on the RMSEA and its…
Descriptors: Error of Measurement, Models, Goodness of Fit, Structural Equation Models
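For reference, the RMSEA point estimate is computed from the model chi-square, its degrees of freedom, and the sample size; a sketch using the common N − 1 denominator (some software uses N instead):

```python
import math

def rmsea(chisq, df, n):
    """Point estimate of the root mean square error of approximation.
    Negative values of (chisq - df) are truncated to zero, as is standard."""
    return math.sqrt(max((chisq - df) / (df * (n - 1)), 0.0))

print(rmsea(10.0, 4, 200))   # ≈ 0.087
```

With small df, the numerator (chisq − df) is dominated by sampling noise, which is one reason the small-df behavior of the RMSEA merits the scrutiny this paper gives it.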
Peer reviewed
PDF on ERIC Download full text
Wu, Mike; Davis, Richard L.; Domingue, Benjamin W.; Piech, Chris; Goodman, Noah – International Educational Data Mining Society, 2020
Item Response Theory (IRT) is a ubiquitous model for understanding humans based on their responses to questions, used in fields as diverse as education, medicine and psychology. Large modern datasets offer opportunities to capture more nuances in human behavior, potentially improving test scoring and better informing public policy. Yet larger…
Descriptors: Item Response Theory, Accuracy, Data Analysis, Public Policy
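As background for the IRT records above: the two-parameter logistic model that much of this work builds on gives the probability of a correct response as a logistic function of ability. A minimal sketch:

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a person
    with ability theta answers correctly an item with discrimination a
    and difficulty b. At theta == b the probability is exactly 0.5."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(p_correct_2pl(0.0, 1.0, 0.0))   # 0.5
```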