Publication Date
In 2025 | 0 |
Since 2024 | 3 |
Since 2021 (last 5 years) | 13 |
Since 2016 (last 10 years) | 33 |
Since 2006 (last 20 years) | 54 |
Descriptor
Error of Measurement | 75 |
Monte Carlo Methods | 75 |
Sample Size | 75 |
Statistical Analysis | 21 |
Computation | 19 |
Comparative Analysis | 18 |
Correlation | 18 |
Statistical Bias | 18 |
Structural Equation Models | 17 |
Effect Size | 15 |
Item Response Theory | 15 |
Author
Hancock, Gregory R. | 3 |
Yuan, Ke-Hai | 3 |
Cornwell, John M. | 2 |
Fan, Weihua | 2 |
Fan, Xitao | 2 |
Finch, W. Holmes | 2 |
Huang, Francis L. | 2 |
McCoach, D. Betsy | 2 |
Stark, Stephen | 2 |
Ackerman, Terry A. | 1 |
Ahn, Soyeon | 1 |
Publication Type
Journal Articles | 58 |
Reports - Research | 52 |
Reports - Evaluative | 17 |
Speeches/Meeting Papers | 11 |
Dissertations/Theses -… | 4 |
Information Analyses | 1 |
Numerical/Quantitative Data | 1 |
Reports - Descriptive | 1 |
Education Level
Elementary Education | 1 |
Audience
Researchers | 1 |
Assessments and Surveys
Early Childhood Longitudinal… | 1 |
Phillip K. Wood – Structural Equation Modeling: A Multidisciplinary Journal, 2024
The logistic and confined exponential curves are frequently used in studies of growth and learning. These models, which are nonlinear in their parameters, can be estimated using structural equation modeling software. This paper proposes a single combined model: a weighted combination of the two. Mplus, Proc Calis, and lavaan code for the model…
Descriptors: Structural Equation Models, Computation, Computer Software, Weighted Scores
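To make the blended-curve idea concrete outside of SEM software, here is a minimal sketch assuming ordinary nonlinear least squares on a single observed series; the shared asymptote, parameter values, starting values, and the scipy-based fit are illustrative assumptions, not the article's Mplus, PROC CALIS, or lavaan specification.

```python
# Hypothetical illustration: a weight w in [0, 1] blends a logistic and a
# confined exponential growth curve that share one asymptote.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, asym, rate, mid):
    return asym / (1.0 + np.exp(-rate * (t - mid)))

def confined_exponential(t, asym, rate):
    return asym * (1.0 - np.exp(-rate * t))

def combined(t, w, asym, l_rate, l_mid, c_rate):
    return w * logistic(t, asym, l_rate, l_mid) + (1 - w) * confined_exponential(t, asym, c_rate)

# simulate one noisy growth series and fit the blended curve to it
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
y = combined(t, 0.4, 10, 1.2, 4, 0.3) + rng.normal(0, 0.3, t.size)
params, _ = curve_fit(combined, t, y, p0=[0.5, 8, 1, 5, 0.5],
                      bounds=([0, 0, 0, 0, 0], [1, 50, 10, 10, 10]), maxfev=10000)
print(dict(zip(["w", "asym", "l_rate", "l_mid", "c_rate"], params.round(2))))
```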
Hyunjung Lee; Heining Cham – Educational and Psychological Measurement, 2024
Determining the number of factors in exploratory factor analysis (EFA) is crucial because it affects the rest of the analysis and the conclusions of the study. Researchers have developed various methods for deciding the number of factors to retain in EFA, but this remains one of the most difficult decisions in EFA. The purpose of this study is…
Descriptors: Factor Structure, Factor Analysis, Monte Carlo Methods, Goodness of Fit
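As one concrete instance of the retention rules such studies compare, here is a rough parallel-analysis sketch; the 95th-percentile cutoff, the number of random data sets, and the toy two-factor data are assumptions for illustration, not the study's design.

```python
# Parallel analysis: keep factors whose observed eigenvalues exceed the
# corresponding eigenvalues from random normal data of the same dimensions.
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((n_sims, p))
    for s in range(n_sims):
        r = rng.standard_normal((n, p))
        rand_eig[s] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, 95, axis=0)   # 95th percentile of random eigenvalues
    return int(np.sum(obs_eig > threshold))

# toy data with two correlated blocks of indicators
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 2))
loadings = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
x = f @ loadings.T + rng.standard_normal((300, 6)) * .5
print(parallel_analysis(x))   # expected to suggest two factors here
```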
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
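The "information" behind such weighting can be illustrated with the 2PL item information function; the item parameters below are arbitrary, and this is only the ingredient that information-weighted linking builds on, not the authors' full procedure.

```python
# 2PL item information: I(theta) = a^2 * P(theta) * (1 - P(theta)),
# largest near theta = b, where the item parameters are best estimated.
import numpy as np

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-4, 4, 9)
print(item_information(theta, a=1.5, b=0.0).round(3))
```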
Rank-Normalization, Folding, and Localization: An Improved [R-Hat] for Assessing Convergence of MCMC
Aki Vehtari; Andrew Gelman; Daniel Simpson; Bob Carpenter; Paul-Christian Bürkner – Grantee Submission, 2021
Markov chain Monte Carlo is a key computational tool in Bayesian statistics, but it can be challenging to monitor the convergence of an iterative stochastic algorithm. In this paper we show that the convergence diagnostic [R-hat] of Gelman and Rubin (1992) has serious flaws. Traditional [R-hat] will fail to correctly diagnose convergence failures…
Descriptors: Markov Processes, Monte Carlo Methods, Bayesian Statistics, Efficiency
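A condensed sketch of the rank-normalized split-[R-hat] idea follows; in practice one would rely on ArviZ or the posterior package, which implement the full diagnostic, including the folding step omitted here.

```python
# Split each chain in half, rank-normalize all draws, then compute the
# classical split-R-hat on the normalized values (in the spirit of Vehtari et al., 2021).
import numpy as np
from scipy.stats import norm, rankdata

def split_rhat(draws):                       # draws: (n_chains, n_iterations)
    half = draws.shape[1] // 2
    x = np.vstack([draws[:, :half], draws[:, half:2 * half]])   # split chains
    n = x.shape[1]
    within = x.var(axis=1, ddof=1).mean()
    between = n * x.mean(axis=1).var(ddof=1)
    var_hat = (n - 1) / n * within + between / n
    return np.sqrt(var_hat / within)

def rank_normalized_rhat(draws):
    ranks = rankdata(draws).reshape(draws.shape)                 # ranks over all draws
    z = norm.ppf((ranks - 3 / 8) / (draws.size + 1 / 4))         # normal scores
    return split_rhat(z)

rng = np.random.default_rng(0)
good = rng.standard_normal((4, 1000))
stuck = good.copy(); stuck[0] += 3.0       # one chain exploring a different region
print(rank_normalized_rhat(good), rank_normalized_rhat(stuck))
```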
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
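To make the characteristic-curve linking setup concrete, here is a hedged sketch of a weighted Haebara-style criterion minimized over the linking constants A and B; the uniform weights and 2PL items are placeholders, whereas the IWCC and TWCC methods described above derive the weights from parameter-estimation information.

```python
# Find A, B so that transformed new-form item curves match old-form curves,
# with a per-item weight in the loss (here a placeholder of ones).
import numpy as np
from scipy.optimize import minimize

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def weighted_cc_loss(AB, a_new, b_new, a_old, b_old, theta, weights):
    A, B = AB
    p_trans = p_2pl(theta[:, None], a_new / A, A * b_new + B)   # new form on old scale
    p_ref = p_2pl(theta[:, None], a_old, b_old)
    return np.sum(weights * (p_trans - p_ref) ** 2)

rng = np.random.default_rng(2)
a_old, b_old = rng.uniform(0.8, 2.0, 20), rng.normal(0, 1, 20)
A_true, B_true = 1.2, 0.5
a_new, b_new = a_old * A_true, (b_old - B_true) / A_true   # consistent with theta_old = A*theta_new + B
theta, w = np.linspace(-4, 4, 41), np.ones(20)             # replace w with information-based weights
res = minimize(weighted_cc_loss, x0=[1.0, 0.0], args=(a_new, b_new, a_old, b_old, theta, w))
print(res.x.round(3))   # should recover roughly (1.2, 0.5)
```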
Simsek, Ahmet Salih – International Journal of Assessment Tools in Education, 2023
Likert-type items are the most popular response format for collecting data in social, educational, and psychological studies through scales or questionnaires. However, there is no consensus on whether parametric or non-parametric tests should be preferred when analyzing Likert-type data. This study examined the statistical power of parametric and…
Descriptors: Error of Measurement, Likert Scales, Nonparametric Statistics, Statistical Analysis
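A rough power-simulation sketch in the same spirit: an independent-samples t test versus the Mann-Whitney U test on 5-point Likert responses with a small group difference; the response probabilities, sample size, and replication count are invented for illustration and do not reproduce the study's conditions.

```python
# Estimate rejection rates of both tests under a small distributional shift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reps, n, alpha = 2000, 50, 0.05
p_a = np.array([.10, .20, .40, .20, .10])    # group A category probabilities
p_b = np.array([.05, .15, .35, .25, .20])    # group B shifted slightly upward
hits_t = hits_u = 0
for _ in range(reps):
    a = rng.choice(np.arange(1, 6), size=n, p=p_a)
    b = rng.choice(np.arange(1, 6), size=n, p=p_b)
    hits_t += stats.ttest_ind(a, b).pvalue < alpha
    hits_u += stats.mannwhitneyu(a, b).pvalue < alpha
print("t-test power:", hits_t / reps, " Mann-Whitney power:", hits_u / reps)
```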
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
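To illustrate the kind of data-generating setup described, here is a hedged sketch of ordinal bifactor data in which the focal group differs both in its true general-factor mean and in shifted thresholds (DIF) on two items; the loadings, thresholds, and shift sizes are arbitrary stand-ins rather than the study's simulation conditions.

```python
# Six 4-category items: a general factor, two specific factors (items 1-3 and 4-6),
# and threshold DIF on the first two items for the focal group.
import numpy as np

def gen_group(n, gen_mean, dif_shift, rng):
    general = rng.normal(gen_mean, 1, n)
    spec1, spec2 = rng.standard_normal(n), rng.standard_normal(n)
    latent = 0.7 * general[:, None] + rng.standard_normal((n, 6)) * 0.5
    latent[:, :3] += 0.4 * spec1[:, None]
    latent[:, 3:] += 0.4 * spec2[:, None]
    thresholds = np.array([-1.0, 0.0, 1.0])
    cuts = np.stack([thresholds + dif_shift if j < 2 else thresholds for j in range(6)])
    return np.array([np.searchsorted(cuts[j], latent[:, j]) for j in range(6)]).T

rng = np.random.default_rng(3)
ref = gen_group(500, 0.0, 0.0, rng)   # reference group: no DIF
foc = gen_group(500, 0.3, 0.4, rng)   # focal group: true mean difference plus DIF
print(ref.mean(axis=0).round(2), foc.mean(axis=0).round(2))
```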
Koçak, Duygu – Pedagogical Research, 2020
The number of iterations in the Monte Carlo simulation methods commonly used in educational research affects Item Response Theory test and item parameters. Related studies show that the number of iterations is left to the researcher's discretion, and no specific number is suggested in the related literature…
Descriptors: Monte Carlo Methods, Item Response Theory, Educational Research, Test Items
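The role of the replication count can be shown without a full IRT run: the Monte Carlo standard error of a simulated summary shrinks with the square root of the number of replications. The stand-in per-replication estimates below are an assumption purely for illustration.

```python
# Monte Carlo standard error of an estimated bias as the replication count grows.
import numpy as np

rng = np.random.default_rng(0)
true_b = 0.5                                          # a "true" item difficulty
for reps in (25, 100, 400, 1600):
    estimates = true_b + rng.normal(0, 0.15, reps)    # stand-in per-replication estimates
    bias = estimates.mean() - true_b
    mcse = estimates.std(ddof=1) / np.sqrt(reps)
    print(f"replications={reps:5d}  bias={bias: .4f}  MC standard error={mcse:.4f}")
```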
Jobst, Lisa J.; Auerswald, Max; Moshagen, Morten – Educational and Psychological Measurement, 2022
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor analytic structure of the data, which is defined as the sum of latent variables and errors, so it is unclear whether previous results hold if the source of…
Descriptors: Goodness of Fit, Structural Equation Models, Error of Measurement, Factor Analysis
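A hedged sketch of the distinction the study draws: similar indicator skewness can arise from a skewed latent factor with normal errors or from a normal factor with skewed errors; the chi-square shock and the loading values are arbitrary choices, not the article's design.

```python
# Generate one-factor data twice: non-normality entering through the factor
# versus through the unique errors.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)
n, loadings = 5000, np.array([0.8, 0.7, 0.6, 0.5])

def skewed(size):
    return (rng.chisquare(2, size) - 2) / 2.0          # centered, unit-variance chi-square(2)

factor_skew = skewed(n)[:, None] * loadings + rng.standard_normal((n, 4)) * 0.5
error_skew = rng.standard_normal(n)[:, None] * loadings + skewed((n, 4)) * 0.5
print("skewness, non-normal factor:", skew(factor_skew, axis=0).round(2))
print("skewness, non-normal errors:", skew(error_skew, axis=0).round(2))
```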
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Journal of Experimental Education, 2023
The piecewise latent growth models (PWLGMs) can be used to study changes in the growth trajectory of an outcome due to an event or condition, such as exposure to an intervention. When there are multiple outcomes of interest, a researcher may choose to fit a series of PWLGMs or a single parallel-process PWLGM. A comparison of these models is…
Descriptors: Growth Models, Statistical Analysis, Intervention, Comparative Analysis
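For readers unfamiliar with piecewise coding, a small sketch of the growth-factor design matrix implied by two linear pieces joined at a knot follows; six waves with the knot at wave 3 (say, the start of an intervention) are assumed purely for illustration.

```python
# Intercept, pre-knot slope, and post-knot slope loadings for six waves.
import numpy as np

waves, knot = np.arange(6), 3
intercept = np.ones_like(waves)
slope1 = np.minimum(waves, knot)         # grows until the knot, then flat
slope2 = np.maximum(waves - knot, 0)     # zero until the knot, then grows
print(np.column_stack([intercept, slope1, slope2]))
```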
Finch, W. Holmes – Educational and Psychological Measurement, 2020
Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…
Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
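As another example of the kind of retention tool such comparisons cover, here is a sketch of Velicer's minimum average partial (MAP) test; the toy data and the simple stopping range are illustrative assumptions, and this is not necessarily among the specific tools the article evaluates.

```python
# MAP: choose the number of components that minimizes the average squared
# partial correlation after those components are partialled out.
import numpy as np

def velicer_map(data):
    r = np.corrcoef(data, rowvar=False)
    p = r.shape[0]
    vals, vecs = np.linalg.eigh(r)
    vals, vecs = vals[::-1], vecs[:, ::-1]          # descending eigenvalues
    off = ~np.eye(p, dtype=bool)
    avg_sq = [np.mean(r[off] ** 2)]                 # m = 0: raw correlations
    for m in range(1, p - 1):
        a = vecs[:, :m] * np.sqrt(vals[:m])         # first m component loadings
        c = r - a @ a.T                             # partialled covariance matrix
        d = np.sqrt(np.diag(c))
        avg_sq.append(np.mean((c / np.outer(d, d))[off] ** 2))
    return int(np.argmin(avg_sq))

rng = np.random.default_rng(2)
f = rng.standard_normal((300, 2))
loadings = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]])
x = f @ loadings.T + rng.standard_normal((300, 6)) * .5
print(velicer_map(x))   # expected to point to two factors here
```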
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
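A hedged sketch of the factorial Monte Carlo logic: a grid of conditions built with itertools, and within each cell a comparison of two simple estimators (an unweighted versus an inverse-variance-weighted mean of study effects). The grid levels and the estimators are simplified stand-ins, not the 1620-experiment design or the estimators the authors evaluate.

```python
# RMSE of two naive meta-analytic estimators across a small factorial grid.
import itertools
import numpy as np

rng = np.random.default_rng(5)
grid = itertools.product([10, 40],      # number of studies
                         [0.2, 0.5],    # true mean effect
                         [0.0, 0.2])    # effect-size heterogeneity (tau)
for k, mu, tau in grid:
    err = []
    for _ in range(500):
        se = rng.uniform(0.05, 0.3, k)                    # study standard errors
        effects = rng.normal(mu, tau, k) + rng.normal(0, se)
        w = 1 / se ** 2
        err.append([effects.mean() - mu, np.sum(w * effects) / w.sum() - mu])
    rmse = np.sqrt((np.array(err) ** 2).mean(axis=0))
    print(f"k={k:2d} mu={mu} tau={tau}  RMSE unweighted={rmse[0]:.3f}  weighted={rmse[1]:.3f}")
```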
Fan Pan – ProQuest LLC, 2021
This dissertation informed researchers about the performance of different level-specific and target-specific model fit indices in the Multilevel Latent Growth Model (MLGM) under unbalanced designs and different trajectories. Because the use of MLGMs is relatively new, this study helped further the field by informing researchers interested in using…
Descriptors: Goodness of Fit, Item Response Theory, Growth Models, Monte Carlo Methods
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Paulsen, Justin; Valdivia, Dubravka Svetina – Journal of Experimental Education, 2022
Cognitive diagnostic models (CDMs) are a family of psychometric models designed to provide categorical classifications for multiple latent attributes. CDMs provide more granular evidence than other psychometric models and have potential for guiding teaching and learning decisions in the classroom. However, CDM analyses have primarily been conducted using…
Descriptors: Psychometrics, Classification, Teaching Methods, Learning Processes
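To make the classification idea concrete, here is a hedged sketch using the DINA model, one common CDM: the probability of a correct response depends on whether an examinee holds every attribute an item requires, and the attribute profile is chosen by posterior probability over all possible profiles. The Q-matrix, slip and guess values, response pattern, and flat prior are illustrative assumptions.

```python
# DINA: P(correct) = 1 - slip if the examinee has all required attributes, else guess.
import itertools
import numpy as np

Q = np.array([[1, 0], [0, 1], [1, 1]])            # 3 items x 2 attributes
slip, guess = np.full(3, 0.1), np.full(3, 0.2)
profiles = np.array(list(itertools.product([0, 1], repeat=2)))

def p_correct(profile):
    has_all = np.all(profile >= Q, axis=1)        # every required attribute present?
    return np.where(has_all, 1 - slip, guess)

responses = np.array([1, 0, 1])                   # an observed response pattern
likelihood = np.array([np.prod(np.where(responses == 1, p, 1 - p))
                       for p in (p_correct(pr) for pr in profiles)])
posterior = likelihood / likelihood.sum()         # flat prior over profiles
print("classified profile:", profiles[np.argmax(posterior)], "posterior:", posterior.round(3))
```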