Publication Date
In 2025 | 0
Since 2024 | 2
Since 2021 (last 5 years) | 7
Since 2016 (last 10 years) | 17
Since 2006 (last 20 years) | 44
Descriptor
Correlation | 70
Monte Carlo Methods | 70
Sample Size | 70
Computation | 22
Comparative Analysis | 19
Error of Measurement | 18
Simulation | 17
Statistical Analysis | 14
Statistical Bias | 14
Effect Size | 13
Evaluation Methods | 13
Author
Finch, W. Holmes | 3
Porter, Kristin E. | 3
Cornwell, John M. | 2
Fan, Xitao | 2
Hittner, James B. | 2
May, Kim | 2
Murphy, Daniel L. | 2
Pituch, Keenan A. | 2
Wang, Lin | 2
Afshartous, David | 1
Ahn, Soyeon | 1
Publication Type
Journal Articles | 56
Reports - Research | 40
Reports - Evaluative | 21
Speeches/Meeting Papers | 7
Dissertations/Theses -… | 3
Guides - Non-Classroom | 3
Reports - Descriptive | 3
Education Level
Elementary Education | 1
Grade 4 | 1
Secondary Education | 1
Audience
Researchers | 5
Assessments and Surveys
National Assessment of… | 1
National Longitudinal Study… | 1
Program for International… | 1
Tong-Rong Yang; Li-Jen Weng – Structural Equation Modeling: A Multidisciplinary Journal, 2024
In Savalei's (2011) simulation that evaluated the performance of polychoric correlation estimates in small samples, two methods for treating zero-frequency cells, adding 0.5 (ADD) and doing nothing (NONE), were compared. Savalei tentatively suggested using ADD for binary data and NONE for data with three or more categories. Yet, Savalei's…
Descriptors: Correlation, Statistical Distributions, Monte Carlo Methods, Sample Size
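A minimal Python sketch of the two zero-frequency-cell treatments named in the abstract, assuming a simple contingency-table representation of the binary items; the crosstab and treat_zero_cells helpers are illustrative, the polychoric estimation step that would follow is omitted, and none of this is Savalei's or the authors' code.

```python
# Illustrative sketch of the ADD (add 0.5 to empty cells) vs. NONE (do nothing)
# treatments for zero-frequency cells. The corrected table would then be passed
# to a polychoric correlation estimator; that step is omitted here.
import numpy as np

def crosstab(x, y, k_x, k_y):
    """Contingency table of two ordinal variables coded 0..k-1."""
    table = np.zeros((k_x, k_y))
    for xi, yi in zip(x, y):
        table[xi, yi] += 1
    return table

def treat_zero_cells(table, method="ADD"):
    """ADD: add 0.5 to zero-frequency cells; NONE: leave the table unchanged."""
    table = table.copy()
    if method == "ADD":
        table[table == 0] += 0.5
    return table

# Hypothetical binary data producing an empty cell, as happens in small samples.
x = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y = np.array([0, 0, 1, 1, 1, 0, 1, 1])
print(treat_zero_cells(crosstab(x, y, 2, 2), "ADD"))
```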
Novak, Josip; Rebernjak, Blaž – Measurement: Interdisciplinary Research and Perspectives, 2023
A Monte Carlo simulation study was conducted to examine the performance of the α, λ_2, λ_4, λ_2, ω_T, GLB_MRFA, and GLB_Algebraic coefficients. Population reliability, distribution shape, sample size, test length, and number of response categories were varied…
Descriptors: Monte Carlo Methods, Evaluation Methods, Reliability, Simulation
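For orientation, a minimal sketch of two of the coefficients compared in this study, Cronbach's α and Guttman's λ_2, computed from an item covariance matrix; these are the standard formulas, not the authors' simulation code, and the simulated items at the end are hypothetical.

```python
# Standard formulas for Cronbach's alpha and Guttman's lambda-2,
# both derived from the inter-item covariance matrix.
import numpy as np

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    cov = np.cov(data, rowvar=False)
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

def guttman_lambda2(data):
    """lambda2 = lambda1 + sqrt(k/(k-1) * sum of squared off-diagonal covariances) / total variance."""
    cov = np.cov(data, rowvar=False)
    k = cov.shape[0]
    off_sq = (cov ** 2).sum() - (np.diag(cov) ** 2).sum()
    lambda1 = 1 - np.trace(cov) / cov.sum()
    return lambda1 + np.sqrt(k / (k - 1) * off_sq) / cov.sum()

rng = np.random.default_rng(0)
# Five hypothetical items sharing a common factor plus noise.
items = rng.normal(size=(200, 1)) + rng.normal(scale=0.8, size=(200, 5))
print(cronbach_alpha(items), guttman_lambda2(items))
```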
Mangino, Anthony A.; Bolin, Jocelyn H.; Finch, W. Holmes – Educational and Psychological Measurement, 2023
This study seeks to compare fixed and mixed effects models for the purposes of predictive classification in the presence of multilevel data. The first part of the study utilizes a Monte Carlo simulation to compare fixed and mixed effects logistic regression and random forests. An applied examination of the prediction of student retention in the…
Descriptors: Prediction, Classification, Monte Carlo Methods, Foreign Countries
Fatih Orcan – International Journal of Assessment Tools in Education, 2023
Cronbach's alpha and McDonald's omega are among the most commonly used coefficients for reliability estimation. Alpha uses inter-item correlations, while omega is based on a factor analysis solution. This study uses simulated ordinal data sets to test whether alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
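A short illustrative contrast of the two estimators described in the abstract, using hypothetical one-factor standardized loadings; this is a sketch of the standard formulas, not the study's simulation design.

```python
# Omega is a function of factor loadings; alpha is a function of the
# item covariances (here, the model-implied correlation matrix).
import numpy as np

loadings = np.array([0.7, 0.6, 0.8, 0.5])   # hypothetical standardized loadings
uniquenesses = 1 - loadings ** 2             # standardized unique variances

# McDonald's omega for a one-factor model: (sum of loadings)^2 / total variance.
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())

# Cronbach's alpha computed from the model-implied correlation matrix.
R = np.outer(loadings, loadings)
np.fill_diagonal(R, 1.0)
k = len(loadings)
alpha = k / (k - 1) * (1 - np.trace(R) / R.sum())

print(round(alpha, 3), round(omega, 3))
```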
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
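The first four indices listed are simple functions of the log-likelihood, the number of parameters, and the sample size; the sketch below shows those standard formulas applied to hypothetical mixture-IRT solutions and is not the authors' code.

```python
# Standard information criteria used for selecting the number of latent classes:
# the solution with the smallest value of a given index would be retained.
import numpy as np

def fit_indices(loglik, k, n):
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # corrected AIC
    bic = -2 * loglik + k * np.log(n)
    caic = -2 * loglik + k * (np.log(n) + 1)      # consistent AIC
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}

# Hypothetical log-likelihoods and parameter counts for 1-, 2-, and 3-class solutions.
for classes, (ll, k) in enumerate([(-5120.4, 40), (-5040.9, 81), (-5032.2, 122)], start=1):
    print(classes, fit_indices(ll, k, n=1000))
```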
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
Finch, W. Holmes – Educational and Psychological Measurement, 2020
Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…
Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Green, Samuel; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2018
Parallel analysis (PA) assesses the number of factors in exploratory factor analysis. Traditionally PA compares the eigenvalues for a sample correlation matrix with the eigenvalues for correlation matrices for 100 comparison datasets generated such that the variables are independent, but this approach uses the wrong reference distribution. The…
Descriptors: Factor Analysis, Accuracy, Statistical Distributions, Comparative Analysis
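A minimal sketch of the traditional parallel-analysis procedure the abstract describes (and critiques), assuming normally distributed comparison data; the example dataset is hypothetical, and the revised reference distribution proposed in the article is not shown.

```python
# Traditional parallel analysis: retain factors whose sample eigenvalues exceed
# the corresponding quantile of eigenvalues from independent-variable datasets.
import numpy as np

def parallel_analysis(data, n_datasets=100, quantile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    sample_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    ref = np.empty((n_datasets, p))
    for b in range(n_datasets):
        fake = rng.normal(size=(n, p))               # independent variables
        ref[b] = np.linalg.eigvalsh(np.corrcoef(fake, rowvar=False))[::-1]
    threshold = np.quantile(ref, quantile, axis=0)
    return int(np.sum(sample_eigs > threshold))

rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
data = np.hstack([factor + rng.normal(size=(300, 3)),   # 3 items loading on one factor
                  rng.normal(size=(300, 3))])           # 3 pure-noise items
print(parallel_analysis(data))                          # typically 1 for this setup
```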
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
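As general background to what an MTP does, a brief sketch of two widely used procedures, the Bonferroni correction and the Benjamini-Hochberg step-up procedure; these are standard examples rather than necessarily the specific procedures evaluated in the paper, and the p-values are hypothetical.

```python
# Two common multiple testing procedures applied to p-values from tests across
# several outcomes: Bonferroni controls the familywise error rate, while
# Benjamini-Hochberg controls the false discovery rate.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    pvals = np.asarray(pvals)
    return pvals < alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        reject[order[: passed.nonzero()[0].max() + 1]] = True
    return reject

# Hypothetical p-values for five outcomes.
p = [0.001, 0.012, 0.031, 0.048, 0.20]
print(bonferroni(p), benjamini_hochberg(p))
```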
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Huang, Francis L. – Educational and Psychological Measurement, 2018
Cluster randomized trials involving participants nested within intact treatment and control groups are commonly performed in various educational, psychological, and biomedical studies. However, recruiting and retaining intact groups present various practical, financial, and logistical challenges to evaluators and often, cluster randomized trials…
Descriptors: Multivariate Analysis, Sampling, Statistical Inference, Data Analysis
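A common planning-stage calculation in this literature, shown for illustration rather than taken from the article: the design effect by which clustering inflates sampling variance and shrinks the effective sample size.

```python
# Design effect for cluster randomized trials: DEFF = 1 + (m - 1) * ICC,
# where m is the average cluster size and ICC the intraclass correlation.
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

def effective_n(n_total, cluster_size, icc):
    return n_total / design_effect(cluster_size, icc)

# Hypothetical trial: 40 clusters of 25 students each, ICC = 0.10.
print(design_effect(25, 0.10))         # 3.4
print(effective_n(40 * 25, 25, 0.10))  # roughly 294 effective participants
```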
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
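A minimal sketch of the general Wald statistic underlying Lord's DIF test, assuming group-specific 2PL item-parameter estimates and the covariance matrix of their difference are already available from one of the estimation approaches; the numbers are hypothetical and this is not the authors' implementation.

```python
# General Wald test: the difference between reference- and focal-group item
# parameter estimates, weighted by the inverse covariance of that difference,
# is referred to a chi-square distribution with df equal to the number of parameters.
import numpy as np
from scipy.stats import chi2

def wald_dif(params_ref, params_focal, cov_diff):
    diff = np.asarray(params_ref) - np.asarray(params_focal)
    stat = float(diff @ np.linalg.inv(cov_diff) @ diff)
    return stat, chi2.sf(stat, df=len(diff))

# Hypothetical 2PL item parameters (discrimination a, difficulty b) per group.
stat, p = wald_dif([1.2, 0.3], [1.0, 0.6],
                   cov_diff=np.array([[0.04, 0.00], [0.00, 0.05]]))
print(round(stat, 2), round(p, 4))
```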
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Asún, Rodrigo A.; Rdz-Navarro, Karina; Alvarado, Jesús M. – Sociological Methods & Research, 2016
This study compares the performance of two approaches in analysing four-point Likert rating scales with a factorial model: the classical factor analysis (FA) and the item factor analysis (IFA). For FA, maximum likelihood and weighted least squares estimations using Pearson correlation matrices among items are compared. For IFA, diagonally weighted…
Descriptors: Likert Scales, Item Analysis, Factor Analysis, Comparative Analysis