Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 6
Since 2016 (last 10 years): 21
Since 2006 (last 20 years): 39
Descriptor
Comparative Analysis: 46
Error of Measurement: 46
Item Response Theory: 16
Models: 15
Monte Carlo Methods: 14
Correlation: 13
Sample Size: 13
Statistical Analysis: 13
Computation: 11
Simulation: 11
Factor Analysis: 10
Source
Educational and Psychological Measurement: 46
Author
Cai, Li: 4
Paek, Insu: 3
Finch, W. Holmes: 2
Koziol, Natalie A.: 2
Ahn, Soyeon: 1
Alamri, Abeer A.: 1
Alderman, Donald L.: 1
Algina, James: 1
Aydin, Burak: 1
Ayers, Elizabeth: 1
Bergeman, C. S.: 1
Publication Type
Journal Articles: 44
Reports - Research: 35
Reports - Evaluative: 9
Education Level
Higher Education: 2
Secondary Education: 2
Adult Education: 1
Elementary Education: 1
Grade 7: 1
Junior High Schools: 1
Middle Schools: 1
Postsecondary Education: 1
Location
Canada: 1
Chile: 1
Saudi Arabia: 1
South Korea: 1
Assessments and Surveys
Law School Admission Test: 1
Program for International Student Assessment: 1
SAT (College Admission Test): 1
Han, Yuting; Zhang, Jihong; Jiang, Zhehan; Shi, Dexin – Educational and Psychological Measurement, 2023
In the literature of modern psychometric modeling, mostly related to item response theory (IRT), the fit of a model is evaluated through known indices, such as χ², M2, and the root mean square error of approximation (RMSEA) for absolute assessments, as well as the Akaike information criterion (AIC), consistent AIC (CAIC), and Bayesian…
Descriptors: Goodness of Fit, Psychometrics, Error of Measurement, Item Response Theory
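The indices named in this abstract are standard quantities derived from a fitted model's χ² statistic and log-likelihood. A minimal sketch of the textbook formulas, with hypothetical values rather than anything from the article:

```python
import math

def fit_indices(chi_sq, df, n_obs, log_lik, n_params):
    """Common absolute (RMSEA) and information-based (AIC, CAIC, BIC) fit indices."""
    # RMSEA: per-degree-of-freedom population misfit, floored at zero
    rmsea = math.sqrt(max(chi_sq - df, 0.0) / (df * (n_obs - 1)))
    aic = -2.0 * log_lik + 2.0 * n_params
    bic = -2.0 * log_lik + n_params * math.log(n_obs)
    caic = -2.0 * log_lik + n_params * (math.log(n_obs) + 1.0)
    return {"RMSEA": rmsea, "AIC": aic, "CAIC": caic, "BIC": bic}

# Hypothetical fitted IRT model: chi-square 85.2 on 54 df, N = 500,
# log-likelihood -3120.4, 24 free parameters
print(fit_indices(85.2, 54, 500, -3120.4, 24))
```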
Fu, Yuanshu; Wen, Zhonglin; Wang, Yang – Educational and Psychological Measurement, 2022
Composite reliability, or coefficient omega, can be estimated using structural equation modeling. Composite reliability is usually estimated under the basic independent clusters model of confirmatory factor analysis (ICM-CFA). However, due to the existence of cross-loadings, the model fit of the exploratory structural equation model (ESEM) is…
Descriptors: Comparative Analysis, Structural Equation Models, Factor Analysis, Reliability
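Coefficient omega itself is a simple function of the factor loadings and residual variances once a model has been fit; a minimal single-factor sketch with hypothetical values (not the ICM-CFA versus ESEM comparison from the article):

```python
import numpy as np

def coefficient_omega(loadings, residual_vars):
    """Composite reliability for a unidimensional factor model:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances)."""
    loadings = np.asarray(loadings, dtype=float)
    residual_vars = np.asarray(residual_vars, dtype=float)
    common = loadings.sum() ** 2
    return common / (common + residual_vars.sum())

# Hypothetical standardized loadings and residual variances for a 4-item scale
print(coefficient_omega([0.70, 0.80, 0.65, 0.75], [0.51, 0.36, 0.58, 0.44]))
```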
Lee, Bitna; Sohn, Wonsook – Educational and Psychological Measurement, 2022
A Monte Carlo study was conducted to compare the performance of a level-specific (LS) fit evaluation with that of a simultaneous (SI) fit evaluation in multilevel confirmatory factor analysis (MCFA) models. We extended previous studies by examining their performance under MCFA models with different factor structures across levels. In addition,…
Descriptors: Goodness of Fit, Factor Structure, Monte Carlo Methods, Factor Analysis
Wang, Yan; Kim, Eunsook; Ferron, John M.; Dedrick, Robert F.; Tan, Tony X.; Stark, Stephen – Educational and Psychological Measurement, 2021
Factor mixture modeling (FMM) has been increasingly used to investigate unobserved population heterogeneity. This study examined the issue of covariate effects with FMM in the context of measurement invariance testing. Specifically, the impact of excluding and misspecifying covariate effects on measurement invariance testing and class enumeration…
Descriptors: Role, Error of Measurement, Monte Carlo Methods, Models
Murrah, William M. – Educational and Psychological Measurement, 2020
Multiple regression is often used to compare the importance of two or more predictors. When the predictors being compared are measured with error, the estimated coefficients can be biased and Type I error rates can be inflated. This study explores the impact of measurement error on comparing predictors when one is measured with error, followed by…
Descriptors: Error of Measurement, Statistical Bias, Multiple Regression Analysis, Predictor Variables
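The bias the abstract refers to is the familiar attenuation of a regression slope when a predictor contains measurement error; a small simulation makes it visible (an illustrative sketch, not the author's study design):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_x = rng.normal(size=n)                 # error-free predictor
y = 0.5 * true_x + rng.normal(size=n)       # true slope = 0.5

reliability = 0.7                           # share of observed variance that is true score
error_sd = np.sqrt((1 - reliability) / reliability)
noisy_x = true_x + rng.normal(scale=error_sd, size=n)

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print("slope, error-free predictor:", round(slope(true_x, y), 3))   # about 0.50
print("slope, error-prone predictor:", round(slope(noisy_x, y), 3)) # about 0.50 * 0.7 = 0.35
```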
De Raadt, Alexandra; Warrens, Matthijs J.; Bosker, Roel J.; Kiers, Henk A. L. – Educational and Psychological Measurement, 2019
Cohen's kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen's kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data…
Descriptors: Interrater Reliability, Data, Statistical Analysis, Statistical Bias
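For reference, Cohen's kappa on complete data is computed from observed versus chance-expected agreement in the two-rater contingency table; the article's missing-data variants are not reproduced here:

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square rater-by-rater contingency table of counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n                              # agreement on the diagonal
    p_expected = (table.sum(axis=1) @ table.sum(axis=0)) / n**2   # agreement expected by chance
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical counts: rows = rater A's categories, columns = rater B's categories
print(cohens_kappa([[20, 5, 0],
                    [4, 15, 6],
                    [1, 7, 12]]))
```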
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
Finch, W. Holmes – Educational and Psychological Measurement, 2020
Exploratory factor analysis (EFA) is widely used by researchers in the social sciences to characterize the latent structure underlying a set of observed indicator variables. One of the primary issues that must be resolved when conducting an EFA is determination of the number of factors to retain. There exist a large number of statistical tools…
Descriptors: Factor Analysis, Goodness of Fit, Social Sciences, Comparative Analysis
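One widely used tool for the factor-retention decision is Horn's parallel analysis, which keeps factors whose eigenvalues exceed those obtained from random data; a minimal sketch of that one procedure (not necessarily among the methods compared in the article):

```python
import numpy as np

def parallel_analysis(data, n_sims=200, seed=0):
    """Suggest a number of factors by comparing observed correlation-matrix eigenvalues
    with the 95th percentile of eigenvalues from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.normal(size=(n, p))
        random_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(random_eigs, 95, axis=0)
    return int(np.sum(observed > threshold))

# Hypothetical data: 300 respondents, 10 indicators, two latent factors with simple structure
rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 2))
loadings = np.zeros((2, 10))
loadings[0, :5] = 0.8   # items 1-5 load on factor 1
loadings[1, 5:] = 0.8   # items 6-10 load on factor 2
data = factors @ loadings + rng.normal(scale=0.6, size=(300, 10))
print(parallel_analysis(data))   # typically suggests 2 factors for data like these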
Yesiltas, Gonca; Paek, Insu – Educational and Psychological Measurement, 2020
A log-linear model (LLM) is a well-known statistical method to examine the relationship among categorical variables. This study investigated the performance of LLM in detecting differential item functioning (DIF) for polytomously scored items via simulations where various sample sizes, ability mean differences (impact), and DIF types were…
Descriptors: Simulation, Sample Size, Item Analysis, Scores
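A log-linear DIF analysis works with counts cross-classified by group, matched score level, and item response. As a rough, simplified illustration of the idea, the sketch below sums likelihood-ratio G² statistics for group-by-response independence within score strata; the article's specific LLM parameterizations are not reproduced:

```python
import numpy as np
from scipy.stats import chi2

def g_squared(table):
    """Likelihood-ratio G^2 statistic for independence in a two-way table of counts."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    mask = table > 0
    return 2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask]))

# Hypothetical counts for one polytomous item (scores 0/1/2),
# rows = reference vs. focal group, one table per matched ability stratum
strata = [np.array([[30, 25, 10],
                    [28, 26, 11]]),
          np.array([[12, 20, 33],
                    [20, 22, 21]])]

g2 = sum(g_squared(t) for t in strata)
df = sum((t.shape[0] - 1) * (t.shape[1] - 1) for t in strata)
print(f"G^2 = {g2:.2f}, df = {df}, p = {chi2.sf(g2, df):.4f}")
```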
Lions, Séverin; Dartnell, Pablo; Toledo, Gabriela; Godoy, María Inés; Córdova, Nora; Jiménez, Daniela; Lemarié, Julie – Educational and Psychological Measurement, 2023
Even though the impact of the position of response options on answers to multiple-choice items has been investigated for decades, it remains debated. Research on this topic is inconclusive, perhaps because too few studies have obtained experimental data from large-sized samples in a real-world context and have manipulated the position of both…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Responses
Cain, Meghan K.; Zhang, Zhiyong; Bergeman, C. S. – Educational and Psychological Measurement, 2018
This article serves as a practical guide to mediation design and analysis by evaluating the ability of mediation models to detect a significant mediation effect using limited data. The cross-sectional mediation model, which has been shown to be biased when the mediation is happening over time, is compared with longitudinal mediation models:…
Descriptors: Mediation Theory, Case Studies, Longitudinal Studies, Measurement Techniques
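The cross-sectional mediation effect mentioned here is the product of the X-to-M path and the M-to-Y path controlling for X, usually judged with a bootstrap confidence interval; a minimal sketch with simulated data (not the longitudinal models from the article):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Cross-sectional indirect effect a*b: a from regressing M on X,
    b from regressing Y on M while controlling for X."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # hypothetical a-path
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # hypothetical b-path plus direct effect

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)             # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))

print("indirect effect:", round(indirect_effect(x, m, y), 3))
print("95% bootstrap CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
```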
Sideridis, Georgios D.; Tsaousis, Ioannis; Alamri, Abeer A. – Educational and Psychological Measurement, 2020
The main thesis of the present study is to use the Bayesian structural equation modeling (BSEM) methodology of establishing approximate measurement invariance (A-MI) using data from a national examination in Saudi Arabia as an alternative to not meeting strong invariance criteria. Instead, we illustrate how to account for the absence of…
Descriptors: Bayesian Statistics, Structural Equation Models, Foreign Countries, Error of Measurement
Finch, W. Holmes; Shim, Sungok Serena – Educational and Psychological Measurement, 2018
Collection and analysis of longitudinal data is an important tool in understanding growth and development over time in a whole range of human endeavors. Ideally, researchers working in the longitudinal framework are able to collect data at more than two points in time, as this will provide them with the potential for a deeper understanding of the…
Descriptors: Comparative Analysis, Computation, Time, Change
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C. – Educational and Psychological Measurement, 2018
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Descriptors: Error of Measurement, Testing, Scores, Models
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
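The analytic standard errors referred to here follow from test information: under a 2PL IRT model, SE(θ) = 1 / sqrt(I(θ)). A minimal sketch with hypothetical item parameters (not the authors' MST test information procedure):

```python
import numpy as np

def two_pl_information(theta, a, b):
    """Fisher information of a set of 2PL items at ability theta.
    a, b: arrays of item discrimination and difficulty parameters."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # probability of a correct response
    return np.sum(a**2 * p * (1.0 - p))

# Hypothetical five-item module
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

for theta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    info = two_pl_information(theta, a, b)
    print(f"theta = {theta:+.1f}  information = {info:.2f}  SE = {1 / np.sqrt(info):.2f}")
```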