Showing 1 to 15 of 63 results
Peer reviewed
Suppanut Sriutaisuk; Yu Liu; Seungwon Chung; Hanjoe Kim; Fei Gu – Educational and Psychological Measurement, 2025
The multiple imputation two-stage (MI2S) approach holds promise for evaluating the model fit of structural equation models for ordinal variables with multiply imputed data. However, previous studies only examined the performance of MI2S-based residual-based test statistics. This study extends previous research by examining the performance of two…
Descriptors: Structural Equation Models, Error of Measurement, Programming Languages, Goodness of Fit
Peer reviewed
Kulinskaya, Elena; Hoaglin, David C. – Research Synthesis Methods, 2023
For estimation of the heterogeneity variance τ² in meta-analysis of log-odds-ratio, we derive new mean- and median-unbiased point estimators and new interval estimators based on a generalized Q statistic, Q_F, in which the weights depend only on the studies' effective sample sizes. We compare them with familiar estimators…
Descriptors: Q Methodology, Statistical Analysis, Meta Analysis, Intervals
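For orientation, a generalized Q statistic of the kind referenced here has the familiar weighted-sum-of-squares form shown below; the only addition in this sketch is generic notation for the weights a_i, which for Q_F the abstract says depend only on the studies' effective sample sizes (the article's exact weight formula is not reproduced).

Q(a) = \sum_{i=1}^{k} a_i \left( \hat{\theta}_i - \bar{\theta}_a \right)^2,
\qquad
\bar{\theta}_a = \frac{\sum_{i=1}^{k} a_i \hat{\theta}_i}{\sum_{i=1}^{k} a_i},

where \hat{\theta}_i is the log-odds-ratio estimate from study i; Cochran's Q is the special case in which a_i is the reciprocal of the estimated variance of \hat{\theta}_i.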
Peer reviewed
Emma Somer; Carl Falk; Milica Miocevic – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Factor Score Regression (FSR) is increasingly employed as an alternative to structural equation modeling (SEM) in small samples. Despite its popularity in psychology, the performance of FSR in multigroup models with small samples remains relatively unknown. The goal of this study was to examine the performance of FSR, namely Croon's correction and…
Descriptors: Scores, Structural Equation Models, Comparative Analysis, Sample Size
Peer reviewed
Paek, Insu; Lin, Zhongtian; Chalmers, Robert Philip – Educational and Psychological Measurement, 2023
To reduce the chance of Heywood cases or nonconvergence when estimating the 2PL or the 3PL model under marginal maximum likelihood with expectation-maximization (MML-EM) estimation, priors for the item slope parameter in the 2PL model or for the pseudo-guessing parameter in the 3PL model can be used, and the marginal maximum a posteriori…
Descriptors: Models, Item Response Theory, Test Items, Intervals
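As a rough sketch of the idea described here, the Python snippet below fits a single 2PL item by maximizing a penalized marginal likelihood directly (simple quadrature and a generic optimizer instead of the EM algorithm), with an assumed lognormal prior on the slope; the prior family, its parameters, and the simulation setup are illustrative choices, not the authors' implementation.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm, norm

# 2PL item response function: P(correct | theta) with slope a and difficulty b
def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Negative marginal log-likelihood plus a log-prior on the slope (marginal MAP idea)
def neg_log_posterior(params, responses, nodes, weights):
    a, b = params
    p = p_2pl(nodes[None, :], a, b)                          # shape (1, Q)
    like = p ** responses[:, None] * (1 - p) ** (1 - responses[:, None])
    marg = (like * weights[None, :]).sum(axis=1)             # marginal likelihood per person
    log_lik = np.log(marg + 1e-300).sum()
    log_prior = lognorm.logpdf(a, s=0.5, scale=1.0)          # assumed lognormal prior on the slope
    return -(log_lik + log_prior)

# Equally spaced quadrature nodes over a standard normal ability distribution
nodes = np.linspace(-4.0, 4.0, 21)
weights = norm.pdf(nodes)
weights /= weights.sum()

# Simulated responses to one item from 500 examinees
rng = np.random.default_rng(0)
theta_true = rng.normal(size=500)
responses = (rng.uniform(size=500) < p_2pl(theta_true, 1.2, 0.3)).astype(float)

fit = minimize(neg_log_posterior, x0=[1.0, 0.0],
               args=(responses, nodes, weights), method="Nelder-Mead")
print(fit.x)  # estimated (slope, difficulty)

The prior term is what distinguishes this from plain marginal maximum likelihood: with sparse or aberrant data it pulls the slope (or, in the 3PL case, the pseudo-guessing parameter) away from the boundary values that produce Heywood cases or nonconvergence.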
Peer reviewed
Kim, Stella Y.; Lee, Won-Chan – Journal of Educational Measurement, 2020
The current study aims to evaluate the performance of three non-IRT procedures (i.e., normal approximation, Livingston-Lewis, and compound multinomial) for estimating classification indices when the observed score distribution shows atypical patterns: (a) bimodality, (b) structural (i.e., systematic) bumpiness, or (c) structural zeros (i.e., no…
Descriptors: Classification, Accuracy, Scores, Cutting Scores
Peer reviewed
Peabody, Michael R. – Applied Measurement in Education, 2020
The purpose of the current article is to introduce the equating and evaluation methods used in this special issue. Although a comprehensive review of all existing models and methodologies would be impractical given the format, a brief introduction to some of the more popular models will be provided. A brief discussion of the conditions required…
Descriptors: Evaluation Methods, Equated Scores, Sample Size, Item Response Theory
Shear, Benjamin R.; Reardon, Sean F. – Journal of Educational and Behavioral Statistics, 2021
This article describes an extension to the use of heteroskedastic ordered probit (HETOP) models to estimate latent distributional parameters from grouped, ordered-categorical data by pooling across multiple waves of data. We illustrate the method with aggregate proficiency data reporting the number of students in schools or districts scoring in…
Descriptors: Statistical Analysis, Computation, Regression (Statistics), Sample Size
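As background for this article and the related 2019 working paper listed below, a heteroskedastic ordered probit model for grouped proficiency counts can be written as follows (a generic statement of the model; the pooling-across-waves machinery the article develops is not shown):

P(Y = k \mid \text{group } g)
  = \Phi\!\left( \frac{c_k - \mu_g}{\sigma_g} \right)
  - \Phi\!\left( \frac{c_{k-1} - \mu_g}{\sigma_g} \right),

where the c_k are the ordered proficiency cutscores, \Phi is the standard normal CDF, and \mu_g and \sigma_g are the group-specific latent mean and standard deviation; pooling across waves adds further multinomial likelihood contributions from each wave's category counts.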
Peer reviewed
Bogaert, Jasper; Loh, Wen Wei; Rosseel, Yves – Educational and Psychological Measurement, 2023
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error…
Descriptors: Factor Analysis, Regression (Statistics), Structural Equation Models, Error of Measurement
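The source of that bias is easiest to see in the single-predictor case (a textbook illustration, not the article's derivation): if the factor score F measures the latent predictor \xi with error e (with e uncorrelated with \xi), the naive regression slope is attenuated,

F = \xi + e,
\qquad
\operatorname{plim} \hat{\beta}_F = \beta \,
  \frac{\operatorname{Var}(\xi)}{\operatorname{Var}(\xi) + \operatorname{Var}(e)},

which is why approaches such as Croon's correction rescale the factor-score variances and covariances before estimating the structural regression.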
Peer reviewed
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
Peer reviewed
Goodman, Joshua T.; Dallas, Andrew D.; Fan, Fen – Applied Measurement in Education, 2020
Recent research has suggested that, in addition to being costly, re-setting the standard for each administration of a small-sample examination does not adequately maintain similar performance expectations year after year. Small-sample equating methods have shown promise with samples between 20 and 30. For groups that have fewer than 20 students,…
Descriptors: Equated Scores, Sample Size, Sampling, Weighted Scores
Peer reviewed
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
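For context on the design this framework borrows from, the sketch below computes a minimal sharp regression discontinuity estimate with simulated data, an assumed cutoff at zero, and a hand-picked bandwidth; it illustrates the general RDD idea only, not the proposed DIF framework.

import numpy as np

rng = np.random.default_rng(1)
n = 2000
score = rng.uniform(-1.0, 1.0, n)        # assignment score, centered at an assumed cutoff of 0
treated = (score >= 0).astype(float)     # e.g., eligibility for an alternate test form
outcome = 0.5 * score + 0.3 * treated + rng.normal(scale=0.2, size=n)

# Local linear fits on each side of the cutoff within a bandwidth h
h = 0.25
left = (score < 0) & (score > -h)
right = (score >= 0) & (score < h)
b_left = np.polyfit(score[left], outcome[left], 1)
b_right = np.polyfit(score[right], outcome[right], 1)

# RDD estimate: jump in the predicted outcome at the cutoff
effect = np.polyval(b_right, 0.0) - np.polyval(b_left, 0.0)
print(round(effect, 3))  # should land near the simulated discontinuity of 0.3

np.polyfit keeps the sketch dependency-free; in practice, bandwidth selection and inference are handled by dedicated RDD tooling rather than by hand.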
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) are often from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth ability across attempts, leading to a complicated scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Peer reviewed
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Shear, Benjamin R.; Reardon, Sean F. – Stanford Center for Education Policy Analysis, 2019
This paper describes a method for pooling grouped, ordered-categorical data across multiple waves to improve small-sample heteroskedastic ordered probit (HETOP) estimates of latent distributional parameters. We illustrate the method with aggregate proficiency data reporting the number of students in schools or districts scoring in each of a small…
Descriptors: Computation, Scores, Statistical Distributions, Sample Size
Peer reviewed
White, Simon R.; Bonnett, Laura J. – Teaching Statistics: An International Journal for Teachers, 2019
The statistical concept of sampling is often given little direct attention, typically reduced to the mantra "take a random sample". This low-resource and adaptable activity demonstrates sampling and explores issues that arise due to biased sampling.
Descriptors: Statistical Bias, Sampling, Statistical Analysis, Learning Activities