Showing 1 to 15 of 43 results
Peer reviewed
Sen, Sedat; Cohen, Allan S. – Educational and Psychological Measurement, 2023
The purpose of this study was to examine the effects of different data conditions on item parameter recovery and classification accuracy of three dichotomous mixture item response theory (IRT) models: the Mix1PL, Mix2PL, and Mix3PL. Manipulated factors in the simulation included the sample size (11 different sample sizes from 100 to 5000), test…
Descriptors: Sample Size, Item Response Theory, Accuracy, Classification
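For context, the three models compared here share one general form; a standard way to write the dichotomous mixture IRT response function (the Mix3PL; fixing $c_{ig} = 0$ gives the Mix2PL, and additionally fixing $a_{ig} = 1$ gives the Mix1PL) is

$$ P(X_{ij} = 1) = \sum_{g=1}^{G} \pi_g \left[ c_{ig} + (1 - c_{ig}) \, \frac{\exp\{a_{ig}(\theta_j - b_{ig})\}}{1 + \exp\{a_{ig}(\theta_j - b_{ig})\}} \right], \qquad \sum_g \pi_g = 1, $$

where $g$ indexes latent classes with mixing proportions $\pi_g$ and the item parameters $a_{ig}$, $b_{ig}$, $c_{ig}$ are class-specific. This is the textbook formulation, not necessarily the exact parameterization used in the article.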
Wang, Qian – ProQuest LLC, 2022
Over the last four decades, meta-analysis has proven to be a vital analysis strategy in educational research for synthesizing research findings from different studies. When synthesizing studies in a meta-analysis, it is common to assume that the true underlying effect varies from study to study, as studies will differ in design, participants,…
Descriptors: Meta Analysis, Educational Research, Maximum Likelihood Statistics, Statistical Bias
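The random-effects assumption described in the abstract is conventionally written as

$$ y_i = \mu + u_i + e_i, \qquad u_i \sim N(0, \tau^2), \qquad e_i \sim N(0, v_i), $$

where $y_i$ is the observed effect in study $i$, $\tau^2$ is the between-study variance, and $v_i$ is the known within-study sampling variance; the pooled estimate is the inverse-variance weighted mean $\hat{\mu} = \sum_i w_i y_i / \sum_i w_i$ with $w_i = 1/(v_i + \hat{\tau}^2)$. (This is a general statement of the model, not the specific estimators the dissertation evaluates.)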
Peer reviewed
Bang Quan Zheng; Peter M. Bentler – Structural Equation Modeling: A Multidisciplinary Journal, 2022
Chi-square tests based on maximum likelihood (ML) estimation of covariance structures often incorrectly over-reject the null hypothesis $\Sigma = \Sigma(\theta)$ when the sample size is small. Reweighted least squares (RLS) avoids this problem. In some models, the vector of parameters must contain means, variances, and covariances, yet whether RLS…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Goodness of Fit, Sample Size
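As a point of reference, RLS is commonly defined by evaluating the generalized least squares discrepancy with the model-implied rather than the sample covariance matrix as the weight matrix; one standard formulation (an assumption here, since the abstract does not spell it out) is

$$ T_{RLS} = \frac{n}{2} \, \mathrm{tr}\!\left[ \left\{ \big(S - \Sigma(\hat{\theta})\big) \, \Sigma(\hat{\theta})^{-1} \right\}^{2} \right], $$

where $S$ is the sample covariance matrix and $\hat{\theta}$ is the ML estimate.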
Peer reviewed
Xiaying Zheng; Ji Seung Yang; Jeffrey R. Harring – Structural Equation Modeling: A Multidisciplinary Journal, 2022
Measuring change in an educational or psychological construct over time is often achieved by repeatedly administering the same items to the same examinees over time and fitting a second-order latent growth curve model. However, latent growth modeling with full information maximum likelihood (FIML) estimation becomes computationally challenging…
Descriptors: Longitudinal Studies, Data Analysis, Item Response Theory, Structural Equation Models
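A second-order latent growth curve model of the kind described can be written, in one common parameterization, as a measurement model plus a growth model on the first-order factors:

$$ \mathbf{y}_{jt} = \boldsymbol{\nu} + \boldsymbol{\Lambda} \eta_{jt} + \boldsymbol{\varepsilon}_{jt}, \qquad \eta_{jt} = \alpha_j + \beta_j t + \zeta_{jt}, $$

where the same items $\mathbf{y}_{jt}$ measure the construct $\eta_{jt}$ at each occasion $t$, and the random intercept $\alpha_j$ and slope $\beta_j$ are the second-order growth factors. FIML must integrate over all of these latent variables jointly, which is what makes estimation expensive as occasions and items accumulate.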
Peer reviewed
Jobst, Lisa J.; Auerswald, Max; Moshagen, Morten – Educational and Psychological Measurement, 2022
Prior studies investigating the effects of non-normality in structural equation modeling typically induced non-normality in the indicator variables. This procedure neglects the factor analytic structure of the data, which is defined as the sum of latent variables and errors, so it is unclear whether previous results hold if the source of…
Descriptors: Goodness of Fit, Structural Equation Models, Error of Measurement, Factor Analysis
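A minimal sketch of the design distinction the abstract draws, assuming a one-factor model with hypothetical loadings: non-normality is induced in the latent variable itself rather than in the indicators.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 6                         # sample size, number of indicators
loadings = np.full(p, 0.7)            # hypothetical common loading

# Non-normal latent variable: standardized chi-square(1) draws (skewed)
xi = rng.chisquare(df=1, size=n)
xi = (xi - 1.0) / np.sqrt(2.0)        # rescale to mean 0, variance 1

# Normal unique factors, scaled so each indicator has unit variance
errors = rng.normal(scale=np.sqrt(1 - loadings**2), size=(n, p))

# Non-normality enters through the factor, not the indicators
X = xi[:, None] * loadings + errors
print(X.mean(axis=0).round(2), X.std(axis=0).round(2))
```

Inducing non-normality directly in the columns of X instead would ignore the factor-analytic composition of the data, which is exactly the contrast the study examines.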
Peer reviewed
Babcock, Ben; Hodge, Kari J. – Educational and Psychological Measurement, 2020
Equating and scaling in the context of small sample exams, such as credentialing exams for highly specialized professions, has received increased attention in recent research. Investigators have proposed a variety of both classical and Rasch-based approaches to the problem. This study attempts to extend past research by (1) directly comparing…
Descriptors: Item Response Theory, Equated Scores, Scaling, Sample Size
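The classical side of such comparisons typically includes linear equating; a minimal sketch under hypothetical score data (the truncated abstract does not list the specific methods compared):

```python
import numpy as np

def linear_equate(x, scores_x, scores_y):
    """Map a form-X score onto the form-Y scale by matching the
    first two moments (classical linear equating)."""
    mx, sx = np.mean(scores_x), np.std(scores_x, ddof=1)
    my, sy = np.mean(scores_y), np.std(scores_y, ddof=1)
    return my + (sy / sx) * (x - mx)

# Hypothetical small-sample score distributions on two forms
form_x = [12, 15, 18, 20, 22, 25, 27]
form_y = [14, 16, 19, 23, 24, 26, 30]
print(linear_equate(20, form_x, form_y))  # equated form-Y score
```

With samples this small, the moment estimates themselves are noisy, which is precisely why small-sample equating remains a research problem.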
Peer reviewed
Bolin, Jocelyn H.; Finch, W. Holmes; Stenger, Rachel – Educational and Psychological Measurement, 2019
Multilevel data are a reality for many disciplines. Although multiple options currently exist for the treatment of multilevel data, most disciplines strictly adhere to a single method regardless of the specific research design circumstances. The purpose of this Monte Carlo simulation study is to compare several methods for the…
Descriptors: Hierarchical Linear Modeling, Computation, Statistical Analysis, Maximum Likelihood Statistics
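One of the treatments such comparisons invariably include is the mixed-effects (HLM-style) model fit by maximum likelihood; a self-contained sketch with simulated two-level data and hypothetical variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate students nested in schools (hypothetical design values)
rng = np.random.default_rng(0)
n_schools, n_per = 30, 20
school = np.repeat(np.arange(n_schools), n_per)
u = rng.normal(scale=0.5, size=n_schools)          # school intercepts
x = rng.normal(size=n_schools * n_per)
y = 1.0 + 0.3 * x + u[school] + rng.normal(size=n_schools * n_per)
df = pd.DataFrame({"y": y, "x": x, "school": school})

# Random-intercept model, fit by (restricted) maximum likelihood
fit = smf.mixedlm("y ~ x", df, groups=df["school"]).fit()
print(fit.summary())
```

Alternatives the literature compares against include ignoring the nesting entirely (a single-level OLS fit) and using cluster-robust standard errors.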
Peer reviewed
Liu, Yang; Yang, Ji Seung – Journal of Educational and Behavioral Statistics, 2018
The uncertainty arising from item parameter estimation is often not negligible and must be accounted for when calculating latent variable (LV) scores in item response theory (IRT). This is particularly so when the calibration sample size is limited and/or the calibration IRT model is complex. In the current work, we treat two-stage IRT scoring as a…
Descriptors: Intervals, Scores, Item Response Theory, Bayesian Statistics
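The idea of propagating calibration uncertainty into scoring can be stated compactly: instead of scoring with the item parameters $\boldsymbol{\xi}$ fixed at their calibration estimates, the score posterior is marginalized over the calibration posterior,

$$ p(\theta \mid \mathbf{y}) = \int p(\theta \mid \mathbf{y}, \boldsymbol{\xi}) \, p(\boldsymbol{\xi} \mid \text{calibration data}) \, d\boldsymbol{\xi}, $$

so that interval estimates for $\theta$ widen to reflect how well (or poorly) the items were calibrated. (This is the general principle; the article's specific two-stage estimators are not reproduced here.)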
Peer reviewed
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
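Lord's Wald test, in its usual form, compares an item's parameter estimates between the reference (R) and focal (F) groups:

$$ \chi_i^2 = (\hat{\boldsymbol{\beta}}_{iR} - \hat{\boldsymbol{\beta}}_{iF})^{\top} \, \widehat{\mathrm{Cov}}(\hat{\boldsymbol{\beta}}_{iR} - \hat{\boldsymbol{\beta}}_{iF})^{-1} \, (\hat{\boldsymbol{\beta}}_{iR} - \hat{\boldsymbol{\beta}}_{iF}), $$

referred to a chi-square distribution with degrees of freedom equal to the number of parameters tested. The MIRT extension studied here applies the same form with multidimensional item parameter vectors.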
Peer reviewed
Li, Jian; Lomax, Richard G. – Journal of Experimental Education, 2017
Using Monte Carlo simulations, this research examined the performance of four missing data methods in SEM under different multivariate distributional conditions. The effects of four independent variables (sample size, missing proportion, distribution shape, and factor loading magnitude) were investigated on six outcome variables: convergence rate,…
Descriptors: Monte Carlo Methods, Structural Equation Models, Evaluation Methods, Measurement Techniques
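As a concrete anchor for the design, here is a minimal sketch of how such simulations impose missingness on multivariate normal data; the MCAR mechanism and values shown are hypothetical, not the study's conditions.

```python
import numpy as np

rng = np.random.default_rng(42)
cov = [[1.0, 0.5, 0.5],
       [0.5, 1.0, 0.5],
       [0.5, 0.5, 1.0]]
X = rng.multivariate_normal(mean=[0, 0, 0], cov=cov, size=200)

# Impose 15% missing completely at random (MCAR)
mask = rng.random(X.shape) < 0.15
X_mis = X.copy()
X_mis[mask] = np.nan
print(np.isnan(X_mis).mean())   # realized missing proportion
```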
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun – Grantee Submission, 2017
The normal-distribution-based likelihood ratio statistic $T_{ml} = nF_{ml}$ is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that $T_{ml}$ follows a central chi-square distribution under $H_0$ and a noncentral chi-square…
Descriptors: Statistical Analysis, Evaluation Methods, Structural Equation Models, Reliability
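The power computation licensed by those two distributional assumptions is mechanical; a sketch with hypothetical degrees of freedom and population misfit:

```python
from scipy.stats import chi2, ncx2

df, alpha = 24, 0.05      # hypothetical model df and test level
F_ml_alt = 0.05           # assumed population ML discrepancy under H1

for n in (100, 200, 400, 800):
    ncp = n * F_ml_alt                  # noncentrality parameter
    crit = chi2.ppf(1 - alpha, df)      # critical value under H0
    power = ncx2.sf(crit, df, ncp)      # P(T_ml > crit | H1)
    print(n, round(power, 3))
```

The reliability of power computed this way rests entirely on how well $T_{ml}$ actually follows those reference distributions, which is the question the article takes up.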
Peer reviewed
Asún, Rodrigo A.; Rdz-Navarro, Karina; Alvarado, Jesús M. – Sociological Methods & Research, 2016
This study compares the performance of two approaches to analysing four-point Likert rating scales with a factorial model: classical factor analysis (FA) and item factor analysis (IFA). For FA, maximum likelihood and weighted least squares estimators using Pearson correlation matrices among items are compared. For IFA, diagonally weighted…
Descriptors: Likert Scales, Item Analysis, Factor Analysis, Comparative Analysis
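The IFA side of the comparison rests on treating each rating as a discretized latent response; in the standard threshold formulation,

$$ x_i^{*} = \lambda_i \xi + \varepsilon_i, \qquad x_i = c \iff \tau_{i,c-1} < x_i^{*} \le \tau_{i,c}, $$

the four observed categories are cuts of a continuous underlying variable, and estimation proceeds from polychoric correlations (e.g., by diagonally weighted least squares) rather than from Pearson correlations among the raw item scores.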
Peer reviewed
PDF available on ERIC
Pfaffel, Andreas; Spiel, Christiane – Practical Assessment, Research & Evaluation, 2016
Approaches to correcting correlation coefficients for range restriction have been developed under the framework of large sample theory. The accuracy of missing data techniques for correcting correlation coefficients for range restriction has thus far only been investigated with relatively large samples. However, researchers and evaluators are…
Descriptors: Correlation, Sample Size, Error of Measurement, Accuracy
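The classical large-sample correction in view here is, in most applications, Thorndike's Case II formula (named here as an assumption; the truncated abstract does not say which correction the authors use):

$$ \hat{\rho} = \frac{r\,U}{\sqrt{1 - r^{2} + r^{2}U^{2}}}, \qquad U = \frac{\sigma_X}{s_x}, $$

where $r$ is the correlation in the restricted sample and $U$ is the ratio of the unrestricted to the restricted standard deviation of the selection variable.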
Peer reviewed
Sen, Sedat – International Journal of Testing, 2018
Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Maximum Likelihood Statistics
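The over-extraction mechanism has a simple distributional reading: a finite normal mixture is flexible enough to absorb skewness or excess kurtosis in the ability distribution,

$$ f(\theta) \approx \sum_{g=1}^{G} \pi_g \, \frac{1}{\sigma_g}\,\phi\!\left(\frac{\theta - \mu_g}{\sigma_g}\right), $$

so when $\theta$ is non-normal, additional latent classes may simply be soaking up distributional misfit rather than marking substantively distinct subpopulations. That is the interpretation risk this line of research examines for ML estimation.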
Peer reviewed
PDF available on ERIC
Sahin, Alper; Weiss, David J. – Educational Sciences: Theory and Practice, 2015
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Sample Size, Item Banks
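CAT simulations of this kind typically select items by maximum Fisher information under the 3PL; a minimal sketch with a hypothetical mini-bank (the study's own bank has 500 items):

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    return a**2 * ((1 - p) / p) * ((p - c) / (1 - c))**2

# Hypothetical item bank: discrimination a, difficulty b, guessing c
rng = np.random.default_rng(7)
a = rng.uniform(0.8, 2.0, size=20)
b = rng.normal(size=20)
c = rng.uniform(0.10, 0.25, size=20)

theta_hat = 0.4                                   # provisional ability
next_item = int(np.argmax(info_3pl(theta_hat, a, b, c)))
print("administer item", next_item)
```

Calibration error in $a$, $b$, and $c$ feeds directly into this selection rule and into the final ability estimate, which is why calibration sample size matters.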