Showing all 8 results
Peer reviewed
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices is discussed that evaluates the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
Peer reviewed
McNeish, Daniel – Educational and Psychological Measurement, 2017
In the behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat the small samples typical of longitudinal data. Although Mplus is an increasingly common program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…
Descriptors: Models, Bayesian Statistics, Statistical Analysis, Computer Software
Peer reviewed
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random-effect binary-outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
Stapleton, Laura M.; Kang, Yoonjeong – Sociological Methods & Research, 2018
This research empirically evaluates data sets from the National Center for Education Statistics (NCES) for design effects of ignoring the sampling design in weighted two-level analyses. Currently, researchers may ignore the sampling design beyond the levels that they model, which might result in incorrect inferences regarding hypotheses due to…
Descriptors: Probability, Hierarchical Linear Modeling, Sampling, Inferences
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models designed specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
Peer reviewed
Kim, Seock-Ho – Educational and Psychological Measurement, 2007
The procedures required to obtain the approximate posterior standard deviations of the parameters in the three commonly used item response models for dichotomous items are described and used to generate values for some common situations. The results were compared with those obtained from maximum likelihood estimation. It is shown that the use of…
Descriptors: Item Response Theory, Computation, Comparative Analysis, Evaluation Methods
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2005
Type I error rates for PARSCALE's fit statistic were examined. Data were generated to fit the partial credit or graded response model, with test lengths of 10 or 20 items. The ability distribution was simulated to be either normal or uniform. Type I error rates were inflated for the shorter test length and, for the graded response model, also for…
Descriptors: Test Length, Item Response Theory, Psychometrics, Error of Measurement
Peer reviewed
Dirkzwager, Arie – International Journal of Testing, 2003
A central problem in psychometrics is how to estimate the probability that a respondent answers an item correctly on one occasion out of many. Under the current testing paradigm, this probability is estimated using a variety of statistical techniques and mathematical modeling. Multiple evaluation is a new testing paradigm using the person's own personal…
Descriptors: Psychometrics, Probability, Models, Measurement