Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices is discussed that evaluates the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis
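As an illustration of the kind of index this article concerns, one well-known descriptive effect size for nested-model comparisons is Browne and du Toit's root deterioration per restriction (RDR), an RMSEA-like measure of the misfit introduced per imposed restriction. A minimal sketch (illustrative only; not necessarily the class of indices proposed in this article):

```python
import math

def rdr(chi2_restricted, df_restricted, chi2_full, df_full, n):
    """Root deterioration per restriction (Browne & du Toit):
    an RMSEA-like effect size for the misfit added by the
    restrictions of a nested CFA model, truncated at zero."""
    d_chi2 = chi2_restricted - chi2_full  # chi-square difference
    d_df = df_restricted - df_full        # degrees-of-freedom difference
    return math.sqrt(max(d_chi2 - d_df, 0.0) / (d_df * (n - 1)))
```

For example, a chi-square difference of 20 on 5 degrees of freedom with n = 201 gives `rdr(120, 50, 100, 45, 201)` = sqrt(15/1000) ≈ 0.122, interpretable on the familiar RMSEA scale.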
McNeish, Daniel – Educational and Psychological Measurement, 2017
In the behavioral sciences broadly, estimating growth models with Bayesian methods is becoming increasingly common, especially to combat the small samples typical of longitudinal data. Although Mplus is an increasingly popular program for applied research employing Bayesian methods, the limited selection of prior distributions for the elements of…
Descriptors: Models, Bayesian Statistics, Statistical Analysis, Computer Software
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample size. This Monte Carlo study examined the performance of random-effect binary-outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
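The data-generating model in a Monte Carlo study of this kind can be sketched as a random-intercept logistic simulation (a generic sketch with illustrative names and parameter values, not the study's actual code):

```python
import math
import random

def simulate_binary_multilevel(n_clusters, cluster_size, tau2,
                               beta0, beta1, seed=0):
    """Simulate one replication from a random-intercept logistic model:
    logit P(y = 1) = beta0 + beta1 * x + u_j,  u_j ~ N(0, tau2).
    Returns a list of (cluster_id, x, y) records."""
    rng = random.Random(seed)
    data = []
    for j in range(n_clusters):
        u = rng.gauss(0.0, math.sqrt(tau2))  # cluster random intercept
        for _ in range(cluster_size):
            x = rng.gauss(0.0, 1.0)          # level-1 predictor
            eta = beta0 + beta1 * x + u
            p = 1.0 / (1.0 + math.exp(-eta))
            data.append((j, x, 1 if rng.random() < p else 0))
    return data
```

A study would generate many such replications while varying the number of clusters, cluster size, and variance component `tau2`, then fit the model to each replication and summarize bias and coverage.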
Stapleton, Laura M.; Kang, Yoonjeong – Sociological Methods & Research, 2018
This research empirically evaluates data sets from the National Center for Education Statistics (NCES) for design effects of ignoring the sampling design in weighted two-level analyses. Currently, researchers may ignore the sampling design beyond the levels that they model, which might result in incorrect inferences regarding hypotheses due to…
Descriptors: Probability, Hierarchical Linear Modeling, Sampling, Inferences
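A classic back-of-the-envelope quantity for the cost of ignoring a clustered sampling design is Kish's approximate design effect, DEFF = 1 + (m − 1)ρ, where m is the cluster size and ρ the intraclass correlation. A minimal sketch (illustrative background only, not the empirical evaluation performed in this article):

```python
def kish_deff(cluster_size, icc):
    """Kish's approximate design effect for equal-sized clusters:
    the factor by which sampling variance is inflated relative to
    simple random sampling."""
    return 1.0 + (cluster_size - 1) * icc
```

Even a modest intraclass correlation of 0.1 with 25 respondents per cluster inflates variances by a factor of about 3.4, which is why ignoring the design can distort hypothesis tests.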
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models, developed specifically for dichotomous data, to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
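For the unidimensional normal-ogive case, the commonly cited conversion from standardized categorical-CFA loadings and thresholds to IRT parameters can be sketched as follows (an illustrative textbook conversion; the study itself concerns the multidimensional extension):

```python
import math

def loading_to_irt(loading, threshold):
    """Convert a standardized factor loading and threshold from a
    categorical CFA of a dichotomous item into normal-ogive IRT
    discrimination (a) and difficulty (b) parameters."""
    a = loading / math.sqrt(1.0 - loading ** 2)  # discrimination
    b = threshold / loading                      # difficulty
    return a, b
```

For instance, a loading of 0.6 with a threshold of 0.3 converts to a discrimination of 0.75 and a difficulty of 0.5.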
Kim, Seock-Ho – Educational and Psychological Measurement, 2007
The procedures required to obtain the approximate posterior standard deviations of the parameters in the three commonly used item response models for dichotomous items are described and used to generate values for some common situations. The results were compared with those obtained from maximum likelihood estimation. It is shown that the use of…
Descriptors: Item Response Theory, Computation, Comparative Analysis, Evaluation Methods
DeMars, Christine E. – Educational and Psychological Measurement, 2005
Type I error rates for PARSCALE's fit statistic were examined. Data were generated to fit the partial credit or graded response model, with test lengths of 10 or 20 items. The ability distribution was simulated to be either normal or uniform. Type I error rates were inflated for the shorter test length and, for the graded-response model, also for…
Descriptors: Test Length, Item Response Theory, Psychometrics, Error of Measurement
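Type I error rates in studies like this are estimated by Monte Carlo: simulate data from the fitted model so the null hypothesis of item fit is true, compute the fit statistic's p-value in each replication, and record the rejection proportion at a nominal alpha. A generic sketch (illustrative only, not PARSCALE's actual procedure):

```python
import random

def type_one_error_rate(p_values, alpha=0.05):
    """Proportion of null-model replications rejected at level alpha;
    close to alpha when the fit statistic is well calibrated, and
    inflated above alpha otherwise."""
    return sum(p < alpha for p in p_values) / len(p_values)

# Under a well-calibrated statistic, null p-values are uniform on (0, 1):
rng = random.Random(1)
null_p = [rng.random() for _ in range(10000)]
rate = type_one_error_rate(null_p)  # should hover near 0.05
```

An inflated rate (e.g., 0.10 at nominal alpha = 0.05, as seen here for short tests) indicates the statistic rejects fitting items too often.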
Dirkzwager, Arie – International Journal of Testing, 2003
The crux in psychometrics is how to estimate the probability that a respondent answers an item correctly on one occasion out of many. Under the current testing paradigm, this probability is estimated using all kinds of statistical techniques and mathematical modeling. Multiple evaluation is a new testing paradigm using the person's own personal…
Descriptors: Psychometrics, Probability, Models, Measurement