Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 6
  Since 2016 (last 10 years): 12
  Since 2006 (last 20 years): 24

Descriptor
  Simulation: 30
  Data Analysis: 19
  Error of Measurement: 9
  Sample Size: 9
  Data: 8
  Factor Analysis: 7
  Item Response Theory: 7
  Computation: 6
  Evaluation Methods: 6
  Monte Carlo Methods: 6
  Statistical Analysis: 6

Source
  Educational and Psychological Measurement: 30

Publication Type
  Journal Articles: 29
  Reports - Research: 22
  Reports - Evaluative: 7

Education Level
  Elementary Secondary Education: 2
  Early Childhood Education: 1
  Elementary Education: 1
  Grade 3: 1
  Grade 4: 1
  Grade 5: 1
  Grade 6: 1
  Intermediate Grades: 1
  Middle Schools: 1
  Primary Education: 1

Laws, Policies, & Programs
  No Child Left Behind Act 2001: 2

Assessments and Surveys
  Raven Advanced Progressive…: 1
Yan Xia; Selim Havan – Educational and Psychological Measurement, 2024
Although parallel analysis has been found to be an accurate method for determining the number of factors in many conditions with complete data, its application under missing data is limited. The existing literature recommends that, after using an appropriate multiple imputation method, researchers either apply parallel analysis to every imputed…
Descriptors: Data Interpretation, Factor Analysis, Statistical Inference, Research Problems
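The baseline procedure the authors extend can be stated concretely. Below is a minimal sketch of Horn's parallel analysis on complete data; the multiple-imputation extension the abstract discusses is not shown, and the function name and defaults are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, quantile=0.95, seed=0):
    """Horn's parallel analysis on complete data: retain a factor while its
    observed eigenvalue exceeds the chosen quantile of eigenvalues obtained
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues of the correlation matrix, largest first.
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Reference eigenvalues from random normal data of identical shape.
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        r = rng.standard_normal((n, p))
        sim[s] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresholds = np.quantile(sim, quantile, axis=0)
    # Retain factors until the first observed eigenvalue falls below threshold.
    n_factors = 0
    for o, t in zip(obs, thresholds):
        if o <= t:
            break
        n_factors += 1
    return n_factors
```

Under missing data, the recommendation discussed in the abstract involves applying a routine like this to multiply imputed data sets rather than to a single completed one.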
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) often come from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth ability across attempts, leading to a complicated scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Foster, Robert C. – Educational and Psychological Measurement, 2021
This article presents some equivalent forms of the common Kuder-Richardson Formula 21 and 20 estimators for nondichotomous data belonging to certain other exponential families, such as Poisson count data, exponential data, or geometric counts of trials until failure. Using the generalized framework of Foster (2020), an equation for the reliability…
Descriptors: Test Reliability, Data, Computation, Mathematical Formulas
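The classical dichotomous-data estimators that the article generalizes are easy to state concretely. A sketch of KR-20 and KR-21 for a 0/1 item-score matrix follows; it uses the sample variance (ddof=1), though some texts use the population variance:

```python
import numpy as np

def kr20(X):
    """Kuder-Richardson Formula 20 for a 0/1 item-score matrix
    (rows = examinees, columns = items)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    p = X.mean(axis=0)                      # item difficulties
    total_var = X.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - np.sum(p * (1 - p)) / total_var)

def kr21(X):
    """KR-21: a simplification of KR-20 that assumes all items share the
    same difficulty; it needs only the mean and variance of total scores."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    totals = X.sum(axis=1)
    m, v = totals.mean(), totals.var(ddof=1)
    return (k / (k - 1)) * (1 - m * (k - m) / (k * v))
```

When item difficulties vary, KR-21 underestimates KR-20, which is one reason the generalized framework in the article treats the two separately.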
Cosemans, Tim; Rosseel, Yves; Gelper, Sarah – Educational and Psychological Measurement, 2022
Exploratory graph analysis (EGA) is a commonly applied technique intended to help social scientists discover latent variables. Yet, the results can be influenced by the methodological decisions the researcher makes along the way. In this article, we focus on the choice regarding the number of factors to retain: We compare the performance of the…
Descriptors: Social Science Research, Research Methodology, Graphs, Factor Analysis
Goretzko, David – Educational and Psychological Measurement, 2022
Determining the number of factors in exploratory factor analysis is arguably the most crucial decision a researcher faces when conducting the analysis. While several simulation studies exist that compare various so-called factor retention criteria under different data conditions, little is known about the impact of missing data on this process.…
Descriptors: Factor Analysis, Research Problems, Data, Prediction
Cheng, Ying; Shao, Can – Educational and Psychological Measurement, 2022
Computer-based and web-based testing have become increasingly popular in recent years. Their popularity has dramatically expanded the availability of response time data. Compared to the conventional item response data that are often dichotomous or polytomous, response time has the advantage of being continuous and can be collected in an…
Descriptors: Reaction Time, Test Wiseness, Computer Assisted Testing, Simulation
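As one concrete illustration of continuous response-time data, a model commonly used in this literature is van der Linden's lognormal model. The sketch below simulates from it; the parameter distributions are hypothetical choices for illustration, not values from the article:

```python
import numpy as np

def simulate_response_times(n_persons, n_items, seed=0):
    """Simulate response times under van der Linden's lognormal model:
    log T_ij = beta_j - tau_i + eps_ij, with eps_ij ~ N(0, 1/alpha_j^2).
    All parameter distributions below are assumed for illustration."""
    rng = np.random.default_rng(seed)
    tau = rng.normal(0.0, 0.3, n_persons)     # person speed (higher = faster)
    beta = rng.normal(0.5, 0.4, n_items)      # item time intensity
    alpha = rng.uniform(1.5, 2.5, n_items)    # item discrimination in time
    eps = rng.normal(0.0, 1.0 / alpha, (n_persons, n_items))
    return np.exp(beta - tau[:, None] + eps)
```

Unlike dichotomous item scores, every simulated entry is a positive continuous value, which is the property the abstract highlights.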
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that controls for when the method effects stem from two method sources in which one source functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
McNeish, Daniel; Harring, Jeffrey R. – Educational and Psychological Measurement, 2017
To date, small-sample problems with latent growth models (LGMs) have not received as much attention in the literature as those with related mixed-effects models (MEMs). Although many models can be interchangeably framed as an LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…
Descriptors: Growth Models, Goodness of Fit, Error Correction, Sampling
Park, Jungkyu; Yu, Hsiu-Ting – Educational and Psychological Measurement, 2016
The multilevel latent class model (MLCM) is a multilevel extension of a latent class model (LCM) used to analyze data with a nested structure. The nonparametric version of an MLCM assumes a discrete latent variable at a higher level of the nesting structure to account for the dependency among observations nested within a higher-level unit. In…
Descriptors: Hierarchical Linear Modeling, Nonparametric Statistics, Data Analysis, Simulation
Is the Factor Observed in Investigations on the Item-Position Effect Actually the Difficulty Factor?
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented items of a test, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
Marland, Joshua; Harrick, Matthew; Sireci, Stephen G. – Educational and Psychological Measurement, 2020
Student assessment nonparticipation (or opt out) has increased substantially in K-12 schools in states across the country. This increase in opt out has the potential to impact achievement and growth (or value-added) measures used for educator and institutional accountability. In this simulation study, we investigated the extent to which…
Descriptors: Value Added Models, Teacher Effectiveness, Teacher Evaluation, Elementary Secondary Education
Olvera Astivia, Oscar L.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2015
To further understand the properties of data-generation algorithms for multivariate, nonnormal data, two Monte Carlo simulation studies comparing the Vale and Maurelli method and the Headrick fifth-order polynomial method were implemented. Combinations of skewness and kurtosis found in four published articles were run and attention was…
Descriptors: Data, Simulation, Monte Carlo Methods, Comparative Analysis
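The Vale and Maurelli method builds on Fleishman's third-order power transformation, which maps a standard normal variate to one with target skewness and kurtosis. A univariate sketch of that building block is shown below; the intermediate-correlation step that makes it multivariate, and Headrick's fifth-order extension, are omitted, and SciPy's general root finder stands in for a dedicated solver:

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_coeffs(skew, ekurt):
    """Solve Fleishman's moment equations for b, c, d so that
    Y = -c + b*Z + c*Z**2 + d*Z**3 has mean 0, variance 1, and the
    target skewness and excess kurtosis (Z standard normal)."""
    def eqs(p):
        b, c, d = p
        return [
            b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,
            2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,
            24*(b*d + c**2*(1 + b**2 + 28*b*d)
                + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - ekurt,
        ]
    b, c, d = fsolve(eqs, [1.0, 0.0, 0.0])
    return b, c, d

def fleishman_sample(skew, ekurt, size, seed=0):
    """Draw a nonnormal sample via the Fleishman power transformation."""
    b, c, d = fleishman_coeffs(skew, ekurt)
    z = np.random.default_rng(seed).standard_normal(size)
    return -c + b*z + c*z**2 + d*z**3
```

Not every skewness-kurtosis pair is attainable with a third-order polynomial, which is one motivation for the fifth-order Headrick method compared in the study.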
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
Liu, Yan; Zumbo, Bruno D.; Wu, Amery D. – Educational and Psychological Measurement, 2012
Previous studies have rarely examined the impact of outliers on the decisions about the number of factors to extract in an exploratory factor analysis. The few studies that have investigated this issue have arrived at contradictory conclusions regarding whether outliers inflated or deflated the number of factors extracted. By systematically…
Descriptors: Factor Analysis, Data, Simulation, Monte Carlo Methods
Liu, Min; Lin, Tsung-I – Educational and Psychological Measurement, 2014
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Descriptors: Regression (Statistics), Evaluation Methods, Indexes, Models