Search results: 13 journal articles in Educational and Psychological Measurement (2006-2022).
Lee, Sooyong; Han, Suhwa; Choi, Seung W. – Educational and Psychological Measurement, 2022
Response data containing an excessive number of zeros are referred to as zero-inflated data. When differential item functioning (DIF) detection is of interest, zero-inflation can attenuate DIF effects in the total sample and lead to underdetection of DIF items. The current study presents a DIF detection procedure for response data with excess…
Descriptors: Test Bias, Monte Carlo Methods, Simulation, Models
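The zero-inflation mechanism described above can be sketched as a two-part mixture: a structural-zero class that always scores 0, plus ordinary responders. A minimal simulation (function name, class proportion, and success rate are illustrative assumptions, not taken from the study):

```python
import random

def simulate_zero_inflated(n, p_zero, p_correct):
    """Mix a structural-zero class (probability p_zero, always
    scores 0) with ordinary responders (success rate p_correct)."""
    out = []
    for _ in range(n):
        if random.random() < p_zero:
            out.append(0)                       # structural zero
        else:
            out.append(1 if random.random() < p_correct else 0)
    return out

random.seed(1)
data = simulate_zero_inflated(10_000, p_zero=0.3, p_correct=0.7)
observed_rate = sum(data) / len(data)   # expected ~ (1 - 0.3) * 0.7 = 0.49
zero_rate = data.count(0) / len(data)   # inflated well above 1 - 0.7 = 0.3
```

The observed success rate is pulled down and the zero rate pushed up relative to what the responders alone would produce, which is the attenuation that masks group differences in the pooled sample.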
Lee, Chansoon; Qian, Hong – Educational and Psychological Measurement, 2022
Using classical test theory and item response theory, this study applied sequential procedures to a real operational item pool in variable-length computerized adaptive testing (CAT) to detect items whose security may be compromised. Moreover, this study proposed a hybrid threshold approach to improve the detection power of the sequential…
Descriptors: Computer Assisted Testing, Adaptive Testing, Licensing Examinations (Professions), Item Response Theory
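A classic sequential detection procedure in this spirit is Wald's sequential probability ratio test (SPRT), which flags an item whose correct-response rate jumps from a baseline. The sketch below is a generic illustration under assumed rates and error levels, not the study's operational procedure or its hybrid thresholds:

```python
import math
import random

def sprt_flag(responses, p0, p1, alpha=0.01, beta=0.01):
    """Wald's SPRT: accumulate the log-likelihood ratio of a
    'compromised' success rate p1 vs baseline p0, and stop as soon
    as the ratio crosses either decision bound."""
    upper = math.log((1 - beta) / alpha)    # cross above -> flag item
    lower = math.log(beta / (1 - alpha))    # cross below -> clear item
    llr = 0.0
    for n, correct in enumerate(responses, start=1):
        llr += math.log((p1 / p0) if correct else ((1 - p1) / (1 - p0)))
        if llr >= upper:
            return "flag", n
        if llr <= lower:
            return "clear", n
    return "undecided", len(responses)

random.seed(8)
hot_item = [random.random() < 0.95 for _ in range(400)]   # leaked item
cold_item = [random.random() < 0.40 for _ in range(400)]  # normal item
verdict_hot = sprt_flag(hot_item, p0=0.5, p1=0.9)
verdict_cold = sprt_flag(cold_item, p0=0.5, p1=0.9)
```

The appeal of a sequential rule in this setting is that a clearly compromised item is flagged after only a handful of responses rather than after a fixed monitoring window.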
Wollack, James A.; Cohen, Allan S.; Eckerly, Carol A. – Educational and Psychological Measurement, 2015
Test tampering, especially on tests for educational accountability, is an unfortunate reality, necessitating that the state (or its testing vendor) perform data forensic analyses, such as erasure analyses, to look for signs of possible malfeasance. Few statistical approaches exist for detecting fraudulent erasures, and those that do largely do not…
Descriptors: Tests, Cheating, Item Response Theory, Accountability
Leth-Steensen, Craig; Gallitto, Elena – Educational and Psychological Measurement, 2016
A large number of approaches have been proposed for estimating and testing the significance of indirect effects in mediation models. In this study, four sets of Monte Carlo simulations involving full latent variable structural equation models were run in order to contrast the effectiveness of the currently popular bias-corrected bootstrapping…
Descriptors: Mediation Theory, Structural Equation Models, Monte Carlo Methods, Simulation
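The bootstrapping idea can be illustrated outside the latent-variable setting. Below is a minimal percentile-bootstrap sketch for the indirect effect a*b in an observed-variable mediation model; the study itself used bias-corrected bootstrapping with full structural equation models, and the path values and sample sizes here are assumptions:

```python
import random
import statistics

def slope(x, y):
    """Simple OLS slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

random.seed(2)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]   # a-path = 0.5
y = [0.4 * mi + random.gauss(0, 1) for mi in m]   # b-path = 0.4, no direct effect

# Percentile bootstrap of the indirect effect a*b
boot = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    xs = [x[i] for i in idx]
    ms = [m[i] for i in idx]
    ys = [y[i] for i in idx]
    boot.append(slope(xs, ms) * slope(ms, ys))
boot.sort()
lo, hi = boot[49], boot[1949]   # approximate 95% percentile interval
```

Because the product a*b has a skewed sampling distribution, resampling-based intervals like this are generally preferred over a symmetric normal-theory interval for the indirect effect.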
Harring, Jeffrey R.; Weiss, Brandi A.; Li, Ming – Educational and Psychological Measurement, 2015
Several studies have stressed the importance of simultaneously estimating interaction and quadratic effects in multiple regression analyses, even if theory only suggests an interaction effect should be present. Specifically, past studies suggested that failing to simultaneously include quadratic effects when testing for interaction effects could…
Descriptors: Structural Equation Models, Statistical Analysis, Monte Carlo Methods, Computation
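The concern can be reproduced directly: when predictors are correlated and true quadratic effects are omitted from the model, the interaction term absorbs them and a spurious interaction appears. A sketch with assumed coefficients, using stdlib-only OLS via the normal equations:

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, y):
    """OLS coefficients via the normal equations (X'X) beta = X'y."""
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(XtX, Xty)

random.seed(3)
n = 5000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [0.6 * a + 0.8 * random.gauss(0, 1) for a in x1]   # correlated predictors
# True model: quadratic effects are present, the interaction is NOT.
y = [1 + 0.5 * a + 0.5 * b + 0.4 * a * a + 0.4 * b * b + random.gauss(0, 1)
     for a, b in zip(x1, x2)]

# Misspecified model (interaction only): picks up a spurious interaction.
beta_int = ols([[1, a, b, a * b] for a, b in zip(x1, x2)], y)
# Model with interaction AND quadratic terms: interaction goes to ~0.
beta_full = ols([[1, a, b, a * b, a * a, b * b] for a, b in zip(x1, x2)], y)
```

With correlated predictors, x1*x2 is itself correlated with x1^2 and x2^2, so omitting the quadratic terms biases the interaction coefficient well away from its true value of zero.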
Bishara, Anthony J.; Hittner, James B. – Educational and Psychological Measurement, 2015
It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
Descriptors: Research Methodology, Monte Carlo Methods, Correlation, Simulation
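A minimal version of such a simulation: draw bivariate normal data, optionally push both marginals through a skewing transform, and average the sample Pearson r over many replications. The correlation, sample size, and replication count are assumptions for illustration:

```python
import math
import random
import statistics

def pearson(x, y):
    """Sample Pearson product-moment correlation."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def mean_correlation(transform, rho=0.5, n=50, reps=2000):
    """Average sample r when both marginals are pushed through
    `transform` (the identity keeps them normal)."""
    total = 0.0
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [rho * a + math.sqrt(1 - rho * rho) * random.gauss(0, 1)
             for a in x]
        total += pearson([transform(a) for a in x],
                         [transform(b) for b in y])
    return total / reps

random.seed(4)
r_normal = mean_correlation(lambda v: v)   # stays close to rho = 0.5
r_lognormal = mean_correlation(math.exp)   # attenuated under skewness
```

Skewing both marginals (here into lognormals) attenuates the Pearson correlation relative to the underlying normal-scale rho, which is the kind of bias the simulations above quantify.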
Liu, Min; Lin, Tsung-I – Educational and Psychological Measurement, 2014
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Descriptors: Regression (Statistics), Evaluation Methods, Indexes, Models
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun – Educational and Psychological Measurement, 2012
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Descriptors: Test Items, Simulation, Testing, Statistical Analysis
Tay, Louis; Drasgow, Fritz – Educational and Psychological Measurement, 2012
Two Monte Carlo simulation studies investigated the effectiveness of the mean-adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean-adjusted…
Descriptors: Test Length, Monte Carlo Methods, Goodness of Fit, Item Response Theory
Li, Ying; Rupp, Andre A. – Educational and Psychological Measurement, 2011
This study investigated the Type I error rate and power of the multivariate extension of the S−χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…
Descriptors: Test Length, Item Response Theory, Statistical Analysis, Error Patterns
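Type I error rates of this kind are typically estimated by simulating data under a true null hypothesis and counting how often the test rejects. A generic sketch with a simple two-sample statistic (not the fit statistic studied here; sample sizes and replication counts are assumptions):

```python
import random
import statistics

def z_statistic(x, y):
    """Large-sample two-sample test statistic (normal approximation)."""
    se = (statistics.variance(x) / len(x)
          + statistics.variance(y) / len(y)) ** 0.5
    return (statistics.fmean(x) - statistics.fmean(y)) / se

random.seed(5)
alpha, reps, n = 0.05, 4000, 100
crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)   # two-sided cutoff

rejections = 0
for _ in range(reps):
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]   # H0 true: same distribution
    if abs(z_statistic(x, y)) > crit:
        rejections += 1

type1_rate = rejections / reps   # should land near the nominal alpha
```

A well-calibrated test keeps the empirical rejection rate close to the nominal alpha under the null; power studies repeat the same loop with a true group difference injected.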
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.; ten Berge, Jos M. F. – Educational and Psychological Measurement, 2008
This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of items to subtests. Another method is the oblique…
Descriptors: Assignments, Simulation, Construct Validity, Factor Analysis
Wilcox, Rand R. – Educational and Psychological Measurement, 2006
For two random variables, X and Y, let D = X − Y, and let θ_x, θ_y, and θ_d be the corresponding medians. It is known that the Wilcoxon-Mann-Whitney test and its modern extensions do not test H₀: θ_x = θ_y, but rather they test H₀: θ…
Descriptors: Scores, Inferences, Comparative Analysis, Statistical Analysis
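The distinction matters because the median of a difference is generally not the difference of the medians. A quick numeric check with a skewed X and a symmetric Y (the distributions are chosen purely for illustration):

```python
import random
import statistics

random.seed(6)
n = 100_000
x = [random.expovariate(1.0) for _ in range(n)]   # skewed; median = ln 2
y = [random.gauss(0, 1) for _ in range(n)]        # symmetric; median = 0
d = [a - b for a, b in zip(x, y)]

median_of_difference = statistics.median(d)                       # θ_d
difference_of_medians = statistics.median(x) - statistics.median(y)  # θ_x − θ_y
```

With these distributions the two quantities differ noticeably, so a procedure that targets θ_d answers a different question than one that compares θ_x with θ_y.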
Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna – Educational and Psychological Measurement, 2006
The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…
Descriptors: Comparative Analysis, Meta Analysis, Monte Carlo Methods, Correlation
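Egger's regression, one of the four methods compared above, can be sketched in a few lines: regress each study's standardized effect on its precision and inspect the intercept, which should sit near zero in the absence of funnel-plot asymmetry. The publication filter below is a deliberately crude assumption for illustration, not the simulation design of the study:

```python
import random
import statistics

def egger_intercept(effects, ses):
    """Unweighted Egger regression: z_i = effect_i / se_i regressed on
    precision_i = 1 / se_i; a nonzero intercept suggests asymmetry."""
    z = [e / s for e, s in zip(effects, ses)]
    prec = [1.0 / s for s in ses]
    mp, mz = statistics.fmean(prec), statistics.fmean(z)
    slope = (sum((p - mp) * (v - mz) for p, v in zip(prec, z))
             / sum((p - mp) ** 2 for p in prec))
    return mz - slope * mp   # regression intercept

random.seed(7)
true_effect = 0.3
ses = [random.uniform(0.05, 0.5) for _ in range(200)]
effects = [random.gauss(true_effect, s) for s in ses]

# Toy publication filter (an assumption for illustration only):
# a study is "published" if its effect is individually significant.
published = [(e, s) for e, s in zip(effects, ses) if e > 1.96 * s]
pub_effects, pub_ses = zip(*published)

b0_all = egger_intercept(effects, ses)                # near zero
b0_published = egger_intercept(pub_effects, pub_ses)  # pushed upward
```

Selective publication censors small, imprecise studies with unlucky results, so among published studies the standardized effects of low-precision studies are inflated and the intercept drifts away from zero.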