Showing all 10 results
Peer reviewed
Wollack, James A.; Cohen, Allan S.; Eckerly, Carol A. – Educational and Psychological Measurement, 2015
Test tampering, especially on tests for educational accountability, is an unfortunate reality, necessitating that the state (or its testing vendor) perform data forensic analyses, such as erasure analyses, to look for signs of possible malfeasance. Few statistical approaches exist for detecting fraudulent erasures, and those that do largely do not…
Descriptors: Tests, Cheating, Item Response Theory, Accountability
Peer reviewed
Liu, Min; Lin, Tsung-I – Educational and Psychological Measurement, 2014
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
Descriptors: Regression (Statistics), Evaluation Methods, Indexes, Models
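As a hedged illustration of the class-enumeration problem Liu and Lin describe, the sketch below simulates a single-group regression with skewed (non-normal) errors and shows how BIC can favor extra mixture components; scikit-learn's GaussianMixture fit to the residuals is only a stand-in for a full mixture regression model, not the authors' procedure.

```python
# Sketch: spurious classes from non-normal errors (illustrative stand-in,
# not the authors' mixture regression procedure).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
# One true group, but errors are skewed (exponential) rather than normal.
y = 1.0 + 2.0 * x + rng.exponential(scale=1.0, size=n)

# Fit an ordinary regression, then inspect the residuals with 1-3 components.
beta = np.polyfit(x, y, deg=1)
resid = (y - np.polyval(beta, x)).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(resid)
    print(k, round(gm.bic(resid), 1))
# BIC often prefers k > 1 here even though a single group generated the data.
```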
Peer reviewed
Socha, Alan; DeMars, Christine E. – Educational and Psychological Measurement, 2013
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Descriptors: Sample Size, Test Length, Correlation, Test Format
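A minimal sketch of the sample-splitting step the Socha and DeMars abstract refers to: examinees are randomly divided into one subsample for selecting the assessment subtest (the ATFIND step) and another for computing the DIMTEST statistic. The 50/50 split and data dimensions are assumptions, not the study's design.

```python
# Sketch: random split of an examinee sample into an ATFIND subsample and a
# DIMTEST subsample. Proportions and data shape are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
responses = rng.integers(0, 2, size=(2000, 40))   # hypothetical 0/1 item responses

perm = rng.permutation(responses.shape[0])
n_atfind = responses.shape[0] // 2                # assumed 50/50 split
atfind_sample = responses[perm[:n_atfind]]        # used to select the AT subtest
dimtest_sample = responses[perm[n_atfind:]]       # used for the DIMTEST statistic
```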
Peer reviewed
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun – Educational and Psychological Measurement, 2012
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Descriptors: Test Items, Simulation, Testing, Statistical Analysis
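For readers unfamiliar with the setup, a minimal MIMIC specification might look like the sketch below: a factor measured by four indicators and regressed on a 0/1 group code, which carries the latent mean difference. semopy and the variable names are assumed choices here, not something the study prescribes.

```python
# Sketch of a basic MIMIC model (latent mean difference tested via a grouping
# covariate). semopy, the file name, and the columns x1-x4/group are assumptions.
import pandas as pd
import semopy

# Measurement part: one factor with four indicators.
# Structural part: the factor regressed on a 0/1 group code.
desc = """
F =~ x1 + x2 + x3 + x4
F ~ group
"""

data = pd.read_csv("items_with_group.csv")   # hypothetical data file
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())   # loadings, the F ~ group path, and standard errors
```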
Peer reviewed
Tay, Louis; Drasgow, Fritz – Educational and Psychological Measurement, 2012
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…
Descriptors: Test Length, Monte Carlo Methods, Goodness of Fit, Item Response Theory
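A bare-bones sketch of the unadjusted ingredient of that statistic: a χ²/df ratio comparing observed item-pair response frequencies with model-implied expectations. The expected counts below are placeholders, and the sample-size adjustment used by Drasgow and colleagues is deliberately not shown.

```python
# Sketch: chi-square / df ratio for one item pair, comparing observed response
# frequencies with model-implied expected frequencies. Counts are placeholders,
# and Drasgow et al.'s sample-size adjustment is omitted.
import numpy as np

# Observed counts of (item i, item j) patterns (0,0), (0,1), (1,0), (1,1).
observed = np.array([220.0, 180.0, 160.0, 440.0])
# Expected counts implied by a fitted IRT model (placeholder numbers).
expected = np.array([240.0, 170.0, 150.0, 440.0])

chi_square = np.sum((observed - expected) ** 2 / expected)
df = 3  # 4 cells minus 1 constraint; the effective df depends on the model
print(chi_square / df)
```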
Peer reviewed
Holden, Jocelyn E.; Kelley, Ken – Educational and Psychological Measurement, 2010
Classification procedures are common and useful in behavioral, educational, social, and managerial research. Supervised classification techniques such as discriminant function analysis assume training data are perfectly classified when estimating parameters or classifying. In contrast, unsupervised classification techniques such as finite mixture…
Descriptors: Discriminant Analysis, Classification, Computation, Behavioral Science Research
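The supervised/unsupervised contrast drawn by Holden and Kelley can be made concrete with a short sketch, assuming scikit-learn and simulated two-group data: discriminant function analysis is fit with training labels taken as correct, while a finite mixture model estimates the group structure without any labels.

```python
# Sketch: supervised classification (labels assumed known) versus unsupervised
# classification (group structure estimated from the data). Data are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(2.5, 1.0, size=(100, 2))])
labels = np.repeat([0, 1], 100)

# Supervised: discriminant function analysis uses the (assumed correct) labels.
lda = LinearDiscriminantAnalysis().fit(X, labels)
print("LDA accuracy:", lda.score(X, labels))

# Unsupervised: a finite mixture model recovers groups without labels.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("Mixture assignments:", np.bincount(gm.predict(X)))
```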
Peer reviewed
Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis
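A hedged sketch of the auxiliary-variable idea in Yoo's abstract: a variable outside the analysis model is included in the imputation model because it predicts both the indicators and their missingness. The iterative imputer, variable names, and missing-at-random mechanism below are illustrative assumptions.

```python
# Sketch: including an auxiliary variable when imputing missing indicator data.
# The auxiliary column is not part of the analysis model; it only helps imputation.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(7)
n = 300
aux = rng.normal(size=n)                        # auxiliary variable
x1 = 0.8 * aux + rng.normal(scale=0.6, size=n)  # indicator correlated with aux
x2 = 0.8 * aux + rng.normal(scale=0.6, size=n)

df = pd.DataFrame({"x1": x1, "x2": x2, "aux": aux})
# Make x1 missing more often when aux is high (missing at random given aux).
df.loc[aux > 0.5, "x1"] = np.nan

# Impute with the auxiliary variable included in the imputation model.
imputed = IterativeImputer(random_state=0).fit_transform(df)
df_imputed = pd.DataFrame(imputed, columns=df.columns)
```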
Peer reviewed
Fidalgo, Angel M.; Ferreres, Doris; Muniz, Jose – Educational and Psychological Measurement, 2004
Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…
Descriptors: Test Bias, Evaluation Methods, Sample Size, Error Patterns
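For context on the MH procedure mentioned above, here is a minimal sketch of a Mantel-Haenszel DIF check for a single item, stratifying examinees by total score; the counts are made up and statsmodels is an assumed tool choice, not the study's.

```python
# Sketch: Mantel-Haenszel DIF check for one item. Each 2x2 table is one score
# stratum: rows = reference/focal group, columns = correct/incorrect.
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

tables = [
    np.array([[30, 20], [22, 28]]),
    np.array([[45, 15], [35, 25]]),
    np.array([[55, 10], [48, 17]]),
]

st = StratifiedTable(tables)
print("MH common odds ratio:", st.oddsratio_pooled)
print("CMH test:", st.test_null_odds(correction=True))
```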
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 2006
For two random variables, X and Y, let D = X − Y, and let θ_x, θ_y, and θ_d be the corresponding medians. It is known that the Wilcoxon-Mann-Whitney test and its modern extensions do not test H_0: θ_x = θ_y, but rather, they test H_0: θ…
Descriptors: Scores, Inferences, Comparative Analysis, Statistical Analysis
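A short worked example of the distinction Wilcox draws, using arbitrary paired data: the median of the differences, D = X − Y, need not equal the difference between the medians of X and Y.

```python
# Worked example: median(D) for D = X - Y is not the difference of the medians.
import numpy as np

x = np.array([1, 2, 3, 4, 100])
y = np.array([2, 3, 1, 90, 4])
d = x - y

print(np.median(x) - np.median(y))  # 3 - 3 = 0
print(np.median(d))                 # median of [-1, -1, 2, -86, 96] = -1
```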
Peer reviewed
Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna – Educational and Psychological Measurement, 2006
The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…
Descriptors: Comparative Analysis, Meta Analysis, Monte Carlo Methods, Correlation
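To make one of the four methods concrete, here is a minimal sketch of Egger's regression test under made-up effect sizes: the standardized effect (effect/SE) is regressed on precision (1/SE), and an intercept far from zero suggests funnel-plot asymmetry.

```python
# Sketch of Egger's regression test for funnel-plot asymmetry.
# Effect sizes and standard errors are fabricated for illustration only.
import numpy as np
import statsmodels.api as sm

effect = np.array([0.42, 0.31, 0.55, 0.12, 0.38, 0.60, 0.25, 0.47])
se = np.array([0.10, 0.15, 0.08, 0.25, 0.12, 0.07, 0.20, 0.09])

standardized = effect / se
precision = 1.0 / se

model = sm.OLS(standardized, sm.add_constant(precision)).fit()
print(model.params)    # intercept far from zero suggests asymmetry
print(model.pvalues)
```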