Showing all 9 results
Peer reviewed
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
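A minimal sketch of the general idea behind such a two-class mixture (not the authors' implementation): for a single response pattern, Bayes' rule gives the posterior probability of the unscalable class when the scalable class follows a Rasch model and the unscalable class is assumed to respond at random. The item difficulties, mixing proportion, and random-responding assumption below are illustrative only.

import numpy as np

# Illustrative values (assumptions, not taken from the article)
difficulties = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # Rasch item difficulties
pi_unscalable = 0.10                                   # prior mixing proportion
x = np.array([1, 0, 1, 1, 0])                          # one observed response pattern

# Likelihood under the Rasch class: integrate over theta ~ N(0, 1) on a grid
theta = np.linspace(-4, 4, 81)
weights = np.exp(-0.5 * theta**2)
weights /= weights.sum()
p_correct = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties[None, :])))
lik_theta = np.prod(np.where(x == 1, p_correct, 1.0 - p_correct), axis=1)
lik_rasch = np.sum(weights * lik_theta)

# Likelihood under the unscalable class: random responding (0.5 per item)
lik_unscalable = 0.5 ** len(x)

# Posterior probability that this pattern comes from the unscalable class
post_unscalable = (pi_unscalable * lik_unscalable) / (
    pi_unscalable * lik_unscalable + (1 - pi_unscalable) * lik_rasch
)
print(round(post_unscalable, 3))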
Peer reviewed
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Educational and Psychological Measurement, 2015
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…
Descriptors: Competence, Tests, Evaluation Methods, Adults
Peer reviewed
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
Peer reviewed
Sueiro, Manuel J.; Abad, Francisco J. – Educational and Psychological Measurement, 2011
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
Descriptors: Goodness of Fit, Item Response Theory, Nonparametric Statistics, Probability
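A root integrated squared error index of this kind measures the weighted distance between a nonparametric and a parametric item characteristic curve. The sketch below is a generic illustration on a latent-trait grid; the curves, the weighting density, and the 2PL form are assumptions, and the article's estimator (which uses the posterior distribution of the latent trait) may differ in detail.

import numpy as np

def rise_index(weights, p_nonparametric, p_parametric):
    """Root integrated squared error between two item characteristic curves,
    weighted by a density over the latent trait (generic illustration)."""
    sq_diff = (p_nonparametric - p_parametric) ** 2
    return np.sqrt(np.sum(weights * sq_diff))

# Illustrative example: a 2PL curve vs. a slightly distorted "empirical" curve
theta = np.linspace(-4, 4, 161)
weights = np.exp(-0.5 * theta**2)
weights /= weights.sum()                                   # stands in for the latent-trait density
p_param = 1 / (1 + np.exp(-1.2 * (theta - 0.3)))           # fitted parametric ICC
p_nonpar = np.clip(p_param + 0.03 * np.sin(theta), 0, 1)   # hypothetical nonparametric ICC

print(round(rise_index(weights, p_nonpar, p_param), 4))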
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1983
When comparing k normal populations an investigator might want to know the probability that the population with the largest population mean will have the largest sample mean. This paper describes and illustrates methods of approximating this probability when the variances are unknown and possibly unequal. (Author/BW)
Descriptors: Data Analysis, Hypothesis Testing, Mathematical Formulas, Probability
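The probability in question can also be estimated by straightforward simulation when an analytic form is awkward. The sketch below uses Monte Carlo sampling of the group sample means; it stands in for, and is not, the approximation methods described in the article, and the means, standard deviations, and sample size are made up for illustration.

import numpy as np

def prob_best_pop_has_best_mean(means, sds, n_per_group, reps=100_000, seed=0):
    """Monte Carlo estimate of the probability that the population with the
    largest mean also yields the largest sample mean (unequal variances allowed)."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    best = np.argmax(means)
    # Each sample mean is normal with standard deviation sigma / sqrt(n)
    sample_means = rng.normal(means, sds / np.sqrt(n_per_group),
                              size=(reps, len(means)))
    return np.mean(np.argmax(sample_means, axis=1) == best)

# Example: three populations with unequal variances, n = 20 per group
print(prob_best_pop_has_best_mean([0.0, 0.2, 0.5], [1.0, 2.0, 1.5], n_per_group=20))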
Peer reviewed
Rupp, Andre A.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2006
One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…
Descriptors: Psychometrics, Probability, Simulation, Item Response Theory
Peer reviewed
Meyer, Lennart – Educational and Psychological Measurement, 1979
The PM statistical index, which indicates the probability that a person will belong to a particular clinical class, is described. The coefficient is similar to the G index but is easier to compute. An empirical example is presented. (JKS)
Descriptors: Adults, Clinical Diagnosis, Data Analysis, Hypothesis Testing
Peer reviewed
Edgington, Eugene S.; Haller, Otto – Educational and Psychological Measurement, 1984
This paper explains how to combine probabilities from discrete distributions, such as probability distributions for nonparametric tests. (Author/BW)
Descriptors: Computer Software, Data Analysis, Hypothesis Testing, Mathematical Formulas
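One way to combine probabilities from independent discrete tests is to use the sum of the p-values as the combining statistic and enumerate its exact null distribution from each test's attainable p-values. The sketch below illustrates that general idea with two one-sided sign tests (n = 3 and n = 4); the article's exact procedure may differ.

from itertools import product

def combine_discrete_pvalues(attainable, observed):
    """Exact combined significance for independent discrete tests: the null
    probability of a p-value total at most as small as the observed sum.
    `attainable` holds, per test, the (p_value, null_probability) pairs.
    Generic illustration, not necessarily the article's procedure."""
    observed_sum = sum(observed)
    combined = 0.0
    for combo in product(*attainable):
        total = sum(p for p, _ in combo)
        if total <= observed_sum + 1e-12:
            prob = 1.0
            for _, q in combo:
                prob *= q
            combined += prob
    return combined

# Attainable p-values and their null probabilities for one-sided sign tests
test_n3 = [(0.125, 0.125), (0.5, 0.375), (0.875, 0.375), (1.0, 0.125)]
test_n4 = [(0.0625, 0.0625), (0.3125, 0.25), (0.6875, 0.375),
           (0.9375, 0.25), (1.0, 0.0625)]
print(round(combine_discrete_pvalues([test_n3, test_n4],
                                      observed=[0.125, 0.0625]), 4))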
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1985
Three numerical coefficients for analyzing the validity and reliability of ratings are described. Each coefficient is computed as the ratio of an obtained to a maximum sum of differences in ratings. The coefficients are also applicable to the item analysis, agreement analysis, and cluster or factor analysis of rating-scale data. (Author/BW)
Descriptors: Computer Software, Data Analysis, Factor Analysis, Item Analysis
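A coefficient of this obtained-over-maximum form can be illustrated with a small example: sum each rater's difference from the lowest scale point and divide by the largest sum that difference could take. This is only a sketch in the spirit of the abstract, not a reproduction of the article's three specific coefficients, and the ratings and scale below are invented.

def ratio_coefficient(ratings, lowest, highest):
    """Ratio of the obtained sum of rating differences to its maximum possible
    value (a sketch; the article defines three specific coefficients).
    `ratings` are one item's ratings from several raters on an integer scale."""
    obtained = sum(r - lowest for r in ratings)
    maximum = len(ratings) * (highest - lowest)
    return obtained / maximum

# Example: five raters judging one item on a 1-to-5 scale
print(ratio_coefficient([4, 5, 4, 3, 5], lowest=1, highest=5))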