Showing all 11 results
Peer reviewed
Ellis, Jules L. – Educational and Psychological Measurement, 2021
This study develops a theoretical model for the costs of an exam as a function of its duration. Two kinds of costs are distinguished: (1) the costs of measurement errors and (2) the costs of the measurement itself. Both costs are expressed in the student's time. Based on a classical test theory model, enriched with assumptions about the context, the costs…
Descriptors: Test Length, Models, Error of Measurement, Measurement
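A minimal numerical sketch of the trade-off the Ellis (2021) entry describes, assuming that error costs shrink as reliability grows with duration (via the Spearman-Brown formula) while measurement costs grow linearly with testing time; the cost weight and baseline reliability below are invented for illustration, not taken from the article.
```python
# Toy cost-of-an-exam model: error costs fall as the test gets longer and more
# reliable, while the time cost of taking the test rises. Constants are invented.
import numpy as np

def total_cost(duration, base_duration=60.0, base_reliability=0.80,
               error_cost_weight=300.0):
    """Both cost components are expressed in minutes of student time."""
    k = duration / base_duration                                           # lengthening factor
    reliability = k * base_reliability / (1 + (k - 1) * base_reliability)  # Spearman-Brown
    error_cost = error_cost_weight * (1 - reliability)                     # cost of measurement error
    measurement_cost = duration                                            # cost of the measurement
    return error_cost + measurement_cost

durations = np.arange(10, 181, 5)
costs = [total_cost(d) for d in durations]
print(f"Cost-minimizing duration under these toy constants: "
      f"{durations[int(np.argmin(costs))]} minutes")
```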
Peer reviewed
Raykov, Tenko; Marcoulides, George A.; Li, Tenglong – Educational and Psychological Measurement, 2018
This note extends the results in the 2016 article by Raykov, Marcoulides, and Li to the case of correlated errors in a set of observed measures subjected to principal component analysis. It is shown that when at least two measures are fallible, the probability is zero for any principal component--and in particular for the first principal…
Descriptors: Factor Analysis, Error of Measurement, Correlation, Reliability
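A quick simulation in the spirit of the Raykov, Marcoulides, and Li (2018) result, not their proof: when observed measures contain (here partly correlated) errors, the first principal component is contaminated by error and does not coincide with the error-free composite. All generating values are invented.
```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_score = rng.normal(size=n)                    # common error-free score
loadings = np.array([1.0, 0.9, 0.8, 0.7])
errors = rng.normal(scale=0.6, size=(n, 4))
errors[:, 1] += 0.5 * errors[:, 0]                 # correlated errors in two measures
observed = true_score[:, None] * loadings + errors

observed -= observed.mean(axis=0)                  # center before extracting components
_, _, vt = np.linalg.svd(observed, full_matrices=False)
first_pc = observed @ vt[0]

r = abs(np.corrcoef(first_pc, true_score)[0, 1])
print(f"Correlation of the first principal component with the true score: {r:.3f}")
```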
Peer reviewed
Conger, Anthony J. – Educational and Psychological Measurement, 2017
Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa. Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another…
Descriptors: Interrater Reliability, Evaluators, Accuracy, Statistical Analysis
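A minimal demonstration of the marginal-frequency effect mentioned in the Conger (2017) entry: two rater-agreement tables with identical percent agreement but different category marginals yield different values of Cohen's kappa. The tables are invented.
```python
import numpy as np

def cohens_kappa(table):
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                                    # observed agreement
    p_exp = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

balanced = [[40, 10], [10, 40]]    # both raters use the two categories 50/50
skewed   = [[70, 10], [10, 10]]    # both raters favor the first category
for name, table in [("balanced marginals", balanced), ("skewed marginals", skewed)]:
    agreement = np.trace(np.asarray(table)) / np.asarray(table).sum()
    print(f"{name}: agreement = {agreement:.2f}, kappa = {cohens_kappa(table):.3f}")
```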
Peer reviewed
Trafimow, David – Educational and Psychological Measurement, 2017
There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a…
Descriptors: Statistical Inference, Hypothesis Testing, Probability, Intervals
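A toy Bayes calculation (numbers invented, not the article's argument) of why the inverse inference is problematic: the p value conditions on the null, p(data | H0), which is not the probability of the null given the data, p(H0 | data).
```python
# p(H0 | data) also depends on the prior and on how likely the data are under
# the alternative; neither is supplied by the p value itself.
p_data_given_h0 = 0.04     # a "significant" p value
p_data_given_h1 = 0.30     # likelihood of the same data under the alternative (assumed)
prior_h0 = 0.50            # prior probability of the null (assumed)

posterior_h0 = (p_data_given_h0 * prior_h0) / (
    p_data_given_h0 * prior_h0 + p_data_given_h1 * (1 - prior_h0)
)
print(f"p(data | H0) = {p_data_given_h0:.2f}")
print(f"p(H0 | data) = {posterior_h0:.2f}")
```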
Peer reviewed
Raykov, Tenko – Educational and Psychological Measurement, 2012
A latent variable modeling approach that permits estimation of propensity scores in observational studies containing fallible independent variables is outlined, with subsequent examination of treatment effect. When at least one covariate is measured with error, it is indicated that the conventional propensity score need not possess the desirable…
Descriptors: Computation, Probability, Error of Measurement, Observation
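A simulation sketch of the problem the Raykov (2012) entry addresses, not of its latent variable solution: when the propensity score is estimated from an error-laden covariate, inverse-probability weighting leaves residual bias in the treatment effect. All generating values are invented; scikit-learn's LogisticRegression stands in for the propensity model.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 20000
x = rng.normal(size=n)                          # true covariate
w = x + rng.normal(scale=1.0, size=n)           # fallible measurement of x
treat = rng.binomial(1, 1 / (1 + np.exp(-1.5 * x)))
y = 2.0 * treat + 1.5 * x + rng.normal(size=n)  # true treatment effect = 2.0

def ipw_effect(covariate):
    """Inverse-probability-weighted treatment effect with a fitted propensity score."""
    ps_model = LogisticRegression().fit(covariate.reshape(-1, 1), treat)
    ps = ps_model.predict_proba(covariate.reshape(-1, 1))[:, 1]
    weights = treat / ps + (1 - treat) / (1 - ps)
    treated_mean = np.sum(weights * treat * y) / np.sum(weights * treat)
    control_mean = np.sum(weights * (1 - treat) * y) / np.sum(weights * (1 - treat))
    return treated_mean - control_mean

print(f"IPW effect using the true covariate:     {ipw_effect(x):.2f}")
print(f"IPW effect using the fallible covariate: {ipw_effect(w):.2f}")
```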
Peer reviewed
Köhler, Carmen; Pohl, Steffi; Carstensen, Claus H. – Educational and Psychological Measurement, 2015
When competence tests are administered, subjects frequently omit items. These missing responses pose a threat to correctly estimating the proficiency level. Newer model-based approaches aim to take nonignorable missing data processes into account by incorporating a latent missing propensity into the measurement model. Two assumptions are typically…
Descriptors: Competence, Tests, Evaluation Methods, Adults
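A small illustration of why such omissions matter (not the authors' model-based approaches): when the propensity to omit is related to proficiency and omitted items are simply scored as wrong, proportion-correct estimates for the high-omission group are distorted. Generating values are invented.
```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 5000, 30
theta = rng.normal(size=n_persons)                    # proficiency
b = rng.normal(size=n_items)                          # item difficulties
p_correct = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
resp = rng.binomial(1, p_correct).astype(float)

# Nonignorable omission process: low-proficiency examinees omit more often.
p_omit = (1 / (1 + np.exp(2.0 + 1.5 * theta)))[:, None] * np.ones(n_items)
omitted = rng.binomial(1, p_omit).astype(bool)

complete = resp.mean(axis=1)                          # scores without omissions
scored_wrong = np.where(omitted, 0.0, resp).mean(axis=1)
low = theta < -1                                      # the group that omits the most
print(f"Low-proficiency group, complete data:      {complete[low].mean():.3f}")
print(f"Low-proficiency group, omissions as wrong: {scored_wrong[low].mean():.3f}")
```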
Peer reviewed
Lee, HwaYoung; Beretvas, S. Natasha – Educational and Psychological Measurement, 2014
Conventional differential item functioning (DIF) detection methods (e.g., the Mantel-Haenszel test) can be used to detect DIF only across observed groups, such as gender or ethnicity. However, research has found that DIF is not typically fully explained by an observed variable. True sources of DIF may include unobserved, latent variables, such as…
Descriptors: Item Analysis, Factor Structure, Bayesian Statistics, Goodness of Fit
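For contrast with the latent-group approaches discussed in the entry above, a sketch of the conventional observed-group check: a Mantel-Haenszel chi-square for one item, stratifying on the rest score. The data are simulated with uniform DIF built into item 0; all settings are invented.
```python
import numpy as np

rng = np.random.default_rng(11)
n, n_items = 4000, 21
group = rng.integers(0, 2, size=n)               # 0 = reference, 1 = focal
theta = rng.normal(size=n)
b = np.tile(np.linspace(-1.5, 1.5, n_items), (n, 1))
b[group == 1, 0] += 0.6                          # uniform DIF on item 0 against the focal group
resp = rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - b))))

item = 0
rest = resp.sum(axis=1) - resp[:, item]          # matching variable: rest score
num, den = 0.0, 0.0
for s in np.unique(rest):
    stratum = rest == s
    a = np.sum((group[stratum] == 0) & (resp[stratum, item] == 1))  # reference-group correct
    n_r, n_f = np.sum(group[stratum] == 0), np.sum(group[stratum] == 1)
    m1, t = np.sum(resp[stratum, item] == 1), stratum.sum()
    if t < 2 or n_r == 0 or n_f == 0:
        continue
    num += a - n_r * m1 / t
    den += n_r * n_f * m1 * (t - m1) / (t**2 * (t - 1))
mh_chi2 = (abs(num) - 0.5) ** 2 / den            # with continuity correction, df = 1
print(f"Mantel-Haenszel chi-square for the DIF item: {mh_chi2:.2f}")
```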
Peer reviewed
Le, Huy; Marcus, Justin – Educational and Psychological Measurement, 2012
This study used Monte Carlo simulation to examine the properties of the overall odds ratio (OOR), which was recently introduced as an index for overall effect size in multiple logistic regression. It was found that the OOR was relatively independent of study base rate and performed better than most commonly used R-square analogs in indexing model…
Descriptors: Monte Carlo Methods, Probability, Mathematical Concepts, Effect Size
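The OOR formula itself is not reproduced here; the sketch below only mirrors the study design described in the entry above, using Monte Carlo draws to show how a commonly used R-square analog (McFadden's pseudo-R-square) shifts with the study base rate even when the predictor-outcome relation is held fixed. All simulation settings are invented.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def mcfadden_r2(base_rate, n=20000, beta=1.0):
    x = rng.normal(size=(n, 1))
    intercept = np.log(base_rate / (1 - base_rate))    # roughly targets the base rate
    p = 1 / (1 + np.exp(-(intercept + beta * x[:, 0])))
    y = rng.binomial(1, p)
    model = LogisticRegression().fit(x, y)
    prob = model.predict_proba(x)
    nll_model = -np.sum(np.log(np.where(y == 1, prob[:, 1], prob[:, 0])))
    p_bar = y.mean()
    nll_null = -np.sum(np.where(y == 1, np.log(p_bar), np.log(1 - p_bar)))
    return 1 - nll_model / nll_null

for rate in (0.05, 0.20, 0.50):
    print(f"base rate {rate:.2f}: McFadden R2 = {mcfadden_r2(rate):.3f}")
```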
Peer reviewed
Sueiro, Manuel J.; Abad, Francisco J. – Educational and Psychological Measurement, 2011
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
Descriptors: Goodness of Fit, Item Response Theory, Nonparametric Statistics, Probability
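A numerical sketch of a root integrated squared error (RISE) type index: the squared gap between a parametric and a nonparametric item characteristic curve, integrated against the latent-trait density. The two curves below are invented stand-ins, not the article's estimators.
```python
import numpy as np
from scipy.stats import norm

theta = np.linspace(-4, 4, 801)
parametric = 1 / (1 + np.exp(-1.2 * (theta - 0.3)))     # e.g., a fitted 2PL-style ICC
nonparametric = 1 / (1 + np.exp(-0.8 * (theta - 0.1)))  # stand-in for a nonparametric ICC
density = norm.pdf(theta)                               # latent-trait density N(0, 1)

step = theta[1] - theta[0]
rise = np.sqrt(np.sum((parametric - nonparametric) ** 2 * density) * step)
print(f"RISE between the two item characteristic curves: {rise:.4f}")
```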
Peer reviewed
Wang, Wen-Chung; Liu, Chih-Yu – Educational and Psychological Measurement, 2007
In this study, the authors develop a generalized multilevel facets model, which is not only a multilevel and two-parameter generalization of the facets model, but also a multilevel and facet generalization of the generalized partial credit model. Because the new model is formulated within a framework of nonlinear mixed models, no efforts are…
Descriptors: Generalization, Item Response Theory, Models, Equipment
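A small sketch of the kind of response function such facets models build on: adjacent-category (partial credit) probabilities with an item discrimination and a rater severity term. This is one common parameterization offered for orientation, not necessarily the authors' exact specification.
```python
import numpy as np

def category_probs(theta, a_item, b_item, rater_severity, thresholds):
    """P(score = k), k = 0..K, for a generalized partial credit model with a rater facet."""
    steps = a_item * (theta - b_item - rater_severity - np.asarray(thresholds))
    numerators = np.exp(np.concatenate(([0.0], np.cumsum(steps))))
    return numerators / numerators.sum()

# Example: a four-category item scored by a relatively severe rater.
probs = category_probs(theta=0.5, a_item=1.2, b_item=0.0,
                       rater_severity=0.4, thresholds=[-1.0, 0.0, 1.0])
print(np.round(probs, 3), "sum =", round(float(probs.sum()), 3))
```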
Peer reviewed
Zimmerman, Donald W. – Educational and Psychological Measurement, 1985
A computer program simulated guessing on multiple-choice test items and calculated deviation IQs from observed scores that contained a guessing component. Extensive variability in deviation IQs due entirely to chance was found. (Author/LMO)
Descriptors: Computer Simulation, Error of Measurement, Guessing (Tests), Intelligence Quotient
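A re-creation in spirit of that simulation, not the original program: examinees with identical true knowledge guess blindly on the remaining multiple-choice items, and the chance component alone spreads their deviation IQs. Test length, number of options, and norm-group statistics below are invented.
```python
import numpy as np

rng = np.random.default_rng(42)
n_items, n_options, n_examinees = 100, 4, 10000
known = 50                                           # items every examinee truly knows
guessed = rng.binomial(n_items - known, 1 / n_options, size=n_examinees)
observed = known + guessed                           # observed score = knowledge + luck

# Convert to deviation IQs against hypothetical norm-group statistics.
norm_mean, norm_sd = 62.5, 10.0
iq = 100 + 15 * (observed - norm_mean) / norm_sd
print(f"Deviation IQ range among examinees with identical knowledge: "
      f"{iq.min():.0f} to {iq.max():.0f} (SD = {iq.std():.1f})")
```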