Publication Date
Since 2006 (last 20 years) | 9 |
Descriptor
Correlation | 13 |
Error of Measurement | 13 |
Reliability | 6 |
Computation | 5 |
Scores | 5 |
Item Response Theory | 4 |
Factor Analysis | 3 |
Models | 3 |
Monte Carlo Methods | 3 |
Test Theory | 3 |
Achievement Gains | 2 |
Source
Applied Psychological Measurement | 13 |
Author
Allen, Nancy L. | 1 |
Alonso, Ariel | 1 |
Andrich, David | 1 |
Ankenmann, Robert D. | 1 |
De Ayala, R. J. | 1 |
Dunbar, Stephen B. | 1 |
Fearing, Benjamin K. | 1 |
Ferdous, Abdullah A. | 1 |
Finch, Holmes | 1 |
Humphreys, Lloyd G. | 1 |
Kim, Doyoung | 1 |
Publication Type
Journal Articles | 13 |
Reports - Research | 6 |
Reports - Evaluative | 5 |
Book/Product Reviews | 2 |
Reports - Descriptive | 2 |
Education Level
Elementary Education | 1 |
Monahan, Patrick O.; Ankenmann, Robert D. – Applied Psychological Measurement, 2010
When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…
Descriptors: Error of Measurement, Item Response Theory, Test Bias, Scores
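
A minimal sketch (not the authors' code) of the Mantel-Haenszel common odds ratio computed over summed-score strata, the matching variable whose imperfect reliability the abstract ties to Type I error inflation. The function name, the toy data, and the choice of the rest score as the matching criterion are illustrative assumptions.

# Mantel-Haenszel common odds ratio for DIF, matching on a summed score.
# Illustrative sketch only; data and names are hypothetical.
import numpy as np

def mantel_haenszel_odds_ratio(item, total, group):
    """item: 0/1 responses to the studied item
    total: summed-score matching variable (rest score here)
    group: 0 = reference, 1 = focal"""
    item, total, group = map(np.asarray, (item, total, group))
    num = 0.0  # sum of A_k * D_k / N_k over score strata k
    den = 0.0  # sum of B_k * C_k / N_k over score strata k
    for k in np.unique(total):
        s = total == k
        A = np.sum((group[s] == 0) & (item[s] == 1))  # reference correct
        B = np.sum((group[s] == 0) & (item[s] == 0))  # reference incorrect
        C = np.sum((group[s] == 1) & (item[s] == 1))  # focal correct
        D = np.sum((group[s] == 1) & (item[s] == 0))  # focal incorrect
        N = A + B + C + D
        if N > 0:
            num += A * D / N
            den += B * C / N
    return num / den  # alpha_MH; values far from 1 suggest DIF

# Toy example with no true DIF: the studied item is item 0,
# matching is on the summed score over the remaining items.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 500)
theta = rng.normal(size=500)
items = (rng.random((500, 10)) < 1 / (1 + np.exp(-theta[:, None]))).astype(int)
alpha_mh = mantel_haenszel_odds_ratio(items[:, 0], items[:, 1:].sum(axis=1), group)
print(round(alpha_mh, 2))
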
Kim, Doyoung; De Ayala, R. J.; Ferdous, Abdullah A.; Nering, Michael L. – Applied Psychological Measurement, 2011
To realize the benefits of item response theory (IRT), one must have model-data fit. One facet of a model-data fit investigation involves assessing the tenability of the conditional item independence (CII) assumption. In this Monte Carlo study, the comparative performance of 10 indices for identifying conditional item dependence is assessed. The…
Descriptors: Item Response Theory, Monte Carlo Methods, Error of Measurement, Statistical Analysis
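
One widely used index of conditional item dependence is Yen's Q3, the correlation of IRT residuals for an item pair. The sketch below, under a 1PL model with hypothetical data, illustrates the kind of index being compared; it is not any of the ten indices as implemented in the study.

# Yen's Q3: correlation of IRT residuals for item pairs, a common index of
# conditional (local) item dependence. Illustrative sketch only.
import numpy as np

def q3(responses, theta, difficulty):
    """responses: persons x items 0/1 matrix
    theta: person ability estimates
    difficulty: item difficulty estimates (Rasch/1PL assumed here)"""
    p = 1 / (1 + np.exp(-(theta[:, None] - difficulty[None, :])))  # model-expected scores
    resid = responses - p                                          # person-level residuals
    return np.corrcoef(resid, rowvar=False)                        # Q3 matrix over item pairs

rng = np.random.default_rng(1)
theta = rng.normal(size=1000)
b = np.linspace(-1.5, 1.5, 8)
u = (rng.random((1000, 8)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
print(np.round(q3(u, theta, b)[0, 1], 3))  # small value expected under conditional independence
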
Andrich, David; Kreiner, Svend – Applied Psychological Measurement, 2010
Models of modern test theory imply statistical independence among responses, generally referred to as "local independence." One violation of local independence occurs when the response to one item governs the response to a subsequent item. Expanding on a formulation of this kind of violation as a process in the dichotomous Rasch model,…
Descriptors: Test Theory, Item Response Theory, Test Items, Correlation
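
A hedged sketch of one way to simulate the violation described above: the response to one item shifts the effective difficulty of the next item under the dichotomous Rasch model. The shift parameter delta and the toy values are assumptions, not the authors' exact formulation.

# Response dependence in the dichotomous Rasch model: a correct answer to
# item 1 makes item 2 easier by delta, an incorrect answer makes it harder.
# Illustrative sketch only.
import numpy as np

def rasch_prob(theta, b):
    return 1 / (1 + np.exp(-(theta - b)))

rng = np.random.default_rng(2)
n = 5000
theta = rng.normal(size=n)
b1, b2, delta = 0.0, 0.2, 0.8          # item difficulties and dependence shift (assumed values)

x1 = (rng.random(n) < rasch_prob(theta, b1)).astype(int)
b2_eff = np.where(x1 == 1, b2 - delta, b2 + delta)   # difficulty of item 2 governed by item 1
x2 = (rng.random(n) < rasch_prob(theta, b2_eff)).astype(int)

print(round(np.corrcoef(x1, x2)[0, 1], 3))  # inter-item correlation inflated relative to independence
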
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal studies are permeating clinical trials in psychiatry. Therefore, it is of utmost importance to study the psychometric properties of rating scales, frequently used in these trials, within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
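
For the unidimensional case, the commonly cited conversion from standardized CFA loadings and thresholds (normal-ogive metric) to IRT discrimination and difficulty is a = lambda / sqrt(1 - lambda^2) and b = tau / lambda. The sketch below shows that standard conversion as background; it is not necessarily the exact formulae examined in the study, and the input values are hypothetical.

# Conversion of CFA loadings/thresholds for dichotomous data to IRT item
# parameters in the normal-ogive metric. Illustrative sketch only.
import numpy as np

def cfa_to_irt(loading, threshold):
    """loading: standardized factor loading lambda
    threshold: tau from the tetrachoric/probit CFA solution
    Returns (a, b) in the normal-ogive metric."""
    a = loading / np.sqrt(1 - loading**2)   # discrimination
    b = threshold / loading                 # difficulty
    return a, b

lam = np.array([0.5, 0.7, 0.8])
tau = np.array([-0.3, 0.0, 0.4])
a, b = cfa_to_irt(lam, tau)
print(np.round(a, 2), np.round(b, 2))       # multiply a by 1.702 for the logistic metric
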
Rae, Gordon – Applied Psychological Measurement, 2006
When errors of measurement are positively correlated, coefficient alpha may overestimate the "true" reliability of a composite. To reduce this inflation bias, Komaroff (1997) has proposed an adjusted alpha coefficient, alpha[subscript k]. This article shows that alpha[subscript k] is only guaranteed to be a lower bound to reliability if the latter does not include correlated…
Descriptors: Correlation, Reliability, Error of Measurement, Evaluation Methods
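
For reference, a minimal computation of ordinary coefficient alpha from an item-score matrix, the quantity that positively correlated errors can inflate. Komaroff's adjusted alpha[subscript k] itself is not reproduced here, and the simulated data are illustrative.

# Coefficient alpha from item scores: k/(k-1) * (1 - sum of item variances /
# variance of the composite). Illustrative sketch only.
import numpy as np

def cronbach_alpha(items):
    """items: persons x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the unit-weighted composite
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
true_score = rng.normal(size=(200, 1))
scores = true_score + rng.normal(size=(200, 6))     # six roughly parallel items, uncorrelated errors
print(round(cronbach_alpha(scores), 2))
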
Raju, Nambury S.; Lezotte, Daniel V.; Fearing, Benjamin K.; Oshima, T. C. – Applied Psychological Measurement, 2006
This note describes a procedure for estimating the range restriction component used in correcting correlations for unreliability and range restriction when an estimate of the reliability of a predictor is not readily available for the unrestricted sample. This procedure is illustrated with a few examples. (Contains 1 table.)
Descriptors: Correlation, Reliability, Predictor Variables, Error Correction
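
A sketch of the two classic corrections involved, disattenuation for unreliability and direct (Thorndike Case 2) range restriction, to show where the range restriction component and the predictor reliability enter. The values are hypothetical, and this is not the estimation procedure the note proposes.

# Classic corrections applied to a validity coefficient. Illustrative only.
import math

def correct_for_attenuation(r_xy, r_xx, r_yy=1.0):
    """Disattenuate r_xy for unreliability in x (and optionally y)."""
    return r_xy / math.sqrt(r_xx * r_yy)

def correct_for_range_restriction(r, u):
    """Direct range restriction on x; u = unrestricted SD / restricted SD."""
    return r * u / math.sqrt(1 - r**2 + (r * u) ** 2)

r_restricted = 0.30   # observed predictor-criterion correlation (hypothetical)
u = 1.5               # SD ratio, unrestricted over restricted (hypothetical)
r_xx = 0.80           # predictor reliability (hypothetical)
r = correct_for_range_restriction(r_restricted, u)
print(round(correct_for_attenuation(r, r_xx), 2))
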
Kluge, Annette – Applied Psychological Measurement, 2008
The use of microworlds (MWs), or complex dynamic systems, in educational testing and personnel selection is hampered by systematic measurement errors because these new and innovative item formats are not adequately controlled for their difficulty. This empirical study introduces a way to operationalize an MW's difficulty and demonstrates the…
Descriptors: Personnel Selection, Self Efficacy, Educational Testing, Computer Uses in Education

Humphreys, Lloyd G. – Applied Psychological Measurement, 1996
The reliability of a gain is determined by the reliabilities of the components, the correlation between them, and their standard deviations. Reliability is not inherently low, but the components of gains in many investigations make low reliability likely and require caution in the use of gain scores. (SLD)
Descriptors: Achievement Gains, Change, Correlation, Error of Measurement
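
The familiar formula summarized above, reliability of a difference score from the component reliabilities, their intercorrelation, and their standard deviations, in a short worked sketch with illustrative values.

# Reliability of a gain (difference) score. Input values are illustrative.
def gain_reliability(r_xx, r_yy, r_xy, sd_x, sd_y):
    num = sd_x**2 * r_xx + sd_y**2 * r_yy - 2 * r_xy * sd_x * sd_y
    den = sd_x**2 + sd_y**2 - 2 * r_xy * sd_x * sd_y
    return num / den

# Reliable components that correlate highly still yield an unreliable gain:
print(round(gain_reliability(0.85, 0.85, 0.80, sd_x=10, sd_y=10), 2))  # 0.25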

Williams, Richard H.; Zimmerman, Donald W. – Applied Psychological Measurement, 1996
The critiques by L. Collins and L. Humphreys in this issue illustrate problems with the use of gain scores. Collins' examples show that familiar formulas for the reliability of differences do not reflect the precision of measures of change. Additional examples demonstrate flaws in the conventional approach to reliability. (SLD)
Descriptors: Achievement Gains, Change, Correlation, Error of Measurement
Zinbarg, Richard E.; Yovel, Iftah; Revelle, William; McDonald, Roderick P. – Applied Psychological Measurement, 2006
The extent to which a scale score generalizes to a latent variable common to all of the scale's indicators is indexed by the scale's general factor saturation. Seven techniques for estimating this parameter--omega[hierarchical] (omega[subscript h])--are compared in a series of simulated data sets. Primary comparisons were based on 160 artificial…
Descriptors: Computation, Factor Analysis, Reliability, Correlation
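
A minimal sketch of omega[subscript h] as the squared sum of general-factor loadings over the variance of the unit-weighted composite, assuming the general-factor loadings and the observed covariance matrix are already in hand; none of the seven estimation techniques compared in the article is reproduced, and the inputs are hypothetical.

# Omega[subscript h]: (sum of general-factor loadings)^2 / total composite variance.
import numpy as np

def omega_hierarchical(general_loadings, cov):
    """general_loadings: each item's loading on the general factor
    cov: observed item covariance matrix."""
    general_loadings = np.asarray(general_loadings, dtype=float)
    total_variance = np.asarray(cov).sum()           # variance of the unit-weighted composite
    return general_loadings.sum() ** 2 / total_variance

lam_g = [0.6, 0.6, 0.5, 0.5, 0.4, 0.4]                    # hypothetical general-factor loadings
cov = np.full((6, 6), 0.30) + np.diag(np.full(6, 0.70))   # hypothetical standardized covariances
print(round(omega_hierarchical(lam_g, cov), 2))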

Allen, Nancy L.; Dunbar, Stephen B. – Applied Psychological Measurement, 1990
The standard error (SE) of correlations adjusted for selection with commonly used formulas was investigated. The study provides large-sample approximations of SE using the Pearson-Lawley three-variable correction formula, examines the SE under specific conditions, and compares various estimates of SEs under direct and indirect selection. (TJH)
Descriptors: Computer Simulation, Correlation, Demography, Error of Measurement
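
As commonly stated, the three-variable (indirect selection) correction adjusts r_xy using each variable's correlation with the selection variable z and the ratio of unrestricted to restricted standard deviations of z. A sketch with hypothetical values follows; the study's standard-error approximations for the corrected correlation are not shown.

# Pearson-Lawley three-variable correction (selection on a third variable z).
# Illustrative values only.
import math

def correct_indirect_selection(r_xy, r_xz, r_yz, u):
    """u = SD of z in the unrestricted group / SD of z in the selected group."""
    w = u**2 - 1
    num = r_xy + r_xz * r_yz * w
    den = math.sqrt((1 + r_xz**2 * w) * (1 + r_yz**2 * w))
    return num / den

print(round(correct_indirect_selection(r_xy=0.25, r_xz=0.50, r_yz=0.40, u=1.4), 2))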

Zegers, Frits E. – Applied Psychological Measurement, 1991
The degree of agreement between two raters rating several objects for a single characteristic can be expressed through an association coefficient, such as the Pearson product-moment correlation. How to select an appropriate association coefficient, and the desirable properties and uses of a class of such coefficients--the Euclidean…
Descriptors: Classification, Correlation, Data Interpretation, Equations (Mathematics)
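
A small illustration of two association coefficients for a pair of raters: the Pearson product-moment correlation named in the abstract and, as it is commonly given, the identity coefficient from the Euclidean family, which drops to the extent the raters differ in level or scale. The ratings are hypothetical.

# Pearson correlation vs. the identity coefficient (as commonly given) for two raters.
import numpy as np

rater_1 = np.array([2.0, 3.0, 3.0, 4.0, 5.0, 5.0])   # hypothetical ratings
rater_2 = np.array([3.0, 4.0, 4.0, 5.0, 6.0, 6.0])   # same ordering, shifted by one point

pearson_r = np.corrcoef(rater_1, rater_2)[0, 1]      # insensitive to the additive shift
identity = 2 * np.sum(rater_1 * rater_2) / (np.sum(rater_1**2) + np.sum(rater_2**2))
print(round(pearson_r, 2), round(identity, 2))       # 1.0 vs. a value below 1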