Showing 1 to 15 of 17 results
Peer reviewed
Culpepper, Steven Andrew – Applied Psychological Measurement, 2013
A classic topic in the fields of psychometrics and measurement has been the impact of the number of scale categories on test score reliability. This study builds on previous research by further articulating the relationship between item response theory (IRT) and classical test theory (CTT). Equations are presented for comparing the reliability and…
Descriptors: Item Response Theory, Reliability, Scores, Error of Measurement
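The effect this abstract studies can be illustrated with a minimal Monte Carlo sketch (my own illustration, not the article's equations): coarsening a continuous response into fewer scale categories discards information and lowers the score's reliability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# CTT setup: observed continuous response = true score + error.
true_score = rng.normal(size=n)
continuous = true_score + rng.normal(scale=0.5, size=n)

def categorized_reliability(k):
    """Squared correlation between the true score and the response
    after coarsening it into k equal-probability categories."""
    cuts = np.quantile(continuous, np.linspace(0, 1, k + 1)[1:-1])
    categorical = np.digitize(continuous, cuts)
    r = np.corrcoef(true_score, categorical)[0, 1]
    return r ** 2

rel_2 = categorized_reliability(2)   # dichotomous scoring
rel_7 = categorized_reliability(7)   # 7-point scale
# More categories preserve more of the continuous response's reliability,
# which here is 1/(1 + 0.25) = 0.8 before any categorization.
```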
Peer reviewed
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
Peer reviewed
Culpepper, Steven Andrew – Applied Psychological Measurement, 2012
Measurement error significantly biases interaction effects and distorts researchers' inferences regarding interactive hypotheses. This article focuses on the single-indicator case and shows how to accurately estimate group slope differences by disattenuating interaction effects with errors-in-variables (EIV) regression. New analytic findings were…
Descriptors: Evidence, Test Length, Interaction, Regression (Statistics)
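The core idea behind the EIV correction can be sketched in a few lines (an illustrative simulation with the reliability known by construction, not the article's estimator): OLS on an error-laden predictor attenuates the slope by the predictor's reliability, so dividing by the reliability recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
beta = 2.0                                # true slope on the latent predictor

latent_x = rng.normal(size=n)             # true score, variance 1
meas_err = rng.normal(scale=0.5, size=n)  # measurement error, variance 0.25
observed_x = latent_x + meas_err
y = beta * latent_x + rng.normal(size=n)

# OLS on the error-laden predictor is biased toward zero:
# E[b_ols] = beta * reliability, where reliability = var(T) / var(X).
b_ols = np.cov(observed_x, y)[0, 1] / np.var(observed_x)

reliability = 1.0 / (1.0 + 0.25)          # known in this simulation
b_corrected = b_ols / reliability         # disattenuated estimate, near 2.0
```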
Peer reviewed
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal studies are permeating clinical trials in psychiatry. Therefore, it is of utmost importance to study the psychometric properties of rating scales, frequently used in these trials, within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Peer reviewed
Raju, Nambury S.; Price, Larry R.; Oshima, T. C.; Nering, Michael L. – Applied Psychological Measurement, 2007
An examinee-level (or conditional) reliability is proposed for use in both classical test theory (CTT) and item response theory (IRT). The well-known group-level reliability is shown to be the average of conditional reliabilities of examinees in a group or a population. This relationship is similar to the known relationship between the square of…
Descriptors: Item Response Theory, Error of Measurement, Reliability, Test Theory
Peer reviewed
Rae, Gordon – Applied Psychological Measurement, 2006
When errors of measurement are positively correlated, coefficient alpha may overestimate the "true" reliability of a composite. To reduce this inflation bias, Komaroff (1997) has proposed an adjusted alpha coefficient, ak. This article shows that ak is only guaranteed to be a lower bound to reliability if the latter does not include correlated…
Descriptors: Correlation, Reliability, Error of Measurement, Evaluation Methods
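The inflation this abstract discusses can be shown with a worked model-implied covariance matrix (my own illustration): with tau-equivalent items and uncorrelated errors, alpha equals the composite reliability exactly; adding a positive error covariance pushes alpha above it.

```python
import numpy as np

k = 4            # items
true_var = 1.0   # shared true-score variance (tau-equivalent items)
err_var = 0.5    # error variance per item

def cronbach_alpha(cov):
    kk = cov.shape[0]
    return kk / (kk - 1) * (1.0 - np.trace(cov) / cov.sum())

# Model-implied item covariance matrix: off-diagonals = true_var.
cov = np.full((k, k), true_var) + np.eye(k) * err_var

alpha_clean = cronbach_alpha(cov)               # = 8/9 here
true_rel_clean = k**2 * true_var / cov.sum()    # also = 8/9: they coincide

# Now let the errors of items 0 and 1 covary positively.
cov_corr = cov.copy()
cov_corr[0, 1] += 0.2
cov_corr[1, 0] += 0.2

alpha_corr = cronbach_alpha(cov_corr)           # ~ 0.899
true_rel_corr = k**2 * true_var / cov_corr.sum()  # ~ 0.870
# alpha_corr > true_rel_corr: correlated errors inflate alpha.
```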
Peer reviewed
Raju, Nambury S.; Lezotte, Daniel V.; Fearing, Benjamin K.; Oshima, T. C. – Applied Psychological Measurement, 2006
This note describes a procedure for estimating the range restriction component used in correcting correlations for unreliability and range restriction when an estimate of the reliability of a predictor is not readily available for the unrestricted sample. This procedure is illustrated with a few examples. (Contains 1 table.)
Descriptors: Correlation, Reliability, Predictor Variables, Error Correction
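For context, the standard building blocks such a procedure combines are the Thorndike Case II range-restriction correction and the disattenuation formula. The sketch below shows those textbook corrections only, not the note's own procedure for estimating the restriction component.

```python
import math

def case2_range_correction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II: correct a correlation for direct range
    restriction on the predictor, given the two predictor SDs."""
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(
        1 - r_restricted**2 + r_restricted**2 * u**2
    )

def disattenuate(r, rel_x, rel_y=1.0):
    """Correct a correlation for unreliability in one or both measures."""
    return r / math.sqrt(rel_x * rel_y)

r = case2_range_correction(0.30, sd_unrestricted=1.5, sd_restricted=1.0)
r_full = disattenuate(r, rel_x=0.81)   # then correct for predictor unreliability
# r is about 0.427; r_full divides by sqrt(0.81) = 0.9, so about 0.474.
```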
Peer reviewed
Raykov, Tenko – Applied Psychological Measurement, 1998
Proposes a method for obtaining standard errors and confidence intervals of composite reliability coefficients based on bootstrap methods and using a structural-equation-modeling framework for estimating the composite reliability of congeneric measures (T. Raykov, 1997). Demonstrates the approach with simulated data. (SLD)
Descriptors: Error of Measurement, Estimation (Mathematics), Reliability, Simulation
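The resampling mechanics can be sketched as follows. Note the simplification: Raykov's approach estimates composite reliability of congeneric measures in an SEM framework, whereas this illustration bootstraps coefficient alpha as a stand-in reliability estimate; the simulated data and sample sizes are mine.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 300, 5

# Simulated factor-model item data: common factor times loadings plus noise.
factor = rng.normal(size=(n, 1))
loadings = np.array([0.7, 0.8, 0.6, 0.9, 0.75])
items = factor * loadings + rng.normal(scale=0.6, size=(n, k))

def alpha(data):
    cov = np.cov(data, rowvar=False)
    kk = cov.shape[0]
    return kk / (kk - 1) * (1.0 - np.trace(cov) / cov.sum())

point = alpha(items)

# Nonparametric bootstrap: resample examinees with replacement,
# recompute the reliability estimate each time.
boot = np.array([
    alpha(items[rng.integers(0, n, size=n)]) for _ in range(1000)
])
se = boot.std(ddof=1)                            # bootstrap standard error
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])  # percentile 95% CI
```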
Peer reviewed
Ogasawara, Haruhiko – Applied Psychological Measurement, 2002
Obtained asymptotic standard errors of item, test, and score information function estimates, and used numerical illustrations to show that the response function estimates are rather stable in spite of the unstable parameter estimates. However, information function estimates are shown to be relatively unstable. (SLD)
Descriptors: Error of Measurement, Estimation (Mathematics), Item Response Theory, Reliability
Peer reviewed
Samejima, Fumiko – Applied Psychological Measurement, 1994
The reliability coefficient is predicted from the test information function (TIF) or two modified TIF formulas and a specific trait distribution. Examples illustrate the variability of the reliability coefficient across different trait distributions, and results are compared with empirical reliability coefficients. (SLD)
Descriptors: Adaptive Testing, Error of Measurement, Estimation (Mathematics), Reliability
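A basic version of this prediction, assuming a standard normal trait distribution, is rho = 1 / (1 + E[1/I(theta)]): average the squared standard error over the trait distribution, then convert to a reliability. The sketch below uses hypothetical 2PL items and simple grid quadrature; Samejima's modified TIF formulas are not reproduced here.

```python
import numpy as np

# Hypothetical 2PL test.
a = np.array([1.2, 1.0, 1.5, 0.9, 1.1])
b = np.array([-1.5, -0.5, 0.0, 0.7, 1.4])

def information(theta):
    """Test information at each trait level in the array theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (a**2 * p * (1.0 - p)).sum(axis=1)

# Average the squared standard error 1/I(theta) over a N(0,1) trait
# distribution using a normalized grid of quadrature weights.
theta = np.linspace(-4, 4, 801)
weights = np.exp(-theta**2 / 2)
weights /= weights.sum()

mean_se2 = (weights / information(theta)).sum()
predicted_reliability = 1.0 / (1.0 + mean_se2)   # trait variance = 1
```

A heavier-tailed or shifted trait distribution changes `weights`, and with it the predicted coefficient, which is the variability the abstract highlights.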
Peer reviewed
Humphreys, Lloyd G. – Applied Psychological Measurement, 1996
The reliability of a gain is determined by the reliabilities of the components, the correlation between them, and their standard deviations. Reliability is not inherently low, but the components of gains in many investigations make low reliability likely and require caution in the use of gain scores. (SLD)
Descriptors: Achievement Gains, Change, Correlation, Error of Measurement
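The classical formula Humphreys refers to makes the point concrete: even highly reliable components yield an unreliable gain when they are strongly correlated.

```python
def difference_reliability(rel_x, rel_y, r_xy, sd_x, sd_y):
    """Classical reliability of the difference score D = X - Y."""
    num = rel_x * sd_x**2 + rel_y * sd_y**2 - 2 * r_xy * sd_x * sd_y
    den = sd_x**2 + sd_y**2 - 2 * r_xy * sd_x * sd_y
    return num / den

# Equal SDs, each score reliable at .80, pre/post correlated .70:
rho_d = difference_reliability(0.8, 0.8, 0.7, 1.0, 1.0)
# (0.8 + 0.8 - 1.4) / (2 - 1.4) = 0.2 / 0.6, i.e. about 0.333
```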
Peer reviewed
Williams, Richard H.; Zimmerman, Donald W. – Applied Psychological Measurement, 1996
The critiques by L. Collins and L. Humphreys in this issue illustrate problems with the use of gain scores. Collins' examples show that familiar formulas for the reliability of differences do not reflect the precision of measures of change. Additional examples demonstrate flaws in the conventional approach to reliability. (SLD)
Descriptors: Achievement Gains, Change, Correlation, Error of Measurement
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability