Showing 1 to 15 of 19 results
Peer reviewed
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
Peer reviewed
Waller, Niels G. – Applied Psychological Measurement, 2008
Reliability is a property of test scores from individuals who have been sampled from a well-defined population. Reliability indices, such as coefficient alpha and related formulas for internal consistency reliability (KR-20, Hoyt's reliability), yield lower bound reliability estimates when (a) subjects have been sampled from a single population and when…
Descriptors: Test Items, Reliability, Scores, Psychometrics
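The internal-consistency formulas this abstract names are standard; a minimal stdlib sketch of KR-20 for dichotomous items follows (the function name and data layout are my own, not from the article):

```python
from statistics import pvariance

def kr20(persons):
    """KR-20 internal-consistency estimate for dichotomous (0/1) items.

    persons: one list of item scores per examinee, e.g. [[1, 0], [1, 1]].
    """
    k = len(persons[0])                       # number of items
    totals = [sum(p) for p in persons]        # each examinee's total score
    # sum of item variances p*q, with p = proportion answering item j correctly
    pq = 0.0
    for j in range(k):
        p = sum(person[j] for person in persons) / len(persons)
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / pvariance(totals))
```

When every examinee answers all items consistently (totals perfectly track items), the estimate reaches 1.0; the article's point is that such formulas are lower bounds only under the sampling conditions it lists.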
Peer reviewed
Lucke, Joseph F. – Applied Psychological Measurement, 2005
The properties of internal consistency (alpha), classical reliability (rho), and congeneric reliability (omega) for a composite test with correlated item error are analytically investigated. Possible sources of correlated item error are contextual effects, item bundles, and item models that ignore additional attributes or higher-order attributes.…
Descriptors: Reliability, Statistical Analysis
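The three coefficients the abstract contrasts can be sketched for the uncorrelated-error case; the bias it analyzes arises precisely when this assumption fails. The functions below are illustrative stand-ins, not the paper's derivations:

```python
from statistics import pvariance

def coefficient_alpha(items):
    """Cronbach's alpha; items[i] is the list of scores on item i."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # composite per person
    return (k / (k - 1)) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

def coefficient_omega(loadings, error_vars):
    """Congeneric (omega) reliability from a one-factor model's loadings and
    unique variances. Assumes UNCORRELATED item errors; correlated item
    errors would add twice their covariances to the denominator, which is
    the source of bias the abstract is concerned with."""
    lam2 = sum(loadings) ** 2
    return lam2 / (lam2 + sum(error_vars))
```

With zero unique variance, omega equals 1; with contextual effects or item bundles inducing error correlations, both formulas misstate the composite's reliability.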
Peer reviewed
Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 1994
An approximate statistical test of the equality of two intraclass reliability coefficients based on the same sample of people is derived. Such a test is needed when a researcher wishes to compare the reliability of two measurement procedures, and both procedures can be applied to results from the same group. (SLD)
Descriptors: Comparative Analysis, Measurement Techniques, Reliability, Sampling
Peer reviewed
Millsap, Roger E. – Applied Psychological Measurement, 1988
Two new methods for constructing a credibility interval (CI)--an interval containing a specified proportion of the true validity distribution--are discussed from a frequentist perspective. Tolerance intervals, unlike the current method of constructing the CI, have performance characteristics across repeated applications and may be useful in validity…
Descriptors: Bayesian Statistics, Meta Analysis, Statistical Analysis, Test Reliability
Peer reviewed
Brennan, Robert L.; Lockwood, Robert E. – Applied Psychological Measurement, 1980
Generalizability theory is used to characterize and quantify expected variance in cutting scores and to compare the Nedelsky and Angoff procedures for establishing a cutting score. Results suggest that the restricted nature of the Nedelsky (inferred) probability scale may limit its applicability in certain contexts. (Author/BW)
Descriptors: Cutting Scores, Generalization, Statistical Analysis, Test Reliability
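The two standard-setting procedures being compared can be sketched in their textbook forms (function names and data layouts are my own; the article's generalizability analysis of their variance is not reproduced here):

```python
from statistics import mean

def angoff_cut(ratings):
    """Angoff cut score. ratings[j][i]: judge j's estimated probability that a
    minimally competent examinee answers item i correctly. The cut score is
    the sum over items of the mean judged probability."""
    n_items = len(ratings[0])
    return sum(mean(judge[i] for judge in ratings) for i in range(n_items))

def nedelsky_cut(remaining):
    """Nedelsky cut score. remaining[j][i]: number of options judge j thinks a
    minimally competent examinee could NOT eliminate on item i (including the
    key). The implied probability is 1/remaining, so only values like
    1/m, ..., 1/2, 1 are attainable -- the restricted (inferred) probability
    scale the abstract refers to."""
    n_items = len(remaining[0])
    return sum(mean(1 / judge[i] for judge in remaining) for i in range(n_items))
```

The restriction is visible in the code: an Angoff judge may assign any probability in [0, 1], while a Nedelsky judge is confined to reciprocals of option counts.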
Peer reviewed
Fleiss, Joseph L.; Cuzick, Jack – Applied Psychological Measurement, 1979
A reliability study is illustrated in which subjects are judged on a dichotomous trait by different sets of judges, possibly unequal in number. A kappa-like measure of reliability is proposed, its correspondence to an intraclass correlation coefficient is pointed out, and a test for its statistical significance is presented. (Author/CTM)
Descriptors: Classification, Correlation, Individual Characteristics, Informal Assessment
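A kappa-like statistic for dichotomous judgments with unequal numbers of judges per subject can be sketched as below; this is the form usually attributed to Fleiss and Cuzick, but treat the exact weighting as an assumption rather than a transcription of the paper:

```python
def kappa_dichotomous(x, n):
    """Chance-corrected agreement on a dichotomous trait when subject i is
    judged by n[i] judges, x[i] of whom judge the trait present.

    x, n: parallel lists, one entry per subject.
    """
    N = len(x)
    nbar = sum(n) / N                   # mean number of judges per subject
    pbar = sum(x) / sum(n)              # overall proportion of positive judgments
    qbar = 1 - pbar
    # within-subject disagreement, weighted by each subject's judge count
    within = sum(xi * (ni - xi) / ni for xi, ni in zip(x, n))
    return 1 - within / (N * (nbar - 1) * pbar * qbar)
```

Unanimous verdicts on every subject give a value of 1; maximal within-subject splitting drives the statistic negative, mirroring an intraclass correlation as the abstract notes.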
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Rounds, James B., Jr.; And Others – Applied Psychological Measurement, 1978
Two studies compared multiple rank order and paired comparison methods in terms of psychometric characteristics and user reactions. Individual and group item responses, preference counts, and Thurstone normal transform scale values obtained by the multiple rank order method were found to be similar to those obtained by paired comparisons.…
Descriptors: Higher Education, Measurement, Rating Scales, Response Style (Tests)
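The link between the two methods is that any rank order implies a set of pairwise preferences, so rankings can be tallied into the same preference counts a paired-comparison study produces. A minimal sketch (names are my own):

```python
from itertools import combinations

def preference_counts(rankings, stimuli):
    """Convert rank orders (best-first lists) into paired-comparison
    preference counts: counts[a][b] = number of rankings placing a above b."""
    counts = {a: {b: 0 for b in stimuli if b != a} for a in stimuli}
    for ranking in rankings:
        for hi, lo in combinations(ranking, 2):   # hi appears before lo
            counts[hi][lo] += 1
    return counts
```

Thurstone normal-transform scale values are then obtained from the resulting preference proportions via the inverse normal CDF (e.g. `statistics.NormalDist().inv_cdf`), which is how the two methods' scale values become directly comparable.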
Peer reviewed
Hendel, Darwin D. – Applied Psychological Measurement, 1977
Results of a study to determine whether paired-comparisons intransitivity is a function of intransitivity associated with specific stimulus objects, rather than a function of the entire set of stimulus objects, suggested that paired-comparisons intransitivity relates to individual differences variables associated with the respondent. (Author/CTM)
Descriptors: Association Measures, High Schools, Higher Education, Multidimensional Scaling
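Intransitivity in one respondent's paired comparisons is conventionally measured by counting circular triads (A preferred to B, B to C, yet C to A); a direct enumeration, with names of my own choosing:

```python
from itertools import combinations

def circular_triads(prefers):
    """Count intransitive (circular) triads in one respondent's complete
    paired-comparison data. prefers[a][b] is True if a was chosen over b."""
    items = list(prefers)
    cycles = 0
    for a, b, c in combinations(items, 3):
        # a triad is circular iff its three choices form a cycle,
        # in either direction
        if (prefers[a][b] and prefers[b][c] and prefers[c][a]) or \
           (prefers[b][a] and prefers[c][b] and prefers[a][c]):
            cycles += 1
    return cycles
```

Tallying this count per stimulus versus per respondent is essentially the distinction the study investigates.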
Peer reviewed
Ceurvorst, Robert W.; Krus, David J. – Applied Psychological Measurement, 1979
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Descriptors: Analysis of Variance, Information Theory, Least Squares Statistics, Mathematical Models
Peer reviewed
Wainer, Howard; Thissen, David – Applied Psychological Measurement, 1979
A class of naive estimators of correlation was tested for robustness, accuracy, and efficiency against Pearson's r, Tukey's r, and Spearman's r. It was found that this class of estimators seems to be superior, being less affected by outliers, reasonably efficient, and frequently more easily calculated. (Author/CTM)
Descriptors: Comparative Analysis, Correlation, Goodness of Fit, Nonparametric Statistics
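Two of the benchmark estimators in this comparison, Pearson's r and Spearman's r, can be sketched in a few lines of stdlib Python (no tie handling, and this is not the "naive" class the article proposes):

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson product-moment correlation."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def spearman_r(x, y):
    """Spearman rank correlation: Pearson's r on the ranks.
    This sketch assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson_r(ranks(x), ranks(y))
```

The robustness contrast the abstract describes is visible here: an extreme outlier shifts Pearson's r substantially while leaving the rank-based statistic at 1 for any monotone relation.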
Peer reviewed
Blackman, Nicole J-M.; Koval, John J. – Applied Psychological Measurement, 1993
Four indexes of agreement between ratings of a person that correct for chance and are interpretable as intraclass correlation coefficients for different analysis of variance models are investigated. Relationships among the estimators are established for finite samples, and the equivalence of these estimators in large samples is demonstrated. (SLD)
Descriptors: Analysis of Variance, Equations (Mathematics), Estimation (Mathematics), Interrater Reliability
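One of the ANOVA-model intraclass correlations these agreement indexes correspond to is the one-way random-effects ICC; a compact sketch under that model (not the article's four estimators):

```python
from statistics import mean

def icc_oneway(ratings):
    """ICC(1): one-way random-effects intraclass correlation.
    ratings: one list of k ratings per subject."""
    n = len(ratings)
    k = len(ratings[0])
    grand = mean(r for subj in ratings for r in subj)
    # between-subject and within-subject mean squares from one-way ANOVA
    ms_between = k * sum((mean(s) - grand) ** 2 for s in ratings) / (n - 1)
    ms_within = sum((r - mean(s)) ** 2 for s in ratings for r in s) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfect within-subject agreement gives 1; agreement no better than chance drives the value toward or below 0, which is the chance correction the abstract refers to.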