Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 0
  Since 2006 (last 20 years): 2

Descriptor
  Statistical Analysis: 19
  Reliability: 11
  Test Reliability: 7
  Analysis of Variance: 5
  Error of Measurement: 5
  Higher Education: 4
  Sampling: 4
  Comparative Analysis: 3
  Hypothesis Testing: 3
  Rating Scales: 3
  Research Design: 3

Source
  Applied Psychological Measurement: 19

Publication Type
  Journal Articles: 12
  Reports - Evaluative: 6
  Reports - Research: 4
  Collected Works - Serials: 1
  Guides - Non-Classroom: 1

Location
  Germany: 1

Assessments and Surveys
  Eysenck Personality Inventory: 1
  Graduate Record Examinations: 1
  Minnesota Importance…: 1
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
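
Harman's predictor itself is not reproduced in the abstract; as a point of reference, the sketch below computes the standard regression (Thurstone) factor score predictor and its determinacy under an assumed one-factor model with made-up loadings, which illustrates what a factor score predictor is and where indeterminacy shows up.

```python
import numpy as np

# Hypothetical one-factor model (illustrative values only).
loadings = np.array([[0.8], [0.7], [0.6], [0.5]])      # Lambda: 4 items, 1 factor
uniquenesses = 1.0 - (loadings ** 2).ravel()           # unique (error) variances
phi = np.array([[1.0]])                                # factor correlation matrix

# Model-implied correlation matrix of the observed variables.
sigma = loadings @ phi @ loadings.T + np.diag(uniquenesses)

# Regression (Thurstone) factor score weights: W = Sigma^{-1} Lambda Phi.
weights = np.linalg.solve(sigma, loadings @ phi)

# Predicted factor scores for standardized observed scores (rows = persons).
z = np.array([[1.2, 0.4, -0.3, 0.9],
              [-0.5, -1.1, 0.2, 0.0]])
factor_score_predictor = z @ weights

# Determinacy: correlation between the factor and its predictor.
# Values below 1 reflect the indeterminacy the article discusses.
determinacy = np.sqrt(np.diag(phi @ loadings.T @ np.linalg.solve(sigma, loadings @ phi)))
print(factor_score_predictor, determinacy)
```
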
Waller, Niels G. – Applied Psychological Measurement, 2008
Reliability is a property of test scores from individuals who have been sampled from a well-defined population. Reliability indices, such as coefficient alpha and related formulas for internal consistency reliability (KR-20, Hoyt's reliability), yield lower-bound reliability estimates when (a) subjects have been sampled from a single population and when…
Descriptors: Test Items, Reliability, Scores, Psychometrics
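
As an illustration of the commingling issue, the sketch below (an assumed toy setup, not the article's analysis) computes coefficient alpha within each of two simulated subpopulations and on the pooled sample; pooling groups whose means differ tends to inflate the estimate relative to the within-population values.

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an (n_persons, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
k, n = 8, 300
true_a = rng.normal(0.0, 1.0, n)     # group A latent trait
true_b = rng.normal(1.5, 1.0, n)     # group B with a shifted mean (the commingling source)

def simulate(true_scores):
    # Each item = true score + independent noise (tau-equivalent toy model).
    return true_scores[:, None] + rng.normal(0.0, 1.0, (true_scores.size, k))

group_a, group_b = simulate(true_a), simulate(true_b)
pooled = np.vstack([group_a, group_b])

print("alpha within A:", round(cronbach_alpha(group_a), 3))
print("alpha within B:", round(cronbach_alpha(group_b), 3))
print("alpha pooled  :", round(cronbach_alpha(pooled), 3))
```
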
Lucke, Joseph F. – Applied Psychological Measurement, 2005
The properties of internal consistency (alpha), classical reliability (rho), and congeneric reliability (omega) for a composite test with correlated item error are analytically investigated. Possible sources of correlated item error are contextual effects, item bundles, and item models that ignore additional attributes or higher-order attributes.…
Descriptors: Reliability, Statistical Analysis
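
A minimal sketch, under an assumed congeneric model with uncorrelated errors, of how alpha and omega are computed from loadings and error variances; the correlated item error that is the article's focus would add off-diagonal terms to the error part of the covariance matrix.

```python
import numpy as np

# Assumed congeneric loadings and error variances (illustrative values only).
lam = np.array([0.9, 0.7, 0.6, 0.5])      # item loadings on a single factor
theta = np.array([0.4, 0.5, 0.6, 0.7])    # item error variances

# Model-implied item covariance matrix (uncorrelated errors).
sigma = np.outer(lam, lam) + np.diag(theta)
total_var = sigma.sum()
k = lam.size

# Congeneric (omega) reliability: true composite variance over total variance.
omega = lam.sum() ** 2 / total_var

# Coefficient alpha computed from the same covariance matrix.
alpha = (k / (k - 1)) * (1.0 - np.trace(sigma) / total_var)

print(f"omega = {omega:.3f}, alpha = {alpha:.3f}")  # alpha <= omega in this setting
```
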

Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 1994
An approximate statistical test of the equality of two intraclass reliability coefficients based on the same sample of people is derived. Such a test is needed when a researcher wishes to compare the reliability of two measurement procedures, and both procedures can be applied to results from the same group. (SLD)
Descriptors: Comparative Analysis, Measurement Techniques, Reliability, Sampling
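
The approximate test itself is not given in the abstract; as a generic alternative under the same "same sample" constraint, a person-level bootstrap of the difference between two alpha-type coefficients could look like the sketch below (an assumed stand-in, not the authors' derived test).

```python
import numpy as np

def alpha(scores):
    k = scores.shape[1]
    return (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                            / scores.sum(axis=1).var(ddof=1))

def bootstrap_alpha_diff(scores_a, scores_b, n_boot=2000, seed=0):
    """Percentile CI for the difference in reliability of two measurement
    procedures applied to the same people (rows matched across matrices)."""
    rng = np.random.default_rng(seed)
    n = scores_a.shape[0]
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)   # resample persons, keeping the pairing
        diffs[b] = alpha(scores_a[idx]) - alpha(scores_b[idx])
    return np.percentile(diffs, [2.5, 97.5])
```

An interval that excludes zero would suggest the two procedures differ in reliability for that population.
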

Millsap, Roger E. – Applied Psychological Measurement, 1988
Two new methods for constructing a credibility interval (CI)--an interval containing a specified proportion of true validities--are discussed from a frequentist perspective. Tolerance intervals, unlike the current method of constructing the CI, have performance characteristics across repeated applications and may be useful in validity…
Descriptors: Bayesian Statistics, Meta Analysis, Statistical Analysis, Test Reliability
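
For context, the conventional credibility interval in validity generalization is typically formed from the estimated mean and standard deviation of true validities; a minimal sketch with made-up numbers is below. The tolerance-interval construction the article proposes is not reproduced here.

```python
from statistics import NormalDist

# Illustrative meta-analytic estimates (made-up numbers): mean true validity
# and standard deviation of true validities after artifact corrections.
mean_rho = 0.35
sd_rho = 0.10

# Conventional 80% credibility interval: the middle 80% of the assumed
# normal distribution of true validities.
z = NormalDist().inv_cdf(0.90)
lower, upper = mean_rho - z * sd_rho, mean_rho + z * sd_rho
print(f"80% credibility interval: ({lower:.3f}, {upper:.3f})")
```
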

Brennan, Robert L.; Lockwood, Robert E. – Applied Psychological Measurement, 1980
Generalizability theory is used to characterize and quantify expected variance in cutting scores and to compare the Nedelsky and Angoff procedures for establishing a cutting score. Results suggest that the restricted nature of the Nedelsky (inferred) probability scale may limit its applicability in certain contexts. (Author/BW)
Descriptors: Cutting Scores, Generalization, Statistical Analysis, Test Reliability
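
A minimal sketch of the two standard-setting procedures as usually described (the rater judgments below are made up, not the article's data): Angoff sums each rater's judged probability that a minimally competent examinee answers each item correctly, while Nedelsky uses one over the number of options the rater leaves un-eliminated, which restricts the achievable probabilities to a few discrete values.

```python
import numpy as np

# Angoff: judged probability of a correct answer by a minimally competent
# examinee (rows = raters, columns = items). Illustrative values.
angoff_probs = np.array([[0.6, 0.7, 0.5, 0.8],
                         [0.5, 0.8, 0.4, 0.7],
                         [0.7, 0.6, 0.5, 0.9]])

# Nedelsky: number of options (out of 4) NOT eliminated by the rater; the
# implied probability 1/(options remaining) can only take values such as
# 1/4, 1/3, 1/2, 1 -- the restricted scale the abstract refers to.
options_remaining = np.array([[2, 4, 3, 2],
                              [2, 3, 4, 1],
                              [3, 4, 2, 2]])
nedelsky_probs = 1.0 / options_remaining

for name, probs in [("Angoff", angoff_probs), ("Nedelsky", nedelsky_probs)]:
    cut_by_rater = probs.sum(axis=1)   # each rater's implied cutting score
    print(name, "cut score:", cut_by_rater.mean().round(2),
          "rater variance:", cut_by_rater.var(ddof=1).round(3))
```

The rater-to-rater variance of the implied cutting score is the quantity a generalizability analysis of the two procedures would decompose.
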

Fleiss, Joseph L.; Cuzick, Jack – Applied Psychological Measurement, 1979
A reliability study is illustrated in which subjects are judged on a dichotomous trait by different sets of judges, possibly unequal in number. A kappa-like measure of reliability is proposed, its correspondence to an intraclass correlation coefficient is pointed out, and a test for its statistical significance is presented. (Author/CTM)
Descriptors: Classification, Correlation, Individual Characteristics, Informal Assessment
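
The abstract notes the statistic's correspondence to an intraclass correlation; a minimal sketch along those lines computes a one-way random-effects ICC from dichotomous ratings with unequal numbers of judges per subject (the ICC route is assumed here as an illustration, not necessarily the exact kappa formula in the article).

```python
import numpy as np

def icc_oneway_unequal(ratings):
    """One-way random-effects ICC for a list of per-subject rating vectors
    (0/1 judgments, possibly different numbers of judges per subject)."""
    m = np.array([len(r) for r in ratings], dtype=float)   # judges per subject
    n = len(ratings)
    grand_mean = np.concatenate(ratings).mean()
    means = np.array([np.mean(r) for r in ratings])

    ss_between = np.sum(m * (means - grand_mean) ** 2)
    ss_within = sum(np.sum((np.asarray(r) - mu) ** 2) for r, mu in zip(ratings, means))
    msb = ss_between / (n - 1)
    msw = ss_within / (m.sum() - n)
    m0 = (m.sum() - (m ** 2).sum() / m.sum()) / (n - 1)    # effective judges per subject
    return (msb - msw) / (msb + (m0 - 1) * msw)

# Example: 5 subjects judged on a dichotomous trait by 2-4 judges each.
ratings = [[1, 1, 1], [0, 0, 1, 0], [1, 0], [1, 1, 1, 1], [0, 0, 0]]
print(round(icc_oneway_unequal(ratings), 3))
```
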

Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Rounds, James B., Jr.; And Others – Applied Psychological Measurement, 1978
Two studies compared multiple rank order and paired comparison methods in terms of psychometric characteristics and user reactions. Individual and group item responses, preference counts, and Thurstone normal transform scale values obtained by the multiple rank order method were found to be similar to those obtained by paired comparisons.…
Descriptors: Higher Education, Measurement, Rating Scales, Response Style (Tests)
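
A minimal sketch of Thurstone-type normal-transform scaling from a matrix of paired-comparison preference proportions; Case V scaling and the proportions below are assumptions for illustration, not the article's data or exact procedure.

```python
import numpy as np
from statistics import NormalDist

# p[i, j] = proportion of respondents preferring stimulus j over stimulus i
# (made-up proportions for four stimuli; diagonal left at 0.5).
p = np.array([[0.50, 0.65, 0.80, 0.90],
              [0.35, 0.50, 0.60, 0.75],
              [0.20, 0.40, 0.50, 0.70],
              [0.10, 0.25, 0.30, 0.50]])

# Thurstone Case V: transform proportions to normal deviates, average columns.
inv = np.vectorize(NormalDist().inv_cdf)
z = inv(p)
scale_values = z.mean(axis=0)
scale_values -= scale_values.min()     # anchor the lowest stimulus at zero
print(scale_values.round(3))
```
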

Hendel, Darwin D. – Applied Psychological Measurement, 1977
Results of a study to determine whether paired-comparisons intransitivity is a function of intransitivity associated with specific stimulus objects, rather than a function of the entire set of stimulus objects, suggested that paired-comparisons intransitivity relates to individual differences variables associated with the respondent. (Author/CTM)
Descriptors: Association Measures, High Schools, Higher Education, Multidimensional Scaling
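
Intransitivity in a single respondent's paired comparisons is commonly quantified by counting circular triads; the short sketch below shows that count (a general measure assumed here, not necessarily the exact index used in the study).

```python
def circular_triads(prefs, n):
    """Count circular (intransitive) triads in one respondent's choices.
    prefs is a set of ordered pairs (i, j) meaning 'i preferred over j';
    every unordered pair of the n stimuli must appear exactly once."""
    wins = [0] * n
    for i, _ in prefs:
        wins[i] += 1
    # Each transitive triad is counted exactly once via its dominant member;
    # the remainder of all C(n, 3) triads are circular.
    total_triads = n * (n - 1) * (n - 2) // 6
    transitive = sum(a * (a - 1) // 2 for a in wins)
    return total_triads - transitive

# Example with 4 stimuli: 0>1, 1>2, 2>0 form one circular triad.
prefs = {(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3)}
print(circular_triads(prefs, 4))   # -> 1
```
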

Ceurvorst, Robert W.; Krus, David J. – Applied Psychological Measurement, 1979
A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)
Descriptors: Analysis of Variance, Information Theory, Least Squares Statistics, Mathematical Models
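
The article's information-theoretic formulation is not reproduced here; as a generic illustration of what a dominance relation over items looks like, the sketch below builds a pairwise dominance matrix from dichotomous score data (an assumed, simplified construction) and orders items by net dominance.

```python
import numpy as np

# Rows = persons, columns = items (made-up 0/1 data).
scores = np.array([[1, 1, 0, 0],
                   [1, 0, 1, 0],
                   [1, 1, 1, 0],
                   [1, 0, 0, 0],
                   [0, 1, 0, 0]])

k = scores.shape[1]
# dominance[i, j] = number of persons passing item i while failing item j.
dominance = np.zeros((k, k), dtype=int)
for i in range(k):
    for j in range(k):
        if i != j:
            dominance[i, j] = np.sum((scores[:, i] == 1) & (scores[:, j] == 0))

# A simple hierarchy: order items by net dominance (row sum minus column sum).
net = dominance.sum(axis=1) - dominance.sum(axis=0)
print(dominance)
print("item order (most to least dominant):", np.argsort(-net))
```
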

Wainer, Howard; Thissen, David – Applied Psychological Measurement, 1979
A class of naive estimators of correlation was tested for robustness, accuracy, and efficiency against Pearson's r, Tukey's r, and Spearman's r. It was found that this class of estimators seems to be superior, being less affected by outliers, reasonably efficient, and frequently more easily calculated. (Author/CTM)
Descriptors: Comparative Analysis, Correlation, Goodness of Fit, Nonparametric Statistics
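
The specific class of naive estimators is not reproduced in the abstract; for reference, the sketch below contrasts Pearson's r with a rank-based (Spearman) r on data containing a gross outlier, which is the kind of robustness comparison the study reports.

```python
import numpy as np

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

def spearman_r(x, y):
    # Spearman's r is Pearson's r computed on ranks (no ties assumed here).
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson_r(rank(x), rank(y))

rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = 0.6 * x + rng.normal(scale=0.8, size=30)
x_out, y_out = np.append(x, 6.0), np.append(y, -6.0)   # one gross outlier

print("Pearson  without/with outlier:",
      round(pearson_r(x, y), 3), round(pearson_r(x_out, y_out), 3))
print("Spearman without/with outlier:",
      round(spearman_r(x, y), 3), round(spearman_r(x_out, y_out), 3))
```
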

Blackman, Nicole J-M.; Koval, John J. – Applied Psychological Measurement, 1993
Four indexes of agreement between ratings of a person that correct for chance and are interpretable as intraclass correlation coefficients for different analysis of variance models are investigated. Relationships among the estimators are established for finite samples, and the equivalence of these estimators in large samples is demonstrated. (SLD)
Descriptors: Analysis of Variance, Equations (Mathematics), Estimation (Mathematics), Interrater Reliability
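
The four indexes themselves are not given in the abstract; for reference, a minimal sketch of one ANOVA-based intraclass correlation (the two-way random-effects ICC(2,1) of Shrout and Fleiss, used here purely as an illustration, not as one of the article's estimators) is below.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects ICC(2,1): rows = subjects, columns = raters."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    ss_err = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Illustrative data: 6 subjects rated by 4 judges.
ratings = np.array([[9, 2, 5, 8],
                    [6, 1, 3, 2],
                    [8, 4, 6, 8],
                    [7, 1, 2, 6],
                    [10, 5, 6, 9],
                    [6, 2, 4, 7]])
print(round(icc_2_1(ratings), 3))
```
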