Classification Consistency and Accuracy for Complex Assessments under the Compound Multinomial Model
Lee, Won-Chan; Brennan, Robert L.; Wan, Lei – Applied Psychological Measurement, 2009
For a test that consists of dichotomously scored items, several approaches have been reported in the literature for estimating classification consistency and accuracy indices based on a single administration of a test. Classification consistency and accuracy have not been studied much, however, for "complex" assessments--for example,…
Descriptors: Classification, Reliability, Test Items, Scoring
Waller, Niels G. – Applied Psychological Measurement, 2008
Reliability is a property of test scores from individuals who have been sampled from a well-defined population. Reliability indices, such as coefficient alpha and related formulas for internal consistency reliability (KR-20, Hoyt's reliability), yield lower bound reliability estimates when (a) subjects have been sampled from a single population and when…
Descriptors: Test Items, Reliability, Scores, Psychometrics
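
The internal consistency index Waller discusses is typically computed as coefficient alpha. A minimal sketch of that calculation (the data matrix below is illustrative, not from the article):

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a persons-by-items score matrix.

    scores: 2-D array, rows = persons, columns = items.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five persons answering four dichotomously scored items
data = [[1, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 0, 0]]
alpha = cronbach_alpha(data)
```

As the abstract notes, values like this are lower bounds on reliability only under the sampling conditions the article spells out.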

Raju, Nambury S.; Brand, Paul A. – Applied Psychological Measurement, 2003
Proposes a new asymptotic formula for estimating the sampling variance of a correlation coefficient corrected for unreliability and range restriction. A Monte Carlo simulation study of the new formula yields several positive conclusions about the new approach. (SLD)
Descriptors: Correlation, Monte Carlo Methods, Reliability, Sampling
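
Raju and Brand's formula concerns the sampling variance of a doubly corrected correlation; the corrections themselves are the textbook ones, sketched below (disattenuation and Thorndike's Case II for direct range restriction). The correction order and all numbers here are illustrative assumptions, not the article's operationalization, and the variance formula itself is not reproduced:

```python
import math

def disattenuate(r_xy, rxx, ryy):
    """Correct an observed correlation for unreliability in both measures."""
    return r_xy / math.sqrt(rxx * ryy)

def correct_range_restriction(r, u):
    """Thorndike Case II; u = SD_unrestricted / SD_restricted (u >= 1)."""
    return (u * r) / math.sqrt(1.0 - r**2 + (u**2) * (r**2))

# Observed validity of .30, predictor reliability .80, criterion .70,
# and a restricted sample whose predictor SD is half the population SD.
r_obs = 0.30
r_rr = correct_range_restriction(r_obs, u=2.0)  # undo range restriction
r_c = disattenuate(r_rr, rxx=0.80, ryy=0.70)    # then undo unreliability
```

Both corrections inflate the observed coefficient, which is why a sampling-variance formula for the corrected value matters for inference.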

Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
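
Weighted kappa is straightforward to compute from a confusion matrix of paired ratings once the disagreement weights are chosen. A sketch with linear weights (the rating counts are illustrative, not the study's data):

```python
import numpy as np

def weighted_kappa(conf, weights="linear"):
    """Cohen's weighted kappa from a k x k confusion matrix of rating counts.

    Disagreement weights grow with the distance between ordinal categories:
    linear |i - j| or quadratic (i - j)^2.
    """
    conf = np.asarray(conf, dtype=float)
    k = conf.shape[0]
    i, j = np.indices((k, k))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    p = conf / conf.sum()                        # observed joint proportions
    e = np.outer(p.sum(axis=1), p.sum(axis=0))   # expected under independence
    return 1.0 - (w * p).sum() / (w * e).sum()

# Two raters assigning 75 cases to 3 ordinal categories
counts = [[20,  5,  1],
          [ 4, 15,  6],
          [ 1,  7, 16]]
kappa = weighted_kappa(counts)
```

Quadratic weights (`weights="quadratic"`) penalize distant disagreements more heavily, which is the usual choice when categories approximate an interval scale.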

Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 1994
An approximate statistical test of the equality of two intraclass reliability coefficients based on the same sample of people is derived. Such a test is needed when a researcher wishes to compare the reliability of two measurement procedures, and both procedures can be applied to results from the same group. (SLD)
Descriptors: Comparative Analysis, Measurement Techniques, Reliability, Sampling

Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 1992
An approximate statistical test is derived for the hypothesis that the intraclass reliability coefficients associated with two measurement procedures are equal. Control of Type I error is investigated by comparing empirical sampling distributions of the test statistic with its derived theoretical distribution. A numerical illustration is…
Descriptors: Equations (Mathematics), Hypothesis Testing, Mathematical Models, Measurement Techniques

Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Meijer, Rob R.; And Others – Applied Psychological Measurement, 1995
Three methods based on the nonparametric item response theory (IRT) of R. J. Mokken for the estimation of the reliability of single dichotomous test items are discussed. Analytical and Monte Carlo studies show that one method, designated "MS," is superior because of smaller bias and smaller sampling variance. (SLD)
Descriptors: Estimation (Mathematics), Item Response Theory, Monte Carlo Methods, Nonparametric Statistics