Showing all 9 results
Peer reviewed
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
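A minimal sketch of the kind of decomposition the abstract describes, assuming a common three-node tree (midpoint, direction, extremity) for a 5-point rating scale; the particular tree, category coding, and function name are illustrative assumptions, not details taken from the article.

```python
# Hedged sketch: one common IRTree decomposition of a 5-point rating into
# three binary pseudo-items (midpoint, direction, extremity). The tree and
# the None-for-unreached-node convention are illustrative assumptions.

def irtree_pseudo_items(response: int) -> dict:
    """Map a rating in {1,...,5} to sub-decisions; None = node not reached."""
    if response not in {1, 2, 3, 4, 5}:
        raise ValueError("response must be an integer from 1 to 5")
    midpoint = 1 if response == 3 else 0
    direction = None if midpoint else (1 if response > 3 else 0)
    extreme = None if midpoint else (1 if response in {1, 5} else 0)
    return {"midpoint": midpoint, "direction": direction, "extreme": extreme}

# Example: an extreme disagreement (category 1) yields
# {'midpoint': 0, 'direction': 0, 'extreme': 1}
print(irtree_pseudo_items(1))
```

Each pseudo-item can then be modeled with its own person parameter, which is how such models separate trait-driven from style-driven sub-decisions.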
Peer reviewed
Brennan, Robert L.; Prediger, Dale J. – Educational and Psychological Measurement, 1981
This paper considers some appropriate and inappropriate uses of coefficient kappa and alternative kappa-like statistics. Discussion is restricted to the descriptive characteristics of these statistics for measuring agreement with categorical data in studies of reliability and validity. (Author)
Descriptors: Classification, Error of Measurement, Mathematical Models, Test Reliability
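A brief sketch contrasting coefficient kappa with a kappa-like alternative that fixes chance agreement at 1/k, the index commonly associated with Brennan and Prediger; the toy confusion matrix is invented for illustration, not taken from the paper.

```python
# Hedged sketch: Cohen's kappa versus a kappa-like index with uniform
# chance agreement (1/k). The 2x2 agreement table below is invented.
import numpy as np

def cohen_kappa(confusion: np.ndarray) -> float:
    n = confusion.sum()
    p_o = np.trace(confusion) / n                 # observed agreement
    marg_row = confusion.sum(axis=1) / n
    marg_col = confusion.sum(axis=0) / n
    p_e = float(marg_row @ marg_col)              # chance agreement from marginals
    return (p_o - p_e) / (1 - p_e)

def kappa_n(confusion: np.ndarray) -> float:
    k = confusion.shape[0]
    p_o = np.trace(confusion) / confusion.sum()
    p_e = 1.0 / k                                 # uniform chance agreement
    return (p_o - p_e) / (1 - p_e)

ratings = np.array([[40, 5], [10, 5]])            # two raters, two categories
print(cohen_kappa(ratings))                       # ~0.25: penalized by skewed marginals
print(kappa_n(ratings))                           # 0.50: uses 1/k as chance level
```

The gap between the two values under skewed marginals is the sort of behavior that makes the choice of index an appropriateness question rather than a purely computational one.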
Peer reviewed
Whitney, Douglas R.; And Others – Educational and Psychological Measurement, 1986
This paper summarizes much of the available information concerning the reliability and validity of the Tests of General Educational Development (GED Tests). The data suggest that the results are sufficiently reliable for continued use and that the validity evidence generally supports the intended uses of the tests. (Author/LMO)
Descriptors: Correlation, Equivalency Tests, Error of Measurement, Predictive Validity
Peer reviewed
Gross, Alan L.; Kagen, Edward – Educational and Psychological Measurement, 1983
This paper compares an uncorrected correlation with a corrected correlation between a selection test and a criterion in terms of expected mean square error (EMSE). It presents evidence that although the uncorrected correlation may be more biased than the corrected correlation, it may have a smaller EMSE value, especially in small samples. (Author/PN)
Descriptors: Competitive Selection, Correlation, Error of Measurement, Research Methodology
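The abstract does not state which correction is compared; the sketch below assumes the classical Thorndike Case II correction for range restriction under explicit selection and contrasts the simulated mean square error of the uncorrected and corrected sample correlations. The population correlation, sample size, and selection rule are all invented for illustration.

```python
# Hedged sketch: simulated EMSE of uncorrected vs. range-restriction-corrected
# correlations (Thorndike Case II). Setup values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
rho, n, cut, reps = 0.5, 30, 0.0, 2000        # true correlation, sample size, cutoff, replications
sd_unrestricted = 1.0                          # assumed known predictor SD before selection

def corrected(r, s_restricted, s_unrestricted):
    u = s_unrestricted / s_restricted
    return r * u / np.sqrt(1 + r**2 * (u**2 - 1))

err_raw, err_cor = [], []
for _ in range(reps):
    x = rng.standard_normal(5 * n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(5 * n)
    keep = x > cut                             # explicit selection on the predictor
    xs, ys = x[keep][:n], y[keep][:n]
    r = np.corrcoef(xs, ys)[0, 1]
    err_raw.append((r - rho) ** 2)
    err_cor.append((corrected(r, xs.std(ddof=1), sd_unrestricted) - rho) ** 2)

print("EMSE uncorrected:", np.mean(err_raw))
print("EMSE corrected:  ", np.mean(err_cor))
```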
Peer reviewed
Modjeski, Richard B.; Michael, William B. – Educational and Psychological Measurement, 1983
Two tests of critical thinking (the Cornell Critical Thinking Test and the Watson-Glaser Critical Thinking Appraisal) were evaluated by a panel of psychologists relative to the validity, reliability, and error of measurement standards stated in the "Standards for Educational and Psychological Tests," 1974. (PN)
Descriptors: Cognitive Tests, Critical Thinking, Error of Measurement, Evaluation Criteria
Peer reviewed
Thompson, Bruce; Borrello, Gloria M. – Educational and Psychological Measurement, 1992
The utility of combining confirmatory factor analysis and second-order methods is illustrated in a study of responses of 487 undergraduate and graduate students to the love instrument of C. Hendrick and S. Hendrick. Second-order confirmatory methods allow the researcher to explore complex realities more thoroughly. (SLD)
Descriptors: Affective Measures, College Students, Error of Measurement, Heuristics
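As a rough aid, the generic second-order measurement structure can be written in standard factor-analytic notation; the symbols below are conventional, not the authors' own model specification for the love instrument.

```latex
% Hedged sketch: generic second-order factor structure in standard notation.
\begin{align}
  \mathbf{x} &= \boldsymbol{\Lambda}\,\boldsymbol{\eta} + \boldsymbol{\varepsilon}
    && \text{items load on first-order factors } \boldsymbol{\eta} \\
  \boldsymbol{\eta} &= \boldsymbol{\Gamma}\,\xi + \boldsymbol{\zeta}
    && \text{first-order factors load on a second-order factor } \xi
\end{align}
```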
Peer reviewed
Stokes, Elizabeth H.; And Others – Educational and Psychological Measurement, 1978
The Wechsler Intelligence Scale for Children and its revised form were administered to a sample of sixth-grade pupils. Although the correlation between the two measures was high, scores on the revised form were significantly lower. (JKS)
Descriptors: Comparative Testing, Correlation, Error of Measurement, Grade 6
Peer reviewed
Michael, William B.; And Others – Educational and Psychological Measurement, 1978
For each of two revised forms of the Dimensions of Self-Concept measure (intermediate and secondary forms), statistical information is presented concerning the intercorrelations of each of five factor scales, the reliability and standard error of measurement of each scale, and the results of item analyses. (Author/JKS)
Descriptors: Academic Achievement, Elementary Secondary Education, Error of Measurement, Factor Analysis
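A small sketch of the reliability and standard-error-of-measurement computations the abstract refers to, assuming coefficient alpha as the reliability estimate and SEM = SD * sqrt(1 - reliability); the simulated item responses are invented, not Dimensions of Self-Concept data.

```python
# Hedged sketch: coefficient alpha and the standard error of measurement for a
# scale score. The person-by-item matrix below is simulated for illustration.
import numpy as np

def coefficient_alpha(items: np.ndarray) -> float:
    """items: persons x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + 0.8 * rng.normal(size=(200, 8))   # 8 items: common factor plus noise

alpha = coefficient_alpha(items)
total = items.sum(axis=1)
sem = total.std(ddof=1) * np.sqrt(1 - alpha)           # SEM on the total-score metric
print(f"alpha = {alpha:.2f}, SEM = {sem:.2f}")
```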
Peer reviewed
Bannister, Brendan D.; And Others – Educational and Psychological Measurement, 1987
To control for response bias in student ratings of college teachers, an index of rater error was used that was theoretically independent of actual performance. Partialing out the effects of this extraneous response bias enhanced validity, but partialing out overall effectiveness resulted in reduced convergent and discriminant validities.…
Descriptors: Error of Measurement, Higher Education, Interrater Reliability, Response Style (Tests)
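A minimal sketch of partialing a rater-error index out of ratings before correlating them with a performance criterion; the partial-correlation formula is standard, while the simulated variables (leniency, performance, ratings) are invented stand-ins, not the article's data.

```python
# Hedged sketch: zero-order vs. partial correlation after removing a
# rater-error (response bias) index. Variables are simulated for illustration.
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y with z partialed out of both."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(2)
n = 300
leniency = rng.standard_normal(n)             # rater-error (response bias) index
performance = rng.standard_normal(n)          # criterion of actual teaching performance
ratings = performance + leniency + 0.5 * rng.standard_normal(n)

print("zero-order validity:", np.corrcoef(ratings, performance)[0, 1])
print("bias partialed out: ", partial_corr(ratings, performance, leniency))
```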