Showing all 3 results
Peer reviewed
Conger, Anthony J. – Educational and Psychological Measurement, 2017
Drawing parallels to classical test theory, this article clarifies the difference between rater accuracy and reliability and demonstrates how category marginal frequencies affect rater agreement and Cohen's kappa. Category assignment paradigms are developed: comparing raters to a standard (index) versus comparing two raters to one another…
Descriptors: Interrater Reliability, Evaluators, Accuracy, Statistical Analysis
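The effect Conger describes can be illustrated with a short sketch. This is a minimal, self-contained computation of Cohen's kappa (the function name and data are illustrative, not taken from the article): two rater pairs with the same observed agreement get very different kappa values once the category marginals become skewed, because chance-expected agreement rises.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    n = len(r1)
    # Observed agreement: fraction of items both raters label identically.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance-expected agreement from each rater's marginal category frequencies.
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (po - pe) / (1 - pe)

# Balanced marginals: 8/10 observed agreement, 5 "yes" / 5 "no" per rater.
a1 = ["yes"] * 5 + ["no"] * 5
b1 = ["yes"] * 4 + ["no", "yes"] + ["no"] * 4
print(round(cohens_kappa(a1, b1), 3))   # 0.6

# Skewed marginals: still 8/10 observed agreement, but 9 "yes" / 1 "no".
a2 = ["yes"] * 9 + ["no"]
b2 = ["yes"] * 7 + ["no", "yes", "yes"]
print(round(cohens_kappa(a2, b2), 3))   # -0.111
```

Both pairs agree on 80% of items, yet kappa drops from 0.6 to below zero under the skewed marginals, which is exactly the marginal-frequency sensitivity the abstract refers to.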
Peer reviewed
Aryadoust, Vahid – Educational Psychology, 2016
This study sought to examine the development of paragraph writing skills of 116 English as a second language university students over the course of 12 weeks and the relationship between the linguistic features of students' written texts as measured by Coh-Metrix--a computational system for estimating textual features such as cohesion and…
Descriptors: English (Second Language), Second Language Learning, Writing Skills, College Students
Peer reviewed
Dekle, Dawn J.; Leung, Denis H. Y.; Zhu, Min – Psychological Methods, 2008
Across many areas of psychology, concordance is commonly used to measure the (intragroup) agreement in ranking a number of items by a group of judges. Sometimes, however, the judges come from multiple groups, and in those situations, the interest is to measure the concordance between groups, under the assumption that there is some within-group…
Descriptors: Item Response Theory, Statistical Analysis, Psychological Studies, Evaluators
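The within-group agreement that Dekle, Leung, and Zhu build on is conventionally measured with Kendall's coefficient of concordance W. As a point of reference only (this is the standard W, not the authors' between-group method), a minimal sketch with an illustrative function name:

```python
def kendalls_w(rankings):
    """Kendall's W for m judges each ranking the same n items.

    rankings: list of m lists, each a permutation of ranks 1..n.
    Returns W in [0, 1]; 1 means the judges rank the items identically.
    """
    m, n = len(rankings), len(rankings[0])
    # Total rank each item received, summed across judges.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    # S: squared deviations of rank totals from their mean.
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m * m * (n ** 3 - n))

# Three judges in perfect agreement on four items.
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

The between-group problem in the article then asks how such group-level orderings agree across multiple panels of judges, rather than within one.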