Showing 1 to 15 of 24 results
Peer reviewed
PDF on ERIC
Eser, Mehmet Taha; Aksu, Gökhan – International Journal of Curriculum and Instruction, 2022
Agreement between raters is examined within the scope of the concept of "inter-rater reliability". Although inter-rater agreement and inter-rater reliability are clearly defined concepts, there is little clear guidance on the conditions under which agreement and reliability methods are appropriate to…
Descriptors: Generalizability Theory, Interrater Reliability, Evaluation Methods, Test Theory
Peer reviewed
Direct link
Weston, Timothy J.; Hayward, Charles N.; Laursen, Sandra L. – American Journal of Evaluation, 2021
Observations are widely used in research and evaluation to characterize teaching and learning activities. Because conducting observations is typically resource intensive, it is important that inferences from observation data are made confidently. While attention focuses on interrater reliability, the reliability of a single-class measure over the…
Descriptors: Generalizability Theory, Observation, Inferences, Social Science Research
Peer reviewed
PDF on ERIC
McCaffrey, Daniel F.; Oliveri, Maria Elena; Holtzman, Steven – ETS Research Report Series, 2018
Scores from noncognitive measures are increasingly valued for their utility in helping to inform postsecondary admissions decisions. However, their use has presented challenges because of faking, response biases, or subjectivity, which standardized third-party evaluations (TPEs) can help minimize. Analysts and researchers using TPEs, however, need…
Descriptors: Generalizability Theory, Scores, College Admission, Admission Criteria
Peer reviewed
Direct link
Irby, Sarah M.; Floyd, Randy G. – Psychology in the Schools, 2017
This study examined the exchangeability of total scores (i.e., intelligence quotients [IQs]) from three brief intelligence tests. Tests were administered to 36 children with intellectual giftedness, scored live by one set of primary examiners and later scored by a secondary examiner. For each student, six IQs were calculated, and all 216 values…
Descriptors: Intelligence Tests, Gifted, Error of Measurement, Scores
Peer reviewed
Direct link
Jensen, Bryant; Grajeda, Sara; Haertel, Edward – Educational Assessment, 2018
We trace the development and analyze the generalizability of the Classroom Assessment of Sociocultural Interactions (CASI), an observation system designed to measure cultural dimensions of classroom interactions. We establish CASI measurement properties by analyzing panoramic videos of 4th and 5th grade classrooms from the Measures of Effective…
Descriptors: Classroom Observation Techniques, Grade 4, Grade 5, Error of Measurement
Peer reviewed
Direct link
Lin, Chih-Kai – Language Testing, 2017
Sparse-rated data are common in operational performance-based language tests, as an inevitable result of assigning examinee responses to a fraction of available raters. The current study investigates the precision of two generalizability-theory methods (i.e., the rating method and the subdividing method) specifically designed to accommodate the…
Descriptors: Data Analysis, Language Tests, Generalizability Theory, Accuracy
Peer reviewed
Direct link
Han, Chao – Language Assessment Quarterly, 2016
As a property of test scores, reliability/dependability constitutes an important psychometric consideration, and it underpins the validity of measurement results. A review of interpreter certification performance tests (ICPTs) reveals that (a) although reliability/dependability checking has been recognized as an important concern, its theoretical…
Descriptors: Foreign Countries, Scores, English, Chinese
Alkahtani, Saif F. – ProQuest LLC, 2012
The principal aim of the present study was to better guide the Quranic recitation appraisal practice by presenting an application of Generalizability theory and Many-facet Rasch Measurement Model for assessing the dependability and fit of two suggested rubrics. Recitations of 93 students were rated holistically and analytically by 3 independent…
Descriptors: Generalizability Theory, Item Response Theory, Verbal Tests, Islam
Peer reviewed
PDF on ERIC
Hathcoat, John D.; Penn, Jeremy D. – Research & Practice in Assessment, 2012
Critics of standardized testing have recommended replacing standardized tests with more authentic assessment measures, such as classroom assignments, projects, or portfolios rated by a panel of raters using common rubrics. Little research has examined the consistency of scores across multiple authentic assignments or the implications of this…
Descriptors: Generalizability Theory, Performance Based Assessment, Writing Across the Curriculum, Standardized Tests
Peer reviewed
VanLeeuwen, Dawn M. – Journal of Agricultural Education, 1997
Generalizability Theory can be used to assess reliability in the presence of multiple sources and different types of error. It provides a flexible alternative to Classical Theory and can handle estimation of interrater reliability with any number of raters. (SK)
Descriptors: Error of Measurement, Generalizability Theory, Interrater Reliability, Measurement Techniques
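The VanLeeuwen (1997) entry above notes that generalizability theory handles multiple sources of error and any number of raters at once. As a minimal illustration only (Python, with invented ratings; the function name, the expected-mean-squares shortcuts, and the complete persons x raters design are assumptions, not the author's analysis):

import numpy as np

def g_study(scores, n_raters_decision=None):
    """scores: 2-D array, rows = persons, columns = raters (complete data)."""
    scores = np.asarray(scores, dtype=float)
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    # Mean squares from the two-way ANOVA without replication
    ms_p = n_r * np.sum((person_means - grand) ** 2) / (n_p - 1)
    ms_r = n_p * np.sum((rater_means - grand) ** 2) / (n_r - 1)
    resid = scores - person_means[:, None] - rater_means[None, :] + grand
    ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))

    # Estimated variance components (negative estimates truncated at zero)
    var_pr = ms_pr
    var_r = max((ms_r - ms_pr) / n_p, 0.0)
    var_p = max((ms_p - ms_pr) / n_r, 0.0)

    # Coefficients for a decision study with n' raters
    n_prime = n_raters_decision or n_r
    g_rel = var_p / (var_p + var_pr / n_prime)          # relative (norm-referenced)
    phi = var_p / (var_p + (var_r + var_pr) / n_prime)  # absolute (criterion-referenced)
    return {"var_person": var_p, "var_rater": var_r, "var_resid": var_pr,
            "g_coefficient": g_rel, "phi": phi}

# Example: 5 persons each scored by the same 3 raters (invented scores)
print(g_study([[4, 5, 4], [2, 3, 3], [5, 5, 4], [3, 3, 2], [4, 4, 5]]))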
Chiu, Christopher W. T. – 2000
A procedure was developed to analyze data with missing observations by extracting data from a sparsely filled data matrix into analyzable smaller subsets of data. This subdividing method, based on the conceptual framework of meta-analysis, was accomplished by creating data sets that exhibit structural designs and then pooling variance components…
Descriptors: Difficulty Level, Error of Measurement, Generalizability Theory, Interrater Reliability
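Chiu (2000), above, describes subdividing a sparsely filled rating matrix into complete subsets and pooling their variance components. The Python sketch below illustrates the general idea only; the hand-picked blocks and the degrees-of-freedom weighting are simplifying assumptions, not the pooling rules developed in the report.

import numpy as np

def variance_components(block):
    """One-facet persons x raters variance components for one complete block."""
    block = np.asarray(block, dtype=float)
    n_p, n_r = block.shape
    grand, pm, rm = block.mean(), block.mean(axis=1), block.mean(axis=0)
    ms_p = n_r * np.sum((pm - grand) ** 2) / (n_p - 1)
    ms_r = n_p * np.sum((rm - grand) ** 2) / (n_r - 1)
    resid = block - pm[:, None] - rm[None, :] + grand
    ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))
    return {"p": max((ms_p - ms_pr) / n_r, 0.0),
            "r": max((ms_r - ms_pr) / n_p, 0.0),
            "pr": ms_pr,
            "weight": (n_p - 1) * (n_r - 1)}  # df-based pooling weight (an assumption)

def pool(blocks):
    """Weighted average of each variance component across the complete blocks."""
    comps = [variance_components(b) for b in blocks]
    w = [c["weight"] for c in comps]
    return {k: float(np.average([c[k] for c in comps], weights=w))
            for k in ("p", "r", "pr")}

# Two complete sub-blocks extracted from a sparse rating design (invented data)
block_a = [[4, 5], [3, 3], [5, 4]]   # persons 1-3 scored by raters A and B
block_b = [[2, 3], [4, 4], [5, 5]]   # persons 4-6 scored by raters C and D
print(pool([block_a, block_b]))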
Peer reviewed
Goodwin, Laura D.; Goodwin, William L. – Journal of Early Intervention, 1991
Four approaches to estimating interrater reliability in early childhood special education research are illustrated and compared: correlation, comparison of means, percentage of agreement, and generalizability theory techniques. Generalizability theory techniques are proposed as a method for estimating the amount of variance attributable to…
Descriptors: Analysis of Variance, Disabilities, Early Childhood Education, Educational Research
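Goodwin and Goodwin (1991), above, compare four approaches to estimating interrater reliability. A small Python sketch of the two-rater case follows; the ratings are invented and exact agreement is used, so it only illustrates how the four quantities differ, not the article's analyses.

import numpy as np

rater1 = np.array([3, 4, 2, 5, 4, 3, 2, 5])
rater2 = np.array([3, 5, 2, 4, 4, 3, 3, 5])

# 1. Correlation: consistency of rank order, insensitive to systematic differences
r = np.corrcoef(rater1, rater2)[0, 1]

# 2. Comparison of means: detects overall leniency or severity of one rater
mean_diff = rater1.mean() - rater2.mean()

# 3. Percentage of exact agreement
pct_agree = np.mean(rater1 == rater2) * 100

# 4. Generalizability coefficient from the one-facet persons x raters design
scores = np.column_stack([rater1, rater2])
n_p, n_r = scores.shape
grand, pm, rm = scores.mean(), scores.mean(axis=1), scores.mean(axis=0)
ms_p = n_r * np.sum((pm - grand) ** 2) / (n_p - 1)
resid = scores - pm[:, None] - rm[None, :] + grand
ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))
var_p = max((ms_p - ms_pr) / n_r, 0.0)
g_coef = var_p / (var_p + ms_pr / n_r)

print(f"correlation={r:.2f}  mean difference={mean_diff:.2f}  "
      f"% agreement={pct_agree:.0f}  G coefficient={g_coef:.2f}")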
Peer reviewed
Meskauskas, John A. – Evaluation and the Health Professions, 1986
Two new indices of stability of content-referenced standard-setting results are presented, relating variability of judges' decisions to the variability of candidate scores and to the reliability of the test. These indices are used to indicate whether scores resulting from a standard-setting study are of sufficient precision. (Author/LMO)
Descriptors: Certification, Credentials, Error of Measurement, Generalizability Theory
Rowley, Glenn L. – 1986
Classroom researchers are frequently urged to provide evidence of the reliability of their data. In the case of observational data, three approaches to this have emerged: observer agreement, generalizability theory, and measurement error. Generalizability theory provides the most powerful approach given an adequate data collection design, but…
Descriptors: Classroom Observation Techniques, Classroom Research, Correlation, Elementary Education
Naizer, Gilbert – 1992
A measurement approach called generalizability theory (G-theory) is an important alternative to the more familiar classical measurement theory, which yields less useful coefficients such as alpha or the KR-20 coefficient. G-theory is a theory about the dependability of behavioral measurements that allows the simultaneous estimation of multiple…
Descriptors: Error of Measurement, Estimation (Mathematics), Generalizability Theory, Higher Education
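Naizer (1992), above, contrasts G-theory with classical coefficients such as alpha and KR-20. The Python sketch below, using invented item scores, shows the point of the contrast: for a complete persons x items design the relative G coefficient reproduces alpha, but the G-study additionally separates the item and residual variance components rather than lumping all error together.

import numpy as np

# Rows = persons, columns = items (invented scores for illustration only)
scores = np.array([[4, 3, 5, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 2, 3],
                   [4, 4, 4, 3]], dtype=float)
n_p, k = scores.shape

# Cronbach's alpha (KR-20 is the same formula for dichotomous items)
item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)

# One-facet G-study on the same data: error split into item and residual parts
grand, pm, im = scores.mean(), scores.mean(axis=1), scores.mean(axis=0)
ms_p = k * np.sum((pm - grand) ** 2) / (n_p - 1)
ms_i = n_p * np.sum((im - grand) ** 2) / (k - 1)
resid = scores - pm[:, None] - im[None, :] + grand
ms_pi = np.sum(resid ** 2) / ((n_p - 1) * (k - 1))
var_p = max((ms_p - ms_pi) / k, 0.0)
var_i = max((ms_i - ms_pi) / n_p, 0.0)
g_rel = var_p / (var_p + ms_pi / k)   # equals alpha for this crossed design

print(f"alpha={alpha:.3f}  relative G={g_rel:.3f}")
print(f"variance components: person={var_p:.3f}  item={var_i:.3f}  residual={ms_pi:.3f}")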