Showing 1 to 15 of 17 results
Peer reviewed
PDF on ERIC Download full text
Eser, Mehmet Taha; Aksu, Gökhan – International Journal of Curriculum and Instruction, 2022
Agreement between raters is examined within the scope of "inter-rater reliability". Although inter-rater agreement and inter-rater reliability are clearly defined concepts, there is no clear guidance on the conditions under which agreement and reliability methods are appropriate to…
Descriptors: Generalizability Theory, Interrater Reliability, Evaluation Methods, Test Theory
Peer reviewed
Direct link
Akaeze, Hope O.; Wu, Jamie Heng-Chieh; Lawrence, Frank R.; Weber, Everett P. – Journal of Psychoeducational Assessment, 2023
This paper reports an investigation into the psychometric properties of the COR-Advantage1.5 (COR-Adv1.5) assessment tool, a criterion-referenced observation-based instrument designed to assess the developmental abilities of children from birth through kindergarten. Using data from 8534 children participating in a state-funded preschool program…
Descriptors: Criterion Referenced Tests, Evaluation Methods, Measures (Individuals), Measurement Techniques
May, Henry; Blackman, Horatio; Van Horne, Sam; Tilley, Katherine; Farley-Ripple, Elizabeth N.; Shewchuk, Samantha; Agboh, Darren; Micklos, Deborah Amsden – Center for Research Use in Education, 2022
In this technical report, the Center for Research Use in Education (CRUE) presents the methodological design of a large-scale quantitative investigation of research use by school-based practitioners through the "Survey of Evidence in Education for Schools (SEE-S)." It documents the major technical aspects of the development of SEE-S,…
Descriptors: Surveys, Schools, Educational Research, Research Utilization
Peer reviewed
Direct link
Rantanen, Pekka – Assessment & Evaluation in Higher Education, 2013
A multilevel analysis approach was used to analyse students' evaluation of teaching (SET). The low inter-rater reliability indicates that no solid conclusions about teaching can be drawn from a single round of feedback. To assess a teacher's general teaching effectiveness, one needs to evaluate four randomly chosen course implementations.…
Descriptors: Test Reliability, Feedback (Response), Generalizability Theory, Student Evaluation of Teacher Performance
Peer reviewed
PDF on ERIC Download full text
Hathcoat, John D.; Penn, Jeremy D. – Research & Practice in Assessment, 2012
Critics of standardized testing have recommended replacing standardized tests with more authentic assessment measures, such as classroom assignments, projects, or portfolios rated by a panel of raters using common rubrics. Little research has examined the consistency of scores across multiple authentic assignments or the implications of this…
Descriptors: Generalizability Theory, Performance Based Assessment, Writing Across the Curriculum, Standardized Tests
Peer reviewed
Direct link
Tindal, Gerald; Yovanoff, Paul; Geller, Josh P. – Journal of Special Education, 2010
Students with significant disabilities must participate in large-scale assessments, often using an alternate assessment judged against alternate achievement standards. The development and administration of this type of assessment must necessarily balance meaningful participation with accurate measurement. In this study, generalizability theory is…
Descriptors: Generalizability Theory, Alternative Assessment, Disabilities, Severe Mental Retardation
Way, Walter D.; Murphy, Daniel; Powers, Sonya; Keng, Leslie – Pearson, 2012
Significant momentum exists for next-generation assessments to increasingly utilize technology to develop and deliver performance-based assessments. Many traditional challenges with this assessment approach still apply, including psychometric concerns related to performance-based tasks (PBTs), which include low reliability, efficiency of…
Descriptors: Task Analysis, Performance Based Assessment, Technology Uses in Education, Models
Peer reviewed
Abedi, Jamal – Multivariate Behavioral Research, 1996
The Interrater/Test Reliability System (ITRS), a comprehensive computer tool for addressing questions of interrater reliability, is described. The ITRS computes several different indices of interrater reliability as well as the generalizability coefficient over raters and topics. The system is available in IBM-compatible or Macintosh format. (SLD)
Descriptors: Computer Software, Computer Software Evaluation, Evaluation Methods, Evaluators
Peer reviewed
Hux, Karen; And Others – Journal of Communication Disorders, 1997
A study evaluated and compared four methods of assessing reliability on one discourse analysis procedure--a modified version of Damico's Clinical Discourse Analysis. The methods were Pearson product-moment correlations; interobserver agreement; Cohen's kappa; and generalizability coefficients. The strengths and weaknesses of the methods are…
Descriptors: Communication Disorders, Discourse Analysis, Evaluation Methods, Evaluation Problems
Crehan, Kevin D. – 1997
Writing fits well within the realm of outcomes suitable for observation by performance assessments. Studies of the reliability of performance assessments have suggested that interrater reliability can be consistently high. Scoring consistency, however, is only one aspect of quality in decisions based on assessment results. Another is…
Descriptors: Evaluation Methods, Feedback, Generalizability Theory, Interrater Reliability
Peer reviewed
Direct link
Oh, Deborah M.; Kim, Joshua M.; Garcia, Raymond E.; Krilowicz, Beverly L. – Advances in Physiology Education, 2005
There is increasing pressure, both from institutions central to the national scientific mission and from regional and national accrediting agencies, on natural sciences faculty to move beyond course examinations as measures of student performance and to instead develop and use reliable and valid authentic assessment measures for both individual…
Descriptors: Evaluation Methods, Biochemistry, Natural Sciences, Generalizability Theory
Peer reviewed
Direct link
Hafner, John C.; Hafner, Patti M. – International Journal of Science Education, 2003
Although the rubric has emerged as one of the most popular assessment tools in progressive educational programs, there is an unfortunate dearth of information in the literature quantifying the actual effectiveness of the rubric as an assessment tool "in the hands of the students." This study focuses on the validity and reliability of the rubric as…
Descriptors: Interrater Reliability, Generalizability Theory, Biology, Scoring Rubrics
Huang, Chi-yu; And Others – 1995
Generalizability theory is used to examine the sources of variability present in a teacher and course evaluation instrument. Two studies were conducted. In the first study, four different forms commonly used by one specific college of a large midwestern university were examined using responses of 915 students. The analysis of variance performed on…
Descriptors: Analysis of Variance, College Students, Course Evaluation, Evaluation Methods
Warm, Ronnie; And Others – 1986
This document describes the development and assessment of a methodology for generating on-the-job-training (OJT) task proficiency assessment instruments. The Task Evaluation Form (TEF) development procedures were derived to address previously identified deficiencies in the evaluation of OJT task proficiency. The TEF development procedures allow…
Descriptors: Adults, Correlation, Data Collection, Evaluation Methods
Secolsky, Charles, Ed.; Denison, D. Brian, Ed. – Routledge, Taylor & Francis Group, 2011
Increased demands for colleges and universities to engage in outcomes assessment for accountability purposes have accelerated the need to bridge the gap between higher education practice and the fields of measurement, assessment, and evaluation. The "Handbook on Measurement, Assessment, and Evaluation in Higher Education" provides higher…
Descriptors: Generalizability Theory, Higher Education, Institutional Advancement, Teacher Effectiveness