Showing all 11 results
Peer reviewed
Direct link
Biasutti, Michele – Technology, Pedagogy and Education, 2017
The current study describes the development of a content analysis coding scheme to examine transcripts of online asynchronous discussion groups in higher education. The theoretical framework comprises the theories regarding knowledge construction in computer-supported collaborative learning (CSCL) based on a sociocultural perspective. The coding…
Descriptors: Asynchronous Communication, Computer Mediated Communication, Content Analysis, Coding
Peer reviewed
PDF on ERIC: Download full text
Richer, Amanda; Charmaraman, Linda; Ceder, Ineke – Afterschool Matters, 2018
Like instruments used in afterschool programs to assess children's social and emotional growth or to evaluate staff members' performance, instruments used to evaluate program quality should be free from bias. Practitioners and researchers alike want to know that assessment instruments, whatever their type or intent, treat all people fairly and do…
Descriptors: Cultural Differences, Social Bias, Interrater Reliability, Program Evaluation
Peer reviewed
Direct link
Gargani, John; Strong, Michael – Journal of Teacher Education, 2015
In Gargani and Strong (2014), we describe The Rapid Assessment of Teacher Effectiveness (RATE), a new teacher evaluation instrument. Our account of the validation research associated with RATE inspired a review by Good and Lavigne (2015). Here, we reply to the main points of their review. We elaborate on the validity, reliability, theoretical…
Descriptors: Evidence, Teacher Effectiveness, Teacher Evaluation, Evaluation Methods
Peer reviewed
PDF on ERIC: Download full text
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
Peer reviewed
Direct link
Krukowski, Rebecca A.; Philyaw Perez, Amanda G.; Bursac, Zoran; Goodell, Melanie; Raczynski, James M.; Smith West, Delia; Phillips, Martha M. – Journal of School Health, 2011
Background: Foods provided in schools represent a substantial portion of US children's dietary intake; however, the school food environment has proven difficult to describe due to the lack of comprehensive, standardized, and validated measures. Methods: As part of the Arkansas Act 1220 evaluation project, we developed the School Cafeteria…
Descriptors: Health Promotion, Nutrition, Public Health, Interrater Reliability
Buelin-Biesecker, Jennifer Katherine – ProQuest LLC, 2012
This study compared the creative outcomes in student work resulting from two pedagogical approaches to creative problem solving activities. A secondary goal was to validate the Consensual Assessment Technique (CAT) as a means of assessing creativity. Linear models for problem solving and design processes serve as the current paradigm in classroom…
Descriptors: Technology Education, Creativity, Problem Solving, Teaching Methods
Porter, Jennifer Marie – ProQuest LLC, 2010
This research evaluated the inter-rater reliability of the Performance Assessment for California Teachers (PACT). Multiple methods for estimating overall rater consistency include percent agreement and Cohen's Kappa (1960), which yielded discrepancies between rater agreement in terms of whether candidates passed or failed particular PACT rubrics.…
Descriptors: Interrater Reliability, Program Effectiveness, Scoring Rubrics, Item Analysis
Hardison, Chaitra M.; Vilamovska, Anna-Marie – RAND Corporation, 2009
The Collegiate Learning Assessment (CLA) is a measure of how much students' critical thinking improves after attending college or university. This report illustrates how institutions can set their own standards on the CLA using a method that is appropriate for the CLA's unique characteristics. The authors examined evidence of reliability and…
Descriptors: Standard Setting, Evaluation Methods, Research Reports, Critical Thinking
Flanders, Anne K.; Wick, John – 1998
This paper examines whether the peer-review process of the North Central Association (NCA) is reliable and valid. Reliance on peer judgments has been a part of NCA accreditation, but confidence in the use of peer decisions to certify a school's readiness to implement the improvement plan--Outcomes Accreditation (OA)--was weak. The study focused on…
Descriptors: Accreditation (Institutions), Educational Assessment, Educational Improvement, Elementary Secondary Education
Peer reviewed
Direct link
Stapleton, Paul; Helms-Park, Rena – English for Specific Purposes, 2006
This paper introduces the Website Acceptability Tiered Checklist (WATCH), a preliminary version of a multi-trait scale that could be used by instructors and students to assess the quality of websites chosen as source materials in students' research papers in a Humanities program. The scale includes bands for assessing: (i) the authority and…
Descriptors: Information Sources, Web Sites, English for Academic Purposes, Check Lists
Peer reviewed
Direct link
Shay, Suellen Butler – Harvard Educational Review, 2004
Based on her study of the assessment and validation of final year projects in two academic departments--one located in a humanities faculty and the other in an engineering faculty of a South African university--Suellen Shay argues that the assessment of complex tasks is a socially situated interpretive act. Her argument centers on three questions.…
Descriptors: Performance Based Assessment, Student Projects, Evaluation Methods, Interpersonal Relationship