Showing 1 to 15 of 85 results
Peer reviewed
Guy B. deBrun – Journal of Outdoor Recreation, Education, and Leadership, 2025
Discussions of what it means to be an effective outdoor leader are common in outdoor education literature (Martin et al., 2025; Smith, 2021). Research has identified core competencies (Martin et al., 2025), conceptual frameworks (Pomfret et al., 2023), and course curricula/qualifications for effective leadership (Baker & O'Brien, 2019; Seaman…
Descriptors: Outdoor Leadership, Leadership Effectiveness, Evaluation Methods, Scoring Rubrics
Peer reviewed
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT performs in grading university exams relative to human teachers. Aspects investigated include consistency, large discrepancies, and answer length. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
Peer reviewed
Slaviša Radovic; Niels Seidel – Innovative Higher Education, 2025
The integration of advanced learning analytics and data-mining technology into higher education has brought various opportunities and challenges, particularly in enhancing students' self-regulated learning (SRL) skills. Analyzing developed features for SRL support, it has become evident that SRL support is not a binary concept but rather a…
Descriptors: Scoring Rubrics, Evaluation Methods, Higher Education, Educational Technology
Peer reviewed
PDF on ERIC
Deborah Oluwadele; Yashik Singh; Timothy Adeliyi – Electronic Journal of e-Learning, 2024
Any newly developed model or framework needs validation through several real-life applications. The investment made in e-learning in medical education is daunting, as is the expectation of a positive return on investment. The medical education domain requires data-wise implementation of e-learning as the debate continues…
Descriptors: Electronic Learning, Evaluation Methods, Medical Education, Sustainability
Peer reviewed
Tingting Li; Kevin Haudek; Joseph Krajcik – Journal of Science Education and Technology, 2025
Scientific modeling is a vital educational practice that helps students apply scientific knowledge to real-world phenomena. Despite advances in AI, challenges in accurately assessing such models persist, primarily due to the complexity of cognitive constructs and data imbalances in educational settings. This study addresses these challenges by…
Descriptors: Artificial Intelligence, Scientific Concepts, Models, Automation
Peer reviewed
Williamson, Joanna; Child, Simon – Journal of Vocational Education and Training, 2022
School- and college-based vocational and technical qualifications (VTQs) in England are required to award successful candidates a grade rather than simple pass or fail. Ensuring the reliability and validity of these grades is considered vital, particularly in light of the high-stakes purposes for which school assessment results in England are…
Descriptors: Foreign Countries, Vocational Education, Qualifications, Student Evaluation
Peer reviewed
PDF on ERIC
Shasha Chen; Shaohui Chi; Zuhao Wang – Journal of Baltic Science Education, 2025
Interdisciplinary thinking is critical for equipping students to apply scientific knowledge and tackle societal challenges across various disciplines, which has been recognized as a key objective of twenty-first century science education. However, research on effective interdisciplinary assessment in secondary school science education is still…
Descriptors: Thinking Skills, Interdisciplinary Approach, Science Instruction, Grade 7
Peer reviewed
Rebecca Sickinger; Tineke Brunfaut; John Pill – Language Testing, 2025
Comparative Judgement (CJ) is an evaluation method, typically conducted online, whereby a rank order is constructed, and scores are calculated, from judges' pairwise comparisons of performances. CJ has been researched in various educational contexts, though only rarely in English as a Foreign Language (EFL) writing settings, and is generally agreed to…
Descriptors: Writing Evaluation, English (Second Language), Second Language Learning, Second Language Instruction
Areekkuzhiyil, Santhosh – Online Submission, 2019
Assessment is an integral part of any teaching-learning process, and assessment practices perform a large number of functions within it. Whether contemporary assessment practices actually perform these functions is a critical question to be analysed. In this paper, an attempt has been made to analyse the myths and…
Descriptors: Evaluation Methods, Validity, Reliability, Higher Education
Peer reviewed
Paquot, Magali; Rubin, Rachel; Vandeweerd, Nathan – Language Learning, 2022
The main objective of this Methods Showcase Article is to show how the technique of adaptive comparative judgment, coupled with a crowdsourcing approach, can offer practical solutions to reliability issues as well as to address the time and cost difficulties associated with a text-based approach to proficiency assessment in L2 research. We…
Descriptors: Comparative Analysis, Decision Making, Language Proficiency, Reliability
Peer reviewed
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
Peer reviewed
Claudia Lizette Garay-Rondero; Alvaro Castillo-Paz; Carlos Gijón-Rivera; Gerardo Domínguez-Ramírez; Conrado Rosales-Torres; Alberto Oliart-Ros – Cogent Education, 2024
Higher education faces challenges in providing learning experiences and cultivating competencies in our modern environment. Academic discourse centres on tutoring automation, a complicated educational dilemma in the age of machine learning, deep learning, and artificial intelligence. Establishing a structured technique for real competency…
Descriptors: Competency Based Education, Performance Based Assessment, Evaluation Methods, Undergraduate Students
Peer reviewed
Wheadon, Christopher; Barmby, Patrick; Christodoulou, Daisy; Henderson, Brian – Assessment in Education: Principles, Policy & Practice, 2020
Writing assessment is a key feature of most education systems, yet traditional rubric-based methods of assessing writing have limitations. In contrast, comparative judgement appears to overcome the reliability issues that beset the assessment of performance tasks. The approach presented here extends previous work on…
Descriptors: Foreign Countries, Writing Evaluation, Elementary School Students, Evaluation Methods
National Implementation Research Network, 2024
Professional learning (PL) is recognized widely in the field of education as a critical implementation strategy to support teacher knowledge and skill development. Rivet Education created a Professional Learning Partner Guide (PLPG), a searchable database of learning providers with expertise in the adoption and implementation of High Quality…
Descriptors: Faculty Development, Instructional Materials, Educational Quality, Scoring Rubrics
Peer reviewed
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring