Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Mosquera, Jose Miguel Llanos; Suarez, Carlos Giovanny Hidalgo; Guerrero, Victor Andres Bucheli – Education and Information Technologies, 2023
This paper evaluates learning efficiency by implementing the flipped classroom and automatic source code evaluation, based on the Kirkpatrick evaluation model, with students in a CS1 programming course. The experiment was conducted with 82 students from two CS1 courses: an experimental group (EG = 56) and a control group (CG = 26). Each…
Descriptors: Flipped Classroom, Coding, Programming, Evaluation Methods
Ashley McLain Westmoreland – ProQuest LLC, 2024
The purpose of this study was to conduct a program evaluation case study on the impact of standards-based grading in a North Carolina school district. The implementation procedures and end products of grading reform were assessed using the CIPP model. Data from teacher questionnaires, principal implementation checklists, interviews, and…
Descriptors: Program Evaluation, Case Studies, Academic Standards, Grading
Peer reviewed
PDF on ERIC
Baral, Sami; Botelho, Anthony F.; Erickson, John A.; Benachamardi, Priyanka; Heffernan, Neil T. – International Educational Data Mining Society, 2021
Open-ended questions in mathematics are commonly used by teachers to monitor and assess students' deeper conceptual understanding of content. Student answers to these types of questions often exhibit a combination of language, drawn diagrams and tables, and mathematical formulas and expressions that supply teachers with insight into the processes…
Descriptors: Scoring, Automation, Mathematics Tests, Student Evaluation
Peer reviewed
PDF on ERIC
Morton, Jason K.; Northcote, Maria; Kilgour, Peter; Jackson, Wendy A. – Journal of University Teaching and Learning Practice, 2021
Traditionally, rubrics were used simply as grading tools to provide marking frameworks that were transparent to students. More recently, rubrics have been promoted as educational tools to inform students of good practice with the assumption that they engage with these rubrics to guide their learning. However, some tensions arise from this…
Descriptors: Scoring Rubrics, Grading, Student Evaluation, Feedback (Response)
Peer reviewed
Direct link
Katia Ciampa; Zora Wolfe; Meagan Hensley – Technology, Pedagogy and Education, 2025
This study explores the role of artificial intelligence (AI) in K-12 student assessment practices, focusing on educators' use of AI tools. Through content analysis of active Facebook groups dedicated to AI in education, the authors examined how educators integrate AI into assessment across various grade levels and subjects. Using the Technology…
Descriptors: Artificial Intelligence, Computer Software, Technology Integration, Kindergarten
Peer reviewed
Direct link
To, Jessica; Panadero, Ernesto; Carless, David – Assessment & Evaluation in Higher Education, 2022
The analysis of exemplars of different quality is a potentially powerful tool in enabling students to understand assessment expectations and appreciate academic standards. Through a systematic review methodology, this paper synthesises exemplar-based research designs, exemplar implementation and the educational effects of exemplars. The review of…
Descriptors: Research Design, Scoring Rubrics, Peer Evaluation, Self Evaluation (Individuals)
Peer reviewed
PDF on ERIC
Matt Townsley – Journal of Research Initiatives, 2019
The purpose of this paper is to provide a model for educational leadership faculty who aspire to walk the talk of effective feedback by embedding standards-based grading (SBG) in their courses. Rather than learning, points are the currency of K-12 classrooms across the country. Over 100 years of grading research suggests typical…
Descriptors: Standards, Grading, Instructional Leadership, Scoring Rubrics
Peer reviewed
Direct link
De Marsico, Maria; Sciarrone, Filippo; Sterbini, Andrea; Temperini, Marco – EURASIA Journal of Mathematics, Science & Technology Education, 2017
We present an approach to semi-automatic grading of answers given by students to open-ended questions (open answers). We use both peer evaluation and teacher evaluation. A learner is modeled by her Knowledge and her assessment quality (Judgment). The data generated by the peer and teacher evaluations, and by the learner models, is represented by a…
Descriptors: Evaluation Methods, Peer Evaluation, Models, Grading
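The entry above combines peer evaluation and teacher evaluation, with each learner modeled by Knowledge and by the quality of her assessments (Judgment). As a purely illustrative sketch of that general idea, and not the authors' actual method, the Python snippet below weights each peer's grade by a hypothetical judgment-quality score and blends the result with a teacher grade when one is available; the function name, weights, and aggregation rule are all assumptions.

# Illustrative sketch only: weight peer grades by each peer's judgment quality,
# then blend with a teacher grade when one exists. Names, weights, and the
# aggregation rule are assumptions, not the model described in the paper.
def aggregate_grade(peer_grades, judgment, teacher_grade=None, teacher_weight=0.5):
    """peer_grades: {peer_id: grade on a 0-10 scale};
    judgment: {peer_id: assessment quality in (0, 1]}."""
    total_weight = sum(judgment[p] for p in peer_grades)
    peer_estimate = sum(judgment[p] * g for p, g in peer_grades.items()) / total_weight
    if teacher_grade is None:
        return peer_estimate
    # When the teacher also graded the answer, give that grade a fixed share.
    return teacher_weight * teacher_grade + (1 - teacher_weight) * peer_estimate

# Example: three peers of varying judgment quality, no teacher grade yet.
print(aggregate_grade({"p1": 8, "p2": 6, "p3": 9},
                      {"p1": 0.9, "p2": 0.4, "p3": 0.7}))  # -> 7.95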
Peer reviewed
Direct link
Hooper, Jay; Cowell, Ryan – Educational Assessment, 2014
There has been much research and discussion on the principles of standards-based grading, and there is a growing consensus of best practice. Even so, the actual process of implementing standards-based grading at a school or district level can be a significant challenge. There are very practical questions that remain unclear, such as how the grades…
Descriptors: True Scores, Grading, Academic Standards, Computation
Peer reviewed
Direct link
Andjelic, Svetlana; Cekerevac, Zoran – Education and Information Technologies, 2014
This article presents an original model of computer adaptive testing and grade formation, based on scientifically recognized theories. The basis of the model is a personalized algorithm for selecting questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…
Descriptors: Computer Assisted Testing, Educational Technology, Grades (Scholastic), Test Construction
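The abstract above describes an adaptive test with three difficulty levels in which the next question is selected according to whether the previous answer was correct. The Python sketch below is a minimal, generic illustration of that kind of selection rule, not the authors' algorithm; the level bounds, step rule, scoring, and data layout are assumptions.

import random

LEVELS = (1, 2, 3)  # 1 = easy, 2 = medium, 3 = hard (assumed labels)

def next_level(current_level, last_answer_correct):
    # Step up one level after a correct answer, step down after an incorrect one.
    if last_answer_correct:
        return min(current_level + 1, LEVELS[-1])
    return max(current_level - 1, LEVELS[0])

def run_test(question_bank, n_questions, answer_fn, start_level=2):
    """question_bank: {level: [question, ...]}; answer_fn(question) -> bool."""
    level, score = start_level, 0
    for _ in range(n_questions):
        question = random.choice(question_bank[level])
        correct = answer_fn(question)
        score += level if correct else 0  # harder questions worth more (assumed scoring)
        level = next_level(level, correct)
    return score

# Example with a toy question bank and a simulated student who answers 70% correctly.
bank = {1: ["e1", "e2"], 2: ["m1", "m2"], 3: ["h1", "h2"]}
print(run_test(bank, n_questions=10, answer_fn=lambda q: random.random() < 0.7))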
Peer reviewed
Direct link
McDonnell, Jane; Curtis, Will – Assessment & Evaluation in Higher Education, 2014
This paper reports on an action research project into the development of a "democratic feedback model" with students on an education studies programme at a post-1992 university in the UK. Building on work that has explored the dialogic dimensions of assessment and feedback, the research explored the potential for more…
Descriptors: Action Research, Feedback (Response), Higher Education, Democracy
Peer reviewed
Direct link
Johnson, Martin; Mehta, Sanjana; Rushton, Nicky – Pedagogies: An International Journal, 2015
A new "controlled assessment" model was introduced in England, Wales and Northern Ireland in 2009. This paper describes a research project that explores teachers' experiences of this new assessment model in General Certificate of Secondary Education (GCSE) level modern foreign language (MFL) speaking assessment. Focusing on teachers'…
Descriptors: Foreign Countries, Models, Evaluation Methods, Secondary School Teachers
Peer reviewed
Direct link
Azevedo, Ana, Ed.; Azevedo, José, Ed. – IGI Global, 2019
E-assessments of students profoundly influence their motivation and play a key role in the educational process. Adapting assessment techniques to current technological advancements allows for effective pedagogical practices, learning processes, and student engagement. The "Handbook of Research on E-Assessment in Higher Education"…
Descriptors: Higher Education, Computer Assisted Testing, Multiple Choice Tests, Guides
Peer reviewed
Direct link
Pollio, Marty; Hochbein, Craig – Teachers College Record, 2015
Background/Context: From two decades of research on the grading practices of teachers in secondary schools, researchers discovered that teachers evaluated students on numerous factors that do not validly assess a student's achievement level in a specific content area. These consistent findings suggested that traditional grading practices evolved…
Descriptors: Standardized Tests, Academic Standards, Grading, Scores
Peer reviewed
Direct link
Attali, Yigal – Applied Psychological Measurement, 2011
Recently, Attali and Powers investigated the usefulness of providing immediate feedback on the correctness of answers to constructed response questions and the opportunity to revise incorrect answers. This article introduces an item response theory (IRT) model for scoring revised responses to questions when several attempts are allowed. The model…
Descriptors: Feedback (Response), Item Response Theory, Models, Error Correction
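The entry above concerns item response theory scoring when several attempts at a question are allowed. As a generic, purely illustrative parameterization (not the model introduced in the article), the Python sketch below pairs a standard two-parameter logistic item response function with an assumed partial-credit scheme that discounts later attempts; the parameter names, penalty, and credit rule are assumptions.

import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) probability of a correct response,
    for ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def attempt_credit(attempt, penalty=0.3, max_attempts=3):
    """Credit awarded if the item is solved on the given attempt (assumed scheme)."""
    if attempt > max_attempts:
        return 0.0
    return max(0.0, 1.0 - penalty * (attempt - 1))

# Example: ability 0.5, item with a = 1.2 and b = 0.0, solved on the second attempt.
print(p_correct(0.5, a=1.2, b=0.0))  # model-implied first-attempt probability, ~0.65
print(attempt_credit(2))             # 0.7 credit for a second-attempt success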