Showing 76 to 90 of 10,081 results
Peer reviewed
Direct link
Halima Alnashiri; Mladen Rakovic; Sadia Nawaz; Xinyu Li; Joni Lamsa; Lyn Lim; Maria Bannert; Sanna Jarvela; Dragan Gasevic – Journal of Computer Assisted Learning, 2025
Background: Integrating information from multiple sources is a common yet challenging learning task for secondary school students. Many underuse metacognitive skills, such as monitoring and control, which are essential for promoting engagement and effective learning outcomes. Objective: This study aims to examine the relationship between…
Descriptors: Secondary School Students, Metacognition, Writing (Composition), English
Peer reviewed
PDF on ERIC (full text)
Aneesha Badrinarayan – Learning Policy Institute, 2025
Since the rise of state assessments whose primary function is to yield scores that can be used to compare schools and groups of students, most states have developed their state assessment programs under the assumption that either: (a) state tests are not intended to meaningfully shape instruction, or (b), if they are, the information provided in…
Descriptors: Measurement, Student Evaluation, Evaluation Methods, Relevance (Education)
Peer reviewed
Direct link
Dawn Holford; Janet McLean; Alex O. Holcombe; Iratxe Puebla; Vera Kempe – Active Learning in Higher Education, 2025
Authentic assessment allows students to demonstrate knowledge and skills in real-world tasks. In research, peer review is one such task that researchers learn by doing, as they evaluate other researchers' work. This means peer review could serve as an authentic assessment that engages students' critical thinking skills in a process of active…
Descriptors: Undergraduate Students, Evaluation Methods, Peer Evaluation, Interrater Reliability
Indiana Department of Education, 2020
RISE was designed and revised to provide a quality system, aligned with current legislative requirements, that local corporations can adopt in its entirety or use as a model as they develop evaluation systems best suited to their local contexts. RISE was developed over the course of a year by the Indiana Teacher Evaluation Cabinet, a diverse group…
Descriptors: Teacher Evaluation, Models, Teacher Effectiveness, Summative Evaluation
Peer reviewed
Direct link
Alexandra Jackson; Elise Barrella; Cheryl Bodnar – Journal of Engineering Education, 2024
Background: Concept maps are a valid assessment tool to explore student understanding of diverse topics. Many types of academic programs have integrated concept mapping into their courses, resulting in various activities and scoring methods to understand student perceptions. Purpose: Few prior reviews of concept mapping have addressed their use…
Descriptors: Engineering Education, Concept Mapping, Scoring Rubrics, Evaluation Methods
Tom Bramley; Carmen Vidal Rodeiro; Frances Wilson – Cambridge University Press & Assessment, 2024
Traditionally in England, exam results in General Certificates of Secondary Education (GCSEs) (and before them O levels) and A levels have been reported as letter grades, with A (or A*) as the top grade, then B, C etc. The reforms gave the opportunity to revisit the arguments for different formats of reporting, and Cambridge Assessment contributed…
Descriptors: Foreign Countries, Secondary Schools, Rating Scales, Scoring Formulas
Peer reviewed
Direct link
Xin Qiao; Akihito Kamata; Cornelis Potgieter – Grantee Submission, 2024
Oral reading fluency (ORF) assessments are commonly used to screen at-risk readers and evaluate interventions' effectiveness as curriculum-based measurements. Similar to the standard practice in item response theory (IRT), calibrated passage parameter estimates are currently used as if they were population values in model-based ORF scoring.…
Descriptors: Oral Reading, Reading Fluency, Error Patterns, Scoring
Peer reviewed
Direct link
Mark White; Matt Ronfeldt – Educational Assessment, 2024
Standardized observation systems seek to reliably measure a specific conceptualization of teaching quality, managing rater error through mechanisms such as certification, calibration, validation, and double-scoring. These mechanisms both support high quality scoring and generate the empirical evidence used to support the scoring inference (i.e.,…
Descriptors: Interrater Reliability, Quality Control, Teacher Effectiveness, Error Patterns
Peer reviewed
Direct link
Anita L. Campbell; Pragashni Padayachee – IEEE Transactions on Education, 2024
Contribution: This concept article shows how the mathematical competencies research framework (MCRF) can guide the design of rubrics to assess engineering mathematics tasks. Practical guidance is given for engineering mathematics educators wanting to create effective rubrics that support student learning and promote academic success. Background:…
Descriptors: Engineering Education, Mathematics Education, Mathematics Skills, Student Evaluation
Peer reviewed
PDF on ERIC (full text)
William Furman – Journal of the Scholarship of Teaching and Learning, 2024
The rubric: a canonical matrix of criteria presented to students as the road map to academic success. An "Ah-ha" moment, a "that is what I'm looking for" utopia for the instructor. While rubrics offer the possibility of solving some complex teaching problems, we have come to know them as a tool that is as useful as…
Descriptors: Instructional Design, Teachers, Scoring Rubrics, Evaluation Methods
Peer reviewed
Direct link
Kumar, Vivekanandan S.; Boulanger, David – International Journal of Artificial Intelligence in Education, 2021
This article investigates the feasibility of using automated scoring methods to evaluate the quality of student-written essays. In 2012, Kaggle hosted the Automated Student Assessment Prize contest to find effective solutions to automated testing and grading. This article (a) analyzes the datasets from the contest, which contained hand-graded…
Descriptors: Automation, Scoring, Essays, Writing Evaluation
Peer reviewed
PDF on ERIC (full text)
Kiziltas, Yusuf; Sata, Mehmet; Elkonca, Fuat – Elementary School Forum (Mimbar Sekolah Dasar), 2023
It is known that the reading performance of disadvantaged students is lower than that of their non-disadvantaged peers. Whether disadvantage biases teachers' scoring of students' reading performance has long been debated. Therefore, the existence and effect of the teacher factor in the low level of reading performance of students…
Descriptors: Elementary School Teachers, Elementary School Students, Teacher Student Relationship, Teacher Attitudes
Peer reviewed
PDF on ERIC (full text)
Kocakulah, Aysel – Participatory Educational Research, 2022
The aim of this study is to develop and apply a rubric for evaluating second-year university pre-service teachers' proposed solutions to questions on electromagnetic induction. In this study, which has a pretest-posttest quasi-experimental design with a control group, teaching of the topic of electromagnetic induction was applied to…
Descriptors: Scoring Rubrics, Student Evaluation, Undergraduate Students, Problem Solving
Peer reviewed
Direct link
Firoozi, Tahereh; Mohammadi, Hamid; Gierl, Mark J. – Educational Measurement: Issues and Practice, 2023
Research on Automated Essay Scoring has become increasingly important because it serves as a method for evaluating students' written responses at scale. Scalable methods for scoring written responses are needed as students migrate to online learning environments, resulting in the need to evaluate large numbers of written-response assessments. The…
Descriptors: Active Learning, Automation, Scoring, Essays
Peer reviewed
PDF on ERIC (full text)
Deschênes, Marie-France; Dionne, Éric; Dorion, Michelle; Grondin, Julie – Practical Assessment, Research & Evaluation, 2023
The use of the aggregate scoring method for scoring concordance tests requires the weighting of test items to be derived from the performance of a group of experts who take the test under the same conditions as the examinees. However, the average score of experts constituting the reference panel remains a critical issue in the use of these tests.…
Descriptors: Scoring, Tests, Evaluation Methods, Test Items