Showing 736 to 750 of 10,088 results
Peer reviewed
Avargil, Shirly; Saxena, Arunika – Research in Science Education, 2023
Drawing, constructing, and explaining a model of a given abstract phenomenon is a challenging task. In this study, students were engaged in the "Project-Based Inquiry Science (PBIS)-Air Quality" learning unit as part of their chemistry curriculum. The study aims to determine how well students understand chemistry conceptually after…
Descriptors: Science Instruction, Chemistry, Pollution, Scoring Rubrics
Peer reviewed
Full text (PDF) available on ERIC
Neni Hasnunidah; Windayani; Dina Maulina – Journal of Biological Education Indonesia (Jurnal Pendidikan Biologi Indonesia), 2023
This study aimed to determine differences in students' argumentation abilities on the subject matter of cell structure and function through a scientific approach in high schools with different accreditation ratings. The research design used was "ex post facto." The sampling technique used was purposive sampling, with a total sample of 111…
Descriptors: Persuasive Discourse, High School Students, Student Attitudes, Science Instruction
Peer reviewed
Full text (PDF) available on ERIC
Kornwipa Poonpon; Paiboon Manorom; Wirapong Chansanam – Contemporary Educational Technology, 2023
Automated essay scoring (AES) has become a valuable tool in educational settings, providing efficient and objective evaluations of student essays. However, the majority of AES systems have primarily focused on native English speakers, leaving a critical gap in the evaluation of non-native speakers' writing skills. This research addresses this gap…
Descriptors: Automation, Essays, Scoring, English (Second Language)
Peer reviewed
Full text (PDF) available on ERIC
Salaheddin J. Juneidi – International Society for Technology, Education, and Science, 2023
Assessment is not an end in itself but a vehicle for educational improvement. It is vital to the educational process, as it enhances teaching and learning, promotes accountability, motivates students, guides instructional decisions, and drives systemic improvements. Assessment plays a crucial role in the educational process as it serves…
Descriptors: Engineering Education, Feedback (Response), Scoring Rubrics, Value Added Models
Tamal Krishna Kayal – Sage Research Methods Cases, 2023
Performance in primary education can be measured in terms of different variables, like enrolment rate, attendance rate, or levels of learning achievement of students. Thus, for an analysis of the comparative performance of units, like country, state, or district, we need an aggregate measure or an index developed from these variables. In…
Descriptors: Foreign Countries, Elementary Education, Measurement Techniques, Evaluation Methods
Peer reviewed
Kaplan, Sandra N. – Gifted Child Today, 2019
This column examines the role of rubrics in evaluating gifted students' performance as a part of examining issues surrounding the overall evaluation of gifted programs. The author examines how rubrics can be responsive to the group of gifted students and still be cognizant of the individual gifted learner who has specific talents, potential, and…
Descriptors: Role, Scoring Rubrics, Academically Gifted, Gifted Education
Peer reviewed
Lane, Suzanne – Journal of Educational Measurement, 2019
Rater-mediated assessments require the evaluation of the accuracy and consistency of the inferences made by the raters to ensure the validity of score interpretations and uses. Modeling rater response processes allows for a better understanding of how raters map their representations of the examinee performance to their representation of the…
Descriptors: Responses, Accuracy, Validity, Interrater Reliability
Peer reviewed
Hopster-den Otter, Dorien; Wools, Saskia; Eggen, Theo J. H. M.; Veldkamp, Bernard P. – Journal of Educational Measurement, 2019
In educational practice, test results are used for several purposes; however, validity research has focused mainly on summative assessment. This article aimed to provide a general framework for validating formative assessment. The authors applied the argument-based approach to validation to the context of formative assessment.…
Descriptors: Formative Evaluation, Test Validity, Scores, Inferences
Peer reviewed
Full text (PDF) available on ERIC
Chowdhury, Faieza – International Education Studies, 2019
Teaching is filled with spirited debate about the best practices for improving students' learning and performance. Today, educators from different parts of the world are supporting the use of rubrics as an instructional tool and highlighting the enormous contributions that rubrics can make in the teaching-learning paradigm. A rubric is a useful…
Descriptors: Scoring Rubrics, Student Evaluation, Feedback (Response), Grading
Peer reviewed
Full text (PDF) available on ERIC
Sasithorn Limgomolvilas; Patsawut Sukserm – LEARN Journal: Language Education and Acquisition Research Network, 2025
The assessment of English speaking in EFL environments can be inherently subjective and influenced by factors beyond linguistic ability, including the choice of assessment criteria and even the rubric type. In classroom assessment, the type of rubric recommended for English speaking tasks is the analytical rubric. Driven by three aims, this…
Descriptors: Oral Language, Speech Communication, English (Second Language), Second Language Learning
Peer reviewed
Celeste Combrinck; Nelé Loubser – Discover Education, 2025
Written assignments for large classes pose a significantly greater challenge in the age of generative AI (GenAI). Suggestions such as oral exams and formative assessments are not always feasible with many students in a class. Therefore, we conducted a study in South Africa involving 280 Honors students to explore the usefulness of Turnitin's AI…
Descriptors: Foreign Countries, Artificial Intelligence, Large Group Instruction, Alternative Assessment
Peer reviewed
Marwa Eltanahy; Nasser Mansour – Innovations in Education and Teaching International, 2025
The increasing emphasis on a competency-based learning approach in entrepreneurial-STEM (E-STEM) necessitates competency-based assessment tools to track students' entrepreneurial development and enhance the quality of E-STEM projects. This study aims to create a valid analytical rubric for assessing students' E-STEM competencies. Using a…
Descriptors: Scoring Rubrics, Student Evaluation, Competence, Entrepreneurship
Peer reviewed
Zhiqiang Yang; Chengyuan Yu – Asia Pacific Education Review, 2025
This study investigated the test fairness of the translation section of a large-scale English test in China by examining its Differential Test Functioning (DTF) and Differential Item Functioning (DIF) across gender and major. Regarding DTF, the entire translation section exhibits partial strong measurement invariance across female and male…
Descriptors: Multiple Choice Tests, Test Items, Scoring, Translation
Peer reviewed
Full text (PDF) available on ERIC
Georgios Zacharis; Stamatios Papadakis – Educational Process: International Journal, 2025
Background/purpose: Generative artificial intelligence (GenAI) is often promoted as a transformative tool for assessment, yet evidence of its validity compared to human raters remains limited. This study examined whether an AI-based rater could be used interchangeably with trained faculty in scoring complex coursework. Materials/methods:…
Descriptors: Artificial Intelligence, Technology Uses in Education, Computer Assisted Testing, Grading
Peer reviewed
Full text (PDF) available on ERIC
Wyse, Adam E. – Practical Assessment, Research & Evaluation, 2018
One common modification to the Angoff standard-setting method is to have panelists round their ratings to the nearest 0.05 or 0.10 instead of 0.01. Several reasons have been offered for why such rounding may make sense. In this article, we examine one reason that has been suggested, which is…
Descriptors: Interrater Reliability, Evaluation Criteria, Scoring Formulas, Achievement Rating