Showing 661 to 675 of 10,088 results
Peer reviewed
PDF on ERIC Download full text
Gümüs, Muhammed Murat; Kayhan, Osman; Korkmaz, Özgen; Altun, Halis; Yilmaz, Nihat – International Online Journal of Primary Education, 2023
This study aims to create a rubric based on the pedagogical properties of educational robots for pre-school students and determine the compliance level with educational robot sets. In this sense, the study is considered a first and significant step toward selecting robots based on pedagogical-driven factors. For this aim, a mixed-method research…
Descriptors: Classification, Educational Technology, Robotics, Preschool Education
Peer reviewed
PDF on ERIC Download full text
Yakob, Muhammad; Sari, Ratih Permana; Hasibuan, Molani Paulina; Nahadi, Nahadi; Anwar, Sjaeful; El Islami, R. Ahmad Zaky – Journal of Baltic Science Education, 2023
In the virtual laboratory learning process, students' scientific abilities in solving a problem are very important to explore. This study aims to develop classroom-based authentic assessment instruments through virtual laboratory learning in chemistry to see an increase in students' scientific performance. The research was conducted at a public…
Descriptors: Performance Based Assessment, Feasibility Studies, Electronic Learning, Laboratory Experiments
Peer reviewed
Direct link
Lockwood, Adam B.; Klatka, Kelsey; Freeman, Kelli; Farmer, Ryan L.; Benson, Nicholas – Journal of Psychoeducational Assessment, 2023
Sixty-three Woodcock-Johnson IV Tests of Achievement protocols, administered by 26 school psychology trainees, were examined to determine the frequency of examiner errors. Errors were noted on all protocols and ranged from 8 to 150 per administration. Critical (e.g., start, stop, and calculation) errors were noted on roughly 97% of protocols.…
Descriptors: Achievement Tests, School Psychology, Counselor Training, Trainees
Peer reviewed
Direct link
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Peer reviewed
PDF on ERIC Download full text
Joy Robbins; Milena Marinkova – Practitioner Research in Higher Education, 2023
While studies have extolled the value of using online rubrics, the benefits have usually been presented in terms of enhancing marking or delivery of teacher feedback. These benefits are welcome, but they nonetheless couch digital as simply an improved way for "old paradigm" transmission approaches to feedback that do little to help…
Descriptors: Scoring Rubrics, Barriers, Feedback (Response), Information Literacy
Peer reviewed
PDF on ERIC Download full text
Tahereh Firoozi; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The proliferation of large language models represents a paradigm shift in the landscape of automated essay scoring (AES) systems, fundamentally elevating their accuracy and efficacy. This study presents an extensive examination of large language models, with a particular emphasis on the transformative influence of transformer-based models, such as…
Descriptors: Turkish, Writing Evaluation, Essays, Accuracy
Peer reviewed
Direct link
Mary Tess Urbanek; Benjamin Moritz; Alena Moon – Chemistry Education Research and Practice, 2023
While uncertainty is inherent to doing science, it is often excluded from science instruction, especially postsecondary chemistry instruction. There are a variety of barriers to infusing uncertainty into the postsecondary chemistry classroom, including ensuring "productive" struggle with uncertainty, evaluating student engagement with…
Descriptors: Chemistry, Science Instruction, Student Attitudes, Persuasive Discourse
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
Peer reviewed
Direct link
Panadero, Ernesto; Jonsson, Anders; Pinedo, Leire; Fernández-Castilla, Belén – Educational Psychology Review, 2023
Rubrics are widely used as instructional and learning instruments. Though they have been claimed to have positive effects on students' learning, these effects have not been meta-analyzed. Our aim was to synthesize the effects of rubrics on academic performance, self-regulated learning, and self-efficacy. The moderator effect of the following…
Descriptors: Scoring Rubrics, Academic Achievement, Self Management, Learning Strategies
Peer reviewed
Direct link
Gerard, Libby; Kidron, Ady; Linn, Marcia C. – International Journal of Computer-Supported Collaborative Learning, 2019
This paper illustrates how the combination of teacher and computer guidance can strengthen collaborative revision and identifies opportunities for teacher guidance in a computer-supported collaborative learning environment. We took advantage of natural language processing tools embedded in an online, collaborative environment to automatically…
Descriptors: Computer Assisted Testing, Student Evaluation, Science Tests, Scoring
Peer reviewed
PDF on ERIC Download full text
Zhang, Haoran; Litman, Diane – Grantee Submission, 2017
Manually grading the Response to Text Assessment (RTA) is labor intensive. Therefore, an automatic method is being developed for scoring analytical writing when the RTA is administered in large numbers of classrooms. Our long-term goal is to also use this scoring method to provide formative feedback to students and teachers about students' writing…
Descriptors: Automation, Scoring, Evidence, Scoring Rubrics
Lichtenstein, Robert – Communique, 2020
A neuropsychologist describes a child's performance on a measure of short-term verbal memory as falling in the low average range. Another neuropsychologist reports that a child scored in the below average range. A third neuropsychologist describes a child's performance as mildly impaired. Yet, all three are referring to the same score on the same…
Descriptors: Scores, Neuropsychology, Short Term Memory, Tests
Peer reviewed
Direct link
Bimpeh, Yaw; Pointer, William; Smith, Ben Alexander; Harrison, Liz – Applied Measurement in Education, 2020
Many high-stakes examinations in the United Kingdom (UK) use both constructed-response items and selected-response items. We need to evaluate the inter-rater reliability for constructed-response items that are scored by humans. While there are a variety of methods for evaluating rater consistency across ratings in the psychometric literature, we…
Descriptors: Scoring, Generalizability Theory, Interrater Reliability, Foreign Countries
Hauk, Shandy; Kaser, Joyce – American Journal of Evaluation, 2020
This brief report describes the conception, development, and use of a rubric in evaluating the feasibility of a new program. The evaluators searched for a meta-analytic tool to help organize ideas about what data to collect, and why, in order to create a detailed story of feasibility of implementation for the client. The main advantage of using…
Descriptors: Scoring Rubrics, Program Implementation, Program Evaluation, Feasibility Studies
Peer reviewed
Direct link
Skaggs, Gary; Hein, Serge F.; Wilkins, Jesse L. M. – Educational Measurement: Issues and Practice, 2020
In test-centered standard-setting methods, borderline performance can be represented by many different profiles of strengths and weaknesses. As a result, asking panelists to estimate item or test performance for a hypothetical group of borderline examinees, or a typical borderline examinee, may be an extremely difficult task and one that can…
Descriptors: Standard Setting (Scoring), Cutting Scores, Testing Problems, Profiles