Showing 1 to 15 of 16 results
Peer reviewed
Saha, Sujan Kumar; Rao C. H., Dhawaleswar – Interactive Learning Environments, 2022
Assessment plays an important role in education. Recently proposed machine learning-based systems for answer grading demand large amounts of training data, which are not available in many application areas. Creating sufficient training data is costly and time-consuming. As a result, automatic long-answer grading remains a challenge. In this paper, we…
Descriptors: Middle School Students, Grading, Artificial Intelligence, Automation
Peer reviewed
Ricardo Conejo Muñoz; Beatriz Barros Blanco; José del Campo-Ávila; José L. Triviño Rodriguez – IEEE Transactions on Learning Technologies, 2024
Automatic question generation and the assessment of procedural knowledge remain challenging research topics. This article focuses on the case of parsing techniques for compiler construction. There are two well-known parsing techniques: top-down parsing with LL(1) grammars and bottom-up parsing with LR(1) grammars. Learning these techniques and…
Descriptors: Automation, Questioning Techniques, Knowledge Level, Language
Peer reviewed
Anita Pásztor-Kovács; Attila Pásztor; Gyöngyvér Molnár – Interactive Learning Environments, 2023
In this paper, we present an agenda for the research directions we recommend in addressing the issues of realizing and evaluating communication in CPS instruments. We outline our ideas on potential ways to improve: (1) generalizability in Human-Human assessment tools and ecological validity in Human-Agent ones; (2) flexible and convenient use of…
Descriptors: Cooperation, Problem Solving, Evaluation Methods, Teamwork
Peer reviewed
Tavares, Paula Correia; Gomes, Elsa Ferreira; Henriques, Pedro Rangel; Vieira, Diogo Manuel – Open Education Studies, 2022
Learners of computer programming often fail introductory courses because solving problems with computers is a complex task. The most important reason for that failure concerns motivation, which strongly impacts the learning process. In this paper we discuss how techniques such as program animation and automatic…
Descriptors: Learner Engagement, Programming, Computer Science Education, Problem Solving
Peer reviewed (PDF on ERIC)
Malik, Ali; Wu, Mike; Vasavada, Vrinda; Song, Jinpeng; Coots, Madison; Mitchell, John; Goodman, Noah; Piech, Chris – International Educational Data Mining Society, 2021
Access to high-quality education at scale is limited by the difficulty of providing student feedback on open-ended assignments in structured domains like programming, graphics, and short response questions. This problem has proven to be exceptionally difficult: for humans, it requires large amounts of manual work, and for computers, until…
Descriptors: Grading, Accuracy, Computer Assisted Testing, Automation
Peer reviewed
Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan – Journal of Chemical Education, 2016
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…
Descriptors: Science Education, Chemistry, Thermodynamics, Databases
Peer reviewed
Farias, Gonzalo; Muñoz de la Peña, David; Gómez-Estern, Fabio; De la Torre, Luis; Sánchez, Carlos; Dormido, Sebastián – Interactive Learning Environments, 2016
Automatic evaluation is a challenging field that the academic community has addressed in order to reduce the assessment workload. In this work we present a new element for the authoring tool Easy Java Simulations (EJS). This element, named the automatic evaluation element (AEE), provides automatic evaluation to virtual and remote…
Descriptors: Interaction, Laboratories, Distance Education, Educational Technology
Peer reviewed
Haudek, Kevin C.; Kaplan, Jennifer J.; Knight, Jennifer; Long, Tammy; Merrill, John; Munn, Alan; Nehm, Ross; Smith, Michelle; Urban-Lurain, Mark – CBE - Life Sciences Education, 2011
Concept inventories, consisting of multiple-choice questions designed around common student misconceptions, are designed to reveal student thinking. However, students often have complex, heterogeneous ideas about scientific concepts. Constructed-response assessments, in which students must create their own answer, may better reveal students'…
Descriptors: STEM Education, Student Evaluation, Formative Evaluation, Scientific Concepts
Peer reviewed
Fernandez Aleman, J. L. – IEEE Transactions on Education, 2011
Automated assessment systems can be useful for both students and instructors. Ranking and immediate feedback can have a strongly positive effect on student learning. This paper presents an experience using automatic assessment in a programming tools course. The proposal aims at extending the traditional use of an online judging system with a…
Descriptors: Programming, Computer Science Education, College Students, Student Evaluation
Peer reviewed
Ayres, Karen L.; Underwood, Fiona M. – Bioscience Education, 2010
We describe the main features of a program written to perform electronic marking of quantitative or simple text questions. One of its main benefits is that it can check answers for consistency with earlier errors, and so can cope with a range of numerical questions. We summarise our experience of using it in a statistics course taught to 200…
Descriptors: College Students, College Science, Biology, Grading
Peer reviewed
Georgouli, Katerina; Guerreiro, Pedro – International Journal on E-Learning, 2011
This paper presents the successful integration of the evaluation engine of Mooshak into the open source learning management system Claroline. Mooshak is an open source online automatic judge that has been used for international and national programming competitions. Although it was originally designed for programming competitions, Mooshak has also…
Descriptors: Foreign Countries, Electronic Learning, Programming, Internet
Bolch, Matt – Technology & Learning, 2009
The ever-increasing standards of No Child Left Behind regulations and various state assessments have put more pressure on teachers and administrators to monitor the learning process. Fortunately, the advent of technology is allowing teachers to test more often to prepare students for high-stakes tests and for districts to understand results for…
Descriptors: Federal Legislation, High Stakes Tests, Data Analysis, Student Evaluation
Peer reviewed
Gutierrez, Eladio; Trenas, Maria A.; Ramos, Julian; Corbera, Francisco; Romero, Sergio – Computers & Education, 2010
This work describes a new "Moodle" module developed to give support to the practical content of a basic computer organization course. This module goes beyond the mere hosting of resources and assignments. It makes use of an automatic checking and verification engine that works on the VHDL designs submitted by the students. The module automatically…
Descriptors: Assignments, Teamwork, Units of Study, Educational Assessment
Peer reviewed (PDF on ERIC)
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed (PDF on ERIC)
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation