Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantic-rich textual representations, have been increasingly used to represent short answers and predict the grade. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT performs in grading university exams compared to human teachers. Aspects investigated include consistency, large discrepancies, and answer length. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
Schneider, Johannes; Richner, Robin; Riser, Micha – International Journal of Artificial Intelligence in Education, 2023
Autograding short textual answers has become much more feasible due to the rise of NLP and the increased availability of question-answer pairs brought about by a shift to online education. Autograding performance is still inferior to human grading. The statistical and black-box nature of state-of-the-art machine learning models makes them…
Descriptors: Grading, Natural Language Processing, Computer Assisted Testing, Ethics
Esteban Guevara Hidalgo – International Journal for Educational Integrity, 2025
The COVID-19 pandemic had a profound impact on education, forcing many teachers and students who were not used to online education to adapt to an unanticipated reality by improvising new teaching and learning methods. Within the realm of virtual education, the evaluation methods underwent a transformation, with some assessments shifting towards…
Descriptors: Foreign Countries, Higher Education, COVID-19, Pandemics
Celeste Combrinck; Nelé Loubser – Discover Education, 2025
Written assignments for large classes pose a far more significant challenge in the age of the GenAI revolution. Suggestions such as oral exams and formative assessments are not always feasible with many students in a class. Therefore, we conducted a study in South Africa and involved 280 Honors students to explore the usefulness of Turnitin's AI…
Descriptors: Foreign Countries, Artificial Intelligence, Large Group Instruction, Alternative Assessment
Rowlett, Peter – International Journal of Mathematical Education in Science and Technology, 2022
A partially automated method of assessment is proposed, in which automated question setting is used to generate individualized versions of a coursework assignment that students complete and that is then marked by hand. This is designed to be (a) comparable to a traditional written coursework assignment in validity, in that complex and open-ended tasks…
Descriptors: Mathematics Education, College Mathematics, Computer Assisted Testing, Evaluation Methods
Mustafa, Faisal; Raisha, Siti – MEXTESOL Journal, 2021
Assessment of the learning process refers to assessing the quality of students' learning as they complete learning activities, such as how much time they spend reading materials, how many times they repeat quizzes when they get low scores, or whether their posts in a forum are helpful for other students. Assessment of process is more appropriate…
Descriptors: Evaluation Methods, Student Evaluation, English (Second Language), English Language Learners
Jim Webber – College Composition and Communication, 2017
Proponents of reframing argue that prophetic pragmatism entails redirecting contemporary education reforms. While this judgment may defend our professional standing, it overlooks the consequences of redirecting reform's appeals to global competition, which preclude public participation in defining the goals and measures of literacy education. This…
Descriptors: Evaluation Methods, Artificial Intelligence, Computer Assisted Testing, Grading
Çekiç, Ahmet; Bakla, Arif – International Online Journal of Education and Teaching, 2021
The Internet and the software stores for mobile devices come with a huge number of digital tools for any task, and those intended for digital formative assessment (DFA) have burgeoned exponentially in the last decade. These tools vary in terms of their functionality, pedagogical quality, cost, operating systems and so forth. Teachers and learners…
Descriptors: Formative Evaluation, Futures (of Society), Computer Assisted Testing, Guidance
Knight, Simon; Buckingham Shum, Simon; Ryan, Philippa; Sándor, Ágnes; Wang, Xiaolong – International Journal of Artificial Intelligence in Education, 2018
Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to…
Descriptors: Writing Evaluation, Natural Language Processing, Legal Education (Professions), Undergraduate Students
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
Nutbrown, Stephen; Higgins, Colin – Computer Science Education, 2016
This article explores the suitability of static analysis techniques based on the abstract syntax tree (AST) for the automated assessment of early/mid degree level programming. Focus is on fairness, timeliness and consistency of grades and feedback. Following investigation into manual marking practises, including a survey of markers, the assessment…
Descriptors: Programming, Grading, Evaluation Methods, Feedback (Response)
Thompson, Darrall G. – Journal of Learning Analytics, 2016
This paper attempts to address the possibility of real change after a hundred years of exam-based assessments that produce a single mark or grade as feedback on students' progress and abilities. It uses visual feedback and analysis of graduate attribute assessment to foreground the diversity of aspects of a student's performance across subject…
Descriptors: Evaluation Methods, Student Evaluation, Self Evaluation (Individuals), Feedback (Response)
Andjelic, Svetlana; Cekerevac, Zoran – Education and Information Technologies, 2014
This article presents the original model of the computer adaptive testing and grade formation, based on scientifically recognized theories. The base of the model is a personalized algorithm for selection of questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…
Descriptors: Computer Assisted Testing, Educational Technology, Grades (Scholastic), Test Construction