Publication Date
In 2025 | 47 |
Since 2024 | 246 |
Since 2021 (last 5 years) | 703 |
Since 2016 (last 10 years) | 1062 |
Since 2006 (last 20 years) | 1412 |
Author
Danielle S. McNamara | 18 |
McNamara, Danielle S. | 18 |
Allen, Laura K. | 10 |
Attali, Yigal | 9 |
Laura K. Allen | 9 |
Crossley, Scott A. | 8 |
Linn, Marcia C. | 8 |
Wilson, Joshua | 8 |
Bejar, Isaac I. | 7 |
Burstein, Jill | 7 |
Mihai Dascalu | 7 |
Audience
Practitioners | 90 |
Teachers | 50 |
Administrators | 29 |
Researchers | 20 |
Policymakers | 19 |
Students | 11 |
Community | 3 |
Counselors | 2 |
Media Staff | 2 |
Support Staff | 1 |
Location
Canada | 43 |
China | 39 |
Germany | 33 |
Australia | 31 |
United States | 30 |
California | 27 |
Japan | 27 |
Spain | 25 |
United Kingdom | 21 |
Turkey | 20 |
Sweden | 15 |
What Works Clearinghouse Rating
Meets WWC Standards without Reservations | 1 |
Meets WWC Standards with or without Reservations | 1 |
Sungbok Shin – ProQuest LLC, 2024
Data visualization is a powerful strategy for using graphics to represent data for effective communication and analysis. Unfortunately, creating effective data visualizations is a challenge for both novice and expert designers. The task often involves an iterative process of trial and error, which, by its nature, is time-consuming. Designers…
Descriptors: Artificial Intelligence, Computer Simulation, Visualization, Feedback (Response)
Blaženka Divjak; Barbi Svetec; Damir Horvat – Journal of Computer Assisted Learning, 2024
Background: Sound learning design should be based on the constructive alignment of intended learning outcomes (LOs), teaching and learning activities, and formative and summative assessment. Assessment validity strongly relies on its alignment with LOs. Valid and reliable formative assessment can be analysed as a predictor of students' academic…
Descriptors: Automation, Formative Evaluation, Test Validity, Test Reliability
Hosnia M. M. Ahmed; Shaymaa E. Sorour – Education and Information Technologies, 2024
Evaluating the quality of university exam papers is crucial for universities seeking institutional and program accreditation. Currently, exam papers are assessed manually, a process that can be tedious, lengthy, and in some cases, inconsistent. This is often due to the focus on assessing only the formal specifications of exam papers. This study…
Descriptors: Higher Education, Artificial Intelligence, Writing Evaluation, Natural Language Processing
Héctor J. Pijeira-Díaz; Sophia Braumann; Janneke van de Pol; Tamara van Gog; Anique B. H. Bruin – British Journal of Educational Technology, 2024
Advances in computational language models increasingly enable adaptive support for self-regulated learning (SRL) in digital learning environments (DLEs; e.g., via automated feedback). However, the accuracy of those models is a common concern for educational stakeholders (e.g., policymakers, researchers, teachers and learners themselves). We compared…
Descriptors: Computational Linguistics, Independent Study, Secondary School Students, Causal Models
Bin Tan; Hao-Yue Jin; Maria Cutumisu – Computer Science Education, 2024
Background and Context: Computational thinking (CT) has been increasingly added to K-12 curricula, prompting teachers to grade more and more CT artifacts. This has led to a rise in automated CT assessment tools. Objective: This study examines the scope and characteristics of publications that use machine learning (ML) approaches to assess…
Descriptors: Computation, Thinking Skills, Artificial Intelligence, Student Evaluation
Abdulkadir Kara; Eda Saka Simsek; Serkan Yildirim – Asian Journal of Distance Education, 2024
Evaluation is an essential component of the learning process when discerning learning situations. Assessing natural language responses, like short answers, takes time and effort. Artificial intelligence and natural language processing advancements have led to more studies on automatically grading short answers. In this review, we systematically…
Descriptors: Automation, Natural Language Processing, Artificial Intelligence, Grading
Transparency Improves the Accuracy of Automation Use, but Automation Confidence Information Does Not
Monica Tatasciore; Luke Strickland; Shayne Loft – Cognitive Research: Principles and Implications, 2024
Increased automation transparency can improve the accuracy of automation use but can lead to increased bias towards agreeing with advice. Information about the automation's confidence in its advice may also increase the predictability of automation errors. We examined the effects of providing automation transparency, automation confidence…
Descriptors: Automation, Access to Information, Information Technology, Bias
Walter Gander – Informatics in Education, 2024
When the new programming language Pascal was developed in the 1970s, Walter Gander did not like it, because many features that he had appreciated in earlier programming languages were missing in Pascal. For example, the block structure was gone, there were no dynamic arrays, no functions or procedures were allowed as parameters of a procedure,…
Descriptors: Computer Software, Programming Languages, Algorithms, Automation
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges for their broad-scale adoption: A technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
Leila Ouahrani; Djamal Bennouar – International Journal of Artificial Intelligence in Education, 2024
We consider the reference-based approach to Automatic Short Answer Grading (ASAG), which involves scoring a constructed textual student answer by comparing it to a teacher-provided reference answer. The reference answer does not cover the variety of student answers, as it contains only specific examples of correct answers. Considering other language…
Descriptors: Grading, Automation, Answer Keys, Tests
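The reference-based setup described in the preceding abstract can be illustrated with a small sketch: score a student answer by its similarity to the teacher's reference answer. The Python sketch below uses TF-IDF cosine similarity; the helper name score_answer, the similarity-to-points mapping, and the example sentences are illustrative assumptions, not the article's actual model.

# Sketch of reference-based short-answer scoring: grade a student answer by
# its lexical similarity to a teacher-provided reference answer (illustrative
# only; the article's model is more elaborate).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_answer(student_answer, reference_answer, max_points=5.0):
    # Map TF-IDF cosine similarity onto a 0..max_points scale.
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([reference_answer, student_answer])
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return round(similarity * max_points, 2)

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
student = "Plants use light to make glucose, storing the energy chemically."
print(score_answer(student, reference))

One obvious limitation, which the abstract also points out, is that a single reference answer cannot represent the full variety of correct student phrasings, so purely lexical similarity tends to under-score valid paraphrases.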
Hossein Kermani; Alireza Bayat Makou; Amirali Tafreshi; Amir Mohamad Ghodsi; Ali Atashzar; Ali Nojoumi – International Journal of Social Research Methodology, 2024
Despite the increasing adoption of automated text analysis in communication studies, its strengths and weaknesses in framing analysis are so far unknown. Few efforts have been made toward the automatic detection of networked frames. Drawing on recent developments in this field, we carry out a comparative exploration using Latent Dirichlet Allocation…
Descriptors: COVID-19, Pandemics, Automation, Foreign Countries
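The Latent Dirichlet Allocation step mentioned in the abstract can be sketched in a few lines of Python with scikit-learn. The tiny corpus, the number of topics, and the other parameters below are placeholders, not the study's data or settings.

# Sketch of LDA topic extraction, one ingredient of automated framing
# analysis; corpus and parameters are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "lockdown measures slow the spread of the virus",
    "vaccine rollout accelerates across the country",
    "economic impact of the pandemic on small businesses",
    "schools reopen with masking and distancing rules",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")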
Steven Ullman – ProQuest LLC, 2024
Modern Information Technology (IT) infrastructure and open-source software (OSS) have revolutionized our ability to access and process data, enabling us to tackle increasingly complex problems and challenges. While these technologies provide substantial benefits, they often expose users to vulnerabilities that can severely damage individuals and…
Descriptors: Artificial Intelligence, Information Technology, Information Systems, Computer Security
Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantically rich textual representations, have been increasingly used to represent short answers and predict the grade. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
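Embedding-based grading, the approach surveyed in the entry above, can be sketched by encoding the reference and student answers with a sentence-embedding model and using their cosine similarity as the grade signal. The model name, the cutoff, and the example answers below are illustrative assumptions, not drawn from the article.

# Sketch of embedding-based short-answer grading via cosine similarity of
# sentence embeddings; model choice and cutoff are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "Mitosis produces two genetically identical daughter cells."
student = "In mitosis, the cell divides into two cells with identical DNA."

embeddings = model.encode([reference, student], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# A simple cutoff turns the similarity into a correct/needs-review decision.
print(f"similarity={similarity:.2f}", "correct" if similarity >= 0.7 else "review")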
Ren, Ping; Yang, Liu; Luo, Fang – Education and Information Technologies, 2023
Student feedback is crucial for evaluating the performance of teachers and the quality of teaching. Free-form text comments obtained from open-ended questions are seldom analyzed comprehensively, since they are difficult to interpret and score compared to standardized rating scales. To solve this problem, the present study employed aspect-level…
Descriptors: Student Attitudes, Student Evaluation of Teacher Performance, Feedback (Response), Prediction
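Aspect-level sentiment analysis of free-form comments, as used in the study above, can be roughly sketched as two steps: route each comment to an aspect (for example by keyword matching) and classify its polarity with an off-the-shelf sentiment model. The aspect lexicon, the sample comments, and the use of a generic sentiment pipeline below are illustrative assumptions, not the study's method.

# Rough sketch of aspect-level sentiment scoring for course comments:
# keyword-based aspect routing plus an off-the-shelf polarity classifier.
from transformers import pipeline

ASPECT_KEYWORDS = {
    "clarity": ["explain", "clear", "confusing"],
    "feedback": ["feedback", "comments", "grading"],
    "pace": ["pace", "fast", "slow"],
}

classifier = pipeline("sentiment-analysis")

comments = [
    "The lectures were clear and well explained.",
    "Grading feedback arrived far too late to be useful.",
    "The pace was too fast in the second half.",
]

for sentence in comments:
    lowered = sentence.lower()
    aspects = [a for a, kws in ASPECT_KEYWORDS.items() if any(k in lowered for k in kws)]
    label = classifier(sentence)[0]["label"]
    print(f"{aspects or ['general']} -> {label}: {sentence}")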
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous work on ASAG has mainly used non-neural or neural methods. However, the former depends on handcrafted features and is limited by its inflexibility and high cost, while the latter ignores global word co-occurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs
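The global word co-occurrence that the abstract says earlier neural methods ignore is typically captured as a corpus-level graph. Below is a minimal Python sketch of building such a graph with a sliding window using networkx; the corpus and window size are illustrative assumptions, not the paper's model.

# Sketch of a global word co-occurrence graph built with a sliding window;
# graph-based ASAG models operate on structures of this kind.
import networkx as nx

corpus = [
    "photosynthesis converts light energy into chemical energy",
    "plants store chemical energy as glucose",
]

def cooccurrence_graph(texts, window=2):
    graph = nx.Graph()
    for text in texts:
        tokens = text.split()
        for i, word in enumerate(tokens):
            for other in tokens[i + 1 : i + 1 + window]:
                weight = graph.get_edge_data(word, other, {}).get("weight", 0)
                graph.add_edge(word, other, weight=weight + 1)
    return graph

g = cooccurrence_graph(corpus)
print(sorted(g.edges(data="weight"), key=lambda e: -e[2])[:5])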