Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration, and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming and error-prone. This is because each MCQ consists of a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges for their broad-scale adoption: A technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Rebecka Weegar; Peter Idestam-Almquist – International Journal of Artificial Intelligence in Education, 2024
Machine learning methods can be used to reduce the manual workload in exam grading, making it possible for teachers to spend more time on other tasks. However, when it comes to grading exams, fully eliminating manual work is not yet possible even with very accurate automated grading, as any grading mistakes could have significant consequences for…
Descriptors: Grading, Computer Assisted Testing, Introductory Courses, Computer Science Education
Anna Filighera; Sebastian Ochs; Tim Steuer; Thomas Tregel – International Journal of Artificial Intelligence in Education, 2024
Automatic grading models are valued for the time and effort saved during the instruction of large student bodies. Especially with the increasing digitization of education and interest in large-scale standardized testing, the popularity of automatic grading has risen to the point where commercial solutions are widely available and used. However,…
Descriptors: Cheating, Grading, Form Classes (Languages), Computer Software
Barrett, Michelle D.; Jiang, Bingnan; Feagler, Bridget E. – International Journal of Artificial Intelligence in Education, 2022
The appeal of a shorter testing time makes a computer adaptive testing approach highly desirable for use in multiple assessment and learning contexts. However, for those who have been tasked with designing, configuring, and deploying adaptive tests for operational use at scale, preparing an adaptive test is anything but simple. The process often…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Design Requirements
Ormerod, Christopher; Lottridge, Susan; Harris, Amy E.; Patel, Milan; van Wamelen, Paul; Kodeswaran, Balaji; Woolf, Sharon; Young, Mackenzie – International Journal of Artificial Intelligence in Education, 2023
We introduce a short answer scoring engine made up of an ensemble of deep neural networks and a Latent Semantic Analysis-based model to score short constructed responses for a large suite of questions from a national assessment program. We evaluate the performance of the engine and show that the engine achieves above-human-level performance on a…
Descriptors: Computer Assisted Testing, Scoring, Artificial Intelligence, Semantics
Schneider, Johannes; Richner, Robin; Riser, Micha – International Journal of Artificial Intelligence in Education, 2023
Autograding short textual answers has become much more feasible due to the rise of NLP and the increased availability of question-answer pairs brought about by a shift to online education. Autograding performance is still inferior to human grading. The statistical and black-box nature of state-of-the-art machine learning models makes them…
Descriptors: Grading, Natural Language Processing, Computer Assisted Testing, Ethics
Keith Cochran; Clayton Cohn; Peter Hastings; Noriko Tomuro; Simon Hughes – International Journal of Artificial Intelligence in Education, 2024
To succeed in the information age, students need to learn to communicate their understanding of complex topics effectively. This is reflected in both educational standards and standardized tests. To improve their writing ability for highly structured domains like scientific explanations, students need feedback that accurately reflects the…
Descriptors: Science Process Skills, Scientific Literacy, Scientific Concepts, Concept Formation
Doble, Christopher; Matayoshi, Jeffrey; Cosyn, Eric; Uzun, Hasan; Karami, Arash – International Journal of Artificial Intelligence in Education, 2019
A large-scale simulation study of the assessment effectiveness of a particular instantiation of knowledge space theory is described. In this study, data from more than 700,000 actual assessments in mathematics using the ALEKS (Assessment and LEarning in Knowledge Spaces) software were used to determine response probabilities for the same number of…
Descriptors: Test Reliability, Adaptive Testing, Mathematics Tests, Computer Assisted Testing
Kurdi, Ghader; Leo, Jared; Parsia, Bijan; Sattler, Uli; Al-Emari, Salam – International Journal of Artificial Intelligence in Education, 2020
While exam-style questions are a fundamental educational tool serving a variety of purposes, manual construction of questions is a complex process that requires training, experience, and resources. This, in turn, hinders and slows down the use of educational activities (e.g. providing practice questions) and new advances (e.g. adaptive testing)…
Descriptors: Computer Assisted Testing, Adaptive Testing, Natural Language Processing, Questioning Techniques
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
Vittorini, Pierpaolo; Menini, Stefano; Tonelli, Sara – International Journal of Artificial Intelligence in Education, 2021
Massive open online courses (MOOCs) provide hundreds of students with teaching materials, assessment tools, and collaborative instruments. The assessment activity, in particular, is demanding in terms of both time and effort; thus, the use of artificial intelligence can be useful to address and reduce the time and effort required. This paper…
Descriptors: Artificial Intelligence, Formative Evaluation, Summative Evaluation, Data
Passonneau, Rebecca J.; Poddar, Ananya; Gite, Gaurav; Krivokapic, Alisa; Yang, Qian; Perin, Dolores – International Journal of Artificial Intelligence in Education, 2018
Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive, and could benefit from an automated approach. We compare a main ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate…
Descriptors: Computer Assisted Testing, Writing Evaluation, Content Analysis, Scoring Rubrics
Perin, Dolores; Lauterbach, Mark – International Journal of Artificial Intelligence in Education, 2018
The problem of poor writing skills at the postsecondary level is a large and troubling one. This study investigated the writing skills of low-skilled adults attending college developmental education courses by determining whether variables from an automated scoring system were predictive of human scores on writing quality rubrics. The human-scored…
Descriptors: College Students, Writing Evaluation, Writing Skills, Developmental Studies Programs