Showing all 9 results
Peer reviewed
Direct link
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Peer reviewed
Direct link
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially allowing for the scoring of these items cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Peer reviewed
Direct link
Mathias Benedek; Roger E. Beaty – Journal of Creative Behavior, 2025
The PISA assessment 2022 of creative thinking was a moonshot effort that introduced significant advancements over existing creativity tests, including a broad range of domains (written, visual, social, and scientific), implementation in many languages, and sophisticated scoring methods. PISA 2022 demonstrated the general feasibility of assessing…
Descriptors: Creative Thinking, Creativity, Creativity Tests, Scoring
Peer reviewed
Direct link
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for the evaluation of university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
Peer reviewed
Direct link
Andrew Runge; Sarah Goodwin; Yigal Attali; Mya Poe; Phoebe Mulcaire; Kai-Ling Lo; Geoffrey T. LaFlair – Language Testing, 2025
A longstanding criticism of traditional high-stakes writing assessments is their use of static prompts in which test takers compose a single text in response to a prompt. These static prompts do not allow measurement of the writing process. This paper describes the development and validation of an innovative interactive writing task. After the…
Descriptors: Material Development, Writing Evaluation, Writing Assignments, Writing Skills
Peer reviewed
PDF on ERIC: Download full text
Saida Ulfa; Ence Surahman; Agus Wedi; Izzul Fatawi; Rex Bringula – Knowledge Management & E-Learning, 2025
Online assessment is one of the important factors in online learning today. An online summary assessment is an example of an open-ended question, offering the advantage of probing students' understanding of the learning materials. However, grading students' summary writings is challenging due to the time-consuming process of evaluating students'…
Descriptors: Knowledge Management, Automation, Documentation, Feedback (Response)
Peer reviewed
Direct link
Dan Song; Alexander F. Tang – Language Learning & Technology, 2025
While many studies have addressed the benefits of technology-assisted L2 writing, limited research has delved into how generative artificial intelligence (GAI) supports students in completing their writing tasks in Mandarin Chinese. In this study, 26 university-level Mandarin Chinese foreign language students completed two writing tasks on two…
Descriptors: Artificial Intelligence, Second Language Learning, Standardized Tests, Writing Tests
Peer reviewed
Direct link
Naveed Saif; Sadaqat Ali; Abner Rubin; Soliman Aljarboa; Nabil Sharaf Almalki; Mrim M. Alnfiai; Faheem Khan; Sajid Ullah Khan – Educational Technology & Society, 2025
In the swiftly evolving landscape of education, the fusion of Artificial Intelligence's ingenuity with the dynamic capabilities of chatbot technology has ignited a transformative paradigm shift. This convergence is not merely a technological integration but a profound reshaping of the fundamental principles of pedagogy, fundamentally redefining…
Descriptors: Artificial Intelligence, Technology Uses in Education, Readiness, Technological Literacy
Peer reviewed
PDF on ERIC: Download full text
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests