Showing all 15 results
Peer reviewed
PDF on ERIC (full text)
Bin Tan; Nour Armoush; Elisabetta Mazzullo; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2025
This study reviews existing research on the use of large language models (LLMs) for automatic item generation (AIG). We performed a comprehensive literature search across seven research databases, selected studies based on predefined criteria, and summarized 60 relevant studies that employed LLMs in the AIG process. We identified the most commonly…
Descriptors: Artificial Intelligence, Test Items, Automation, Test Format
Peer reviewed
Direct link
Monica Casella; Pasquale Dolce; Michela Ponticorvo; Nicola Milano; Davide Marocco – Educational and Psychological Measurement, 2024
Short-form development is an important topic in psychometric research, which requires researchers to face methodological choices at different steps. The statistical techniques traditionally used for shortening tests, which belong to the so-called exploratory model, make assumptions not always verified in psychological data. This article proposes a…
Descriptors: Artificial Intelligence, Test Construction, Test Format, Psychometrics
Peer reviewed
Direct link
Brian E. Clauser; Victoria Yaneva; Peter Baldwin; Le An Ha; Janet Mee – Applied Measurement in Education, 2024
Multiple-choice questions have become ubiquitous in educational measurement because the format allows for efficient and accurate scoring. Nonetheless, there remains continued interest in constructed-response formats. This interest has driven efforts to develop computer-based scoring procedures that can accurately and efficiently score these items.…
Descriptors: Computer Uses in Education, Artificial Intelligence, Scoring, Responses
Peer reviewed
Direct link
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
It is common to find mixed-format data resulting from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring, and the use of suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Peer reviewed
Direct link
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about reliability and academic honesty regarding multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Peer reviewed
Direct link
Rebecka Weegar; Peter Idestam-Almquist – International Journal of Artificial Intelligence in Education, 2024
Machine learning methods can be used to reduce the manual workload in exam grading, making it possible for teachers to spend more time on other tasks. However, when it comes to grading exams, fully eliminating manual work is not yet possible even with very accurate automated grading, as any grading mistakes could have significant consequences for…
Descriptors: Grading, Computer Assisted Testing, Introductory Courses, Computer Science Education
Peer reviewed
PDF on ERIC (full text)
Tugra Karademir Coskun; Ayfer Alper – Digital Education Review, 2024
This study aims to examine the potential differences between teacher evaluations and artificial intelligence (AI) tool-based assessment systems in university examinations. The research has evaluated a wide spectrum of exams including numerical and verbal course exams, exams with different assessment styles (project, test exam, traditional exam),…
Descriptors: Artificial Intelligence, Visual Aids, Video Technology, Tests
Peer reviewed
Direct link
Julia Jochim; Vera Kristina Lenz-Kesekamp – Information and Learning Sciences, 2025
Purpose: Large language models such as ChatGPT are a challenge to academic principles, calling into question well-established practices, teaching and exam formats. This study aims to explore the adaptation process regarding text-generative artificial intelligence (AI) of students and teachers in higher education and to identify needs for change.…
Descriptors: Artificial Intelligence, Student Needs, Higher Education, Technology Uses in Education
Peer reviewed
Direct link
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Peer reviewed
Direct link
Goran Trajkovski; Heather Hayes – Digital Education and Learning, 2025
This book explores the transformative role of artificial intelligence in educational assessment, catering to researchers, educators, administrators, policymakers, and technologists involved in shaping the future of education. It delves into the foundations of AI-assisted assessment, innovative question types and formats, data analysis techniques,…
Descriptors: Artificial Intelligence, Educational Assessment, Computer Uses in Education, Test Format
Peer reviewed
PDF on ERIC (full text)
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test according to the two methods of applying the test (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
Direct link
Yizhu Gao; Xiaoming Zhai; Min Li; Gyeonggeon Lee; Xiaoxiao Liu – Grantee Submission, 2025
The rapid evolution of generative artificial intelligence (GenAI) is transforming science education by facilitating innovative pedagogical paradigms while raising substantial concerns about scholarly integrity. One particularly pressing issue is the growing risk of student use of GenAI tools to outsource assessment tasks, potentially compromising…
Descriptors: Artificial Intelligence, Computer Software, Science Education, Integrity
Peer reviewed
Direct link
Neha Biju; Nasser Said Gomaa Abdelrasheed; Khilola Bakiyeva; K. D. V. Prasad; Biruk Jember – Language Testing in Asia, 2024
In recent years, language practitioners have paid increasing attention to artificial intelligence (AI)'s role in language programs. This study investigated the impact of AI-assisted language assessment on L2 learners' foreign language anxiety (FLA), attitudes, motivation, and writing skills. The study adopted a sequential exploratory mixed-methods…
Descriptors: Artificial Intelligence, Computer Software, Computer Assisted Testing, Second Language Instruction
Peer reviewed
Direct link
Abdullah Al Fraidan – International Journal of Distance Education Technologies, 2025
This study explores vocabulary assessment practices in Saudi Arabia's hybrid EFL ecosystem, leveraging platforms like Blackboard and Google Forms. The focus is on identifying prevalent test formats and evaluating their alignment with modern pedagogical goals. To classify vocabulary assessment formats in hybridized EFL contexts and recommend the…
Descriptors: Vocabulary Development, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
PDF on ERIC (full text)
Patrick Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Report Series, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Assessment Literacy, Testing, Test Bias, Test Construction