Publication Date
In 2025 (8)
Descriptor
Artificial Intelligence (8)
Test Format (8)
Test Items (4)
Computer Assisted Testing (3)
Computer Software (3)
Evaluation Methods (3)
Foreign Countries (3)
College Students (2)
Data Analysis (2)
Higher Education (2)
Language Tests (2)
Source
Digital Education and Learning (1)
Grantee Submission (1)
Information and Learning… (1)
International Journal of… (1)
International Journal of… (1)
Journal of Education and… (1)
Marketing Education Review (1)
Measurement:… (1)
Author
Abdullah Al Fraidan (1)
Ahmed Al-Badri (1)
Allan S. Cohen (1)
Bin Tan (1)
Elisabetta Mazzullo (1)
George Engelhard (1)
Goran Trajkovski (1)
Gyeonggeon Lee (1)
Heather Hayes (1)
Hela Hassen (1)
Jiawei Xiong (1)
Publication Type
Journal Articles (6)
Reports - Research (5)
Reports - Evaluative (2)
Books (1)
Information Analyses (1)
Tests/Questionnaires (1)
Education Level
Higher Education (3)
Postsecondary Education (3)
Elementary Education (1)
Elementary Secondary Education (1)
Grade 8 (1)
Junior High Schools (1)
Middle Schools (1)
Secondary Education (1)
Audience
Administrators (1)
Policymakers (1)
Researchers (1)
Teachers (1)
Location
Europe (1)
Oman (1)
Saudi Arabia (1)
Bin Tan; Nour Armoush; Elisabetta Mazzullo; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2025
This study reviews existing research on the use of large language models (LLMs) for automatic item generation (AIG). We performed a comprehensive literature search across seven research databases, selected studies based on predefined criteria, and summarized 60 relevant studies that employed LLMs in the AIG process. We identified the most commonly…
Descriptors: Artificial Intelligence, Test Items, Automation, Test Format
Jiawei Xiong; George Engelhard; Allan S. Cohen – Measurement: Interdisciplinary Research and Perspectives, 2025
Mixed-format data commonly result from the use of both multiple-choice (MC) and constructed-response (CR) questions on assessments. Dealing with these mixed response types involves understanding what the assessment is measuring and using suitable measurement models to estimate latent abilities. Past research in educational…
Descriptors: Responses, Test Items, Test Format, Grade 8
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about the reliability and academic honesty of multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Julia Jochim; Vera Kristina Lenz-Kesekamp – Information and Learning Sciences, 2025
Purpose: Large language models such as ChatGPT challenge academic principles, calling into question well-established practices as well as teaching and exam formats. This study aims to explore how students and teachers in higher education adapt to text-generative artificial intelligence (AI) and to identify needs for change.…
Descriptors: Artificial Intelligence, Student Needs, Higher Education, Technology Uses in Education
Goran Trajkovski; Heather Hayes – Digital Education and Learning, 2025
This book explores the transformative role of artificial intelligence in educational assessment, catering to researchers, educators, administrators, policymakers, and technologists involved in shaping the future of education. It delves into the foundations of AI-assisted assessment, innovative question types and formats, data analysis techniques,…
Descriptors: Artificial Intelligence, Educational Assessment, Computer Uses in Education, Test Format
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' abilities, their standard errors, and the psychometric properties of the test across two modes of test administration (electronic and paper). A descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Yizhu Gao; Xiaoming Zhai; Min Li; Gyeonggeon Lee; Xiaoxiao Liu – Grantee Submission, 2025
The rapid evolution of generative artificial intelligence (GenAI) is transforming science education by facilitating innovative pedagogical paradigms while raising substantial concerns about scholarly integrity. One particularly pressing issue is the growing risk of student use of GenAI tools to outsource assessment tasks, potentially compromising…
Descriptors: Artificial Intelligence, Computer Software, Science Education, Integrity
Abdullah Al Fraidan – International Journal of Distance Education Technologies, 2025
This study explores vocabulary assessment practices in Saudi Arabia's hybrid EFL ecosystem, leveraging platforms like Blackboard and Google Forms. The focus is on identifying prevalent test formats and evaluating their alignment with modern pedagogical goals. To classify vocabulary assessment formats in hybridized EFL contexts and recommend the…
Descriptors: Vocabulary Development, English (Second Language), Second Language Learning, Second Language Instruction