Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about reliability and academic honesty regarding multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Ute Mertens; Marlit A. Lindner – Journal of Computer Assisted Learning, 2025
Background: Educational assessments increasingly shift towards computer-based formats. Many studies have explored how different types of automated feedback affect learning. However, few studies have investigated how digital performance feedback affects test takers' ratings of affective-motivational reactions during a testing session. Method: In…
Descriptors: Educational Assessment, Computer Assisted Testing, Automation, Feedback (Response)
Esteban Guevara Hidalgo – International Journal for Educational Integrity, 2025
The COVID-19 pandemic had a profound impact on education, forcing many teachers and students who were not used to online education to adapt to an unanticipated reality by improvising new teaching and learning methods. Within the realm of virtual education, the evaluation methods underwent a transformation, with some assessments shifting towards…
Descriptors: Foreign Countries, Higher Education, COVID-19, Pandemics
Hai Li; Wanli Xing; Chenglu Li; Wangda Zhu; Simon Woodhead – Journal of Learning Analytics, 2025
Knowledge tracing (KT) is a method to evaluate a student's knowledge state (KS) based on their historical problem-solving records by predicting the next answer's binary correctness. Although widely applied to closed-ended questions, it lacks a detailed option tracing (OT) method for assessing multiple-choice questions (MCQs). This paper introduces…
Descriptors: Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing, Problem Solving
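The knowledge tracing (KT) setup described in this entry — estimating a student's knowledge state from historical answer records and predicting the binary correctness of the next answer — is commonly illustrated with Bayesian Knowledge Tracing. The sketch below is a generic BKT baseline, not the option tracing (OT) method the paper introduces; the parameter values are illustrative assumptions.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch: a classic KT baseline.
# p_slip/p_guess/p_learn are illustrative, not taken from the paper.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Update P(student knows the skill) after one observed answer."""
    if correct:
        num = p_know * (1 - p_slip)
        denom = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        denom = num + (1 - p_know) * (1 - p_guess)
    posterior = num / denom
    # Allow for learning between practice opportunities.
    return posterior + (1 - posterior) * p_learn

def predict_correct(p_know, p_slip=0.1, p_guess=0.2):
    """Predicted probability that the next answer is correct."""
    return p_know * (1 - p_slip) + (1 - p_know) * p_guess

# Trace a short answer history (1 = correct, 0 = incorrect).
p = 0.4  # prior mastery estimate
for answer in [1, 0, 1, 1]:
    p = bkt_update(p, bool(answer))
print(round(p, 3), round(predict_correct(p), 3))
```

The binary-correctness prediction this baseline produces is exactly what the paper argues is insufficient for multiple-choice questions, where tracing which distractor a student picks carries additional diagnostic information.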
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities with new task formats and, equally, a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Naveed Saif; Sadaqat Ali; Abner Rubin; Soliman Aljarboa; Nabil Sharaf Almalki; Mrim M. Alnfiai; Faheem Khan; Sajid Ullah Khan – Educational Technology & Society, 2025
In the swiftly evolving landscape of education, the fusion of Artificial Intelligence's ingenuity with the dynamic capabilities of chat-bot technology has ignited a transformative paradigm shift. This convergence is not merely a technological integration but a profound reshaping of the fundamental principles of pedagogy, fundamentally redefining…
Descriptors: Artificial Intelligence, Technology Uses in Education, Readiness, Technological Literacy
Abdullah Al Fraidan – International Journal of Distance Education Technologies, 2025
This study explores vocabulary assessment practices in Saudi Arabia's hybrid EFL ecosystem, leveraging platforms like Blackboard and Google Forms. The focus is on identifying prevalent test formats and evaluating their alignment with modern pedagogical goals. To classify vocabulary assessment formats in hybridized EFL contexts and recommend the…
Descriptors: Vocabulary Development, English (Second Language), Second Language Learning, Second Language Instruction
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests