Showing all 11 results
Peer reviewed
Po-Chun Huang; Ying-Hong Chan; Ching-Yu Yang; Hung-Yuan Chen; Yao-Chung Fan – IEEE Transactions on Learning Technologies, 2024
The question generation (QG) task plays a crucial role in adaptive learning. While significant advances in QG performance have been reported, existing QG studies remain far from practical use. One point that needs strengthening is the generation of question groups, which remains untouched. For forming a question group, intrafactors…
Descriptors: Automation, Test Items, Computer Assisted Testing, Test Construction
Peer reviewed
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Peer reviewed
Haug, Tobias; Mann, Wolfgang; Holzknecht, Franz – Sign Language Studies, 2023
This study is a follow-up to previous research conducted in 2012 on computer-assisted language testing (CALT) that applied a survey approach to investigate the use of technology in sign language testing worldwide. The goal of the current study was to replicate the 2012 study and to obtain updated information on the use of technology in sign…
Descriptors: Computer Assisted Testing, Sign Language, Natural Language Processing, Language Tests
Peer reviewed
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the automatic item generation (AIG) methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts generally, and reading comprehension more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Peer reviewed
Naveed Saif; Sadaqat Ali; Abner Rubin; Soliman Aljarboa; Nabil Sharaf Almalki; Mrim M. Alnfiai; Faheem Khan; Sajid Ullah Khan – Educational Technology & Society, 2025
In the swiftly evolving landscape of education, the fusion of artificial intelligence with the dynamic capabilities of chatbot technology has ignited a transformative paradigm shift. This convergence is not merely a technological integration but a profound reshaping of the principles of pedagogy, redefining…
Descriptors: Artificial Intelligence, Technology Uses in Education, Readiness, Technological Literacy
Peer reviewed
PDF on ERIC
Rotou, Ourania; Rupp, André A. – ETS Research Report Series, 2020
This research report provides a description of the processes of evaluating the "deployability" of automated scoring (AS) systems from the perspective of large-scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation that entails analyses that take into consideration the specific purpose…
Descriptors: Computer Assisted Testing, Scoring, Educational Assessment, Psychometrics
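The psychometric evaluation described in the entry above typically includes human-machine agreement statistics. As a loosely related illustration only (not the report's actual framework), the sketch below computes quadratic weighted kappa, one statistic commonly used to compare automated scores against human raters; the function name and sample scores are hypothetical.

```python
import numpy as np

def quadratic_weighted_kappa(human, machine, n_categories: int) -> float:
    """Chance-corrected human-machine agreement; larger score
    disagreements are penalized quadratically."""
    human, machine = np.asarray(human), np.asarray(machine)
    # Observed joint score frequencies.
    observed = np.zeros((n_categories, n_categories))
    for h, m in zip(human, machine):
        observed[h, m] += 1
    observed /= observed.sum()
    # Expected frequencies from the two raters' marginal distributions.
    expected = np.outer(np.bincount(human, minlength=n_categories),
                        np.bincount(machine, minlength=n_categories)).astype(float)
    expected /= expected.sum()
    # Quadratic disagreement weights.
    idx = np.arange(n_categories)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_categories - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

human = [3, 2, 4, 1, 3, 2]    # hypothetical essay scores on a 0-4 scale
machine = [3, 2, 3, 1, 4, 2]  # automated scores for the same essays
print(round(quadratic_weighted_kappa(human, machine, 5), 3))
```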
Peer reviewed
PDF on ERIC
Zesch, Torsten; Horbach, Andrea; Goggin, Melanie; Wrede-Jackes, Jennifer – Research-publishing.net, 2018
We present a tool for the creation and curation of C-tests. C-tests are an established instrument in language proficiency testing and language learning. They require examinees to complete a text in which the second half of every second word is replaced by a gap. We support teachers and test designers in creating such tests through a web-based system…
Descriptors: Language Tests, Language Proficiency, Second Language Learning, Second Language Instruction
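The construction rule quoted in the entry above (replace the second half of every second word with a gap) is concrete enough to sketch. The following is a minimal illustration of that rule only, not the authors' web-based system; the function name is hypothetical, and real C-tests usually leave the opening sentence intact as context, a refinement this sketch skips.

```python
def make_ctest(text: str) -> str:
    """Turn plain text into a C-test item: the second half of every
    second word is replaced by a gap (shown here as underscores)."""
    gapped = []
    for i, word in enumerate(text.split()):
        # Gap every second word; one-letter words have no second half.
        if i % 2 == 1 and len(word) > 1:
            keep = (len(word) + 1) // 2  # keep the first half, rounded up
            gapped.append(word[:keep] + "_" * (len(word) - keep))
        else:
            gapped.append(word)
    return " ".join(gapped)

print(make_ctest("Languages are learned best when learners fill systematic gaps"))
```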
Peer reviewed
Aldabe, Itziar; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2014
The work we present in this paper aims to help teachers create multiple-choice science tests. We focus on a scientific vocabulary-learning scenario taking place in a Basque-language educational environment. In this particular scenario, we explore the option of automatically generating Multiple-Choice Questions (MCQ) by means of Natural Language…
Descriptors: Science Tests, Test Construction, Computer Assisted Testing, Multiple Choice Tests
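The excerpt above is cut off before it describes the generation method, so the following is only a generic baseline for vocabulary MCQ generation, not Aldabe and Maritxalar's approach: blank a target term in a source sentence and sample distractors from a same-domain vocabulary list. The function name and sample data are hypothetical.

```python
import random

def make_vocab_mcq(sentence: str, target: str,
                   domain_terms: list[str], n_distractors: int = 3) -> dict:
    """Blank `target` in `sentence` and offer it among distractors
    sampled from a same-domain vocabulary list."""
    stem = sentence.replace(target, "_____")
    # Distractors come from the same domain, excluding the answer itself.
    pool = [term for term in domain_terms if term != target]
    options = random.sample(pool, n_distractors) + [target]
    random.shuffle(options)
    return {"stem": stem, "options": options, "answer": target}

item = make_vocab_mcq(
    "Photosynthesis converts light energy into chemical energy.",
    target="Photosynthesis",
    domain_terms=["Respiration", "Fermentation", "Transpiration", "Osmosis"],
)
print(item["stem"], item["options"])
```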
Peer reviewed
Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos – Interactive Learning Environments, 2011
The aim of this article is to present an approach for generating tests automatically. Although other methods have already been reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…
Descriptors: Semantics, Natural Language Processing, Test Construction, Educational Technology
Liu, Chao-Lin; Lin, Jen-Hsiang; Wang, Yu-Chun – Online Submission, 2010
The authors report an implemented environment for computer-assisted authoring of test items and provide a brief discussion of the applications of NLP techniques for computer-assisted language learning. Test items can serve as a tool for language learners to examine their competence in the target language. The authors apply techniques for…
Descriptors: Cloze Procedure, Listening Comprehension, Test Items, Foreign Countries
Burstein, Jill C.; Kaplan, Randy M. – 1995
There is considerable interest at Educational Testing Service (ETS) in including performance-based, natural language constructed-response items on standardized tests. Such items can be developed, but the projected time and costs required to have these items scored by human graders would be prohibitive. In order for ETS to include these types of…
Descriptors: Computer Assisted Testing, Constructed Response, Cost Effectiveness, Hypothesis Testing