Showing 1 to 15 of 230 results
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore issues around the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Peer reviewed
Direct link
Archana Praveen Kumar; Ashalatha Nayak; Manjula Shenoy K.; Chaitanya; Kaustav Ghosh – International Journal of Artificial Intelligence in Education, 2024
Multiple Choice Questions (MCQs) are a popular assessment method because they enable automated evaluation, flexible administration and use with large groups. Despite these benefits, the manual construction of MCQs is challenging, time-consuming and error-prone. This is because each MCQ is composed of a question called the "stem", a…
Descriptors: Multiple Choice Tests, Test Construction, Test Items, Semantics
Peer reviewed
Direct link
Ersan, Ozge; Berry, Yufeng – Educational Measurement: Issues and Practice, 2023
The increasing use of computerization in the testing industry and the need for items potentially measuring higher-order skills have led educational measurement communities to develop technology-enhanced (TE) items and conduct validity studies on the use of TE items. Parallel to this goal, the purpose of this study was to collect validity evidence…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Elementary Secondary Education, Accountability
Peer reviewed
Direct link
Yusuf Oc; Hela Hassen – Marketing Education Review, 2025
Driven by technological innovations, continuous digital expansion has fundamentally transformed the landscape of modern higher education, leading to discussions about evaluation techniques. The emergence of generative artificial intelligence raises questions about reliability and academic honesty regarding multiple-choice assessments in online…
Descriptors: Higher Education, Multiple Choice Tests, Computer Assisted Testing, Electronic Learning
Peer reviewed
Direct link
Nico Willert; Jonathan Thiemann – Technology, Knowledge and Learning, 2024
Manual composition of tasks and exams is challenging and time-consuming. Especially when exams are taken remotely, without personal monitoring by examiners, most exams can easily lose their integrity through the use of previously completed exercises or student communication. This research introduces an approach that incorporates the principles…
Descriptors: Tests, Examiners, Foreign Countries, Multiple Choice Tests
Peer reviewed
PDF on ERIC Download full text
Anela Hrnjicic; Adis Alihodžic – International Electronic Journal of Mathematics Education, 2024
Understanding the concepts related to real functions is essential in learning mathematics. To determine how students understand these concepts, it is necessary to have an appropriate measurement tool. In this paper, we have created a web application using 32 items from the conceptual understanding of real functions (CURF) item bank. We conducted a…
Descriptors: Mathematical Concepts, College Freshmen, Foreign Countries, Computer Assisted Testing
Ben Seipel; Patrick C. Kennedy; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison – Journal of Learning Disabilities, 2023
As access to higher education increases, it is important to monitor students with special needs to facilitate the provision of appropriate resources and support. Although metrics such as the "reading readiness" ACT (formerly American College Testing) provide insight into how many students may need such resources, they do not specify…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Reading Tests, Reading Comprehension
Peer reviewed
Direct link
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Direct link
Plasencia, Javier – Biochemistry and Molecular Biology Education, 2023
Multiple studies have shown that testing contributes to learning at all educational levels. In this observational classroom study, we report the use of a learning tool developed for a Genetics and Molecular Biology course at the college level. An interactive set of practice exams that included 136 multiple choice questions (MCQ) or matching…
Descriptors: Molecular Biology, Genetics, Science Tests, College Science
Peer reviewed
Direct link
Richard Say; Denis Visentin; Annette Saunders; Iain Atherton; Andrea Carr; Carolyn King – Journal of Computer Assisted Learning, 2024
Background: Formative online multiple-choice tests are ubiquitous in higher education and potentially powerful learning tools. However, commonly used feedback approaches in online multiple-choice tests can discourage meaningful engagement and enable strategies, such as trial-and-error, that circumvent intended learning outcomes. These strategies…
Descriptors: Feedback (Response), Self Management, Formative Evaluation, Multiple Choice Tests
Peer reviewed
Direct link
Falcão, Filipe; Costa, Patrício; Pêgo, José M. – Advances in Health Sciences Education, 2022
Background: Current demand for multiple-choice questions (MCQs) in medical assessment is greater than the supply. Consequently, an urgency for new item development methods arises. Automatic Item Generation (AIG) promises to overcome this burden, generating calibrated items based on the work of computer algorithms. Despite the promising scenario,…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Items, Medical Education
Peer reviewed
Direct link
Wood, Eileen; Klausz, Noah; MacNeil, Stephen – Innovative Higher Education, 2022
Learning gains associated with multiple-choice testing formats that provide immediate feedback (e.g., IFAT®) are often greater than those for typical single-choice delayed feedback formats (e.g., Scantron®). Immediate feedback formats also typically permit part marks, unlike delayed feedback formats. The present study contrasted IFAT® with a new…
Descriptors: Academic Achievement, Computer Assisted Testing, Feedback (Response), Organic Chemistry
Peer reviewed
Direct link
Hubert Izienicki – Teaching Sociology, 2024
Many instructors use a syllabus quiz to ensure that students learn and understand the content of the syllabus. In this project, I move beyond this exercise's primary function and examine students' syllabus quiz scores to see if they can predict how well students perform in the course overall. Using data from 495 students enrolled in 18 sections of…
Descriptors: Tests, Course Descriptions, Performance, Predictor Variables
Peer reviewed
PDF on ERIC Download full text
Laura Kuusemets; Kristin Parve; Kati Ain; Tiina Kraav – International Journal of Education in Mathematics, Science and Technology, 2024
Using multiple-choice questions as learning and assessment tools is standard at all levels of education. However, when discussing the positive and negative aspects of their use, the time and complexity involved in producing plausible distractor options emerge as a disadvantage that offsets the time savings in relation to feedback. The article…
Descriptors: Program Evaluation, Artificial Intelligence, Computer Assisted Testing, Man Machine Systems
Peer reviewed
Direct link
Ute Mertens; Marlit A. Lindner – Journal of Computer Assisted Learning, 2025
Background: Educational assessments increasingly shift towards computer-based formats. Many studies have explored how different types of automated feedback affect learning. However, few studies have investigated how digital performance feedback affects test takers' ratings of affective-motivational reactions during a testing session. Method: In…
Descriptors: Educational Assessment, Computer Assisted Testing, Automation, Feedback (Response)