Showing 1 to 15 of 30 results
Peer reviewed
Direct link
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, manually creating questions is time-consuming and challenging for teachers, so there is notable demand for Automatic Question Generation (AQG) systems. Several systems have been developed for this purpose, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Direct link
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open-ended and closed-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize closed-ended activities (i.e., Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Peer reviewed
Direct link
Emery-Wetherell, Meaghan; Wang, Ruoyao – Assessment & Evaluation in Higher Education, 2023
Over four semesters of a large introductory statistics course, the authors found students were engaging in contract cheating on Chegg.com during multiple-choice examinations. In this paper we describe our methodology for identifying, addressing, and eventually eliminating cheating. We successfully identified 23 out of 25 students using a combination…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Cheating, Identification
Peer reviewed
PDF on ERIC Download full text
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often increased overall test results. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Peer reviewed
Direct link
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Direct link
Babo, Rosalina; Babo, Lurdes; Suhonen, Jarkko; Tukiainen, Markku – Journal of Information Technology Education: Innovations in Practice, 2020
Aim/Purpose: The aim of this study is to understand students' opinions and perceptions of e-assessment when the assessment process was changed from the traditional computer-assisted method to a multiple-choice Moodle-based method. Background: In order to implement continuous assessment for a large number of students, several shifts are…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Student Attitudes, Student Experience
Susanti, Yuni; Tokunaga, Takenobu; Nishikawa, Hitoshi; Obari, Hiroyuki – Research and Practice in Technology Enhanced Learning, 2017
The present study investigates the best factor for controlling the item difficulty of multiple-choice English vocabulary questions generated by an automatic question generation system. Three factors are considered for controlling item difficulty: (1) reading passage difficulty, (2) semantic similarity between the correct answer and distractors,…
Descriptors: Test Items, Difficulty Level, Computer Assisted Testing, Vocabulary Development
Peer reviewed
PDF on ERIC Download full text
Eckerly, Carol; Smith, Russell; Sowles, John – Practical Assessment, Research & Evaluation, 2018
The Discrete Option Multiple Choice (DOMC) item format was introduced by Foster and Miller (2009) with the intent of improving the security of test content. However, because the format changes the amount and order of the content presented, the test-taking experience varies across test takers, introducing potential fairness issues. In this paper we…
Descriptors: Culture Fair Tests, Multiple Choice Tests, Testing, Test Items
Peer reviewed
Direct link
Relkin, Emily; de Ruiter, Laura; Bers, Marina Umaschi – Journal of Science Education and Technology, 2020
There is a need for developmentally appropriate Computational Thinking (CT) assessments that can be implemented in early childhood classrooms. We developed a new instrument called "TechCheck" for assessing CT skills in young children that does not require prior knowledge of computer programming. "TechCheck" is based on…
Descriptors: Developmentally Appropriate Practices, Computation, Thinking Skills, Early Childhood Education
Peer reviewed
Direct link
Lee, Senyung; Shin, Sun-Young – Language Assessment Quarterly, 2021
Multiple test tasks are available for assessing L2 collocation knowledge. However, few studies have simultaneously investigated the characteristics of a variety of collocation recognition and recall tasks, and most research on L2 collocations has focused on verb-noun and adjective-noun collocations. This study investigates (1) the relative…
Descriptors: Phrase Structure, Second Language Learning, Language Tests, Recall (Psychology)
Peer reviewed
Direct link
Aldabe, Itziar; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2014
The work we present in this paper aims to help teachers create multiple-choice science tests. We focus on a scientific vocabulary-learning scenario taking place in a Basque-language educational environment. In this particular scenario, we explore the option of automatically generating Multiple-Choice Questions (MCQ) by means of Natural Language…
Descriptors: Science Tests, Test Construction, Computer Assisted Testing, Multiple Choice Tests
Sas, Ioan Ciprian – ProQuest LLC, 2010
This research attempted to bridge the gap between cognitive psychology and educational measurement (Mislevy, 2008; Leighton & Gierl, 2007; Nichols, 1994; Messick, 1989; Snow & Lohman, 1989) by using cognitive theories from working memory (Baddeley, 1986; Miyake & Shah, 1999; Grimley & Banner, 2008), multimedia learning (Mayer, 2001), and cognitive…
Descriptors: Multiple Choice Tests, Concept Mapping, Computer Assisted Testing, Time on Task
Peer reviewed
Direct link
Lingard, Jennifer; Minasian-Batmanian, Laura; Vella, Gilbert; Cathers, Ian; Gonzalez, Carlos – Assessment & Evaluation in Higher Education, 2009
Effective criterion-referenced assessment requires grade descriptors to clarify to students what skills are required to gain higher grades. But do students and staff actually have the same perception of the grading system, and if so, do those students perform better than students whose perceptions are less accurately aligned with those of staff? Since…
Descriptors: Feedback (Response), Prior Learning, Physics, Difficulty Level
Clariana, Roy B. – 1990
Research has shown that multiple-choice questions formed by transforming or paraphrasing a reading passage provide a measure of student comprehension. It is argued that similar transformation and paraphrasing of lesson questions is an appropriate way to form parallel multiple-choice items to be used as a posttest measure of student comprehension.…
Descriptors: Comprehension, Computer Assisted Testing, Difficulty Level, Measurement Techniques