Showing 121 to 135 of 4,798 results
Sebastian Moncaleano – ProQuest LLC, 2021
The growth of computer-based testing over the last two decades has motivated the creation of innovative item formats. It is often argued that technology-enhanced items (TEIs) provide better measurement of test-takers' knowledge, skills, and abilities by increasing the authenticity of tasks presented to test-takers (Sireci & Zenisky, 2006).…
Descriptors: Computer Assisted Testing, Test Format, Test Items, Classification
Peer reviewed
Direct link
Richard Say; Denis Visentin; Annette Saunders; Iain Atherton; Andrea Carr; Carolyn King – Journal of Computer Assisted Learning, 2024
Background: Formative online multiple-choice tests are ubiquitous in higher education and potentially powerful learning tools. However, commonly used feedback approaches in online multiple-choice tests can discourage meaningful engagement and enable strategies, such as trial-and-error, that circumvent intended learning outcomes. These strategies…
Descriptors: Feedback (Response), Self Management, Formative Evaluation, Multiple Choice Tests
Peer reviewed
Direct link
Gitit Kavé – International Journal of Language & Communication Disorders, 2024
Background: Vocabulary scores increase until approximately age 65 years and then remain stable or decrease slightly, unlike scores on tests of other cognitive abilities that decline significantly with age. Aims: To review the findings on ageing-related changes in vocabulary, and to discuss four methodological issues: research design; test type;…
Descriptors: Vocabulary Development, Aging (Individuals), Older Adults, Language Processing
Peer reviewed
Direct link
Jaya Shukla; Ram Manohar Singh – Metacognition and Learning, 2024
Knowledge exploration refers to actively seeking information, ideas, and experiences, often beyond immediate task requirements. Previous research on exploratory behaviour has predominantly focused on visual and perceptual forms of exploration, overlooking the academic aspect, where the aim is to bridge knowledge gaps. To test the effect of task…
Descriptors: Feedback (Response), Self Esteem, Student Motivation, Student Behavior
Peer reviewed
PDF on ERIC
Mehmet Kanik – International Journal of Assessment Tools in Education, 2024
Interest in ChatGPT has surged, prompting people to explore its use in different tasks. However, before allowing it to replace humans, its capabilities should be investigated. As ChatGPT has potential for use in testing and assessment, this study aims to investigate the questions generated by ChatGPT by comparing them to those written by a course…
Descriptors: Artificial Intelligence, Testing, Multiple Choice Tests, Test Construction
Peer reviewed
PDF on ERIC
Anggi Liztya Qomara; Bea Hana Siswati; Bevo Wahono – Journal of Biological Education Indonesia (Jurnal Pendidikan Biologi Indonesia), 2024
Twenty-first-century learning demands that students master the 4C competencies: critical thinking and problem-solving, communication, collaboration, and creativity and innovation, with problem-solving skills significantly influencing students' cognitive learning outcomes. This study utilized the Flipped Classroom instructional model, assisted by the…
Descriptors: Flipped Classroom, Problem Solving, Learning Processes, Foreign Countries
Peer reviewed
Direct link
Philip Newton; Maira Xiromeriti – Assessment & Evaluation in Higher Education, 2024
Media coverage suggests that ChatGPT can pass examinations based on multiple choice questions (MCQs), including those used to qualify doctors, lawyers, scientists etc. This poses a potential risk to the integrity of those examinations. We reviewed current research evidence regarding the performance of ChatGPT on MCQ-based examinations in higher…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Integrity, Computer Software
Peer reviewed
PDF on ERIC
Larry Katz; Dave Carlgren; Cory Wright-Maley; Megan Hallam; Joan Forder; Danielle Milner; Lisa Finestone – Canadian Journal for the Scholarship of Teaching and Learning, 2024
Student-generated questions can be an effective study technique to improve active learning, metacognitive skills, and performance on examinations. Students have shown greater success when assessed using peer-made study questions than when studying without questions. In three semesters of a kinesiology research methods course, students were taught…
Descriptors: Undergraduate Students, Kinesiology, Multiple Choice Tests, Student Developed Materials
Peer reviewed
PDF on ERIC
Arandha May Rachmawati; Agus Widyantoro – English Language Teaching Educational Journal, 2025
This study aims to evaluate the quality of English reading comprehension test instruments used in informal learning, especially as English literacy tests. With a quantitative approach, the analysis was carried out using the Rasch model through the Quest program on 30 multiple-choice questions given to 30 grade IX students from informal educational…
Descriptors: Item Response Theory, Reading Tests, Reading Comprehension, English (Second Language)
Peer reviewed
Direct link
Tobias Lieberei; Leroy Großmann; Virginia Deborah Elaine Welter; Dirk Krüger; Moritz Krell – Research in Science Education, 2025
The use of multiple-choice (MC) instruments to assess pedagogical content knowledge (PCK) has advantages in terms of test economy and objectivity, but it also poses challenges, for example, in terms of adequately capturing the intended construct. To help address these challenges, we developed and evaluated a new instrument to assess science…
Descriptors: Multiple Choice Tests, Pedagogical Content Knowledge, Science Teachers, Logical Thinking
Peer reviewed
Direct link
Chris R. Patterson – Educational Assessment, 2025
Culturally responsive (CR) assessments have three major goals: (a) increasing cultural representation in items, (b) fostering cultural competence and antiracist mind-sets in test takers, and (c) more accurately measuring all test takers' abilities. However, limited resources exist in how to create CR items or what CR items could look like.…
Descriptors: Test Items, Culture Fair Tests, Cultural Relevance, Student Attitudes
Peer reviewed
Direct link
Jonas Bley; Eva Rexigel; Alda Arias; Lars Krupp; Steffen Steinert; Nikolas Longen; Paul Lukowicz; Stefan Küchemann; Jochen Kuhn; Maximilian Kiefer-Emmanouilidis; Artur Widera – Physical Review Physics Education Research, 2025
In the rapidly evolving interdisciplinary field of quantum information science and technology, a major obstacle is the need to understand advanced mathematics to solve complex problems. Current findings in educational research suggest that incorporating visualizations into problem-solving settings can have beneficial effects on students'…
Descriptors: Problem Solving, Quantum Mechanics, Information Science Education, Interdisciplinary Approach
Peer reviewed
Direct link
Falcão, Filipe; Costa, Patrício; Pêgo, José M. – Advances in Health Sciences Education, 2022
Background: Current demand for multiple-choice questions (MCQs) in medical assessment is greater than the supply. Consequently, an urgency for new item development methods arises. Automatic Item Generation (AIG) promises to overcome this burden, generating calibrated items based on the work of computer algorithms. Despite the promising scenario,…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Items, Medical Education
Peer reviewed
Direct link
Wood, Eileen; Klausz, Noah; MacNeil, Stephen – Innovative Higher Education, 2022
Learning gains associated with multiple-choice testing formats that provide immediate feedback (e.g., IFAT®) are often greater than those for typical single-choice delayed feedback formats (e.g. Scantron®). Immediate feedback formats also typically permit part marks unlike delayed feedback formats. The present study contrasted IFAT® with a new…
Descriptors: Academic Achievement, Computer Assisted Testing, Feedback (Response), Organic Chemistry
Peer reviewed
Direct link
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a popular behavior of test-takers when answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention