Showing 16 to 30 of 362 results
Peer reviewed
Direct link
Plasencia, Javier – Biochemistry and Molecular Biology Education, 2023
Multiple studies have shown that testing contributes to learning at all educational levels. In this observational classroom study, we report the use of a learning tool developed for a Genetics and Molecular Biology course at the college level. An interactive set of practice exams that included 136 multiple choice questions (MCQ) or matching…
Descriptors: Molecular Biology, Genetics, Science Tests, College Science
Sebastian Moncaleano – ProQuest LLC, 2021
The growth of computer-based testing over the last two decades has motivated the creation of innovative item formats. It is often argued that technology-enhanced items (TEIs) provide better measurement of test-takers' knowledge, skills, and abilities by increasing the authenticity of tasks presented to test-takers (Sireci & Zenisky, 2006).…
Descriptors: Computer Assisted Testing, Test Format, Test Items, Classification
Peer reviewed
Direct link
Richard Say; Denis Visentin; Annette Saunders; Iain Atherton; Andrea Carr; Carolyn King – Journal of Computer Assisted Learning, 2024
Background: Formative online multiple-choice tests are ubiquitous in higher education and potentially powerful learning tools. However, commonly used feedback approaches in online multiple-choice tests can discourage meaningful engagement and enable strategies, such as trial-and-error, that circumvent intended learning outcomes. These strategies…
Descriptors: Feedback (Response), Self Management, Formative Evaluation, Multiple Choice Tests
Peer reviewed
Direct link
Falcão, Filipe; Costa, Patrício; Pêgo, José M. – Advances in Health Sciences Education, 2022
Background: Current demand for multiple-choice questions (MCQs) in medical assessment is greater than the supply. Consequently, an urgency for new item development methods arises. Automatic Item Generation (AIG) promises to overcome this burden, generating calibrated items based on the work of computer algorithms. Despite the promising scenario,…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Items, Medical Education
Peer reviewed
Direct link
Wood, Eileen; Klausz, Noah; MacNeil, Stephen – Innovative Higher Education, 2022
Learning gains associated with multiple-choice testing formats that provide immediate feedback (e.g., IFAT®) are often greater than those for typical single-choice delayed feedback formats (e.g., Scantron®). Immediate feedback formats also typically permit part marks, unlike delayed feedback formats. The present study contrasted IFAT® with a new…
Descriptors: Academic Achievement, Computer Assisted Testing, Feedback (Response), Organic Chemistry
Peer reviewed
Direct link
Hubert Izienicki – Teaching Sociology, 2024
Many instructors use a syllabus quiz to ensure that students learn and understand the content of the syllabus. In this project, I move beyond this exercise's primary function and examine students' syllabus quiz scores to see if they can predict how well students perform in the course overall. Using data from 495 students enrolled in 18 sections of…
Descriptors: Tests, Course Descriptions, Performance, Predictor Variables
Peer reviewed
PDF on ERIC: Download full text
Laura Kuusemets; Kristin Parve; Kati Ain; Tiina Kraav – International Journal of Education in Mathematics, Science and Technology, 2024
Using multiple-choice questions as learning and assessment tools is standard at all levels of education. However, when discussing the positive and negative aspects of their use, the time and complexity involved in producing plausible distractor options emerge as a disadvantage that offsets the time savings in relation to feedback. The article…
Descriptors: Program Evaluation, Artificial Intelligence, Computer Assisted Testing, Man Machine Systems
Peer reviewed
Direct link
Ute Mertens; Marlit A. Lindner – Journal of Computer Assisted Learning, 2025
Background: Educational assessments increasingly shift towards computer-based formats. Many studies have explored how different types of automated feedback affect learning. However, few studies have investigated how digital performance feedback affects test takers' ratings of affective-motivational reactions during a testing session. Method: In…
Descriptors: Educational Assessment, Computer Assisted Testing, Automation, Feedback (Response)
Peer reviewed
PDF on ERIC: Download full text
Yilmaz, Erdi Okan; Toker, Türker – International Journal of Psychology and Educational Studies, 2022
This study examines the online assessment-evaluation activities in distance education processes. The effects of different online exam application styles considering the online assessment-evaluation in distance education processes, including all programs of a higher education institution, were documented. The population for online…
Descriptors: Foreign Countries, Computer Assisted Testing, Test Format, Distance Education
Peer reviewed
Direct link
Esteban Guevara Hidalgo – International Journal for Educational Integrity, 2025
The COVID-19 pandemic had a profound impact on education, forcing many teachers and students who were not used to online education to adapt to an unanticipated reality by improvising new teaching and learning methods. Within the realm of virtual education, the evaluation methods underwent a transformation, with some assessments shifting towards…
Descriptors: Foreign Countries, Higher Education, COVID-19, Pandemics
Peer reviewed
PDF on ERIC: Download full text
Eva Svärdemo Åberg; Eva Edman Stålbrandt; Anna Wiik – Designs for Learning, 2025
This study explores how digital assessment designs in higher education influence students' ability to demonstrate their knowledge and realize their agency, which is a significant gap in the research literature. The increasing integration of digital tools, as well as heightened concerns about academic integrity, have led to a shift towards more…
Descriptors: Higher Education, Computer Assisted Testing, Design, Student Evaluation
Yu Wang – ProQuest LLC, 2024
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Cognitive Tests, Cognitive Measurement, Educational Diagnosis
Peer reviewed
Direct link
Máñez, Ignacio; Vidal-Abarca, Eduardo; Magliano, Joseph P. – Electronic Journal of Research in Educational Psychology, 2022
Introduction: Students often answer questions from available expository texts for assessment and learning purposes. These activities require readers to activate not only meaning-making processes (e.g., paraphrases or elaborations), but also metacognitive operations (e.g., monitoring readers' own comprehension or self-regulating reading behaviors)…
Descriptors: Protocol Analysis, Metacognition, Reading Comprehension, Grade 8
Ben Seipel; Patrick C. Kennedy; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison – Grantee Submission, 2022
As access to higher education increases, it is important to monitor students with special needs to facilitate the provision of appropriate resources and support. Although metrics such as ACT's (formerly American College Testing) "reading readiness" provide insight into how many students may need such resources, they do not specify…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Reading Tests, Reading Comprehension
Peer reviewed
PDF on ERIC: Download full text
Hai Li; Wanli Xing; Chenglu Li; Wangda Zhu; Simon Woodhead – Journal of Learning Analytics, 2025
Knowledge tracing (KT) is a method to evaluate a student's knowledge state (KS) based on their historical problem-solving records by predicting the next answer's binary correctness. Although widely applied to closed-ended questions, it lacks a detailed option tracing (OT) method for assessing multiple-choice questions (MCQs). This paper introduces…
Descriptors: Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing, Problem Solving