Showing 1 to 15 of 33 results
Peer reviewed
Ersan, Ozge; Berry, Yufeng – Educational Measurement: Issues and Practice, 2023
The increasing use of computerization in the testing industry and the need for items potentially measuring higher-order skills have led educational measurement communities to develop technology-enhanced (TE) items and conduct validity studies on the use of TE items. Parallel to this goal, the purpose of this study was to collect validity evidence…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Elementary Secondary Education, Accountability
Peer reviewed
Anela Hrnjicic; Adis Alihodžic – International Electronic Journal of Mathematics Education, 2024
Understanding the concepts related to real functions is essential in learning mathematics. To determine how students understand these concepts, it is necessary to have an appropriate measurement tool. In this paper, we have created a web application using 32 items from the conceptual understanding of real functions (CURF) item bank. We conducted a…
Descriptors: Mathematical Concepts, College Freshmen, Foreign Countries, Computer Assisted Testing
Peer reviewed
Andrei Ludu; Maria Ludu; Teha Cooks – Journal of Computers in Mathematics and Science Teaching, 2025
This paper presents research on computer-based mathematics learning that studies the effectiveness of an open-source teaching platform (Canvas) in computer-assisted instruction. We designed a set of multiple-choice online quizzes as a dynamic flowchart of possible paths to follow while solving a difficult math problem on…
Descriptors: Teaching Methods, Computer Assisted Instruction, Mathematics Education, Engineering Education
Peer reviewed
Laura Kuusemets; Kristin Parve; Kati Ain; Tiina Kraav – International Journal of Education in Mathematics, Science and Technology, 2024
Using multiple-choice questions as learning and assessment tools is standard at all levels of education. However, when discussing the positive and negative aspects of their use, the time and complexity involved in producing plausible distractor options emerge as a disadvantage that offsets the time savings in relation to feedback. The article…
Descriptors: Program Evaluation, Artificial Intelligence, Computer Assisted Testing, Man Machine Systems
Peer reviewed
Hai Li; Wanli Xing; Chenglu Li; Wangda Zhu; Simon Woodhead – Journal of Learning Analytics, 2025
Knowledge tracing (KT) is a method to evaluate a student's knowledge state (KS) based on their historical problem-solving records by predicting the next answer's binary correctness. Although widely applied to closed-ended questions, it lacks a detailed option tracing (OT) method for assessing multiple-choice questions (MCQs). This paper introduces…
Descriptors: Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing, Problem Solving
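The entry above defines knowledge tracing as predicting the binary correctness of a student's next answer from their problem-solving history. As background only, here is a minimal sketch of classic Bayesian Knowledge Tracing (BKT) for a single skill; it is not the option-tracing method the paper introduces, and the parameter values are illustrative assumptions.

```python
# Minimal Bayesian Knowledge Tracing (BKT) sketch: predicts the probability
# that a student's next answer is correct from their history of right/wrong
# responses on one skill. Parameter values are illustrative assumptions,
# not estimates from any real dataset.

def bkt_predict(history, p_init=0.3, p_learn=0.2, p_slip=0.1, p_guess=0.25):
    """Return P(next answer correct) after observing `history`,
    a list of 0/1 correctness values for a single skill."""
    p_know = p_init  # P(student has mastered the skill)
    for correct in history:
        if correct:
            # Bayes update given a correct response
            num = p_know * (1 - p_slip)
            den = num + (1 - p_know) * p_guess
        else:
            # Bayes update given an incorrect response
            num = p_know * p_slip
            den = num + (1 - p_know) * (1 - p_guess)
        p_know = num / den
        # Learning transition after each practice opportunity
        p_know = p_know + (1 - p_know) * p_learn
    # Predicted correctness of the next attempt
    return p_know * (1 - p_slip) + (1 - p_know) * p_guess


if __name__ == "__main__":
    print(round(bkt_predict([1, 0, 1, 1]), 3))
```

The standard BKT design tracks a single latent mastery probability, updates it with Bayes' rule after each observed response, and then applies a learning transition; option tracing, as described in the abstract, extends this kind of model beyond binary correctness.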
Peer reviewed
Katrin Klingbeil; Fabian Rösken; Bärbel Barzel; Florian Schacht; Kaye Stacey; Vicki Steinle; Daniel Thurm – ZDM: Mathematics Education, 2024
Assessing students' (mis)conceptions is a challenging task for teachers as well as for researchers. While individual assessment, for example through interviews, can provide deep insights into students' thinking, this is very time-consuming and therefore not feasible for whole classes or even larger settings. For those settings, automatically…
Descriptors: Multiple Choice Tests, Formative Evaluation, Mathematics Tests, Misconceptions
Peer reviewed
Congning Ni; Bhashithe Abeysinghe; Juanita Hicks – International Electronic Journal of Elementary Education, 2025
The National Assessment of Educational Progress (NAEP), often referred to as The Nation's Report Card, offers a window into the state of the U.S. K-12 education system. Since 2017, NAEP has transitioned to digital assessments, opening new research opportunities that were previously impossible. Process data tracks students' interactions with the…
Descriptors: Reaction Time, Multiple Choice Tests, Behavior Change, National Competency Tests
Peer reviewed
Kosh, Audra E. – Journal of Applied Testing Technology, 2021
In recent years, Automatic Item Generation (AIG) has increasingly shifted from theoretical research to operational implementation, a shift raising some unforeseen practical challenges. Specifically, generating high-quality answer choices presents several challenges such as ensuring that answer choices blend in nicely together for all possible item…
Descriptors: Test Items, Multiple Choice Tests, Decision Making, Test Construction
Peer reviewed
Foster, Colin; Woodhead, Simon; Barton, Craig; Clark-Wilson, Alison – Educational Studies in Mathematics, 2022
In this paper, we analyse a large, opportunistic dataset of responses (N = 219,826) to online, diagnostic multiple-choice mathematics questions, provided by 6- to 16-year-old UK school mathematics students (N = 7,302). For each response, students were invited to indicate on a 5-point Likert-type scale how confident they were that their response was…
Descriptors: Foreign Countries, Elementary School Students, Secondary School Students, Multiple Choice Tests
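The entry above pairs each multiple-choice response with a 5-point confidence rating. As a rough illustration of the kind of summary such data supports, the sketch below computes mean accuracy at each confidence level; the records and field layout are hypothetical, not drawn from the study's dataset.

```python
# Illustrative summary of confidence-rating data: mean accuracy at each
# confidence level (1-5). The sample records are made up for the example.
from collections import defaultdict

def accuracy_by_confidence(responses):
    """responses: iterable of (confidence 1-5, correct 0/1) pairs."""
    totals = defaultdict(lambda: [0, 0])  # level -> [n_correct, n_total]
    for confidence, correct in responses:
        totals[confidence][0] += correct
        totals[confidence][1] += 1
    return {level: n_correct / n_total
            for level, (n_correct, n_total) in sorted(totals.items())}

if __name__ == "__main__":
    sample = [(5, 1), (5, 1), (4, 1), (4, 0), (3, 0), (2, 0), (1, 0)]
    print(accuracy_by_confidence(sample))
```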
Peer reviewed
Viskotová, Lenka; Hampel, David – Mathematics Teaching Research Journal, 2022
Computer-aided assessment is an important tool that reduces the workload of teachers and increases the efficiency of their work. The multiple-choice test is considered one of the most common forms of computer-aided testing, and its application in mid-term examinations has indisputable advantages. For the purposes of a high-quality and responsible…
Descriptors: Undergraduate Students, Mathematics Tests, Computer Assisted Testing, Faculty Workload
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Alaska Department of Education & Early Development, 2021
The Performance Evaluation for Alaska's Schools (PEAKS) assessment is administered annually statewide to students in grades 3 through 9 in ELA and mathematics. It provides students the opportunity to show their understanding of "Alaska's English Language Arts (ELA) and Mathematics Standards." The assessments provide information to…
Descriptors: Student Evaluation, Elementary School Students, Secondary School Students, Summative Evaluation
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Peer reviewed
Ting, Mu Yu – EURASIA Journal of Mathematics, Science & Technology Education, 2017
Using the capabilities of expert knowledge structures, the researcher prepared test questions on the university calculus topic of "finding the area by integration." The quiz was divided into two types of multiple-choice items (one out of four and one out of many). After the calculus course was taught and tested, the results revealed that…
Descriptors: Calculus, Mathematics Instruction, College Mathematics, Multiple Choice Tests
Crabtree, Ashleigh R. – ProQuest LLC, 2016
The purpose of this research is to provide information about the psychometric properties of technology-enhanced (TE) items and the effects these items have on the content validity of an assessment. Specifically, this research investigated the impact that the inclusion of TE items has on the construct of a mathematics test, the technical properties…
Descriptors: Psychometrics, Computer Assisted Testing, Test Items, Test Format