Showing 1 to 15 of 55 results
Peer reviewed
PDF on ERIC Download full text
Zhang, Mengxue; Heffernan, Neil; Lan, Andrew – International Educational Data Mining Society, 2023
Automated scoring of student responses to open-ended questions, including short-answer questions, has great potential to scale to a large number of responses. Recent approaches for automated scoring rely on supervised learning, i.e., training classifiers or fine-tuning language models on a small number of responses with human-provided score…
Descriptors: Scoring, Computer Assisted Testing, Mathematics Instruction, Mathematics Tests
Peer reviewed
Direct link
Katrin Klingbeil; Fabian Rösken; Bärbel Barzel; Florian Schacht; Kaye Stacey; Vicki Steinle; Daniel Thurm – ZDM: Mathematics Education, 2024
Assessing students' (mis)conceptions is a challenging task for teachers as well as for researchers. While individual assessment, for example through interviews, can provide deep insights into students' thinking, this is very time-consuming and therefore not feasible for whole classes or even larger settings. For those settings, automatically…
Descriptors: Multiple Choice Tests, Formative Evaluation, Mathematics Tests, Misconceptions
Peer reviewed
PDF on ERIC Download full text
Yasuyuki Nakamura – International Association for Development of the Information Society, 2023
STACK is an online testing system that can automatically assess mathematical formulae. When working with STACK on a smartphone, inputting mathematical formulae is time-consuming; therefore, to solve this problem, a mathematical formula input interface for smartphones has been developed based on the flick operation. However, since the time of…
Descriptors: Usability, Mathematics Instruction, Mathematical Formulas, Telecommunications
Peer reviewed
Direct link
Jeff Ford; Rachel Erickson; Ha Le; Kaylee Vick; Jillian Downey – PRIMUS, 2024
In this study, we analyzed student participation and success in a college-level Calculus I course that utilized standards-based grading. By measuring the level to which students participate in this class structure, we were able to use a clustering algorithm that revealed multiple groupings of students that were distinct based on activity…
Descriptors: Calculus, Mathematics Instruction, Mathematics Achievement, Grades (Scholastic)
Marini, Jessica P.; Westrick, Paul A.; Young, Linda; Shaw, Emily J. – College Board, 2022
This study examines relationships between digital SAT scores and other relevant educational measures, such as high school grade point average (HSGPA), PSAT/NMSQT Total score, and Average AP Exam score, and compares those relationships to current paper and pencil SAT score relationships with the same measures. This information can provide…
Descriptors: Scores, College Entrance Examinations, Comparative Analysis, Test Format
Peer reviewed
PDF on ERIC Download full text
Thompson, Virginia L.; Wallach, Patrick – International Journal of Education in Mathematics, Science and Technology, 2023
This paper presents a case study conducted by two universities seeking to explore Open Educational Resources (OER) in their precalculus course. Students gained free access not only to their textbook on the first day of class but also to Lumen OHM, an online mathematics assessment platform. The majority of students involved in the study were…
Descriptors: Case Studies, Open Educational Resources, Calculus, Textbooks
Peer reviewed
Direct link
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Peer reviewed
Direct link
Wenyangzi Shi; Zohreh Shahbazi – Canadian Journal of Science, Mathematics and Technology Education, 2024
The COVID-19 pandemic brought a rapid transition to online quantitative education, leading to unique challenges and opportunities in assessments for both students and instructors. Focusing on undergraduate students and teaching staff at a Canadian university, this study investigates and compares their experiences and perspectives regarding…
Descriptors: Statistics Education, Online Courses, COVID-19, Pandemics
Rogers, Angela – Mathematics Education Research Group of Australasia, 2021
Test developers are continually exploring the possibilities Computer Based Assessment (CBA) offers the Mathematics domain. This paper describes the trial of the Place Value Assessment Tool (PVAT) and its online equivalent, the PVAT-O. Both tests were administered using a counterbalanced research design to 253 Year 3-6 students across nine classes…
Descriptors: Mathematics Tests, Computer Assisted Testing, Number Concepts, Elementary School Students
Peer reviewed
Direct link
Scrimgeour, Meghan B.; Huang, Haigen H. – Mid-Western Educational Researcher, 2022
Given the growing trend toward using technology to assess student learning, this investigation examined test mode comparability of student achievement scores obtained from paper-pencil and computerized assessments of statewide End-of-Course and End-of-Grade examinations in the subject areas of high school biology and eighth-grade English Language…
Descriptors: Comparative Analysis, Test Format, Grade 8, English Instruction
Peer reviewed
PDF on ERIC Download full text
Viskotová, Lenka; Hampel, David – Mathematics Teaching Research Journal, 2022
Computer-aided assessment is an important tool that reduces the workload of teachers and increases the efficiency of their work. The multiple-choice test is considered to be one of the most common forms of computer-aided testing, and its application for mid-term exams has indisputable advantages. For the purposes of a high-quality and responsible…
Descriptors: Undergraduate Students, Mathematics Tests, Computer Assisted Testing, Faculty Workload
Peer reviewed
Direct link
Aspiranti, Kathleen B.; Ebner, Sara; Reynolds, Jennifer L.; Henze, Erin E. C. – Journal of Education for Students Placed at Risk, 2022
There is a lack of research examining the use of curriculum-based measurements (CBMs) with special populations, particularly English Language Learners (ELLs). The current study used an alternating treatments single-case design with five Latinx ELL students to examine scores across three math fluency CBM modalities. One-minute probes using either…
Descriptors: Comparative Analysis, Mathematics Instruction, Curriculum Based Assessment, English Language Learners
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Peer reviewed
Direct link
Costa, Denise Reis; Chen, Chia-Wen – Large-scale Assessments in Education, 2023
Given the ongoing development of computer-based tasks, there has been increasing interest in modelling students' behaviour indicators from log file data with contextual variables collected via questionnaires. In this work, we apply a latent regression model to analyse the relationship between latent constructs (i.e., performance, speed, and…
Descriptors: Achievement Tests, Secondary School Students, International Assessment, Foreign Countries
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English