Showing 1 to 15 of 17 results
Peer reviewed
Direct link
Christophe O. Soulage; Fabien Van Coppenolle; Fitsum Guebre-Egziabher – Advances in Physiology Education, 2024
Artificial intelligence (AI) has gained massive interest with the public release of the conversational AI "ChatGPT," but it also has become a matter of concern for academia as it can easily be misused. We performed a quantitative evaluation of the performance of ChatGPT on a medical physiology university examination. Forty-one answers…
Descriptors: Medical Students, Medical Education, Artificial Intelligence, Computer Software
Peer reviewed
Direct link
Tadesse Hagos; Dereje Andargie – Chemistry Education Research and Practice, 2024
This study examines how students' conceptual and procedural knowledge of chemical equilibrium is affected by technology-supported formative assessment (TSFA) strategies. This study's embedded/nested mixed method research design was used to achieve the study's objective. A random sampling method was used to choose the sample of two intact classes…
Descriptors: Chemistry, Formative Evaluation, Science Instruction, Scientific Concepts
Peer reviewed
Direct link
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice/multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Peer reviewed
Direct link
Scrimgeour, Meghan B.; Huang, Haigen H. – Mid-Western Educational Researcher, 2022
Given the growing trend toward using technology to assess student learning, this investigation examined test mode comparability of student achievement scores obtained from paper-and-pencil and computerized assessments of statewide End-of-Course and End-of-Grade examinations in the subject areas of high school biology and eighth-grade English Language…
Descriptors: Comparative Analysis, Test Format, Grade 8, English Instruction
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and they are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing
Wang, Lu; Steedle, Jeffrey – ACT, Inc., 2020
In recent ACT mode comparability studies, students testing on laptop or desktop computers earned slightly higher scores on average than students who tested on paper, especially on the ACT® reading and English tests (Li et al., 2017). Equating procedures adjust for such "mode effects" to make ACT scores comparable regardless of testing…
Descriptors: Test Format, Reading Tests, Language Tests, English
Peer reviewed
PDF on ERIC Download full text
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2018
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Descriptors: Evaluation Methods, Tests, Computer Assisted Testing, Scores
Peer reviewed
PDF on ERIC Download full text
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement, is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Peer reviewed
Direct link
Lin, Chen-Yu; Wang, Tzu-Hua – EURASIA Journal of Mathematics, Science & Technology Education, 2017
This research explored how different models of Web-based dynamic assessment in remedial teaching improved junior high school student learning achievement and their misconceptions about the topic of "Weather and Climate." This research adopted a quasi-experimental design. A total of 58 7th graders participated in this research.…
Descriptors: Program Implementation, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Peer reviewed
Direct link
Schuijers, Johannes A.; McDonald, Stuart J.; Julien, Brianna L.; Lexis, Louise A.; Thomas, Colleen J.; Chan, Siew; Samiric, T. – Advances in Physiology Education, 2013
Many conventional science courses contain subjects embedded with laboratory-based activities. However, research on the benefits of positioning the practicals within the theory subject or developing them distinctly from the theory is largely absent. This report compared results in a physiology theory subject among three different cohorts of…
Descriptors: Physiology, Teaching Methods, Theory Practice Relationship, Science Instruction
Lockheed, Marlaine E. – World Bank, 2015
The number of countries that regularly participate in international large-scale assessments has increased sharply over the past 15 years, with the share of countries participating in the Programme for International Student Assessment growing from one-fifth of countries in 2000 to over one-third of countries in 2015. What accounts for this…
Descriptors: Foreign Countries, Secondary School Students, Achievement Tests, International Assessment
Peer reviewed
PDF on ERIC Download full text
Dana Kelly; Holly Xie; Christine Winquist Nord; Frank Jenkins; Jessica Ying Chan; David Kastberg – National Center for Education Statistics, 2013
The Program for International Student Assessment (PISA) is a system of international assessments that allows countries to compare outcomes of learning as students near the end of compulsory schooling. PISA core assessments measure the performance of 15-year-old students in mathematics, science, and reading literacy every 3 years. Coordinated by…
Descriptors: Student Evaluation, Achievement Tests, Secondary School Students, Academic Achievement
Strader, Douglas A. – ProQuest LLC, 2012
There are many advantages supporting the use of computers as an alternate mode of delivery for high stakes testing: cost savings, increased test security, flexibility in test administrations, innovations in items, and reduced scoring time. The purpose of this study was to determine if the use of computers as the mode of delivery had any…
Descriptors: Computer Assisted Testing, Evaluation Methods, Educational Technology, Scores