Showing 1 to 15 of 22 results
Peer reviewed
PDF on ERIC Download full text
Achmad Rante Suparman; Eli Rohaeti; Sri Wening – Journal on Efficiency and Responsibility in Education and Science, 2024
This study focuses on developing a five-tier, computer-based chemistry diagnostic test with 11 assessment categories scored from 0 to 10. A total of 20 items were produced and validated by education, material, measurement, and media experts, and an average Aiken validity index > 0.70 was…
Descriptors: Chemistry, Diagnostic Tests, Computer Assisted Testing, Credits
Peer reviewed
Direct link
Christophe O. Soulage; Fabien Van Coppenolle; Fitsum Guebre-Egziabher – Advances in Physiology Education, 2024
Artificial intelligence (AI) has attracted massive interest since the public release of the conversational AI "ChatGPT," but it has also become a matter of concern for academia because it can easily be misused. We performed a quantitative evaluation of the performance of ChatGPT on a medical physiology university examination. Forty-one answers…
Descriptors: Medical Students, Medical Education, Artificial Intelligence, Computer Software
Crisp, Victoria; Shaw, Stuart – Research Matters, 2020
For assessment contexts where both a paper-based test and an on-screen assessment are available as alternatives, it is still common for the paper-based test to be prepared first with questions later transferred into an on-screen testing platform. One challenge with this is that some questions cannot be transferred. One solution might be for…
Descriptors: Computer Assisted Testing, Test Items, Test Construction, Mathematics Tests
Peer reviewed
Direct link
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed, known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Peer reviewed
PDF on ERIC Download full text
Supriyati, Yetti; Iriyadi, Deni; Falani, Ilham – Journal of Technology and Science Education, 2021
This study aims to develop a score-equating application for computer-based school exams using parallel test kits with 25% anchor items. The items are arranged according to HOTS (Higher Order Thinking Skills) categories and use a scientific approach suited to the characteristics of physics lessons. Therefore, the questions were made using stimulus,…
Descriptors: Physics, Science Instruction, Teaching Methods, Equated Scores
Durán, Richard P.; Zhang, Ting; Sañosa, David; Stancavage, Fran – American Institutes for Research, 2020
The National Assessment of Educational Progress's (NAEP's) transition to an entirely digitally based assessment (DBA) began in 2017. As part of this transition, new types of NAEP items have begun to be developed that leverage the DBA environment to measure a wider range of knowledge and skills. These new item types include the science…
Descriptors: National Competency Tests, Computer Assisted Testing, Science Tests, Test Items
Nebraska Department of Education, 2020
The Spring 2020 Nebraska Student-Centered Assessment System (NSCAS) General Summative testing was cancelled due to COVID-19. This technical report documents the processes and procedures that had been implemented to support the Spring 2020 assessments prior to the cancellation. The following sections are presented in this technical report: (1)…
Descriptors: English, Language Arts, Mathematics Tests, Science Tests
Nebraska Department of Education, 2019
This technical report documents the processes and procedures implemented to support the Spring 2019 Nebraska Student-Centered Assessment System (NSCAS) General Summative English Language Arts (ELA), Mathematics, and Science assessments by NWEA® under the supervision of the Nebraska Department of Education (NDE). The technical report shows how the…
Descriptors: English, Language Arts, Summative Evaluation, Mathematics Tests
Peer reviewed
PDF on ERIC Download full text
Sya'bandari, Yustika; Firman, Harry; Rusyat, Lilit – Journal of Science Learning, 2017
An efficient way to improve the quality of critical-thinking education is to develop better tests, and testing has increasingly shifted toward computer-based procedures. The lack of topic-specific critical-thinking tests in science, together with advances in technology, led the researchers to develop and validate a test to measure…
Descriptors: Grade 7, Critical Thinking, Scientific Concepts, Science Instruction
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement, is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Peer reviewed
Direct link
Scherer, Ronny; Meßinger-Koppelt, Jenny; Tiemann, Rüdiger – International Journal of STEM Education, 2014
Background: Complex problem-solving competence is regarded as a key construct in science education. But because interactive and non-transparent assessment procedures are required, appropriate measures of the construct are rare. This paper consequently presents the development and validation of a computer-based problem-solving environment,…
Descriptors: Computer Assisted Testing, Problem Solving, Chemistry, Science Tests
Peer reviewed
Direct link
Aldabe, Itziar; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2014
The work we present in this paper aims to help teachers create multiple-choice science tests. We focus on a scientific vocabulary-learning scenario taking place in a Basque-language educational environment. In this particular scenario, we explore the option of automatically generating Multiple-Choice Questions (MCQ) by means of Natural Language…
Descriptors: Science Tests, Test Construction, Computer Assisted Testing, Multiple Choice Tests
Peer reviewed
PDF on ERIC Download full text
Schultz, Madeleine – Journal of Learning Design, 2011
This paper reports on the development of a tool that generates randomised, non-multiple choice assessment within the BlackBoard Learning Management System interface. An accepted weakness of multiple-choice assessment is that it cannot elicit learning outcomes from upper levels of Biggs' SOLO taxonomy. However, written assessment items require…
Descriptors: Foreign Countries, Feedback (Response), Student Evaluation, Large Group Instruction
Mislevy, Robert J. – Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2004
In this paper we provide a rationale and approach for articulating a conceptual framework, and corresponding development resources, to guide the design of science inquiry assessments. Important here is attention to how and why research on cognition and learning, advances in technological capability, and the development of sophisticated methods and…
Descriptors: Science, Test Construction, Student Evaluation, Science Tests