Showing 1 to 15 of 99 results
Thompson, Kathryn N. – ProQuest LLC, 2023
It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct-irrelevant variance, or the "degree to which test scores are…
Descriptors: Test Wiseness, Test Items, Item Response Theory, Scores
Peer reviewed | PDF on ERIC
Thayaamol Upapong; Apantee Poonputta – Educational Process: International Journal, 2025
Background/purpose: The purposes of this research are to develop a reliable and valid assessment tool for measuring systems thinking skills in upper primary students in Thailand and to establish a normative criterion for evaluating their systems thinking abilities based on educational standards. Materials/methods: The study followed a three-phase…
Descriptors: Thinking Skills, Elementary School Students, Measures (Individuals), Foreign Countries
Peer reviewed | PDF on ERIC
Kevser Arslan; Asli Görgülü Ari – Shanlax International Journal of Education, 2024
This study aimed to develop a valid and reliable multiple-choice achievement test for the subject area of ecology. The study was conducted within the framework of exploratory sequential design based on mixed research methods, and the study group consisted of a total of 250 middle school students studying at the sixth and seventh grade level. In…
Descriptors: Ecology, Science Tests, Test Construction, Multiple Choice Tests
Peer reviewed | Direct link
DeCarlo, Lawrence T. – Journal of Educational Measurement, 2023
A conceptualization of multiple-choice exams in terms of signal detection theory (SDT) leads to simple measures of item difficulty and item discrimination that are closely related to, but also distinct from, those used in classical item analysis (CIA). The theory defines a "true split," depending on whether or not examinees know an item,…
Descriptors: Multiple Choice Tests, Test Items, Item Analysis, Test Wiseness
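DeCarlo's own estimators are not reproduced in the abstract above, so the following is only a rough sketch of the signal-detection idea it describes: treating a correct answer as a "hit" for examinees who know the item and a "false alarm" for those who do not. Because the true knower/non-knower split is unobserved, the sketch substitutes an upper/lower split on rest scores; that proxy, and the function name `item_stats`, are assumptions for illustration.

```python
# A minimal sketch (not DeCarlo's estimator): classical item statistics next to
# signal-detection-style ones, with an upper/lower split on rest scores standing
# in for the unobserved split between knowers and non-knowers of the item.
import numpy as np
from scipy.stats import norm

def item_stats(responses: np.ndarray, item: int) -> dict:
    """responses: examinees x items matrix of 0/1 scored answers."""
    rest = responses.sum(axis=1) - responses[:, item]    # total score excluding the item
    upper = responses[rest >= np.median(rest), item]     # proxy "knowers"
    lower = responses[rest < np.median(rest), item]      # proxy "non-knowers"

    p = responses[:, item].mean()                        # classical difficulty (proportion correct)
    disc = upper.mean() - lower.mean()                   # classical upper-lower discrimination

    # SDT view: a correct answer is a "hit" for knowers, a "false alarm" for non-knowers.
    eps = 0.5 / max(len(upper), len(lower))              # keep rates away from 0 and 1
    hit = np.clip(upper.mean(), eps, 1 - eps)
    fa = np.clip(lower.mean(), eps, 1 - eps)
    d_prime = norm.ppf(hit) - norm.ppf(fa)               # SDT-style discrimination
    criterion = -0.5 * (norm.ppf(hit) + norm.ppf(fa))    # SDT-style difficulty (criterion location)
    return {"p": p, "upper_lower_disc": disc, "d_prime": d_prime, "criterion": criterion}
```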
Alicia A. Stoltenberg – ProQuest LLC, 2024
Multiple-select multiple-choice items, or multiple-choice items with more than one correct answer, are used to quickly assess content on standardized assessments. Because there are multiple keys to these item types, there are also multiple ways to score student responses to these items. The purpose of this study was to investigate how changing the…
Descriptors: Scoring, Evaluation Methods, Multiple Choice Tests, Standardized Tests
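The dissertation abstract above notes that multiple-select items can be scored in several ways. As a hedged illustration only (these are generic rules, not necessarily the ones compared in the study), the sketch below contrasts all-or-nothing scoring with per-option partial credit.

```python
# Two generic rules for scoring a multiple-select item: all-or-nothing versus
# per-option partial credit. Illustrative only.
def score_all_or_nothing(selected: set, key: set) -> int:
    """1 point only if the selected options exactly match the key."""
    return int(selected == key)

def score_partial_credit(selected: set, key: set, options: set) -> float:
    """Fraction of options classified correctly
    (selected when keyed, left unselected when not keyed)."""
    correct = sum((opt in selected) == (opt in key) for opt in options)
    return correct / len(options)

options = {"A", "B", "C", "D", "E"}
key = {"A", "C"}
response = {"A", "C", "D"}
print(score_all_or_nothing(response, key))           # 0
print(score_partial_credit(response, key, options))  # 0.8
```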
Peer reviewed | PDF on ERIC
Büsra Kilinç; Mehmet Diyaddin Yasar – Science Insights Education Frontiers, 2024
This study aimed to develop an achievement test addressing the subject acquisitions of the sound and its properties unit in the sixth-grade science course. In the test development phase, a literature review was first conducted. Then, 30 multiple-choice questions aligned with the subject acquisitions in the 2018…
Descriptors: Science Tests, Test Construction, Grade 6, Science Instruction
Peer reviewed | Direct link
Emery-Wetherell, Meaghan; Wang, Ruoyao – Assessment & Evaluation in Higher Education, 2023
Over four semesters of a large introductory statistics course, the authors found students engaging in contract cheating on Chegg.com during multiple-choice examinations. In this paper we describe our methodology for identifying, addressing, and eventually eliminating cheating. We successfully identified 23 out of 25 students using a combination…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Cheating, Identification
Peer reviewed | Direct link
Ugray, Zsolt; Dunn, Brian K. – Journal of Information Systems Education, 2022
As Information Systems courses have become more data-focused and student numbers have increased, the need to assess technical and analytical skills efficiently and effectively has grown. Multiple-choice examinations provide a means of accomplishing this, though creating effective multiple-choice assessment items within a…
Descriptors: Quality Assurance, Information Systems, Computer Science Education, Student Evaluation
Peer reviewed | Direct link
Musa Adekunle Ayanwale – Discover Education, 2023
Examination scores obtained by students from the West African Examinations Council (WAEC) and the National Business and Technical Examinations Board (NABTEB) may not be directly comparable due to differences in examination administration, item characteristics of the subject in question, and student abilities. For more accurate comparisons, scores…
Descriptors: Equated Scores, Mathematics Tests, Test Items, Test Format
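The abstract above concerns placing WAEC and NABTEB mathematics scores on a comparable scale. The study's actual equating design and method are not given in the truncated abstract, so the sketch below shows only one generic approach, mean-sigma linear equating under a random-groups design, with purely illustrative score samples.

```python
# A sketch of mean-sigma linear equating under a random-groups design; the
# score samples are hypothetical and the study's actual method may differ.
import numpy as np

def linear_equate(x_scores: np.ndarray, y_scores: np.ndarray):
    """Return a function mapping form-X scores onto the form-Y scale."""
    slope = y_scores.std(ddof=1) / x_scores.std(ddof=1)
    intercept = y_scores.mean() - slope * x_scores.mean()
    return lambda x: slope * x + intercept

rng = np.random.default_rng(0)
nabteb_math = rng.normal(52, 11, size=500)   # hypothetical NABTEB score sample
waec_math = rng.normal(48, 9, size=500)      # hypothetical WAEC score sample

to_waec_scale = linear_equate(nabteb_math, waec_math)
print(round(to_waec_scale(60.0), 1))         # a NABTEB score of 60 expressed on the WAEC scale
```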
Peer reviewed | PDF on ERIC
Dina Kamber Hamzic; Mirsad Trumic; Ismar Hadžalic – International Electronic Journal of Mathematics Education, 2025
Trigonometry is an important part of secondary school mathematics, but it is usually challenging for students to understand and learn. Since trigonometry is learned and used at the university level in many fields, such as physics or geodesy, it is important to have insight into students' trigonometry knowledge before the beginning of the university…
Descriptors: Trigonometry, Mathematics Instruction, Prior Learning, Outcomes of Education
Peer reviewed | Direct link
Slepkov, A. D.; Van Bussel, M. L.; Fitze, K. M.; Burr, W. S. – SAGE Open, 2021
There is a broad literature in multiple-choice test development, both in terms of item-writing guidelines, and psychometric functionality as a measurement tool. However, most of the published literature concerns multiple-choice testing in the context of expert-designed high-stakes standardized assessments, with little attention being paid to the…
Descriptors: Foreign Countries, Undergraduate Students, Student Evaluation, Multiple Choice Tests
Peer reviewed | PDF on ERIC
Subali, Bambang; Kumaidi; Aminah, Nonoh Siti – International Journal of Instruction, 2021
This research aims to compare the item characteristics of instruments for assessing elementary students' level of mastery of the scientific method, as analyzed using Classical Test Theory (CTT) and Item Response Theory (IRT). The two analyses are usually done separately and for different objects; in this study they were analyzed…
Descriptors: Test Items, Item Response Theory, Item Analysis, Comparative Analysis
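Because the abstract above compares CTT and IRT analyses of the same instrument without specifying the estimation details, the sketch below is only a generic illustration of the two frameworks: classical difficulty and point-biserial discrimination alongside a rough Rasch difficulty from a PROX-style normal approximation. The simulated data and the approximation are assumptions, not the study's procedure.

```python
# A generic illustration (not the study's procedure): CTT item indices next to a
# rough Rasch (IRT) difficulty via a PROX-style normal approximation, here
# ignoring the usual variance-expansion factor.
import numpy as np

def ctt_item_stats(X: np.ndarray):
    """X: examinees x items matrix of 0/1 responses."""
    p = X.mean(axis=0)                                   # CTT difficulty (proportion correct)
    rest = X.sum(axis=1, keepdims=True) - X              # rest scores (item excluded)
    r_pb = np.array([np.corrcoef(X[:, j], rest[:, j])[0, 1]
                     for j in range(X.shape[1])])        # CTT discrimination (point-biserial)
    return p, r_pb

def rasch_prox_difficulty(X: np.ndarray):
    """Rough Rasch item difficulties on the logit scale, centered at zero."""
    p = np.clip(X.mean(axis=0), 1e-3, 1 - 1e-3)
    b = np.log((1 - p) / p)
    return b - b.mean()

rng = np.random.default_rng(1)
theta = rng.normal(size=(200, 1))                        # simulated person abilities
b_true = np.linspace(-1.5, 1.5, 10)                      # simulated item difficulties
X = (rng.random((200, 10)) < 1 / (1 + np.exp(-(theta - b_true)))).astype(int)

print(ctt_item_stats(X))
print(rasch_prox_difficulty(X))
```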
Peer reviewed | PDF on ERIC
Forthmann, Boris; Förster, Natalie; Schütze, Birgit; Hebbecker, Karin; Flessner, Janis; Peters, Martin T.; Souvignier, Elmar – Journal of Intelligence, 2020
Distractors might display discriminatory power with respect to the construct of interest (e.g., intelligence), as was shown in recent applications of nested logit models to the short form of Raven's progressive matrices and other reasoning tests. In this vein, a simulation study was carried out to examine two effect size measures (i.e., a…
Descriptors: Test Items, Item Analysis, Multiple Choice Tests, Intelligence Tests
Peer reviewed | PDF on ERIC
Mahroof, Ameema; Saeed, Muhammad – Bulletin of Education and Research, 2021
This small-scale study analyzes the question papers of the Board of Intermediate and Secondary Education in the subject of computer science with reference to item analysis and Bloom's taxonomy. Data were collected from 100 students of Grades 9 and 10 from schools in Lahore city using a convenience sampling technique. Data collected on the…
Descriptors: Foreign Countries, Secondary Education, Computer Science Education, Item Analysis
Peer reviewed | Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
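Zhang and VanLehn's algorithm is only partially described in the truncated abstract, so the sketch below is not their method; it illustrates one generic way to draw distractors from a semantic network, namely taking concepts that lie within a few hops of the correct answer in a toy concept graph. The graph, the breadth-first heuristic, and the function name are all assumptions for illustration.

```python
# A toy sketch, not the authors' algorithm: draw distractor candidates from a
# small semantic network by taking concepts within a few hops of the correct
# answer (related enough to be plausible, but not the answer itself).
from collections import deque

# Illustrative concept graph (an assumption, not data from the paper).
semantic_net = {
    "photosynthesis": ["chloroplast", "respiration", "glucose"],
    "chloroplast": ["photosynthesis", "mitochondrion"],
    "respiration": ["photosynthesis", "mitochondrion", "glucose"],
    "mitochondrion": ["chloroplast", "respiration"],
    "glucose": ["photosynthesis", "respiration"],
}

def nearby_distractors(graph: dict, answer: str, k: int = 3, max_depth: int = 2) -> list:
    """Breadth-first search: up to k concepts within max_depth hops of the answer."""
    seen, queue, found = {answer}, deque([(answer, 0)]), []
    while queue and len(found) < k:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                found.append(neighbour)
                queue.append((neighbour, depth + 1))
    return found[:k]

print(nearby_distractors(semantic_net, "photosynthesis"))
# e.g. ['chloroplast', 'respiration', 'glucose']
```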