Showing 1 to 15 of 97 results
Thompson, Kathryn N. – ProQuest LLC, 2023
It is imperative to collect validity evidence prior to interpreting and using test scores. During the process of collecting validity evidence, test developers should consider whether test scores are contaminated by sources of extraneous information. This is referred to as construct-irrelevant variance, or the "degree to which test scores are…
Descriptors: Test Wiseness, Test Items, Item Response Theory, Scores
Peer reviewed
PDF on ERIC Download full text
Mehmet Kanik – International Journal of Assessment Tools in Education, 2024
ChatGPT has generated a surge of interest, prompting people to explore its use in a range of tasks. However, before allowing it to replace humans, its capabilities should be investigated. As ChatGPT has potential for use in testing and assessment, this study aims to investigate the questions generated by ChatGPT by comparing them to those written by a course…
Descriptors: Artificial Intelligence, Testing, Multiple Choice Tests, Test Construction
Peer reviewed
Direct link
Julie Dickson; Darren J. Shaw; Andrew Gardiner; Susan Rhind – Anatomical Sciences Education, 2024
Limited research has been conducted on the spatial ability of veterinary students and how this is evaluated within anatomy assessments. This study describes the creation and evaluation of a split design multiple-choice question (MCQ) assessment (totaling 30 questions divided into 15 non-spatial MCQs and 15 spatial MCQs). Two cohorts were tested,…
Descriptors: Anatomy, Spatial Ability, Multiple Choice Tests, Factor Analysis
Peer reviewed
PDF on ERIC Download full text
Büsra Kilinç; Mehmet Diyaddin Yasar – Science Insights Education Frontiers, 2024
This study aimed to develop an achievement test taking into account the subject acquisitions of the sound and properties unit in the sixth-grade science course. In the test development phase, a literature review was first conducted. Then, 30 multiple-choice questions aligned with the subject acquisitions in the 2018…
Descriptors: Science Tests, Test Construction, Grade 6, Science Instruction
Peer reviewed
PDF on ERIC Download full text
Güntay Tasçi – Science Insights Education Frontiers, 2024
The present study has aimed to develop and validate a protein concept inventory (PCI) consisting of 25 multiple-choice (MC) questions to assess students' understanding of protein, which is a fundamental concept across different biology disciplines. The development process of the PCI involved a literature review to identify protein-related content,…
Descriptors: Science Instruction, Science Tests, Multiple Choice Tests, Biology
Peer reviewed
PDF on ERIC Download full text
Dina Kamber Hamzic; Mirsad Trumic; Ismar Hadžalic – International Electronic Journal of Mathematics Education, 2025
Trigonometry is an important part of secondary school mathematics, but it is usually challenging for students to understand and learn. Since trigonometry is learned and used at a university level in many fields, like physics or geodesy, it is important to have an insight into students' trigonometry knowledge before the beginning of the university…
Descriptors: Trigonometry, Mathematics Instruction, Prior Learning, Outcomes of Education
Peer reviewed
Direct link
Slepkov, A. D.; Van Bussel, M. L.; Fitze, K. M.; Burr, W. S. – SAGE Open, 2021
There is a broad literature on multiple-choice test development, both in terms of item-writing guidelines and of psychometric functionality as a measurement tool. However, most of the published literature concerns multiple-choice testing in the context of expert-designed high-stakes standardized assessments, with little attention being paid to the…
Descriptors: Foreign Countries, Undergraduate Students, Student Evaluation, Multiple Choice Tests
Peer reviewed
Direct link
Sheng, Yanyan – Measurement: Interdisciplinary Research and Perspectives, 2019
The classical approach to test theory has been the foundation of educational and psychological measurement for over 90 years. This approach is concerned with measurement error and hence test reliability, which in part relies on individual test items. The CTT package, developed in light of this, provides functions for test- and item-level analyses of…
Descriptors: Item Response Theory, Test Reliability, Item Analysis, Error of Measurement
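The CTT package described in the record above is an R toolkit, so the following is not its API. It is a minimal Python sketch, under assumed toy data, of the kind of test- and item-level statistics that classical-test-theory tools of this sort report: item difficulty (proportion correct), corrected item-total discrimination, and Cronbach's alpha for reliability. The function name and the response matrix are illustrative, not taken from the cited work.

```python
import numpy as np

def item_analysis(scores: np.ndarray):
    """Classical item statistics for a 0/1-scored response matrix
    (rows = examinees, columns = items)."""
    n_items = scores.shape[1]
    total = scores.sum(axis=1)

    # Item difficulty: proportion of examinees answering each item correctly.
    difficulty = scores.mean(axis=0)

    # Corrected item-total discrimination: correlate each item with the
    # total score computed without that item.
    discrimination = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(n_items)
    ])

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    alpha = n_items / (n_items - 1) * (
        1 - scores.var(axis=0, ddof=1).sum() / total.var(ddof=1)
    )
    return difficulty, discrimination, alpha

# Toy data: 6 examinees x 4 items (illustrative only).
resp = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])
p, r_it, alpha = item_analysis(resp)
print("difficulty:", p)
print("discrimination:", r_it.round(2))
print("alpha:", round(alpha, 2))
```

Under classical test theory, items with very extreme difficulty values or with low or negative item-total correlations are the usual candidates for review.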
Peer reviewed
PDF on ERIC Download full text
Rafi, Ibnu; Retnawati, Heri; Apino, Ezi; Hadiana, Deni; Lydiati, Ida; Rosyada, Munaya Nikma – Pedagogical Research, 2023
This study describes the characteristics of the test and its items used in the national-standardized school examination by applying classical test theory and focusing on item difficulty, item discrimination, test reliability, and distractor analysis. We analyzed response data from 191 12th graders at one of the public senior high schools in…
Descriptors: Foreign Countries, National Competency Tests, Standardized Tests, Mathematics Tests
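The item difficulty, discrimination, reliability, and distractor analysis named in the record above are standard classical-test-theory procedures. As a hedged illustration, the sketch below compares how often upper- and lower-scoring groups choose each answer option for each item; the answer key, option matrix, 27% group split, and function name are assumptions made for the example, not details from the cited study.

```python
import numpy as np

def distractor_analysis(responses, key, top_frac=0.27):
    """For each item, the share of the upper- and lower-scoring groups
    choosing each answer option (classical upper/lower split)."""
    responses = np.asarray(responses)          # (examinees, items) of letters
    scored = (responses == np.asarray(key)).astype(int)
    total = scored.sum(axis=1)

    k = max(1, int(round(top_frac * len(total))))
    order = np.argsort(total)
    lower, upper = order[:k], order[-k:]       # lowest / highest scorers

    report = {}
    for j in range(responses.shape[1]):
        report[j] = {
            opt: (float((responses[upper, j] == opt).mean()),   # upper share
                  float((responses[lower, j] == opt).mean()))   # lower share
            for opt in np.unique(responses[:, j])
        }
    return report  # item index -> option -> (upper share, lower share)

# Toy data: 6 examinees x 3 items with options A-D (illustrative only).
resp = [["A", "B", "C"],
        ["A", "B", "D"],
        ["A", "C", "C"],
        ["B", "B", "C"],
        ["C", "A", "D"],
        ["D", "C", "A"]]
key = ["A", "B", "C"]
for item, option_shares in distractor_analysis(resp, key).items():
    print("item", item + 1, option_shares)
```

A functioning distractor is usually chosen more often by the lower-scoring group than by the upper-scoring group; an option that attracts almost nobody, or that mainly attracts high scorers, is typically flagged for revision.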
Peer reviewed
PDF on ERIC Download full text
Cheewasukthaworn, Kanchana – PASAA: Journal of Language Teaching and Learning in Thailand, 2022
In 2016, the Office of the Higher Education Commission issued a directive requiring all higher education institutions in Thailand to have their students take a standardized English proficiency test. According to the directive, the test's results had to align with the Common European Framework of Reference for Languages (CEFR). In response to this…
Descriptors: Test Construction, Standardized Tests, Language Tests, English (Second Language)
Peer reviewed
PDF on ERIC Download full text
Omarov, Nazarbek Bakytbekovich; Mohammed, Aisha; Alghurabi, Ammar Muhi Khleel; Alallo, Hajir Mahmood Ibrahim; Ali, Yusra Mohammed; Hassan, Aalaa Yaseen; Demeuova, Lyazat; Viktorovna, Shvedova Irina; Nazym, Bekenova; Al Khateeb, Nashaat Sultan Afif – International Journal of Language Testing, 2023
The Multiple-choice (MC) item format is commonly used in educational assessments due to its economy and effectiveness across a variety of content domains. However, numerous studies have examined the quality of MC items in high-stakes and higher-education assessments and found many flawed items, especially in terms of distractors. These faulty…
Descriptors: Test Items, Multiple Choice Tests, Item Response Theory, English (Second Language)
Peer reviewed
Direct link
Carli, Marta; Lippiello, Stefania; Pantano, Ornella; Perona, Mario; Tormen, Giuseppe – Physical Review Physics Education Research, 2020
In this article, we discuss the development and the administration of a multiple-choice test, which we named "Test of Calculus and Vectors in Mathematics and Physics" (TCV-MP), aimed at comparing students' ability to answer questions on derivatives, integrals, and vectors in a purely mathematical context and in the context of physics.…
Descriptors: Mathematics Tests, Science Tests, Multiple Choice Tests, Calculus
Peer reviewed
PDF on ERIC Download full text
Wiyasa, Putu Irmayanti; Laksana, I Ketut Darma; Indrawati, Ni Luh Ketut Mas – Education Quarterly Reviews, 2019
Teacher-developed tests, in the form of multiple-choice questions, have been widely used to measure the students' final learning. Content validity and item analysis are used to evaluate the quality of the tests developed by teachers. The study focused on content validity, analyzing the content of the test and the basic competencies which…
Descriptors: Vocational High Schools, High School Teachers, Teacher Made Tests, Educational Quality
Peer reviewed
Direct link
Papenberg, Martin; Musch, Jochen – Applied Measurement in Education, 2017
In multiple-choice tests, the quality of distractors may be more important than their number. We therefore examined the joint influence of distractor quality and quantity on test functioning by providing a sample of 5,793 participants with five parallel test sets consisting of items that differed in the number and quality of distractors.…
Descriptors: Multiple Choice Tests, Test Items, Test Validity, Test Reliability
Perkins, Kyle; Frank, Eva – Online Submission, 2018
This paper presents item-analysis data to illustrate how to identify a set of internally consistent test items that differentiate or discriminate among examinees who are highly proficient and nonproficient on the construct of interest. Suggestions for analyzing the quality of test items are offered as well as a pedagogical approach to augment the…
Descriptors: Item Analysis, Test Items, Test Reliability, Kinetics
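One standard statistic for judging whether an item separates highly proficient from nonproficient examinees, as discussed in the record above, is the upper-lower discrimination index D. The sketch below is an illustrative Python version under assumed inputs (toy 0/1-scored data and a conventional 27% split); it is not necessarily the procedure used in the cited paper.

```python
import numpy as np

def discrimination_index(scored, top_frac=0.27):
    """Upper-lower discrimination index for 0/1-scored items:
    D = p(correct | upper group) - p(correct | lower group)."""
    scored = np.asarray(scored)
    total = scored.sum(axis=1)
    k = max(1, int(round(top_frac * len(total))))
    order = np.argsort(total)
    lower, upper = order[:k], order[-k:]       # lowest / highest scorers
    return scored[upper].mean(axis=0) - scored[lower].mean(axis=0)

# Toy 0/1 response matrix: 6 examinees x 4 items (illustrative only).
scored = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
])
D = discrimination_index(scored)
print({f"item{j + 1}": round(float(d), 2) for j, d in enumerate(D)})
```

A common rule of thumb treats D of roughly 0.3 or higher as acceptable discrimination and flags lower or negative values for revision or removal.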