Showing 1 to 15 of 20 results
Peer reviewed
Thayaamol Upapong; Apantee Poonputta – Educational Process: International Journal, 2025
Background/purpose: The purposes of this research are to develop a reliable and valid assessment tool for measuring systems thinking skills in upper primary students in Thailand and to establish a normative criterion for evaluating their systems thinking abilities based on educational standards. Materials/methods: The study followed a three-phase…
Descriptors: Thinking Skills, Elementary School Students, Measures (Individuals), Foreign Countries
Alicia A. Stoltenberg – ProQuest LLC, 2024
Multiple-select multiple-choice items, or multiple-choice items with more than one correct answer, are used to quickly assess content on standardized assessments. Because there are multiple keys to these item types, there are also multiple ways to score student responses to these items. The purpose of this study was to investigate how changing the…
Descriptors: Scoring, Evaluation Methods, Multiple Choice Tests, Standardized Tests
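To illustrate the kind of scoring-rule comparison this study investigates, here is a minimal sketch of two common ways to score a multiple-select multiple-choice item (all option names and responses below are invented for illustration, not taken from the study):

```python
# Hypothetical multiple-select item: several options, more than one keyed correct.

def score_all_or_nothing(key: set, response: set) -> float:
    """Full credit only when the selected set matches the key exactly."""
    return 1.0 if response == key else 0.0

def score_partial_credit(key: set, options: set, response: set) -> float:
    """One point per option classified correctly (selected iff keyed),
    rescaled to the 0-1 interval."""
    correct = sum(1 for opt in options if (opt in key) == (opt in response))
    return correct / len(options)

key = {"A", "C"}                  # keyed (correct) options
options = {"A", "B", "C", "D"}    # all options presented
response = {"A", "B", "C"}        # examinee selected A, B, and C

print(score_all_or_nothing(key, response))                 # 0.0
print(score_partial_credit(key, options, response))        # 0.75
```

The same response earns zero under one rule and most of the credit under the other, which is why the choice of scoring rule can change what such items appear to measure.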
Peer reviewed
Smith, Trevor I.; Bendjilali, Nasrine – Physical Review Physics Education Research, 2022
Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings only utilize half of the parameters estimated by the model.…
Descriptors: Item Response Theory, Test Items, Multiple Choice Tests, Science Tests
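For readers unfamiliar with the model this study builds on, Bock's nominal response model gives each response category k of an item a slope a_k and intercept c_k, with category probability P_k(θ) ∝ exp(a_k·θ + c_k). A minimal sketch (parameter values invented for illustration):

```python
import math

def nrm_probabilities(theta, slopes, intercepts):
    """Category response probabilities under Bock's nominal response model:
    P_k(theta) is proportional to exp(a_k * theta + c_k)."""
    logits = [a * theta + c for a, c in zip(slopes, intercepts)]
    m = max(logits)                        # subtract max for numerical stability
    expvals = [math.exp(v - m) for v in logits]
    total = sum(expvals)
    return [v / total for v in expvals]

# Illustrative parameters for a four-option item (values invented):
slopes = [1.2, 0.3, -0.5, -1.0]        # a_k: one slope per response category
intercepts = [0.5, 0.2, 0.1, -0.8]     # c_k: one intercept per category

probs = nrm_probabilities(theta=1.0, slopes=slopes, intercepts=intercepts)
print([round(p, 3) for p in probs])    # probabilities sum to 1
```

Note that each category carries both a slope and an intercept; the study's point is that rankings based on only one of the two parameter sets use half the information the model estimates.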
Peer reviewed
Krall, Geoff – Canadian Journal of Science, Mathematics and Technology Education, 2023
In order to identify the potential benefits and challenges of implementing student portfolios as quality mathematics assessment, a pilot study was conducted with teachers in various secondary school settings. The multi-case study consisted of five teacher participants from geographically and demographically differing contexts, four in the USA and…
Descriptors: Portfolio Assessment, Mathematics Instruction, Evaluation Methods, Pilot Projects
Peer reviewed
Smith, Mark; Breakstone, Joel; Wineburg, Sam – Cognition and Instruction, 2019
This article reports a validity study of History Assessments of Thinking (HATs), which are short, constructed-response assessments of historical thinking. In particular, this study focuses on aspects of cognitive validity, which is an examination of whether assessments tap the intended constructs. Think-aloud interviews with 26 high school…
Descriptors: History, History Instruction, Thinking Skills, Multiple Choice Tests
Peer reviewed
Malec, Wojciech; Krzeminska-Adamek, Malgorzata – Practical Assessment, Research & Evaluation, 2020
The main objective of the article is to compare several methods of evaluating multiple-choice options through classical item analysis. The methods subjected to examination include the tabulation of choice distribution, the interpretation of trace lines, the point-biserial correlation, the categorical analysis of trace lines, and the investigation…
Descriptors: Comparative Analysis, Evaluation Methods, Multiple Choice Tests, Item Analysis
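One of the methods this article examines, the point-biserial correlation, is simply the Pearson correlation between a 0/1 indicator (did the examinee choose a given option?) and the total test score. A minimal sketch with invented response data:

```python
def point_biserial(choices, scores):
    """Pearson correlation between a 0/1 option-choice indicator and
    examinees' total test scores (the point-biserial coefficient)."""
    n = len(choices)
    mean_c = sum(choices) / n
    mean_s = sum(scores) / n
    cov = sum((c - mean_c) * (s - mean_s) for c, s in zip(choices, scores)) / n
    sd_c = (sum((c - mean_c) ** 2 for c in choices) / n) ** 0.5
    sd_s = (sum((s - mean_s) ** 2 for s in scores) / n) ** 0.5
    return cov / (sd_c * sd_s)

# Invented data: 1 = examinee chose option B, paired with that examinee's total.
chose_b = [1, 1, 0, 0, 1, 0, 0, 1]
totals  = [35, 32, 20, 18, 30, 15, 22, 28]
print(round(point_biserial(chose_b, totals), 3))
```

A positive value suggests the option attracts high scorers (desirable for a key, undesirable for a distractor), which is why the coefficient is routinely computed per option rather than per item.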
Peer reviewed
Towns, Marcy H. – Journal of Chemical Education, 2014
Chemistry faculty members are highly skilled in obtaining, analyzing, and interpreting physical measurements, but often they are less skilled in measuring student learning. This work provides guidance for chemistry faculty from the research literature on multiple-choice item development in chemistry. Areas covered include content, stem, and…
Descriptors: Multiple Choice Tests, Test Construction, Psychometrics, Test Items
Peer reviewed
Kachchaf, Rachel; Noble, Tracy; Rosebery, Ann; Wang, Yang; Warren, Beth; O'Connor, Mary Catherine – Grantee Submission, 2014
Most research on linguistic features of test items negatively impacting English language learners' (ELLs') performance has focused on lexical and syntactic features, rather than discourse features that operate at the level of the whole item. This mixed-methods study identified two discourse features in 162 multiple-choice items on a standardized…
Descriptors: English Language Learners, Science Tests, Test Items, Discourse Analysis
Peer reviewed
Barniol, Pablo; Zavala, Genaro – Physical Review Special Topics - Physics Education Research, 2014
In this article we discuss the findings of our research on students' understanding of vector concepts in problems without physical context. First, we develop a complete taxonomy of the most frequent errors made by university students when learning vector concepts. This study is based on the results of several test administrations of open-ended…
Descriptors: Multiple Choice Tests, Geometric Concepts, Algebra, Psychometrics
Peer reviewed
Akarsu, Bayram – European Journal of Physics Education, 2012
Physics educators around the world often need reliable diagnostic materials to measure students' understanding of physics concepts in high school. The purpose of this study is to evaluate a new diagnostic tool on high school optics concepts. The Test of Conceptual Understanding on High School Optics (TOCUSO) consists of 25 conceptual items that measure…
Descriptors: High Schools, Secondary School Science, Optics, Concept Teaching
Kaliski, Pamela; Huff, Kristen; Barry, Carol – College Board, 2011
For educational achievement tests that employ multiple-choice (MC) items and aim to reliably classify students into performance categories, it is critical to design MC items that are capable of discriminating student performance according to the stated achievement levels. This is accomplished, in part, by clearly understanding how item design…
Descriptors: Alignment (Education), Academic Achievement, Expertise, Evaluative Thinking
Bietau, Lisa Artman – ProQuest LLC, 2011
A foundational mission of our public schools is dedicated to preserving a democratic republic dependent on a literate and actively engaged citizenry. Civic literacy is essential to supporting the rights and responsibilities of all citizens in a democratic society. Civic knowledge is the foundation of our citizens' civic literacy. National…
Descriptors: National Standards, Test Items, Feedback (Response), Citizenship
Peer reviewed
Hammann, Marcus; Phan, Thi Thanh Hoi; Ehmer, Maike; Grimm, Tobias – Journal of Biological Education, 2008
This study is concerned with different forms of assessment of pupils' skills in experimentation. The findings of three studies are reported. Study 1 investigates whether it is possible to develop reliable multiple-choice tests for the skills of forming hypotheses, designing experiments and analysing experimental data. Study 2 compares scores from…
Descriptors: Multiple Choice Tests, Experiments, Science Process Skills, Skill Analysis
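The reliability question raised in Study 1 is typically examined with an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch, with an invented persons-by-items score matrix standing in for real test data:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha from a persons x items matrix of item scores."""
    n_items = len(item_scores[0])
    totals = [sum(person) for person in item_scores]

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([person[i] for person in item_scores])
                 for i in range(n_items)]
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / variance(totals))

# Invented 0/1 responses: 5 examinees x 4 multiple-choice items
data = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(data), 3))
```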
Peer reviewed
Ingram, Ella L.; Nelson, Craig E. – American Biology Teacher, 2006
Multiple choice questions are a common method of assessing student understanding. In this article, the authors discuss and evaluate a student-focused collaborative learning strategy for working with such questions that results in greater student learning and allows instructors to better understand student thinking and ultimately to write better…
Descriptors: Multiple Choice Tests, Misconceptions, Cooperative Learning, Teaching Methods
Puhan, Gautam; Boughton, Keith; Kim, Sooyeon – Journal of Technology, Learning, and Assessment, 2007
The study evaluated the comparability of two versions of a certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). An effect size measure known as Cohen's d and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that the effect…
Descriptors: Computer Assisted Testing, Effect Size, Test Bias, Mathematics Tests
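The test-level comparability measure used in this study, Cohen's d, is the standardized mean difference between the two administration modes. A minimal sketch (the score values below are invented, not from the study):

```python
def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Invented total scores for the two administration modes:
ppt_scores = [78, 82, 75, 80, 77]    # paper-and-pencil test
cbt_scores = [76, 81, 74, 79, 78]    # computer-based test
print(round(cohens_d(ppt_scores, cbt_scores), 3))
```

Values near zero indicate the two modes yield essentially equivalent score distributions at the test level; DIF analyses then probe comparability item by item.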