Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 10 |
Since 2006 (last 20 years) | 12 |
Descriptor
Accuracy | 12 |
Multiple Choice Tests | 12 |
Test Format | 12 |
Test Items | 7 |
Item Response Theory | 6 |
Foreign Countries | 4 |
Classification | 3 |
Computer Assisted Testing | 3 |
Ability | 2 |
Comparative Analysis | 2 |
Correlation | 2 |
Author
Akbar, Maruf | 1 |
Alamri, Aeshah | 1 |
Aryadoust, Vahid | 1 |
Choi, Jiwon | 1 |
Ehrich, John | 1 |
Fadillah, Sarah Meilani | 1 |
Falani, Ilham | 1 |
Fauskanger, Janne | 1 |
Foley, Brett | 1 |
Ha, Minsu | 1 |
Higham, Philip A. | 1 |
Publication Type
Journal Articles | 11 |
Reports - Research | 10 |
Reports - Evaluative | 2 |
Speeches/Meeting Papers | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 3 |
Postsecondary Education | 3 |
Elementary Education | 2 |
Secondary Education | 2 |
Early Childhood Education | 1 |
Grade 3 | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Primary Education | 1 |
Assessments and Surveys
National Assessment Program… | 1 |
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
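Judgment accuracy in studies like this one is typically quantified by comparing each prediction or postdiction with the obtained score. The short Python sketch below illustrates two common indices (signed bias and unsigned absolute accuracy); it is a generic illustration with invented numbers, not McGuire's analysis.

import numpy as np

# Hypothetical values: one student's predicted and obtained percent-correct
# on three question formats (not data from the study).
predicted = np.array([80.0, 70.0, 60.0])
actual = np.array([72.0, 68.0, 55.0])

bias = np.mean(predicted - actual)                       # signed over/underconfidence
absolute_accuracy = np.mean(np.abs(predicted - actual))  # unsigned miscalibration
print(f"bias = {bias:.1f}, absolute accuracy = {absolute_accuracy:.1f}")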
Alamri, Aeshah; Higham, Philip A. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
Corrective feedback is often touted as a critical benefit to learning, boosting testing effects when retrieval is poor and reducing negative testing effects. Here, we explore the dark side of corrective feedback. In three experiments, we found that corrective feedback on multiple-choice (MC) practice questions is later endorsed as the answer to…
Descriptors: Feedback (Response), Multiple Choice Tests, Cues, Recall (Psychology)
Wolkowitz, Amanda A.; Foley, Brett; Zurn, Jared – Practical Assessment, Research & Evaluation, 2023
The purpose of this study is to introduce a method for converting scored 4-option multiple-choice (MC) items into scored 3-option MC items without re-pretesting the 3-option MC items. This study describes a six-step process for achieving this goal. Data from a professional credentialing exam were used in this study, and the method was applied to 24…
Descriptors: Multiple Choice Tests, Test Items, Accuracy, Test Format
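How a 4-option item behaves once a distractor is dropped depends largely on how the chance-level floor shifts. The Python sketch below is only a generic 3PL illustration of that shift with arbitrary item parameters; it is not the six-step conversion method described in the article.

import numpy as np

def p_3pl(theta, a, b, c):
    # Probability of a correct response under the three-parameter logistic model.
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
a, b = 1.2, 0.0                   # arbitrary discrimination and difficulty
p4 = p_3pl(theta, a, b, c=1/4)    # guessing floor with four options
p3 = p_3pl(theta, a, b, c=1/3)    # guessing floor with three options
for t, x, y in zip(theta, p4, p3):
    print(f"theta={t:+.1f}  P(4-option)={x:.3f}  P(3-option)={y:.3f}")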
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often improved overall test results. What prompts students to modify their answers? This study aims to examine answer modification on a scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
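The conditional standard errors of measurement mentioned here are, in IRT terms, the reciprocal square root of the test information function. The sketch below computes that quantity for dichotomous 2PL items only, with invented parameters; the article's mixed-format tests also involve free-response items and raw-score-based methods not shown here.

import numpy as np

def csem_2pl(theta, a, b):
    # IRT-based conditional SEM: 1 / sqrt(test information), 2PL items only.
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))  # P(correct) per theta and item
    info = np.sum(a**2 * p * (1.0 - p), axis=1)          # Fisher information summed over items
    return 1.0 / np.sqrt(info)

a = np.array([0.8, 1.0, 1.2, 1.5, 0.9])    # arbitrary discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # arbitrary difficulties
theta = np.linspace(-2, 2, 5)
for t, s in zip(theta, csem_2pl(theta, a, b)):
    print(f"theta={t:+.1f}  CSEM={s:.2f}")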
Falani, Ilham; Akbar, Maruf; Naga, Dali S. – International Journal of Instruction, 2020
This study compared the precision of ability estimation across different types of item response theory models for mixed-format data. Participants in this study were 1625 junior high school students in Depok, Indonesia. A mixed-format test was used to measure the students' ability in mathematics. The test consisted of multiple-choice and…
Descriptors: Foreign Countries, Junior High School Students, Ability, Item Response Theory
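Estimation precision of the kind compared in this study is often summarized by the posterior standard deviation of the ability estimate. The sketch below is a minimal expected a posteriori (EAP) illustration using dichotomous 2PL items only, with invented parameters and responses; it does not reproduce the mixed-format models evaluated in the article.

import numpy as np

def eap_theta(responses, a, b, nodes=np.linspace(-4, 4, 81)):
    # EAP ability estimate and posterior SD under a 2PL model.
    p = 1.0 / (1.0 + np.exp(-a * (nodes[:, None] - b)))           # P(correct) at each node
    like = np.prod(np.where(responses == 1, p, 1.0 - p), axis=1)  # likelihood of the pattern
    post = like * np.exp(-0.5 * nodes**2)                         # times standard-normal prior
    post /= post.sum()
    eap = np.sum(nodes * post)
    sd = np.sqrt(np.sum((nodes - eap)**2 * post))                 # posterior SD = imprecision
    return eap, sd

a = np.array([1.0, 1.3, 0.7, 1.1])    # arbitrary discriminations
b = np.array([-0.5, 0.0, 0.5, 1.0])   # arbitrary difficulties
print(eap_theta(np.array([1, 1, 0, 0]), a, b))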
Schilling, Jim F. – Athletic Training Education Journal, 2019
Context: Accurate scoring of summative assessments and discrimination of subject-matter knowledge levels are critical for fairness to learners in health care professional programs and for assuring stakeholders of competent providers. An evidence-based approach to determine examination quality for the assessment of applied knowledge is…
Descriptors: Athletics, Allied Health Occupations Education, Test Items, Questioning Techniques
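An evidence-based review of examination quality usually begins with classical item statistics. The sketch below computes item difficulty (proportion correct) and corrected point-biserial discrimination for a small invented 0/1 score matrix; it is a generic illustration, not the analysis used in the article.

import numpy as np

# Invented score matrix: rows are examinees, columns are multiple-choice items.
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

difficulty = scores.mean(axis=0)   # proportion correct per item
total = scores.sum(axis=1)
# Corrected point-biserial: correlate each item with the total score excluding that item.
discrimination = np.array([
    np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
    for j in range(scores.shape[1])
])
print("difficulty:", difficulty.round(2))
print("discrimination:", discrimination.round(2))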
Aryadoust, Vahid – Computer Assisted Language Learning, 2020
The aim of the present study is two-fold. First, it uses eye-tracking to investigate the dynamics of item reading, both in multiple choice and matching items, before and during two hearings of listening passages in a computerized while-listening performance (WLP) test. Second, it investigates answer changing during the two hearings, which include…
Descriptors: Eye Movements, Test Items, Secondary School Students, Reading Processes
Woodcock, Stuart; Howard, Steven J.; Ehrich, John – School Psychology, 2020
Standardized testing is ubiquitous in educational assessment, but questions have been raised about the extent to which these test scores accurately reflect students' genuine knowledge and skills. To more rigorously investigate this issue, the current study employed a within-subject experimental design to examine item format effects on primary…
Descriptors: Elementary School Students, Grade 3, Test Items, Test Format
Kim, Kerry J.; Meir, Eli; Pope, Denise S.; Wendel, Daniel – Journal of Educational Data Mining, 2017
Computerized classification of student answers offers the possibility of instant feedback and improved learning. Open response (OR) questions provide greater insight into student thinking and understanding than more constrained multiple choice (MC) questions, but development of automated classifiers is more difficult, often requiring training a…
Descriptors: Classification, Computer Assisted Testing, Multiple Choice Tests, Test Format
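A common baseline for the kind of automated answer classification described here is a bag-of-words text classifier. The sketch below uses scikit-learn with a few invented example answers and labels; it is not the classifier developed in the paper, and a real training set would need far more responses.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented open-response answers labeled correct (1) or incorrect (0).
answers = [
    "natural selection acts on heritable variation",
    "organisms choose to adapt to their environment",
    "variation is inherited and selected over generations",
    "animals evolve because they want to survive",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(answers, labels)
print(clf.predict(["selection acts on inherited variation"]))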
Wang, Zhen; Yao, Lihua – ETS Research Report Series, 2013
The current study used simulated data to investigate the properties of a newly proposed method (Yao's rater model) for modeling rater severity and its distribution under different conditions. Our study examined the effects of rater severity, distributions of rater severity, the difference between item response theory (IRT) models with rater effect…
Descriptors: Test Format, Test Items, Responses, Computation
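Rater severity is commonly modeled by subtracting a rater parameter inside the item response function. The simulation sketch below uses a generic Rasch-style formulation with invented parameters; it is not Yao's rater model as specified in the report, only an illustration of how severity depresses expected scores.

import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(0, 1, 200)                  # person abilities
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])      # item difficulties
r = np.array([-0.3, 0.0, 0.4])                 # rater severities (higher = harsher)

# P(correct) = logistic(theta - b_i - r_j), so severe raters lower the probability.
logits = theta[:, None, None] - b[None, :, None] - r[None, None, :]
ratings = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
print("mean score by rater:", ratings.mean(axis=(0, 1)).round(3))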
Fauskanger, Janne; Mosvold, Reidar – North American Chapter of the International Group for the Psychology of Mathematics Education, 2012
The mathematical knowledge for teaching (MKT) measures have become widely used among researchers both within and outside the U.S. Despite the apparent success, the MKT measures and underlying framework have been subject to criticism. The multiple-choice format of the items has been criticized, and some critics have suggested that opening up the…
Descriptors: Foreign Countries, Elementary School Teachers, Secondary School Teachers, Mathematics Teachers