Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 6
  Since 2016 (last 10 years): 18
  Since 2006 (last 20 years): 32
Descriptor
  Gender Differences: 33
  Test Format: 33
  Test Items: 33
  Foreign Countries: 17
  Multiple Choice Tests: 17
  Mathematics Tests: 10
  Scores: 9
  College Entrance Examinations: 7
  Computer Assisted Testing: 7
  Difficulty Level: 7
  Item Analysis: 7
Publication Type
  Journal Articles: 28
  Reports - Research: 26
  Reports - Evaluative: 6
  Numerical/Quantitative Data: 2
  Opinion Papers: 1
  Speeches/Meeting Papers: 1
Location
  Turkey: 4
  Germany: 3
  Australia: 2
  South Korea: 2
  United States: 2
  Europe: 1
  Malaysia: 1
  Maryland: 1
  Norway: 1
  Spain: 1
  Switzerland: 1
Laws, Policies, & Programs
  Individuals with Disabilities Education Act: 1
  No Child Left Behind Act 2001: 1
Assessments and Surveys
  Program for International Student Assessment (PISA): 3
  National Assessment of Educational Progress: 2
  ACT Assessment: 1
  Graduate Management Admission Test: 1
  Peabody Picture Vocabulary Test: 1
  SAT (College Admission Test): 1
  Trends in International Mathematics and Science Study: 1
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to long-unresolved questions about remote learning. Web-based assessment software provides instructors with an array of options for varying testing parameters, but the pedagogical impacts of some of these variations have yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Opstad, Leiv – Athens Journal of Education, 2021
The discussion of whether multiple-choice questions can replace traditional exams with essays and constructed-response questions in introductory courses has only just begun in Norway. There is no easy answer: the findings depend on the pattern of the questions, so conclusions must be drawn with care. This research explores a…
Descriptors: Multiple Choice Tests, Essay Tests, Introductory Courses, Foreign Countries
Kurnaz-Adibatmaz, Fatma Betül; Yildiz, Hüseyin – Journal of Theoretical Educational Science, 2020
In this study, logistic regression and Lord's chi-square methods were used to identify items exhibiting differential item functioning (DIF). The study utilized the Peabody Picture Vocabulary Test (PPVT). The original form of the PPVT includes four options. Three different forms (A, B, and C) were created by removing one of the distractors in turn. The original form of the PPVT was…
Descriptors: Item Analysis, Test Items, Vocabulary, Verbal Ability
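For readers unfamiliar with the logistic-regression DIF procedure mentioned in the entry above, the following is a minimal sketch in Python (not the authors' code; the simulated data and variable names are illustrative). The standard approach regresses item correctness on a matching score, group membership, and their interaction; a significant group term indicates uniform DIF, and a significant interaction indicates non-uniform DIF.

```python
# Sketch of logistic-regression DIF screening (Swaminathan-Rogers style).
# Hypothetical data: 'correct' = 0/1 item response, 'total' = matching
# ability score, 'group' = 0 (reference) or 1 (focal). Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
total = rng.normal(0.0, 1.0, n)          # matching variable (e.g., rest score)
group = rng.integers(0, 2, n)            # reference vs. focal group
logit_p = 0.8 * total + 0.5 * group      # the 0.5 term injects uniform DIF
correct = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))
df = pd.DataFrame({"correct": correct, "total": total, "group": group})

# Compact model (matching score only) vs. augmented model (adds group and
# the group-by-score interaction); a likelihood-ratio test with 2 df flags
# DIF of either type.
m0 = smf.logit("correct ~ total", data=df).fit(disp=0)
m1 = smf.logit("correct ~ total + group + total:group", data=df).fit(disp=0)
lr_chi2 = 2 * (m1.llf - m0.llf)
print(f"LR chi-square (2 df) = {lr_chi2:.2f}")
```

In practice each item on the test is screened this way against the same matching score, typically with a multiple-comparison correction across items.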
Shear, Benjamin R. – Journal of Educational Measurement, 2023
Large-scale standardized tests are regularly used to measure student achievement overall and for student subgroups. These uses assume tests provide comparable measures of outcomes across student subgroups, but prior research suggests score comparisons across gender groups may be complicated by the type of test items used. This paper presents…
Descriptors: Gender Bias, Item Analysis, Test Items, Achievement Tests
Boote, Stacy K.; Boote, David N.; Williamson, Steven – Cogent Education, 2021
Differences in test performance across paper-based and computer-based assessments, suggested by several decades of research, have been largely ameliorated through attention to test presentation equivalence, though no studies to date have focused on graph comprehension items. Test items requiring graph comprehension are increasingly common but may be…
Descriptors: Graduate Students, Masters Programs, Business Administration Education, Graphs
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study proposes the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats in which an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporates a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
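As a rough illustration of the kind of model the entry above describes (our notation and parameterization, not necessarily the authors'), a two-parameter response function augmented with a random block effect can be written as

```latex
P(y_{ijb} = 1 \mid \theta_i, \gamma_{ib})
  = \frac{\exp\{a_j(\theta_i + \gamma_{ib} - b_j)\}}
         {1 + \exp\{a_j(\theta_i + \gamma_{ib} - b_j)\}},
\qquad \gamma_{ib} \sim N(0, \sigma_b^2),
```

where \theta_i is person i's trait level, a_j and b_j are item parameters, and \gamma_{ib} is a person-specific effect of block b. Because every item in a block shares the same \gamma_{ib}, responses within a block remain correlated even after conditioning on \theta_i, which is exactly the local dependence a random block effect is meant to absorb.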
Karadag, Nejdet; Boz Yuksekdag, Belgin; Akyildiz, Murat; Ibileme, Ali Ihsan – Turkish Online Journal of Distance Education, 2021
The aim of this study is to determine students' opinions about the open-ended question exam practice used during the 2018-2019 academic year in the following programs of the Anadolu University Open Education System: Economy, Hospitality Management, Philosophy, History, Sociology, and Turkish Language and Literature. The study was designed as a quantitative study…
Descriptors: Test Format, Test Items, Response Style (Tests), Scoring
Ackermann, Nicole; Siegfried, Christin – Citizenship, Social and Economics Education, 2019
Studies indicate that male students outperform female students in economic literacy and that a specific item format (selected-response, constructed-response) favours either males or females. This study analyses the relationship between item format and gender in economic-civic competence using the WBK-T2 test ("revidierter Test zur…
Descriptors: Foreign Countries, Test Format, Economics, Knowledge Level
Akhavan Masoumi, Ghazal; Sadeghi, Karim – Language Testing in Asia, 2020
This study aimed to examine the effect of test format on test performance by comparing Multiple Choice (MC) and Constructed Response (CR) vocabulary tests in an EFL setting. The paper also investigated the role of gender in MC and CR vocabulary measures. To this end, five 20-item stem-equivalent vocabulary tests (CR, and 3-, 4-, 5-, and…
Descriptors: Language Tests, Test Items, English (Second Language), Second Language Learning
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Zehner, Fabian; Goldhammer, Frank; Lubaway, Emily; Sälzer, Christine – Education Inquiry, 2019
In 2015, the "Programme for International Student Assessment" (PISA) introduced multiple changes to its study design, the most extensive being the transition from paper- to computer-based assessment. We investigated the differences between German students' text responses to eight reading items in the paper-based 2012 study and text…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Polat, Murat – Novitas-ROYAL (Research on Youth and Language), 2020
Classroom practices, materials, and teaching methods in language classes have changed considerably in recent decades and continue to evolve; however, the techniques commonly used to test students' foreign language skills have changed little despite the recent attention to Bloom's taxonomy. Testing units at schools rely mostly on multiple choice…
Descriptors: Multiple Choice Tests, Test Format, Test Items, Difficulty Level
Reardon, Sean F.; Kalogrides, Demetra; Fahle, Erin M.; Podolsky, Anne; Zárate, Rosalía C. – Educational Researcher, 2018
Prior research suggests that males outperform females, on average, on multiple-choice items compared to their relative performance on constructed-response items. This paper characterizes the extent to which gender achievement gaps on state accountability tests across the United States are associated with those tests' item formats. Using roughly 8…
Descriptors: Test Items, Test Format, Gender Differences, Achievement Gap
Nagy, Gabriel; Nagengast, Benjamin; Frey, Andreas; Becker, Michael; Rose, Norman – Assessment in Education: Principles, Policy & Practice, 2019
Position effects (PEs) cause decreasing probabilities of correct item responses towards the end of a test. We analysed PEs in science, mathematics, and reading tests administered in the German extension to the PISA 2006 study with respect to their variability at the student and school levels. PEs were strongest in reading and weakest in mathematics.…
Descriptors: Achievement Tests, Foreign Countries, Secondary School Students, International Assessment
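As a hedged sketch of how a position effect can be formalized (our parameterization, not necessarily the one used by Nagy et al.), a Rasch model with a linear position term might look like

```latex
\operatorname{logit} P(y_{pik} = 1) = \theta_p - b_i - \delta_p (k - 1),
```

where k is the position at which item i is administered to person p and \delta_p \ge 0 captures how steeply person p's success probability declines towards the end of the test; letting \delta_p vary randomly across students, with its mean varying across schools, corresponds to the student- and school-level variability the abstract describes.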