Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 10
Since 2016 (last 10 years): 28
Since 2006 (last 20 years): 45
Descriptor
Difficulty Level: 93
Guessing (Tests): 93
Test Items: 70
Multiple Choice Tests: 36
Item Response Theory: 35
Item Analysis: 20
Comparative Analysis: 17
Foreign Countries: 15
Simulation: 15
Test Construction: 15
Scores: 14
Author
Lord, Frederic M.: 4
Andrich, David: 3
Marais, Ida: 3
Smith, Richard M.: 3
Wise, Steven L.: 3
Crosson, Amy C.: 2
DeMars, Christine E.: 2
Donlon, Thomas F.: 2
Guo, Hongwen: 2
Hsu, Tse-Chi: 2
McKeown, Margaret G.: 2
Education Level
Higher Education: 9
Postsecondary Education: 8
Secondary Education: 8
Elementary Education: 3
Junior High Schools: 2
Middle Schools: 2
Elementary Secondary Education: 1
Grade 3: 1
Grade 4: 1
Grade 6: 1
Grade 7: 1
Audience
Researchers: 4
Location
China: 2
Indonesia: 2
Nigeria: 2
United Kingdom: 2
Australia: 1
California: 1
Hong Kong: 1
Indiana: 1
Jordan: 1
Netherlands: 1
Thailand: 1
Laws, Policies, & Programs
Elementary and Secondary…: 1
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2025
To mitigate the potential damaging consequences of rapid guessing (RG), a form of noneffortful responding, researchers have proposed a number of scoring approaches. The present simulation study examines the robustness of the most popular of these approaches, the unidimensional effort-moderated (EM) scoring procedure, to multidimensional RG (i.e.,…
Descriptors: Scoring, Guessing (Tests), Reaction Time, Item Response Theory
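The snippet above refers to effort-moderated (EM) scoring without spelling it out, so the following is a minimal sketch of the usual idea (responses faster than an item-level time threshold are treated as if the item had not been administered), not the authors' implementation. The function name, thresholds, and data are illustrative assumptions.

```python
import numpy as np

def effort_moderated_score(responses, response_times, thresholds):
    """Illustrative effort-moderated (EM) scoring sketch.

    responses      : 0/1 array of scored item responses
    response_times : response times in seconds
    thresholds     : per-item rapid-guessing time thresholds

    Responses faster than the item's threshold are flagged as rapid guesses
    and excluded; the score is the proportion correct on the remaining items.
    """
    responses = np.asarray(responses, dtype=float)
    rt = np.asarray(response_times, dtype=float)
    thr = np.asarray(thresholds, dtype=float)

    effortful = rt >= thr              # solution behavior, not rapid guessing
    if effortful.sum() == 0:           # every response was a rapid guess
        return np.nan
    return responses[effortful].mean()

# Example: the third response (1.2 s) falls below its 3-second threshold.
score = effort_moderated_score([1, 0, 1, 1], [12.4, 8.1, 1.2, 20.0], [3, 3, 3, 3])
print(score)  # 0.666..., computed from the three effortful responses only
```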
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a common behavior among test-takers answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
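The abstract is cut off before the authors' IRT-based model is described, so the sketch below shows only a classical, model-free distractor analysis for a single MC item: option choice proportions compared between low- and high-scoring examinees. The data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical data: each row is one examinee's chosen option on an MC item
# plus that examinee's total test score.
df = pd.DataFrame({
    "option": ["A", "B", "A", "C", "D", "A", "B", "A", "C", "A"],
    "total":  [ 28,  12,  25,  10,   9,  30,  14,  27,  11,  26],
})

# Split examinees into low/high scoring halves and compare option usage:
# the keyed option should attract high scorers, distractors the low scorers.
df["group"] = pd.qcut(df["total"], 2, labels=["low", "high"])
print(pd.crosstab(df["group"], df["option"], normalize="index").round(2))
```

A useful distractor draws mainly low scorers, while the key is chosen mostly by high scorers; an option chosen by almost no one contributes little to the item.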
Wise, Steven L. – Applied Measurement in Education, 2020
In achievement testing there is typically a practical requirement that the set of items administered should be representative of some target content domain. This is accomplished by establishing test blueprints specifying the content constraints to be followed when selecting the items for a test. Sometimes, however, students give disengaged…
Descriptors: Test Items, Test Content, Achievement Tests, Guessing (Tests)
Gustafsson, Martin; Barakat, Bilal Fouad – Comparative Education Review, 2023
International assessments inform education policy debates, yet little is known about their floor effects: To what extent do they fail to differentiate between the lowest performers, and what are the implications of this? TIMSS, SACMEQ, and LLECE data are analyzed to answer this question. In TIMSS, floor effects have been reduced through the…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
Agus Santoso; Heri Retnawati; Timbul Pardede; Ibnu Rafi; Munaya Nikma Rosyada; Gulzhaina K. Kassymova; Xu Wenxin – Practical Assessment, Research & Evaluation, 2024
The test blueprint is important in test development: it guides the item writer in creating test items according to the desired objectives and specifications (so-called a priori item characteristics), such as the intended difficulty level of each item and the distribution of items across difficulty levels.…
Descriptors: Foreign Countries, Undergraduate Students, Business English, Test Construction
van den Broek, Gesa S. E.; Gerritsen, Suzanne L.; Oomen, Iris T. J.; Velthoven, Eva; van Boxtel, Femke H. J.; Kester, Liesbeth; van Gog, Tamara – Journal of Educational Psychology, 2023
Multiple-choice questions (MCQs) are popular in vocabulary software because they can be scored automatically and are compatible with many input devices (e.g., touchscreens). Answering MCQs is beneficial for learning, especially when learners retrieve knowledge from memory to evaluate plausible answer alternatives. However, such retrieval may not…
Descriptors: Multiple Choice Tests, Vocabulary Development, Test Format, Cues
Hayat, Bahrul – Cogent Education, 2022
This study has three purposes: (1) calibrating the Basic Statistics Test for Indonesian undergraduate psychology students using the Rasch model, (2) testing the impact of an adjustment for guessing on item parameters, person parameters, test reliability, and the distributions of item difficulty and person ability, and (3) comparing person scores…
Descriptors: Guessing (Tests), Statistics Education, Undergraduate Students, Psychology
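For context, the dichotomous Rasch model used for the calibration described above can be written as follows; the notation (ability \theta_n, difficulty b_i) is standard but is our choice, not taken from the article. Because the model has no guessing parameter, any adjustment for guessing necessarily changes the item and person estimates.

```latex
% Dichotomous Rasch model: probability that person n (ability \theta_n)
% answers item i (difficulty b_i) correctly.
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
```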
Saatcioglu, Fatima Munevver; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2022
This study aims to examine the effects of mixture item response theory (IRT) models on item parameter estimation and classification accuracy under different conditions. The manipulated variables of the simulation study are set as mixture IRT models (Rasch, 2PL, 3PL); sample size (600, 1000); the number of items (10, 30); the number of latent…
Descriptors: Accuracy, Classification, Item Response Theory, Programming Languages
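The mixture components manipulated in the study (Rasch, 2PL, 3PL) are nested versions of the three-parameter logistic model; the form below is the standard one and is given only as background, with c_i the pseudo-guessing lower asymptote most relevant to this topic.

```latex
% 3PL item response function with discrimination a_i, difficulty b_i, and
% pseudo-guessing c_i; c_i = 0 gives the 2PL, and a_i = 1 with c_i = 0 gives the Rasch model.
P(X_{ni} = 1 \mid \theta_n) = c_i + (1 - c_i)\,
  \frac{\exp\big(a_i(\theta_n - b_i)\big)}{1 + \exp\big(a_i(\theta_n - b_i)\big)}
```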
Sideridis, Georgios; Tsaousis, Ioannis; Al-Harbi, Khaleel – Educational and Psychological Measurement, 2022
The goal of the present study was to address the analytical complexity of incorporating responses and response times through applying the Jeon and De Boeck mixture item response theory model in Mplus 8.7. Using both simulated and real data, we attempt to identify subgroups of responders that are rapid guessers or engage knowledge retrieval…
Descriptors: Reaction Time, Guessing (Tests), Item Response Theory, Information Retrieval
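The study itself fits the Jeon and De Boeck mixture model in Mplus; the sketch below is not that model, only a much simpler illustration of the same intuition, namely that rapid guessers and effortful responders form two latent classes separated on (log) response time. The simulated data, class sizes, and the use of scikit-learn are assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated log response times: a fast "rapid guessing" class and a slower
# "solution behavior" class (values and class sizes are made up).
log_rt = np.concatenate([
    rng.normal(0.5, 0.3, size=200),   # rapid guesses (~1.6 s median)
    rng.normal(2.8, 0.5, size=800),   # effortful responses (~16 s median)
]).reshape(-1, 1)

# A two-component Gaussian mixture recovers the latent classes; the
# component with the smaller mean is interpreted as rapid guessing.
gm = GaussianMixture(n_components=2, random_state=0).fit(log_rt)
rapid_class = int(np.argmin(gm.means_.ravel()))
flags = gm.predict(log_rt) == rapid_class
print(f"flagged {flags.sum()} of {len(flags)} responses as rapid guesses")
```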
Aborisade, Olatunbosun James; Fajobi, Olutoyin Olufunke – Educational Research and Reviews, 2020
The West African Examinations Council (WAEC) and the National Examinations Council (NECO) are the two major examination bodies responsible for awarding the Senior Secondary School Certificate in Nigeria. This study examined the comparability of the psychometric properties of the items constructed by the two examination bodies using Item…
Descriptors: Foreign Countries, Mathematics Tests, Psychometrics, Test Items
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
Jia, Bing; He, Dan; Zhu, Zhemin – Problems of Education in the 21st Century, 2020
The quality of multiple-choice questions (MCQs) and students' response behavior on MCQs are educational concerns. MCQs cover broad educational content and can be scored immediately and accurately. However, many studies have found flawed items in this exam type, possibly resulting in misleading insights into students'…
Descriptors: Foreign Countries, Multiple Choice Tests, Test Items, Item Response Theory
Guo, Hongwen; Zu, Jiyun; Kyllonen, Patrick – ETS Research Report Series, 2018
For a multiple-choice test under development or redesign, it is important to choose the optimal number of options per item so that the test possesses the desired psychometric properties. On the basis of available data for a multiple-choice assessment with 8 options, we evaluated the effects of changing the number of options on test properties…
Descriptors: Multiple Choice Tests, Test Items, Simulation, Test Construction
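Part of why the number of options matters is the chance level of blind guessing; the classical correction for guessing is shown below purely as background (the report's analyses are based on its 8-option assessment data, not on this formula).

```latex
% With k options per item, a blind guess succeeds with probability 1/k.
% Under formula scoring with R correct and W incorrect responses,
S = R - \frac{W}{k - 1},
% so reducing k raises the chance level and enlarges the correction term.
```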
Wang, Xiaolin; Svetina, Dubravka; Dai, Shenghai – Journal of Experimental Education, 2019
Recently, interest in test subscore reporting for diagnosis purposes has been growing rapidly. The two simulation studies here examined factors (sample size, number of subscales, correlation between subscales, and three factors affecting subscore reliability: number of items per subscale, item parameter distribution, and data generating model)…
Descriptors: Value Added Models, Scores, Sample Size, Correlation
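The snippet does not state the authors' criterion for subscore value; a common benchmark in this literature, stated here only as background, is Haberman's proportional-reduction-in-mean-squared-error (PRMSE) comparison: a subscore is worth reporting when it predicts the true subscore better than the total score does.

```latex
% Haberman-style value-added criterion (background, not necessarily the study's):
\text{report the subscore if } \mathrm{PRMSE}_{\text{subscore}} > \mathrm{PRMSE}_{\text{total}}
```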
Andrich, David; Marais, Ida – Journal of Educational Measurement, 2018
Even though guessing biases difficulty estimates as a function of item difficulty in the dichotomous Rasch model, assessment programs with tests which include multiple-choice items often construct scales using this model. Research has shown that when all items are multiple-choice, this bias can largely be eliminated. However, many assessments have…
Descriptors: Multiple Choice Tests, Test Items, Guessing (Tests), Test Bias
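The direction of the bias mentioned above follows from a standard argument, sketched here in our own notation rather than the authors': when low-ability examinees can guess, observed success on hard items exceeds what the guessing-free Rasch curve predicts, so the difficulty estimates of those items are pulled downward.

```latex
% Sketch: with chance level c > 0, for every ability \theta_n
c + (1 - c)\,\frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
  \;>\; \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)},
% with the largest gap for low \theta_n on hard items (large b_i), so fitting
% the guessing-free Rasch model makes hard multiple-choice items appear
% easier than they actually are.
```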