Publication Date
In 2025 | 1
Since 2024 | 12
Since 2021 (last 5 years) | 66
Since 2016 (last 10 years) | 163
Since 2006 (last 20 years) | 287
Descriptor
Guessing (Tests) | 690
Multiple Choice Tests | 282
Test Items | 186
Item Response Theory | 118
Test Reliability | 116
Scores | 101
Scoring Formulas | 100
Test Validity | 97
Foreign Countries | 95
Difficulty Level | 93
Response Style (Tests) | 93
Audience
Researchers | 22
Practitioners | 11
Teachers | 6
Students | 2
Administrators | 1
Parents | 1
Location
Australia | 8
United Kingdom | 6
Canada | 5
China | 5
Germany | 5
Florida | 4
Malaysia | 4
Nigeria | 4
Turkey | 4
California | 3
Cyprus | 3
Laws, Policies, & Programs
Elementary and Secondary… | 2
Wang, Chun; Xu, Gongjun; Shang, Zhuoran; Kuncel, Nathan – Journal of Educational and Behavioral Statistics, 2018
Modern web-based technology has greatly popularized computer-administered testing, also known as online testing. When these online tests are administered continuously within a certain "testing window," many items are likely to be exposed and compromised, posing a test security concern. In addition, if the testing time is limited,…
Descriptors: Computer Assisted Testing, Cheating, Guessing (Tests), Item Response Theory
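Exposure in a continuous testing window is usually quantified as the share of the examinee pool that saw each item. A minimal sketch, assuming an illustrative delivery-log format and the 0.2 ceiling sometimes cited in the exposure-control literature (neither detail is from this paper):

```python
from collections import Counter

def exposure_rates(administrations, n_examinees):
    """Fraction of the examinee pool that saw each item.

    administrations: iterable of (examinee_id, item_id) pairs from a
    testing window's delivery log (format assumed for illustration).
    """
    # De-duplicate so each examinee counts once per item.
    seen = Counter(item for _, item in set(administrations))
    return {item: count / n_examinees for item, count in seen.items()}

# Items whose exposure exceeds a ceiling (0.2 is a common rule of
# thumb) become candidates for rotation out of the operational pool.
log = [(1, "A"), (1, "B"), (2, "A"), (3, "A"), (3, "C")]
rates = exposure_rates(log, n_examinees=3)   # {"A": 1.0, "B": 0.33, "C": 0.33}
flagged = [item for item, r in rates.items() if r > 0.2]
```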
A Sequential Bayesian Changepoint Detection Procedure for Aberrant Behaviors in Computerized Testing
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
In statistical inference, changepoints are abrupt variations in a sequence of data. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
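As a toy illustration of Bayesian changepoint detection on response times (a single changepoint with known segment means and a flat prior, far simpler than the authors' sequential algorithm):

```python
import math

def changepoint_posterior(x, mu_pre, mu_post, sigma=1.0):
    """Posterior over a single changepoint location in sequence x,
    assuming Gaussian noise, known pre/post means, and a flat prior.

    tau = k means x[:k] ~ N(mu_pre, sigma^2) and x[k:] ~ N(mu_post, sigma^2);
    tau = len(x) corresponds to "no change".
    """
    n = len(x)

    def loglik(k):
        pre = sum(-(xi - mu_pre) ** 2 for xi in x[:k])
        post = sum(-(xi - mu_post) ** 2 for xi in x[k:])
        return (pre + post) / (2 * sigma ** 2)

    logs = [loglik(k) for k in range(n + 1)]
    m = max(logs)
    weights = [math.exp(lv - m) for lv in logs]  # stabilize before normalizing
    z = sum(weights)
    return [w / z for w in weights]

# Log response times: engaged responding (~2.0), then rapid guessing (~0.5).
rts = [2.1, 1.9, 2.2, 2.0, 0.6, 0.4, 0.5]
posterior = changepoint_posterior(rts, mu_pre=2.0, mu_post=0.5, sigma=0.3)
tau_hat = max(range(len(posterior)), key=posterior.__getitem__)  # MAP: 4
```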
Sahin, Füsun; Colvin, Kimberly F. – Large-scale Assessments in Education, 2020
The item responses of examinees who rapid-guess, spending too little time reading and engaging with an item, will not reflect their true ability on that item. Rapid disengagement refers to rapidly selecting a response to multiple-choice items (i.e., rapid guessing), omitting items, or providing short, unrelated answers to open-ended items in an…
Descriptors: Guessing (Tests), Item Response Theory, Reaction Time, Learner Engagement
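A common operationalization is response-time effort: the share of a test taker's items answered at or above each item's rapid-guess threshold. A sketch in the style of Wise and Kong's index; the cutoffs and the 0.90 screening rule below are conventions from that literature, not details of this paper:

```python
def response_time_effort(rt_by_item, thresholds):
    """Response-time effort (RTE): the share of items a test taker
    answered at or above the item's rapid-guess time threshold
    (thresholds assumed given, e.g. from a normative method)."""
    solution = sum(1 for item, rt in rt_by_item.items()
                   if rt >= thresholds[item])
    return solution / len(rt_by_item)

rts = {"q1": 14.2, "q2": 1.1, "q3": 9.8}   # seconds spent per item
cuts = {"q1": 3.0, "q2": 3.0, "q3": 2.5}   # per-item rapid-guess cutoffs
rte = response_time_effort(rts, cuts)      # 2/3: q2 was a rapid guess
# Records with RTE below ~0.90 are often filtered before scoring.
```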
Cipriani, Giam Pietro – Journal of Economic Education, 2018
A considerable literature in economics and psychology observes substantial gender differences in risk aversion, confidence, and responses to high pressure. In the educational measurement literature, it has been argued that these differences could disadvantage female students when taking multiple-choice tests, especially if there is a penalty for…
Descriptors: Gender Differences, Guessing (Tests), Academic Failure, Multiple Choice Tests
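The penalty in question is classic formula scoring: with k options, R rights, and W wrongs, the score S = R − W/(k−1) makes blind guessing worth zero in expectation:

```latex
\mathbb{E}[\Delta S \mid \text{blind guess}]
  = \frac{1}{k}\cdot 1 \;-\; \Big(1-\frac{1}{k}\Big)\frac{1}{k-1} \;=\; 0 .
```

A risk-neutral examinee should therefore guess rather than omit; more risk-averse examinees leave items blank and sacrifice expected score, which is the mechanism behind the hypothesized disadvantage.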
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
Jia, Bing; He, Dan; Zhu, Zhemin – Problems of Education in the 21st Century, 2020
The quality of multiple-choice questions (MCQs), as well as students' solving behavior on MCQs, is an educational concern. MCQs cover a wide range of educational content and can be scored immediately and accurately. However, many studies have found flawed items in this exam format, which can yield misleading insights into students'…
Descriptors: Foreign Countries, Multiple Choice Tests, Test Items, Item Response Theory
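The standard IRT treatment of guessing on MCQs is the three-parameter logistic (3PL) model, whose lower asymptote c_j is the probability that an examinee of very low ability still answers item j correctly:

```latex
P_j(\theta) \;=\; c_j + (1 - c_j)\,
  \frac{\exp\!\big(a_j(\theta - b_j)\big)}{1 + \exp\!\big(a_j(\theta - b_j)\big)}
```

Flawed items tend to surface in calibration as unusually high c_j or low discrimination a_j.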
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
Evidence is mounting in support of the guidance to employ more three-option multiple-choice items. Theoretical analyses, empirical results, and practical considerations all indicate that such items are of equal or higher quality than four- or five-option items, and that more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
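The arithmetic behind the tradeoff: fewer options raise the per-item blind-guessing rate 1/k, but the reading time saved lets more items be administered, which tends to help content coverage and reliability more than the higher chance rate hurts:

```latex
\Pr(\text{correct} \mid \text{blind guess}) = \tfrac{1}{k}:
\qquad k = 5 \Rightarrow 0.20,\quad k = 4 \Rightarrow 0.25,\quad k = 3 \Rightarrow 0.33 .
```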
Cavik, Ebru Ezberci; Kurnaz, Mehmet Altan – Universal Journal of Educational Research, 2019
This study aims to obtain information about modelling situations related to the topic by performing a concentration analysis of teacher candidates' responses to the Force Concept Inventory (FCI). The research was conducted using a survey model, a quantitative research method. The study was carried out in the fall semester of the academic year…
Descriptors: Foreign Countries, Science Teachers, Preservice Teachers, Knowledge Level
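In the FCI literature, concentration analysis usually means the Bao-Redish concentration factor, which maps an item's response distribution to [0, 1]: 0 when answers are spread uniformly across options, 1 when everyone picks the same option. A sketch, assuming that is the measure used here:

```python
import math

def concentration_factor(counts):
    """Bao-Redish concentration factor for one multiple-choice item.

    counts: number of examinees choosing each of the m options.
    Returns 0 for a uniform spread of answers (random-guessing-like)
    and 1 when all examinees concentrate on a single option.
    """
    m, n = len(counts), sum(counts)
    root = math.sqrt(sum(c * c for c in counts)) / n
    return (math.sqrt(m) / (math.sqrt(m) - 1)) * (root - 1 / math.sqrt(m))

concentration_factor([30, 0, 0, 0, 0])   # 1.0: fully concentrated
concentration_factor([6, 6, 6, 6, 6])    # 0.0: uniform response pattern
```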
Guo, Hongwen; Zu, Jiyun; Kyllonen, Patrick – ETS Research Report Series, 2018
For a multiple-choice test under development or redesign, it is important to choose the optimal number of options per item so that the test possesses the desired psychometric properties. On the basis of available data for a multiple-choice assessment with 8 options, we evaluated the effects of changing the number of options on test properties…
Descriptors: Multiple Choice Tests, Test Items, Simulation, Test Construction
Drabinová, Adéla; Martinková, Patrícia – Journal of Educational Measurement, 2017
In this article we present a general approach, not relying on item response theory models (non-IRT), to detect differential item functioning (DIF) in dichotomous items in the presence of guessing. The proposed nonlinear regression (NLR) procedure for DIF detection is an extension of a method based on logistic regression. As a non-IRT approach, NLR can…
Descriptors: Test Items, Regression (Statistics), Guessing (Tests), Identification
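The NLR model extends the logistic regression used for DIF with a lower asymptote c for guessing; roughly, with X a standardized total-score ability proxy and G a group indicator, where nonzero beta_2 or beta_3 signals uniform or nonuniform DIF:

```latex
P(Y = 1 \mid X, G) \;=\; c + (1 - c)\,
  \frac{e^{\beta_0 + \beta_1 X + \beta_2 G + \beta_3 X G}}
       {1 + e^{\beta_0 + \beta_1 X + \beta_2 G + \beta_3 X G}}
```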
Childs, Ruth A.; Elgie, Susan; Brijmohan, Amanda; Yang, Jinli – Canadian Journal of Education, 2019
Fifth-grade students watched a short video and then responded to multiple-choice items, including several without correct answers. Based on computer-supported stimulated recall and semi-structured interviews, we tested three common assumptions about what students are thinking when they respond to multiple-choice items in spite of being uncertain…
Descriptors: Elementary School Students, Grade 5, Student Attitudes, Ambiguity (Context)
Joseph, Dane Christian – Journal of Effective Teaching in Higher Education, 2019
Multiple-choice testing is a staple within the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers utilize a variety of validated and heuristic-driven item-writing guidelines. One such guideline that has been given recent attention is to randomize the position of…
Descriptors: Test Construction, Multiple Choice Tests, Guessing (Tests), Test Wiseness
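A minimal sketch of the guideline under study, randomizing the keyed option's position so test-wise examinees cannot exploit positional patterns (the item, names, and format are illustrative):

```python
import random

def randomize_options(stem, options, key_index, rng=random):
    """Shuffle answer options and return the key's new position."""
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    new_key = order.index(key_index)  # where the keyed option landed
    return stem, shuffled, new_key

item = randomize_options(
    "2 + 2 = ?", ["3", "4", "5", "6"], key_index=1,
    rng=random.Random(7),  # seeded for a reproducible test form
)
```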
Wang, Xiaolin; Svetina, Dubravka; Dai, Shenghai – Journal of Experimental Education, 2019
Recently, interest in test subscore reporting for diagnosis purposes has been growing rapidly. The two simulation studies here examined factors (sample size, number of subscales, correlation between subscales, and three factors affecting subscore reliability: number of items per subscale, item parameter distribution, and data generating model)…
Descriptors: Value Added Models, Scores, Sample Size, Correlation
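The value-added criterion behind such subscore studies is usually Haberman's: report a subscore only if the observed subscore predicts the true subscore better than the total score does, as measured by the proportional reduction in mean squared error (PRMSE). Assuming that framework:

```latex
\text{report subscore } s \quad\Longleftrightarrow\quad
\operatorname{PRMSE}(s_{\text{obs}}) \;>\; \operatorname{PRMSE}(x_{\text{total}}) ,
```

where the left-hand side equals the subscore's reliability.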
Kuhfeld, Megan; Soland, James – Journal of Research on Educational Effectiveness, 2020
Educational stakeholders have long known that students might not be fully engaged when taking an achievement test and that such disengagement could undermine the inferences drawn from observed scores. Thanks to the growing prevalence of computer-based tests and the new forms of metadata they produce, researchers have developed and validated…
Descriptors: Metadata, Computer Assisted Testing, Achievement Tests, Reaction Time
Michaelides, Michalis P.; Ivanova, Militsa; Nicolaou, Christiana – International Journal of Testing, 2020
The study examined the relationship between examinees' test-taking effort and their accuracy rate on items from the PISA 2015 assessment. The 10% normative threshold method was applied to Science multiple-choice items in the Cyprus sample to detect rapid guessing behavior. Results showed that the extent of rapid guessing across simple and complex…
Descriptors: Accuracy, Multiple Choice Tests, International Assessment, Achievement Tests
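The 10% normative threshold (NT10) sets each item's rapid-guess cutoff at 10% of the item's average response time, commonly capped at 10 seconds in Wise and Ma's variant. A sketch; the cap and the data layout are conventions and assumptions, not details from this study:

```python
def normative_thresholds(rt_matrix, fraction=0.10, cap=10.0):
    """NT10-style rapid-guess cutoffs: `fraction` of each item's mean
    response time across examinees, capped at `cap` seconds.

    rt_matrix: list of per-examinee lists of response times (seconds).
    """
    n_items = len(rt_matrix[0])
    thresholds = []
    for j in range(n_items):
        mean_rt = sum(row[j] for row in rt_matrix) / len(rt_matrix)
        thresholds.append(min(fraction * mean_rt, cap))
    return thresholds

rts = [[42.0, 8.0, 120.0], [38.0, 6.0, 95.0], [55.0, 7.5, 130.0]]
thr = normative_thresholds(rts)   # [4.5, ~0.72, 10.0 (capped)]
rapid = [[rt < t for rt, t in zip(row, thr)] for row in rts]
```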