Showing 1 to 15 of 30 results
Peer reviewed
Lang, Joseph B. – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with the statistical detection of copying on multiple-choice exams. As an alternative to existing permutation- and model-based copy-detection approaches, a simple randomization p-value (RP) test is proposed. The RP test, which is based on an intuitive match-score statistic, makes no assumptions about the distribution of…
Descriptors: Identification, Cheating, Multiple Choice Tests, Item Response Theory
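The truncated abstract above names a match-score statistic and a randomization p-value. A minimal sketch of that general idea (illustrative only; Lang's exact reference distribution is not given in the snippet, so pairing the alleged source with every other examinee is an assumption here):

```python
import numpy as np

def match_score(a: np.ndarray, b: np.ndarray) -> int:
    """Number of items on which two response strings agree."""
    return int(np.sum(a == b))

def randomization_p_value(responses: np.ndarray, copier: int, source: int) -> float:
    """Compare the suspected pair's match score with the match scores the
    source produces against every other examinee (hypothetical null set)."""
    observed = match_score(responses[copier], responses[source])
    null_scores = [
        match_score(responses[j], responses[source])
        for j in range(len(responses))
        if j not in (copier, source)
    ]
    # Upper-tail p-value: how often an innocent pairing matches at least as well.
    return (1 + sum(s >= observed for s in null_scores)) / (1 + len(null_scores))

# Toy data: 6 examinees, 10 items, options coded 0-3.
rng = np.random.default_rng(0)
R = rng.integers(0, 4, size=(6, 10))
R[1] = R[0]  # examinee 1 copies examinee 0 verbatim
print(randomization_p_value(R, copier=1, source=0))
```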
Peer reviewed
Uysal, Ibrahim; Sahin-Kürsad, Merve; Kiliç, Abdullah Faruk – Participatory Educational Research, 2022
The aim of the study was to examine whether the common items in mixed-format tests (e.g., multiple-choice and essay items) exhibit parameter drift in test equating performed with the common-item nonequivalent groups design. In this study, which was carried out using Monte Carlo simulation with a fully crossed design, the factors of test…
Descriptors: Test Items, Test Format, Item Response Theory, Equated Scores
Peer reviewed
Sunbul, Onder; Yormaz, Seha – International Journal of Evaluation and Research in Education, 2018
In this study, the Type I error and power rates of the omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40- and 80-item test lengths with a 10,000-examinee sample size under several test-level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…
Descriptors: Difficulty Level, Cheating, Duplication, Test Length
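A generalized binomial test of this kind asks how likely it is to see at least the observed number of answer matches when each item has its own innocent-match probability, i.e., an upper-tail probability of a Poisson binomial variable. A minimal sketch (the per-item probabilities would come from a response model in practice; the uniform draws below are placeholders):

```python
import numpy as np

def poisson_binomial_tail(p: np.ndarray, m: int) -> float:
    """P(X >= m) where X is a sum of independent Bernoulli(p_i) variables,
    computed by dynamic programming over items."""
    dist = np.zeros(len(p) + 1)
    dist[0] = 1.0
    for pi in p:
        # Shift probability mass: either item i matches (shift up) or not.
        dist[1:] = dist[1:] * (1 - pi) + dist[:-1] * pi
        dist[0] *= (1 - pi)
    return float(dist[m:].sum())

# Toy example: 40 items, each with its own chance of an incidental match.
rng = np.random.default_rng(1)
match_probs = rng.uniform(0.1, 0.5, size=40)
print(poisson_binomial_tail(match_probs, m=30))
```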
Peer reviewed
Yormaz, Seha; Sünbül, Önder – Educational Sciences: Theory and Practice, 2017
This study aims to determine the Type I error rates and power of the S₁ and S₂ indices and the kappa statistic in detecting copying on multiple-choice tests under various conditions. It also aims to determine how the way copying groups are created when calculating the kappa statistic affects its Type I error rates and power. In this study,…
Descriptors: Statistical Analysis, Cheating, Multiple Choice Tests, Sample Size
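The abstract does not define the kappa statistic it uses; assuming it is the usual chance-corrected agreement between two examinees' answer strings, a minimal sketch:

```python
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    """Chance-corrected agreement between two answer strings."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if the two answer strings were independent.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2
    return (observed - expected) / (1 - expected)

print(cohens_kappa(list("ABCDABCDAB"), list("ABCDABCDCC")))  # ≈ 0.737
```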
Peer reviewed
Garaschuk, Kseniya M.; Cytrynbaum, Eric N. – PRIMUS, 2019
Active learning techniques, such as peer instruction and group work, have been gaining a lot of traction in universities. Taking a natural next step in re-evaluating current practices, many institutions have recently started experimenting with student-centred group exams. In order to assess the feasibility and effectiveness of collaborative assessments, we…
Descriptors: Instructional Effectiveness, Mathematics Instruction, Group Testing, Group Activities
Peer reviewed
Bae, Minryoung; Lee, Byungmin – English Teaching, 2018
This study examines the effects of text length and question type on Korean EFL readers' comprehension of the fill-in-the-blank items in the Korean CSAT. A total of 100 Korean EFL college students participated in the study. After being divided into three proficiency groups, the participants took a reading comprehension test which consisted…
Descriptors: Test Items, Language Tests, Second Language Learning, Second Language Instruction
Wang, Wei – ProQuest LLC, 2013
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests are often considered superior to tests containing only MC items, although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Descriptors: Equated Scores, Test Format, Test Items, Test Length
Peer reviewed
Fitzpatrick, Anne R. – Educational Measurement: Issues and Practice, 2008
Examined in this study were the effects of reducing anchor test length on student proficiency rates for 12 multiple-choice tests administered in an annual, large-scale, high-stakes assessment. The anchor tests contained 15, 10, or 5 items. Five content-representative samples of items were drawn at each anchor test length from a…
Descriptors: Test Length, Multiple Choice Tests, Item Sampling, Student Evaluation
Bay, Luz – 1995
An index is proposed to detect cheating on multiple-choice examinations, and its use is evaluated through simulations. The proposed index is based on the compound binomial distribution. In total, 360 simulated data sets reflecting 12 different cheating (copying) situations were obtained and used for the study of the sensitivity of the index in…
Descriptors: Cheating, Class Size, Identification, Multiple Choice Tests
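Bay's index is only described as based on the compound binomial distribution; one common construction of that idea splits items into groups with different innocent-match probabilities (e.g., items the source answered correctly vs. incorrectly) and convolves the two binomials. A sketch under that assumption, not the study's actual index:

```python
from math import comb

def binom_pmf(n: int, k: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

def compound_binomial_tail(n1: int, p1: float, n2: int, p2: float, m: int) -> float:
    """P(X1 + X2 >= m) for independent X1~Bin(n1,p1), X2~Bin(n2,p2),
    by direct convolution of the two binomial components."""
    return sum(
        binom_pmf(n1, k1, p1) * binom_pmf(n2, k2, p2)
        for k1 in range(n1 + 1)
        for k2 in range(n2 + 1)
        if k1 + k2 >= m
    )

# Toy: 25 items the source answered correctly (high chance of an innocent
# match), 15 answered incorrectly (low chance), 30 observed matches.
print(compound_binomial_tail(25, 0.6, 15, 0.2, 30))
```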
Peer reviewed
Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests
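The abstract's contrast is between number-correct scoring, which discards response-pattern information, and scoring that uses which distractor an examinee chose. A toy sketch of option-weighted pattern scoring (the weights below are arbitrary placeholders, not Serlin and Kaiser's method, which the snippet does not specify):

```python
import numpy as np

# Hypothetical option weights per item (rows: items, cols: options A-D).
# Number-correct scoring gives 1 to the keyed option and 0 elsewhere;
# pattern scoring assigns graded weights to distractors as well.
weights = np.array([
    [1.0, 0.3, 0.0, 0.1],   # item 1: keyed option A
    [0.2, 1.0, 0.1, 0.0],   # item 2: keyed option B
    [0.0, 0.4, 1.0, 0.2],   # item 3: keyed option C
])

def pattern_score(choices: list) -> float:
    """Sum the weight of the option each examinee selected."""
    return float(sum(weights[i, c] for i, c in enumerate(choices)))

print(pattern_score([0, 1, 2]))  # all keyed answers -> 3.0
print(pattern_score([1, 1, 3]))  # partial credit from distractor choices
```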
Peer reviewed
Owen, Steven V.; Froman, Robin D. – Educational and Psychological Measurement, 1987
To further test the efficacy of three-option achievement items, parallel three- and five-option item tests were distributed randomly to college students. Results showed no differences in mean item difficulty, mean discrimination, or total test score, but a substantial reduction in time spent on three-option items. (Author/BS)
Descriptors: Achievement Tests, Higher Education, Multiple Choice Tests, Test Format
Yamamoto, Kentaro – 1995
The traditional indicator of test speededness, missing responses, clearly indicates a lack of time to respond (thereby indicating the speededness of the test), but it is inadequate for evaluating speededness in a multiple-choice test scored as number correct, and it underestimates test speededness. Conventional item response theory (IRT) parameter…
Descriptors: Ability, Estimation (Mathematics), Item Response Theory, Multiple Choice Tests
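The "traditional indicator" the abstract critiques is the count of not-reached items, i.e., the trailing run of missing responses at the end of a test. A minimal sketch of that count (the -1 missing code is an assumption):

```python
import numpy as np

def not_reached_count(responses: np.ndarray) -> int:
    """Length of the trailing run of missing responses (coded -1),
    the traditional -- and, per the abstract, insufficient -- speededness
    indicator. Omits in the middle of the test are not counted."""
    count = 0
    for r in responses[::-1]:
        if r == -1:
            count += 1
        else:
            break
    return count

print(not_reached_count(np.array([2, 0, 3, -1, 1, -1, -1])))  # -> 2
```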
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1982
When determining criterion-referenced test length, problems of guessing are shown to be more serious than expected. A new method of scoring is presented that corrects for guessing without assuming that guessing is random. Empirical investigations of the procedure are examined. Test length can be substantially reduced. (Author/CM)
Descriptors: Criterion Referenced Tests, Guessing (Tests), Multiple Choice Tests, Scoring
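For context, the classical formula-score correction, R - W/(k-1), is exactly the random-guessing assumption Wilcox's scoring method relaxes; his procedure is not reproduced in the snippet, so only the baseline is sketched:

```python
def formula_score(right: int, wrong: int, options: int) -> float:
    """Classical correction for random guessing on k-option items:
    expected guessing gains are subtracted as W/(k-1). Shown only as
    the baseline that Wilcox's non-random-guessing method improves on."""
    return right - wrong / (options - 1)

# 60 right, 20 wrong on 4-option items:
print(formula_score(60, 20, 4))  # 60 - 20/3 ≈ 53.33
```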
Kennedy, Rob – 1994
The purpose of this study was to investigate the relationship between the scores students earned on multiple choice tests and the number of minutes students required to complete the tests. The 5 tests were made up of 20 randomly drawn questions from a large pool of questions about research methods. Students were allowed an unlimited amount of time…
Descriptors: Graduate Students, Graduate Study, Higher Education, Multiple Choice Tests
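The relationship the study investigates is a simple bivariate one between completion time and score; with data in hand it reduces to a correlation. A sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical data: minutes to finish and test score for 8 students.
minutes = np.array([18, 25, 31, 22, 40, 28, 35, 20])
scores  = np.array([14, 16, 12, 15, 11, 17, 13, 18])

# Pearson correlation between completion time and score.
r = np.corrcoef(minutes, scores)[0, 1]
print(round(r, 3))
```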
Peer reviewed
Roberts, Dennis M. – Journal of Educational Measurement, 1987
This study examines a score-difference model for the detection of cheating based on the difference between two scores for an examinee: one based on the appropriate scoring key and another based on an alternative, inappropriate key. It argues that the score-difference method could falsely accuse students as cheaters. (Author/JAZ)
Descriptors: Answer Keys, Cheating, Mathematical Models, Multiple Choice Tests
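The score-difference statistic itself is simple: the examinee's score under the appropriate key minus the score under an alternative, inappropriate key. A minimal sketch (the shifted key below is a made-up example of an inappropriate key; Roberts's point is that flagging on this difference can falsely accuse honest examinees):

```python
import numpy as np

def key_score(responses: np.ndarray, key: np.ndarray) -> int:
    """Number-correct score of a response string under a given key."""
    return int(np.sum(responses == key))

def score_difference(responses, good_key, bad_key) -> int:
    """Score under the appropriate key minus score under an
    inappropriate key -- the statistic the score-difference model flags."""
    return key_score(responses, good_key) - key_score(responses, bad_key)

good = np.array([0, 1, 2, 3, 0, 1, 2, 3])
bad  = np.roll(good, 1)              # hypothetical misaligned key
resp = np.array([0, 1, 2, 3, 0, 1, 0, 2])
print(score_difference(resp, good, bad))  # -> 5
```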