Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 0
  Since 2006 (last 20 years): 9
Descriptor
  Cheating: 9
  Item Response Theory: 5
  Multiple Choice Tests: 4
  Statistical Analysis: 4
  Test Items: 4
  Adaptive Testing: 3
  Computation: 3
  Computer Assisted Testing: 3
  Foreign Countries: 3
  High Stakes Tests: 3
  Simulation: 3
Source
  Applied Psychological Measurement: 9
Publication Type
  Journal Articles: 9
  Reports - Research: 4
  Reports - Evaluative: 3
  Reports - Descriptive: 2
Education Level
  Higher Education: 3
Location
  Netherlands: 2
  Canada: 1
  United States: 1
Kalender, Ilker – Applied Psychological Measurement, 2012
catcher is a software program designed to compute the ω (omega) index, a common statistical index for the identification of collusion (cheating) among examinees taking an educational or psychological test. It requires (a) responses, (b) ability estimates of individuals, and (c) item parameters to make its computations, and it outputs the results of…
Descriptors: Computer Software, Computation, Statistical Analysis, Cheating
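The ω (omega) index mentioned above compares the observed number of answer matches between a suspected copier and a source with the number expected under an IRT model, standardized by the model-implied standard deviation. The published index conditions on the source's observed answers under the nominal response model; the sketch below is only a minimal illustration of that standardized-difference form, substituting a dichotomous 3PL matching probability and made-up item parameters.

    import numpy as np

    def p_correct(theta, a, b, c):
        """3PL probability of a correct response (hypothetical item parameters)."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    def omega_like_index(copier_resp, source_resp, theta_copier, a, b, c):
        """Standardized difference between observed and expected answer matches.

        Under the model, the copier matches the source on item i with probability
        p_i if the source answered correctly and (1 - p_i) otherwise.
        """
        p = p_correct(theta_copier, a, b, c)
        p_match = np.where(source_resp == 1, p, 1.0 - p)
        observed = np.sum(copier_resp == source_resp)
        expected = p_match.sum()
        sd = np.sqrt(np.sum(p_match * (1.0 - p_match)))
        return (observed - expected) / sd

    # Toy example with made-up parameters (not real data):
    rng = np.random.default_rng(0)
    a, b, c = np.full(40, 1.2), rng.normal(0, 1, 40), np.full(40, 0.2)
    source = (rng.random(40) < p_correct(0.5, a, b, c)).astype(int)
    copier = source.copy()                       # perfect copying
    print(round(float(omega_like_index(copier, source, -1.0, a, b, c)), 2))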
Jurich, Daniel P.; DeMars, Christine E.; Goodman, Joshua T. – Applied Psychological Measurement, 2012
The prevalence of high-stakes test scores as a basis for significant decisions necessitates the dissemination of accurate and fair scores. However, the magnitude of these decisions has created an environment in which examinees may be prone to resort to cheating. To reduce the risk of cheating, multiple test forms are commonly administered. When…
Descriptors: High Stakes Tests, Scores, Prevention, Cheating
Chen, Shu-Ying – Applied Psychological Measurement, 2010
To date, exposure control procedures that are designed to control test overlap in computerized adaptive tests (CATs) are based on the assumption of item sharing between pairs of examinees. However, in practice, examinees may obtain test information from more than one previous test taker. This larger scope of information sharing needs to be…
Descriptors: Computer Assisted Testing, Adaptive Testing, Methods, Test Items
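Test overlap between a pair of examinees is usually summarized as the proportion of administered items their two tests share, averaged over all pairs; the article's point is that information can reach an examinee from more than one previous test taker, so group-level (union-based) overlap also matters. A minimal sketch of the pairwise rate, on hypothetical administered-item sets, follows.

    from itertools import combinations

    def pairwise_overlap_rate(administered, test_length):
        """Average proportion of shared items over all examinee pairs.

        administered: list of sets of item IDs seen by each examinee.
        test_length: fixed CAT test length.
        """
        pairs = list(combinations(administered, 2))
        shared = [len(x & y) / test_length for x, y in pairs]
        return sum(shared) / len(shared)

    # Hypothetical administered item sets for four examinees, test length 5:
    tests = [{1, 2, 3, 4, 5}, {1, 2, 3, 6, 7}, {1, 8, 9, 10, 11}, {2, 3, 4, 5, 12}]
    print(round(pairwise_overlap_rate(tests, 5), 3))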
Belov, Dmitry I.; Armstrong, Ronald D. – Applied Psychological Measurement, 2010
This article presents a new method to detect copying on a standardized multiple-choice exam. The method combines two statistical approaches in successive stages. The first stage uses Kullback-Leibler divergence to identify examinees, called subjects, who have demonstrated inconsistent performance during an exam. For each subject the second stage…
Descriptors: Multiple Choice Tests, Cheating, Statistical Analysis, Monte Carlo Methods
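Kullback-Leibler divergence quantifies how far one probability distribution is from another, which is the sense in which the first stage screens for inconsistent performance. The sketch below shows the generic computation on hypothetical score-category proportions from two halves of an exam; it is not the authors' exact statistic, which is defined with respect to model-based response distributions.

    import numpy as np

    def kl_divergence(p, q):
        """KL divergence D(p || q) between two discrete distributions.

        Assumes q_i > 0 wherever p_i > 0; terms with p_i = 0 contribute nothing.
        """
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    # Hypothetical score-category proportions on two halves of an exam:
    first_half = [0.10, 0.20, 0.40, 0.30]
    second_half = [0.40, 0.30, 0.20, 0.10]
    print(round(kl_divergence(first_half, second_half), 3))  # large value = inconsistent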
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2012
This article extends the work by Armstrong and Shi on CUmulative SUM (CUSUM) person-fit methodology. The authors present new theoretical considerations concerning the use of CUSUM person-fit statistics based on likelihood ratios for the purpose of detecting cheating and random guessing by individual test takers. According to the Neyman-Pearson…
Descriptors: Cheating, Individual Testing, Adaptive Testing, Statistics
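A CUSUM person-fit statistic accumulates per-item evidence of aberrant responding and flags an examinee once the running sum crosses a threshold. The sketch below illustrates the general likelihood-ratio recursion (Page's upper CUSUM) under a Rasch model with a hypothetical "normal" ability and an inflated alternative; it is an assumption-laden illustration, not the specific statistics derived in the article.

    import numpy as np

    def rasch_p(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def cusum_upper(responses, b, theta_null, theta_alt):
        """Upper CUSUM of per-item log-likelihood ratios (Neyman-Pearson style).

        Accumulates evidence that the responses fit theta_alt (e.g., a
        cheating-inflated ability) better than theta_null; resets at zero.
        """
        c, path = 0.0, []
        for x, bi in zip(responses, b):
            p0, p1 = rasch_p(theta_null, bi), rasch_p(theta_alt, bi)
            llr = x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
            c = max(0.0, c + llr)
            path.append(c)
        return path

    # Toy example with hypothetical item difficulties and a suspiciously perfect record:
    b = np.linspace(-2, 2, 10)
    responses = [1] * 10
    print([round(v, 2) for v in cusum_upper(responses, b, theta_null=0.0, theta_alt=2.0)])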
Belov, Dmitry I. – Applied Psychological Measurement, 2011
This article presents the Variable Match Index (VM-Index), a new statistic for detecting answer copying. The power of the VM-Index relies on two-dimensional conditioning as well as the structure of the test. The asymptotic distribution of the VM-Index is analyzed by reduction to Poisson trials. A computational study comparing the VM-Index with the…
Descriptors: Cheating, Journal Articles, Computation, Comparative Analysis
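Reduction to Poisson trials treats the number of matching answers as a sum of independent Bernoulli trials with item-specific matching probabilities, i.e., a Poisson-binomial variable. As a generic illustration (not the VM-Index itself, whose conditioning and matching probabilities are defined in the article), the exact distribution of such a sum can be built by convolution:

    import numpy as np

    def poisson_binomial_pmf(match_probs):
        """Exact pmf of the success count in independent Bernoulli (Poisson) trials."""
        pmf = np.array([1.0])                       # P(0 successes) with no trials
        for p in match_probs:
            pmf = np.convolve(pmf, [1.0 - p, p])    # add one trial
        return pmf

    # Hypothetical per-item probabilities that a non-copier matches the source:
    probs = [0.3, 0.5, 0.4, 0.6, 0.2, 0.5, 0.35, 0.45]
    pmf = poisson_binomial_pmf(probs)
    observed_matches = 7
    p_value = pmf[observed_matches:].sum()          # P(matches >= observed) by chance
    print(round(float(p_value), 4))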
Fox, Jean-Paul; Meijer, Rob R. – Applied Psychological Measurement, 2008
The authors discuss a new method that combines the randomized response technique with item response theory. This method allows the researcher to obtain information at the individual person level without knowing the true responses. With this new method, it is possible to compare groups of individuals by means of analysis of variance or regression…
Descriptors: Psychological Evaluation, Educational Assessment, Item Response Theory, Statistical Analysis
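In a forced-response randomized response design, a respondent answers truthfully with some probability and is otherwise directed by a randomizing device to give a fixed answer; combining this with an IRT model gives an observable success probability of roughly the form P(y = 1 | theta) = p_truth * P_IRT(theta) + (1 - p_truth) * p_forced_yes. The sketch below shows that combination for a single Rasch-type item with hypothetical design constants; the article's full model and estimation procedure go well beyond this.

    import numpy as np

    def irt_p(theta, b):
        """Rasch-type probability of endorsing a sensitive item."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def rr_irt_p(theta, b, p_truth=0.8, p_forced_yes=0.5):
        """Randomized-response IRT success probability.

        With probability p_truth the respondent answers honestly (IRT model);
        otherwise the randomizing device forces "yes" with probability p_forced_yes.
        """
        return p_truth * irt_p(theta, b) + (1.0 - p_truth) * p_forced_yes

    # The observed "yes" rate is distorted in a known way, so group comparisons
    # (ANOVA, regression) on theta are possible without knowing true answers.
    print(round(rr_irt_p(theta=0.0, b=1.0), 3))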
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua – Applied Psychological Measurement, 2008
Criteria had been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated potential damage caused by organized item theft in computerized adaptive…
Descriptors: Test Items, Simulation, Item Analysis, Safety
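One way to see why organized item theft is damaging is that a maximum-information selection rule tends to keep administering the same highly informative items, so a stolen subset can cover much of what any examinee actually sees. The sketch below is a compact, assumption-heavy simulation (Rasch items, a grid-based ability update, hypothetical pool and test sizes) of the share of compromised items an examinee encounters; it is illustrative only and not the study's design.

    import numpy as np

    rng = np.random.default_rng(1)
    POOL, TEST_LEN, STOLEN = 200, 20, 40
    b = rng.normal(0, 1, POOL)                               # hypothetical Rasch pool
    stolen = set(rng.choice(POOL, STOLEN, replace=False))    # compromised items
    grid = np.linspace(-4, 4, 81)

    def p(theta, bi):
        return 1.0 / (1.0 + np.exp(-(theta - bi)))

    def simulate_examinee(true_theta):
        post = np.ones_like(grid)                # flat prior over the ability grid
        used, hits = set(), 0
        for _ in range(TEST_LEN):
            theta_hat = float(np.sum(grid * post) / np.sum(post))  # EAP estimate
            info = p(theta_hat, b) * (1 - p(theta_hat, b))         # Rasch information
            info[list(used)] = -1                # never reuse an item
            item = int(np.argmax(info))          # maximum-information selection
            used.add(item)
            hits += item in stolen
            x = rng.random() < p(true_theta, b[item])              # simulated response
            like = p(grid, b[item]) if x else 1 - p(grid, b[item])
            post *= like                         # Bayesian ability update
        return hits / TEST_LEN

    rates = [simulate_examinee(rng.normal()) for _ in range(200)]
    print(round(float(np.mean(rates)), 2))       # average exposure to stolen items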
Sotaridona, Leonardo S.; van der Linden, Wim J.; Meijer, Rob R. – Applied Psychological Measurement, 2006
A statistical test for detecting answer copying on multiple-choice tests based on Cohen's kappa is proposed. The test is free of any assumptions on the response processes of the examinees suspected of copying and having served as the source, except for the usual assumption that these processes are probabilistic. Because the asymptotic null and…
Descriptors: Cheating, Test Items, Simulation, Statistical Analysis
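Cohen's kappa corrects an observed agreement rate for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). The sketch below applies kappa to the selected options of a suspected copier and a source, estimating chance agreement from the two examinees' marginal option frequencies; that marginal-frequency choice is an assumption of the sketch, whereas the article derives the null and alternative distributions needed for a formal test.

    from collections import Counter

    def cohens_kappa(copier_answers, source_answers):
        """Chance-corrected agreement between two examinees' selected options."""
        n = len(copier_answers)
        p_obs = sum(a == b for a, b in zip(copier_answers, source_answers)) / n
        freq_c = Counter(copier_answers)
        freq_s = Counter(source_answers)
        p_exp = sum(freq_c[k] * freq_s[k] for k in freq_c) / (n * n)
        return (p_obs - p_exp) / (1.0 - p_exp)

    # Hypothetical selected options (A-D) on a 12-item multiple-choice test:
    source = list("ABCDABCDABCD")
    copier = list("ABCDABCDABCA")                # matches the source on 11 of 12 items
    print(round(cohens_kappa(copier, source), 2))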