Showing all 4 results
Peer reviewed
Chiu, Ting-Wei; Camilli, Gregory – Applied Psychological Measurement, 2013
Guessing behavior is an issue discussed widely with regard to multiple-choice tests. Its primary effect is on number-correct scores for examinees at lower levels of proficiency. This is a systematic error, or bias, which increases observed test scores. Guessing can also inflate random error variance. Correction or adjustment for guessing formulas…
Descriptors: Item Response Theory, Guessing (Tests), Multiple Choice Tests, Error of Measurement
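For context (a standard example of such an adjustment, not necessarily the one the authors analyze), the classical correction-for-guessing formula for a test of k-option items is

    S = R - W / (k - 1)

where R is the number of correct responses and W the number of incorrect responses; it assumes that every wrong answer reflects blind guessing among the k options.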
Peer reviewed
Kalender, Ilker – Applied Psychological Measurement, 2012
catcher is a software program designed to compute the omega (ω) index, a common statistical index for the identification of collusion (cheating) among examinees taking an educational or psychological test. It requires (a) responses, (b) ability estimates of individuals, and (c) item parameters to make its computations, and it outputs the results of…
Descriptors: Computer Software, Computation, Statistical Analysis, Cheating
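As a gloss on the statistic itself (not part of the abstract above): the ω index, commonly attributed to Wollack (1997), standardizes the difference between the observed number of answers a suspected copier shares with a source examinee and the number expected by chance,

    ω = (m - E[m]) / SD(m)

where m is the count of matching responses and the expectation and standard deviation are computed from the copier's ability estimate and the item parameters under the nominal response model; large positive values flag possible collusion.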
Peer reviewed
Wise, Steven L.; DeMars, Christine E. – Applied Psychological Measurement, 2009
Attali (2005) recently demonstrated that Cronbach's coefficient alpha (α) estimate of reliability for number-right multiple-choice tests will tend to be deflated by speededness, rather than inflated as is commonly believed and taught. Although the methods, findings, and conclusions of Attali (2005) are correct, his article may inadvertently invite a…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Reliability, Computation
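As a refresher (not drawn from the abstract): for a k-item test, coefficient alpha is

    α = (k / (k - 1)) * (1 - Σ σ_i² / σ_X²)

where σ_i² is the variance of item i and σ_X² is the variance of the total score; speededness alters these variance components, which is why it can bias the estimate in either direction.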
Peer reviewed
Kim, Jee-Seon; Hanson, Bradley A. – Applied Psychological Measurement, 2002
Presents a characteristic curve procedure for comparing transformations of the item response theory ability scale assuming the multiple-choice model. Illustrates the use of the method with an example equating American College Testing mathematics tests. (SLD)
Descriptors: Ability, Equated Scores, Item Response Theory, Mathematics Tests
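By way of context (not stated in the annotation): characteristic curve linking methods place two IRT calibrations on a common scale through a linear transformation of ability,

    θ* = A·θ + B

with the constants A and B chosen to minimize a discrepancy between item or test characteristic curves computed from the two sets of parameter estimates; Kim and Hanson's procedure presumably applies such a criterion using the response functions of the multiple-choice model.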