Showing 1 to 15 of 30 results
Peer reviewed
Brian C. Leventhal; Dena Pastor – Educational and Psychological Measurement, 2024
Low-stakes test performance commonly reflects both examinee ability and effort. Examinees exhibiting low effort may be identified through rapid-guessing behavior throughout an assessment. Many methods have been proposed to adjust scores once rapid guesses have been identified, but these have been plagued by strong assumptions or the…
Descriptors: College Students, Guessing (Tests), Multiple Choice Tests, Item Response Theory
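A minimal sketch of the rapid-guess identification the abstract describes: responses faster than a per-item response-time threshold are flagged as likely rapid guesses. The threshold values and function name here are illustrative assumptions, not the paper's method; published studies typically derive thresholds from each item's response-time distribution.

```python
# Flag rapid guesses: responses faster than a per-item time threshold.
# Threshold values below are hypothetical (a simple 3-second rule);
# real studies often estimate them per item from response-time data.

def flag_rapid_guesses(response_times, thresholds):
    """Return True for each response faster than its item's threshold."""
    return [rt < th for rt, th in zip(response_times, thresholds)]

times = [2.1, 0.8, 14.5, 1.0]      # seconds spent on each item
thresholds = [3.0, 3.0, 3.0, 3.0]  # hypothetical per-item thresholds
flags = flag_rapid_guesses(times, thresholds)
# flags -> [True, True, False, True]
```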
Peer reviewed
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying constructs that are not directly observable through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is briefly skimmed rather than read and engaged with in depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Peer reviewed
Paulhus, Delroy L.; Dubois, Patrick J. – Educational and Psychological Measurement, 2014
The overclaiming technique is a novel assessment procedure that uses signal detection analysis to generate indices of knowledge accuracy (OC-accuracy) and self-enhancement (OC-bias). The technique has previously shown robustness over varied knowledge domains as well as low reactivity across administration contexts. Here we compared the OC-accuracy…
Descriptors: Educational Assessment, Knowledge Level, Accuracy, Cognitive Ability
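The signal-detection analysis the abstract mentions can be sketched with the standard sensitivity (d') and criterion (c) formulas: "hits" are claims of familiarity with real items, "false alarms" are claims of familiarity with foils. This is the textbook computation, not necessarily the exact indices of the paper, and the rates used are made up.

```python
# Standard signal-detection indices, analogous to the paper's
# OC-accuracy (sensitivity d') and OC-bias (criterion c).
from statistics import NormalDist

def overclaiming_indices(hit_rate, fa_rate):
    """d' and c from the hit rate on real items and false-alarm rate on foils."""
    z = NormalDist().inv_cdf
    accuracy = z(hit_rate) - z(fa_rate)       # sensitivity: real vs. foil discrimination
    bias = -0.5 * (z(hit_rate) + z(fa_rate))  # criterion: overall tendency to claim
    return accuracy, bias

acc, bias = overclaiming_indices(0.80, 0.20)  # hypothetical rates
```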
Peer reviewed
Kobrin, Jennifer L.; Kim, YoungKoung; Sackett, Paul R. – Educational and Psychological Measurement, 2012
There is much debate on the merits and pitfalls of standardized tests for college admission, with questions regarding the format (multiple-choice vs. constructed response), cognitive complexity, and content of these assessments (achievement vs. aptitude) at the forefront of the discussion. This study addressed these questions by investigating the…
Descriptors: Grade Point Average, Standardized Tests, Predictive Validity, Predictor Variables
Peer reviewed
Woehr, David J.; And Others – Educational and Psychological Measurement, 1991
Methods for setting cutoff scores based on criterion performance, normative comparison, and absolute judgment were compared for scores on a multiple-choice psychology examination for 121 undergraduates and 251 undergraduates as a comparison group. All methods fell within the standard error of measurement. Implications of differences for decision…
Descriptors: Comparative Analysis, Concurrent Validity, Content Validity, Cutting Scores
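Two of the three cutoff-setting families the abstract compares can be illustrated on one set of scores: a normative cut defined relative to the group, and an absolute-judgment cut fixed in advance. The specific rules below (one SD below the mean; 60% correct) are illustrative assumptions, not the study's procedures.

```python
# Illustrative cutoff-setting rules on hypothetical exam scores (out of 100).
from statistics import mean, pstdev

scores = [62, 55, 71, 80, 58, 66, 74, 49]

normative_cut = mean(scores) - pstdev(scores)  # relative: one SD below the group mean
absolute_cut = 60.0                            # absolute judgment: 60% correct passes

passed_norm = [s for s in scores if s >= normative_cut]
passed_abs = [s for s in scores if s >= absolute_cut]
```

Note how the two cuts classify different numbers of examinees as passing, which is the kind of decision difference the study examines.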
Peer reviewed
Holmes, Roy A.; And Others – Educational and Psychological Measurement, 1974
Descriptors: Chemistry, Multiple Choice Tests, Scoring Formulas, Test Reliability
Peer reviewed
Krus, David J.; Ney, Robert G. – Educational and Psychological Measurement, 1978
An algorithm for item analysis is presented in which item discrimination indices are defined for the distractors as well as the correct answer. In addition, the concept of convergent and discriminant validity is applied to items rather than tests and is discussed as an aid to item analysis. (Author/JKS)
Descriptors: Algorithms, Item Analysis, Multiple Choice Tests, Test Items
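One common way to compute a discrimination index for every option, distractors included, is the point-biserial correlation between choosing that option and the total score. This sketch uses that standard index with invented response data; it is not necessarily the algorithm of the paper. A sound distractor should discriminate negatively (lower-scoring examinees choose it).

```python
# Point-biserial discrimination for any option of an item, using
# population statistics throughout for consistency.
from statistics import mean, pstdev

def option_discrimination(choices, totals, option):
    """Correlation between choosing `option` (0/1) and the total score."""
    x = [1.0 if c == option else 0.0 for c in choices]
    sx, st = pstdev(x), pstdev(totals)
    if sx == 0 or st == 0:
        return 0.0  # option never/always chosen, or no score variance
    cov = mean(xi * ti for xi, ti in zip(x, totals)) - mean(x) * mean(totals)
    return cov / (sx * st)

choices = ["A", "A", "B", "C", "A", "B"]  # hypothetical responses, key = "A"
totals = [9, 8, 4, 3, 7, 5]               # total test scores
r_key = option_discrimination(choices, totals, "A")  # positive for the key
r_dis = option_discrimination(choices, totals, "B")  # negative for a working distractor
```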
Peer reviewed
Dyck, Walter; Plancke-Schuyten, Gilberte – Educational and Psychological Measurement, 1976
Prior knowledge of the difficulty indices and the intercorrelations of the items allows group results to be predicted and manipulated. A compound binomial probability function of a test score is established, for which a computer program has been written. Three item selections and the appropriate probability distributions are given which give…
Descriptors: Computer Programs, Multiple Choice Tests, Prediction, Probability
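The score distribution the abstract refers to can be sketched for the special case of independent items: given each item's probability of a correct response, the number-correct distribution is built by convolution (the generalized binomial). The paper's compound binomial additionally accounts for item intercorrelations, which this independence-only sketch omits.

```python
# Number-correct score distribution from per-item probabilities of
# success, assuming independent items (convolution of Bernoulli trials).

def score_distribution(p_correct):
    """Return [P(score=0), P(score=1), ..., P(score=n)] for n items."""
    dist = [1.0]  # before any item, score is 0 with probability 1
    for p in p_correct:
        new = [0.0] * (len(dist) + 1)
        for s, pr in enumerate(dist):
            new[s] += pr * (1 - p)      # item answered wrong: score unchanged
            new[s + 1] += pr * p        # item answered right: score + 1
        dist = new
    return dist

dist = score_distribution([0.5, 0.5])   # two items of difficulty .5
# dist -> [0.25, 0.5, 0.25]
```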
Peer reviewed
Waters, Carrie Wherry; Waters, Lawrence K. – Educational and Psychological Measurement, 1971
Descriptors: Guessing (Tests), Multiple Choice Tests, Response Style (Tests), Scoring Formulas
Peer reviewed
Owen, Steven V.; Froman, Robin D. – Educational and Psychological Measurement, 1987
To test further for efficacy of three-option achievement items, parallel three- and five-option item tests were distributed randomly to college students. Results showed no differences in mean item difficulty, mean discrimination or total test score, but a substantial reduction in time spent on three-option items. (Author/BS)
Descriptors: Achievement Tests, Higher Education, Multiple Choice Tests, Test Format
Peer reviewed
Bajtelsmit, John W. – Educational and Psychological Measurement, 1979
A validational procedure was used involving a matrix of intercorrelations among tests representing four areas of Chartered Life Underwriter content knowledge, each measured by objective multiple-choice and essay methods. Results indicated that the two methods of measuring the same trait yielded fairly consistent estimates of content…
Descriptors: Essay Tests, Higher Education, Insurance Occupations, Multiple Choice Tests
Peer reviewed
Hanna, Gerald S.; Oaster, Thomas R. – Educational and Psychological Measurement, 1980
Certain kinds of multiple-choice reading comprehension questions may be answered correctly at a higher-than-chance level when administered without the accompanying passage. These high-risk questions do not necessarily entail passage-dependence invalidity; they threaten but do not prove invalidity. (Author/CP)
Descriptors: High Schools, Multiple Choice Tests, Reading Comprehension, Reading Tests
Peer reviewed
Cizek, Gregory J.; Robinson, K. Lynne; O'Day, Denis M. – Educational and Psychological Measurement, 1998
The effect of removing nonfunctioning items from multiple-choice tests was studied by examining change in difficulty, discrimination, and dimensionality. Results provide additional support for the benefits of eliminating nonfunctioning options, such as enhanced score reliability, reduced testing time, potential for broader domain sampling, and…
Descriptors: Difficulty Level, Multiple Choice Tests, Sampling, Scores
Peer reviewed
Suinn, Richard M.; And Others – Educational and Psychological Measurement, 1987
The Suinn-Lew Asian Self Identity Acculturation Scale (SL-ASIA) is modeled after a successful scale for Hispanics. Initial reliability and validity data are reported for two samples of Asian subjects from two states. (Author/BS)
Descriptors: Acculturation, Asian Americans, Higher Education, Identification (Psychology)
Peer reviewed
Cross, Lawrence H.; Frary, Robert B. – Educational and Psychological Measurement, 1978
The reliability and validity of multiple-choice test scores resulting from empirical choice-weighting of alternatives were examined under two conditions: (1) examinees were told not to guess unless choices could be eliminated; and (2) examinees were told the total score would be the total number correct. Results favored the choice-weighting…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Response Style (Tests)
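Empirical choice-weighting, as named in the abstract, assigns each option of an item a data-derived weight instead of the usual 1-for-key, 0-otherwise scoring. The particular weighting rule below (each option's weight is the mean total score of its choosers, centered on the overall mean) is one simple illustrative scheme, not necessarily the one the study used.

```python
# One simple empirical weighting rule: options chosen by above-average
# examinees get positive weights, those chosen by below-average
# examinees get negative weights.
from statistics import mean

def option_weights(choices, totals):
    """Weight of each option = mean total score of its choosers, centered."""
    overall = mean(totals)
    weights = {}
    for opt in set(choices):
        chosen = [t for c, t in zip(choices, totals) if c == opt]
        weights[opt] = mean(chosen) - overall
    return weights

choices = ["A", "A", "B", "C"]  # hypothetical responses to one item
totals = [10, 8, 4, 6]          # total test scores of those examinees
w = option_weights(choices, totals)
# w["A"] > w["C"] > w["B"]: options ordered by the ability of their choosers
```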