Showing 1 to 15 of 18 results
Peer reviewed
Lahner, Felicitas-Maria; Lörwald, Andrea Carolin; Bauer, Daniel; Nouns, Zineb Miriam; Krebs, René; Guttormsen, Sissel; Fischer, Martin R.; Huwendiek, Sören – Advances in Health Sciences Education, 2018
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm…
Descriptors: Scoring Formulas, Scoring Rubrics, Objective Tests, Multiple Choice Tests
Peer reviewed
Laprise, Shari L. – College Teaching, 2012
Successful exam composition can be a difficult task. Exams should not only assess student comprehension, but be learning tools in and of themselves. In a biotechnology course delivered to nonmajors at a business college, objective multiple-choice test questions often require students to choose the exception or "not true" choice. Anecdotal student…
Descriptors: Feedback (Response), Test Items, Multiple Choice Tests, Biotechnology
Peer reviewed
Taherbhai, Husein; Seo, Daeryong; Bowman, Trinell – British Educational Research Journal, 2012
Literature in the United States provides many examples of no difference in student achievement when measured across modes of test administration, i.e., paper-pencil and online versions of the same test. However, most of these studies centre on "regular" students who do not require differential teaching methods or different evaluation…
Descriptors: Learning Disabilities, Statistical Analysis, Teaching Methods, Test Format
Peer reviewed
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Walt, Nancy; Atwood, Kristin; Mann, Alex – Journal of Technology, Learning, and Assessment, 2008
The purpose of this study was to determine whether or not survey medium (electronic versus paper format) has a significant effect on the results achieved. To compare survey media, responses from elementary students to British Columbia's Satisfaction Survey were analyzed. Although this study was not experimental in design, the data set served as a…
Descriptors: Student Attitudes, Factor Analysis, Foreign Countries, Elementary School Students
Peer reviewed
Schuldberg, David – Computers in Human Behavior, 1988
Describes study that investigated the effects of computerized test administration on undergraduates' responses to the Minnesota Multiphasic Personality Inventory (MMPI), and discusses methodological considerations important in evaluating the sensitivity of personality inventories in different administration formats. Results analyze the effects of…
Descriptors: Analysis of Variance, Comparative Testing, Computer Assisted Testing, Higher Education
Peer reviewed
Crino, Michael D.; And Others – Educational and Psychological Measurement, 1985
The randomized response technique was compared to a direct questionnaire, administered to college students, to investigate whether or not responses predicted the social desirability of the items. Results suggest support for the hypothesis. A 33-item version of the Marlowe-Crowne Social Desirability Scale, which was used, is included. (GDC)
Descriptors: Comparative Testing, Confidentiality, Higher Education, Item Analysis
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with 3 options are more difficult than those with 4 options, and that items with a none-of-these option are more difficult than those without it. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
Huntley, Renee M.; And Others – 1990
This study investigated the effect of diagram formats on performance on geometry items in order to determine whether certain examinees are affected by different item formats and whether such differences arise from the different intellectual demands made by these formats. Thirty-two experimental, multiple-choice geometry items were administered in…
Descriptors: College Bound Students, College Entrance Examinations, Comparative Testing, Diagrams
Bethscheider, Janine K. – 1992
Standard and experimental forms of the Johnson O'Connor Research Foundation's Analytical Reasoning test were administered to 1,496 clients of the Foundation (persons seeking information about aptitude for educational and career decisions). The objectives were to develop a new form of the test and to better understand what makes some items more…
Descriptors: Adults, Aptitude Tests, Career Choice, Comparative Testing
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Olsen, James B.; And Others – 1986
Student achievement test scores were compared and equated, using three different testing methods: paper-administered, computer-administered, and computerized adaptive testing. The tests were developed from third and sixth grade mathematics item banks of the California Assessment Program. The paper and the computer-administered tests were identical…
Descriptors: Achievement Tests, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Lyon, Mark A.; Smith, Douglas K. – 1986
This study examined agreement rates between identified strengths and weaknesses in shared abilities and influences on the Wechsler Intelligence Scale for Children-Revised (WISC-R) and the Kaufman Assessment Battery for Children (K-ABC). Sixty-seven students in the first through seventh grades referred for learning disabilities (LD) evaluation were…
Descriptors: Ability Identification, Comparative Testing, Concurrent Validity, Elementary Education
Chan, Jason C. – 1990
The importance of the presentation order of items on Likert-type scales was studied. It was proposed that subjects tend to choose the first alternative acceptable to them from among the response categories, so that a primacy effect can be predicted. The effects of reversing the order of the response scale on the latent factor structure underlying…
Descriptors: Comparative Testing, Correlation, Estimation (Mathematics), Factor Analysis
Huntley, Renee M.; Welch, Catherine J. – 1993
Writers of mathematics test items, especially those who write for standardized tests, are often advised to arrange the answer options in logical order, usually ascending or descending numerical order. In this study, 32 mathematics items were selected for inclusion in four experimental pretest units, each consisting of 16 items. Two versions…
Descriptors: Ability, College Entrance Examinations, Comparative Testing, Distractors (Tests)