Showing all 14 results
Peer reviewed
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
Peer reviewed
Hardre, Patricia L.; Crowson, H. Michael; Xie, Kui – Journal of Educational Computing Research, 2010
Questionnaire instruments are routinely translated to digital administration systems; however, few studies have compared the differential effects of these administrative methods, and fewer yet in authentic contexts-of-use. In this study, 326 university students were randomly assigned to one of two administration conditions, paper-based (PBA) or…
Descriptors: Internet, Computer Assisted Testing, Questionnaires, College Students
Peer reviewed
Millstein, Susan G. – Educational and Psychological Measurement, 1987
This study examined response bias in 108 female adolescents randomly assigned to one of three groups: (1) interactive computer interview; (2) face-to-face interview; or (3) self-administered questionnaire. Results showed no significant group differences on reports of sexual behavior, substance use, or symptomatology. (Author/BS)
Descriptors: Adolescents, Affective Behavior, Comparative Testing, Computer Assisted Testing
Peer reviewed
Kolstad, Rosemarie K.; Kolstad, Robert A. – Educational Research Quarterly, 1989
The effect on examinee performance of the rule that multiple-choice (MC) test items require the acceptance of one choice was examined for 106 dental students presented with choices in MC and multiple true-false formats. MC items force examinees to select one choice, which causes artificial acceptance of correct/incorrect choices. (SLD)
Descriptors: Comparative Testing, Dental Students, Higher Education, Multiple Choice Tests
Peer reviewed
Crino, Michael D.; And Others – Educational and Psychological Measurement, 1985
The random response technique was compared to a direct questionnaire, administered to college students, to investigate whether or not the responses predicted the social desirability of the item. Results suggest support for the hypothesis. A 33-item version of the Marlowe-Crowne Social Desirability Scale which was used is included. (GDC)
Descriptors: Comparative Testing, Confidentiality, Higher Education, Item Analysis
Heller, Eric S.; Rife, Frank N. – 1987
The goal of this study was to assess the relative merit of various ranges and types of response scales in terms of respondent satisfaction and comfort and the nature of the elicited information in a population of seventh grade students. Three versions of an attitudinal questionnaire, each containing the same items but employing a different…
Descriptors: Attitude Measures, Comparative Testing, Grade 7, Junior High Schools
Shavelson, Richard J.; And Others – 1988
This study investigated the relationships among the symbolic representation of problems given to students to solve, the mental representations they use to solve the problems, and the accuracy of their solutions. Twenty eleventh-grade science students were asked to think aloud as they solved problems on the ideal gas laws. The problems were…
Descriptors: Chemistry, Comparative Testing, Problem Solving, Response Style (Tests)
Peer reviewed
Harasym, P. H.; And Others – Evaluation and the Health Professions, 1980
Coded items, as opposed to free-response items, in a multiple-choice physiology test had a cueing effect which raised students' scores, especially for lower achievers. Reliability of coded items was also lower. Item format and scoring method had an effect on test results. (GDC)
Descriptors: Achievement Tests, Comparative Testing, Cues, Higher Education
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Moon, Russ – 1988
Since the emergence of the General Certificate of Secondary Education (GCSE) there have been calls for improved methods of assessing economics. Oral assessment has been suggested as a possible technique and this study investigated whether it might be used to allow students to demonstrate achievement in GCSE economics. The empirical study compared…
Descriptors: Achievement Tests, Comparative Analysis, Comparative Testing, Economics Education
Peer reviewed
Kinicki, Angelo J.; And Others – Educational and Psychological Measurement, 1985
Using both the Behaviorally Anchored Rating Scales (BARS) and the Purdue University Scales, 727 undergraduates rated 32 instructors. The BARS had less halo effect, more leniency error, and lower interrater reliability. Both formats were valid. The two tests did not differ in rate discrimination or susceptibility to rating bias. (Author/GDC)
Descriptors: Behavior Rating Scales, College Faculty, Comparative Testing, Higher Education
Leitner, Dennis W.; And Others – 1979
To discover factors which contribute to a high response rate for questionnaire surveys, the preferences of 150 college teachers and teaching assistants were studied. Four different questionnaire formats using 34 common items were sent to the subjects: open-ended; Likert-type (five points, from "strong influence to return," to…
Descriptors: Check Lists, College Faculty, Comparative Testing, Higher Education
Gafni, Naomi; Melamed, Estela – 1988
The objective of this study was to investigate differential tendencies to avoid guessing as a function of three variables: (1) lingual-cultural-group; (2) gender; and (3) examination year. The Psychometric Entrance Test (PET) for universities in Israel was used, which is administered in Hebrew, Arabic, English, French, Spanish, and Russian. The…
Descriptors: College Bound Students, College Entrance Examinations, Comparative Testing, Cultural Differences