Showing all 7 results
Peer reviewed
Millstein, Susan G. – Educational and Psychological Measurement, 1987
This study examined response bias in 108 female adolescents randomly assigned to one of three groups: (1) interactive computer interview; (2) face-to-face interview; or (3) self-administered questionnaire. Results showed no significant group differences in reports of sexual behavior, substance use, or symptomatology. (Author/BS)
Descriptors: Adolescents, Affective Behavior, Comparative Testing, Computer Assisted Testing
Peer reviewed
Crino, Michael D.; And Others – Educational and Psychological Measurement, 1985
The randomized response technique was compared to a direct questionnaire, administered to college students, to investigate whether the responses predicted the social desirability of the item. Results suggest support for the hypothesis. The 33-item version of the Marlowe-Crowne Social Desirability Scale that was used is included. (GDC)
Descriptors: Comparative Testing, Confidentiality, Higher Education, Item Analysis
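The Crino et al. abstract above does not say which randomized response variant was used. Purely as an illustration, the minimal Python sketch below assumes Warner's (1965) design, in which a randomizing device privately directs each respondent either to the sensitive statement or to its negation; the interviewer records only yes/no, yet the group prevalence remains estimable. The function and parameter names are hypothetical, not from the study.

import random

def warner_estimate(answers, p_truth):
    """Estimate the prevalence of a sensitive trait from Warner-style
    randomized responses. `answers` is the list of observed yes/no
    replies (booleans); `p_truth` is the probability that the device
    pointed the respondent to the sensitive statement rather than its
    negation. Requires p_truth != 0.5."""
    lam = sum(answers) / len(answers)            # observed proportion of "yes"
    return (lam - (1 - p_truth)) / (2 * p_truth - 1)

def simulate(true_prevalence=0.30, p_truth=0.70, n=10_000, seed=1):
    """Respondents answer truthfully because the randomizing device,
    not the interviewer, decides which statement they respond to."""
    rng = random.Random(seed)
    answers = []
    for _ in range(n):
        has_trait = rng.random() < true_prevalence
        direct = rng.random() < p_truth          # device chose the sensitive statement
        answers.append(has_trait if direct else not has_trait)
    return warner_estimate(answers, p_truth)

print(round(simulate(), 3))                      # prints a value near 0.30

Because (2 * p_truth - 1) appears in the denominator, the estimator is undefined at p_truth = 0.5 and becomes noisier as p_truth approaches that value.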
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with three options are more difficult than those with four options, and that items with a none-of-these option are more difficult than those without it. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
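For readers unfamiliar with the two indices named in the Crehan et al. abstract above, the following minimal sketch (not drawn from that study; the data and function names are hypothetical) computes classical item difficulty (proportion correct) and item discrimination (point-biserial correlation with the total score from the remaining items) for 0/1-scored responses.

from statistics import mean, pstdev

def item_statistics(responses):
    """Classical item analysis for dichotomously (0/1) scored items.

    `responses` has one row per examinee and one column per item.
    Returns, for each item, its difficulty (proportion correct) and its
    discrimination (point-biserial correlation with the total score
    computed from the remaining items)."""
    results = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        rest = [sum(row) - row[j] for row in responses]   # total score minus item j
        difficulty = mean(item)
        rest_mean = mean(rest)
        sx, sy = pstdev(item), pstdev(rest)
        if sx == 0 or sy == 0:
            discrimination = 0.0                          # no variance: correlation undefined
        else:
            cov = mean((a - difficulty) * (b - rest_mean) for a, b in zip(item, rest))
            discrimination = cov / (sx * sy)
        results.append((difficulty, discrimination))
    return results

# Toy data: five examinees, four items
scores = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 1, 0], [0, 0, 0, 1]]
for k, (p, r) in enumerate(item_statistics(scores), start=1):
    print(f"item {k}: difficulty={p:.2f}, discrimination={r:+.2f}")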
Peer reviewed
Schriesheim, Chester A.; And Others – Educational and Psychological Measurement, 1991
Effects of item wording on questionnaire reliability and validity were studied, using 280 undergraduate business students who completed a questionnaire comprising 4 item types: (1) regular; (2) polar opposite; (3) negated polar opposite; and (4) negated regular. Implications of results favoring regular and negated regular items are discussed. (SLD)
Descriptors: Business Education, Comparative Testing, Higher Education, Negative Forms (Language)
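The Schriesheim et al. study above compares regular and negated item wordings. As a generic illustration of how negated items are usually handled when scale scores are formed (not a description of that study's procedures), the sketch below reverse-scores negated Likert items before summing so that all items point in the same direction; the names and the 1-5 scale are assumptions.

def reverse_score(response, scale_min=1, scale_max=5):
    """Reverse-score a negated Likert item so that higher values always
    indicate more of the underlying construct."""
    return scale_min + scale_max - response

def scale_score(item_responses, negated_items, scale_min=1, scale_max=5):
    """Sum one respondent's item responses after reverse-scoring the
    negated items; `negated_items` holds their 0-based positions."""
    total = 0
    for idx, response in enumerate(item_responses):
        if idx in negated_items:
            response = reverse_score(response, scale_min, scale_max)
        total += response
    return total

# Four items, of which items 2 and 3 (0-based) are negated wordings
print(scale_score([4, 5, 2, 1], negated_items={2, 3}))   # 4 + 5 + 4 + 5 = 18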
Peer reviewed
Tzeng, Oliver C. S.; And Others – Educational and Psychological Measurement, 1991
Measurement properties of two response formats (bipolar and unipolar ratings) in personality assessment were compared using data from 135 college students taking the Myers-Briggs Type Indicator (MBTI). Factorial validity and construct validity of the MBTI were supported. Reasons why the bipolar method is preferable are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Construct Validity, Factor Analysis
Peer reviewed
Kinicki, Angelo J.; And Others – Educational and Psychological Measurement, 1985
Using both the Behaviorally Anchored Rating Scales (BARS) and the Purdue University Scales, 727 undergraduates rated 32 instructors. The BARS had less halo effect, more leniency error, and lower interrater reliability. Both formats were valid. The two scales did not differ in ratee discrimination or susceptibility to rating bias. (Author/GDC)
Descriptors: Behavior Rating Scales, College Faculty, Comparative Testing, Higher Education
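A rough sketch of the kind of indices named in the Kinicki et al. abstract above: a simple leniency index (mean rating relative to the scale midpoint) and interrater reliability taken as the average pairwise correlation among raters. The abstract does not report the study's own computations, so the functions, the data, and the 1-7 scale below are illustrative assumptions.

from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    mx, my = mean(x), mean(y)
    sx, sy = pstdev(x), pstdev(y)
    if sx == 0 or sy == 0:
        return 0.0
    return mean((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def leniency_index(ratings, scale_midpoint):
    """Mean rating minus the scale midpoint: positive values suggest
    leniency (systematically high ratings), negative values severity."""
    return mean(r for rater in ratings for r in rater) - scale_midpoint

def mean_interrater_correlation(ratings):
    """Average pairwise correlation among raters; each rater is a list
    of ratings of the same ratees in the same order."""
    n = len(ratings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return mean(pearson(ratings[i], ratings[j]) for i, j in pairs)

# Three hypothetical raters scoring the same five instructors on a 1-7 scale
raters = [[6, 5, 7, 4, 6], [5, 5, 6, 4, 5], [7, 6, 7, 5, 6]]
print(round(leniency_index(raters, scale_midpoint=4), 2))
print(round(mean_interrater_correlation(raters), 2))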
Peer reviewed
Trevisan, Michael S.; And Others – Educational and Psychological Measurement, 1991
The reliability and validity of multiple-choice tests were computed as a function of the number of options per item and student ability for 435 parochial high school juniors, who were administered the Washington Pre-College Test Battery. Results suggest the efficacy of the three-option item. (SLD)
Descriptors: Ability, Comparative Testing, Distractors (Tests), Grade Point Average
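The Trevisan et al. abstract above compares reliability across option counts without stating which coefficient was used. A common choice for dichotomously scored items is KR-20, the special case of Cronbach's alpha; the minimal sketch below (with hypothetical toy data) shows how it is computed.

from statistics import pvariance

def kr20(responses):
    """Kuder-Richardson 20 reliability for 0/1-scored items, the
    dichotomous special case of Cronbach's alpha.

    `responses` has one row per examinee and one 0/1 entry per item."""
    k = len(responses[0])                     # number of items
    n = len(responses)                        # number of examinees
    totals = [sum(row) for row in responses]
    var_total = pvariance(totals)             # population variance of total scores
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n   # item difficulty
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Toy data: six examinees answering a five-item test
scores = [
    [1, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 0, 1, 0, 0],
]
print(round(kr20(scores), 3))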