Showing 1 to 15 of 18 results
Peer reviewed
Slepkov, Aaron D.; Shiell, Ralph C. – Physical Review Special Topics - Physics Education Research, 2014
Constructed-response (CR) questions are a mainstay of introductory physics textbooks and exams. However, because of the time, cost, and scoring reliability constraints associated with this format, CR questions are being increasingly replaced by multiple-choice (MC) questions in formal exams. The integrated testlet (IT) is a recently developed…
Descriptors: Science Tests, Physics, Responses, Multiple Choice Tests
Peer reviewed
Yoder, S. Elizabeth; Kurz, M. Elizabeth – Journal of Education for Business, 2015
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
Descriptors: Programming, Educational Practices, Engineering, Engineering Education
Peer reviewed
Frary, Robert B. – Applied Measurement in Education, 1991
The use of the "none-of-the-above" option (NOTA) in 20 college-level multiple-choice tests was evaluated for classes with 100 or more students. Eight academic disciplines were represented, and 295 NOTA and 724 regular test items were used. It appears that the NOTA can be compatible with good classroom measurement. (TJH)
Descriptors: College Students, Comparative Testing, Difficulty Level, Discriminant Analysis
Peer reviewed
Bennett, Randy Elliot; And Others – Applied Psychological Measurement, 1990
The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Computer Science
Peer reviewed
Crehan, Kevin D.; And Others – Educational and Psychological Measurement, 1993
Studies with 220 college students found that multiple-choice test items with three options are more difficult than those with four options, and that items with a none-of-these option are more difficult than those without it. Neither format manipulation affected item discrimination. Implications for test construction are discussed. (SLD)
Descriptors: College Students, Comparative Testing, Difficulty Level, Distractors (Tests)
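The difficulty and discrimination indices examined in the study above are standard classical-test-theory statistics. The sketch below shows one common way to compute them; the 0/1 response matrix is hypothetical, and discrimination is taken as the point-biserial correlation of each item with the rest-score:

```python
import numpy as np

# Hypothetical scored responses: 6 examinees (rows) x 3 items (columns),
# 1 = correct, 0 = incorrect.
responses = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
])

# Classical difficulty (p-value): proportion answering each item correctly.
# Counterintuitively, a HIGHER value means an EASIER item.
difficulty = responses.mean(axis=0)

# Discrimination: correlation of each item with the rest-score
# (total score minus the item itself, to avoid part-whole inflation).
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

print("difficulty:", difficulty)
print("discrimination:", np.round(discrimination, 3))
```

With real data one would also inspect distractor-level statistics, but these two indices are the quantities the abstract refers to.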
Peer reviewed
Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results with 349 students indicate constructs the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
Peer reviewed
Ellis, Barbara B. – Intelligence, 1990
Intellectual abilities were measured for 217 German and 205 American college students using tests (in the subjects' native languages) in which equivalence was established by an item-response theory-based differential-item-functioning (DIF) analysis. Comparisons between groups were not the same before and after removal of DIF items. (SLD)
Descriptors: College Students, Comparative Testing, Cross Cultural Studies, Culture Fair Tests
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S. – Applied Psychological Measurement, 1991
The exact and closed-interval area measures for detecting differential item functioning are compared for actual data from 1,000 African-American and 1,000 white college students taking a vocabulary test with items intentionally constructed to favor 1 set of examinees. No real differences in detection of biased items were found. (SLD)
Descriptors: Black Students, College Students, Comparative Testing, Equations (Mathematics)
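A closed-interval area measure of the kind compared in the study above quantifies DIF as the area between the item characteristic curves of two groups. This sketch approximates it numerically under a two-parameter logistic model with hypothetical item parameters (the exact measure in the literature uses closed-form formulas rather than numerical integration):

```python
import numpy as np

def icc_2pl(theta, a, b, D=1.7):
    """Two-parameter logistic item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-D * a * (theta - b)))

def closed_interval_area(a1, b1, a2, b2, lo=-4.0, hi=4.0, n=2001):
    """Unsigned area between two ICCs over a closed theta interval,
    approximated with the trapezoidal rule."""
    theta = np.linspace(lo, hi, n)
    gap = np.abs(icc_2pl(theta, a1, b1) - icc_2pl(theta, a2, b2))
    return float(np.sum((gap[1:] + gap[:-1]) * np.diff(theta) / 2.0))

# Hypothetical item: equal discrimination, reference group b = 0.0,
# focal group b = 0.5.  When the a's are equal, the (infinite-interval)
# exact area reduces to the difficulty shift |b2 - b1|.
area = closed_interval_area(a1=1.0, b1=0.0, a2=1.0, b2=0.5)
print(round(area, 3))
```

When the discrimination parameters differ between groups, the curves cross and the exact and closed-interval measures can diverge more noticeably, which is the comparison the study investigates.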
Peer reviewed
Chipman, Susan F.; And Others – American Educational Research Journal, 1991
The effects of problem content on mathematics word problem performance were explored for 128 male and 128 female college students solving problems with masculine, feminine, and neutral (familiar and unfamiliar) cover stories. No effect of sex typing was found, and a small, but highly significant, effect was found for familiarity. (SLD)
Descriptors: College Students, Comparative Testing, Familiarity, Females
Andrada, Gilbert N.; Linden, Kathryn W. – 1993
The psychometric properties of objective tests administered in two testing conditions were compared, using an experimental take-home testing condition and a traditional in-class testing condition. Subjects were 290 college students in a basic educational psychology course who took a test developed and tested the previous semester. Two equivalent…
Descriptors: Class Activities, Classroom Techniques, Cognitive Processes, College Students
Miao, Chang Y.; Kramer, Gene A. – 1992
An approach to detecting differential item functioning using the Rasch model with equivalent-group cross-validation was investigated. College students taking the Dental Admission Test were divided by gender (936 females and 1,537 males) into 2 different samples. Rasch analyses were performed on both samples. Data were recalibrated after…
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Dental Schools
Peer reviewed
Bridgeman, Brent – Journal of Educational Measurement, 1992
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
PDF pending restoration
Hyers, Albert D.; Anderson, Paul S. – 1991
Using matched pairs of geography questions, a new testing method for machine-scored fill-in-the-blank, multiple-digit testing (MDT) questions was compared to the traditional multiple-choice (MC) style. Data were from 118 matched or parallel test items for 4 tests from 764 college students of geography. The new method produced superior results when…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
Ellis, Barbara B.; And Others – Journal of Cross-Cultural Psychology, 1993
Evaluates the measurement equivalence of an English-language version of the Trier Personality Inventory, using statistical methods based on item response theory to identify items displaying differential item functioning (DIF). Results with 295 U.S. and 213 West German undergraduates and 203 U.S. college students indicate significant agreement in…
Descriptors: College Students, Comparative Testing, Cross Cultural Studies, Cultural Differences
Peer reviewed
PDF on ERIC
Benderson, Albert, Ed. – Focus, 1988
The scores of handicapped students taking tests such as the Scholastic Aptitude Test (SAT) or the Graduate Record Examinations are flagged so that admissions officers will be aware that they were achieved under special circumstances. A series of studies was initiated to determine whether special administrations of such tests are comparable to…
Descriptors: Admission Criteria, College Admission, College Entrance Examinations, College Students
Pages: 1 | 2