Descriptor
Difficulty Level | 10 |
Multiple Choice Tests | 10 |
Test Items | 9 |
Item Analysis | 7 |
Higher Education | 6 |
Test Construction | 4 |
Mathematical Models | 3 |
Scores | 3 |
Test Format | 3 |
Abstract Reasoning | 2 |
Achievement Tests | 2 |
Source
American Annals of the Deaf | 1 |
Publication Type
Reports - Research | 10 |
Speeches/Meeting Papers | 9 |
Journal Articles | 1 |
Audience
Researchers | 10 |
Assessments and Surveys
California Achievement Tests | 1 |
Reid, Jerry B. – 1985
This report investigates an area of uncertainty in using the Angoff method for setting standards: whether a judge's conceptualizations of borderline-group performance are realistic. Because ratings are usually made with reference to the performance of this hypothetical group, the Angoff method's success depends on this point.…
Descriptors: Certification, Cutting Scores, Difficulty Level, Interrater Reliability
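The Angoff procedure described above can be sketched in a few lines: each judge estimates, for every item, the probability that a borderline examinee would answer correctly, and the cut score is the mean across judges of each judge's summed item probabilities. This is a minimal illustrative sketch; the function name and the rating values are invented for the example, not taken from the report.

```python
def angoff_cut_score(ratings):
    """ratings[j][i] = judge j's probability estimate for item i.

    Each judge's summed probabilities give the expected raw score of a
    borderline examinee; the cut score is the mean of those totals.
    """
    judge_totals = [sum(judge) for judge in ratings]
    return sum(judge_totals) / len(judge_totals)

# Three hypothetical judges rating a four-item test
ratings = [
    [0.6, 0.7, 0.5, 0.8],
    [0.5, 0.6, 0.6, 0.7],
    [0.7, 0.8, 0.4, 0.9],
]
print(angoff_cut_score(ratings))  # → 2.6
```

The method's dependence on the judges' shared picture of the borderline group is visible here: every number in `ratings` is a judgment about that hypothetical group, so unrealistic conceptualizations propagate directly into the cut score.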

Garrison, Wayne; And Others – American Annals of the Deaf, 1992
This study examined characteristics of multiple-choice reading comprehension tasks suspected of influencing their difficulty, through administration of the California Achievement Tests to 158 deaf college students. Problem components evaluated included manifest content, psychologically salient features, and processing demands. Variation in item…
Descriptors: Cognitive Processes, College Students, Deafness, Difficulty Level
Livingston, Samuel A. – 1986
This paper deals with test fairness regarding a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted a total of four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students
Simpson, Deborah E.; Cohen, Elsa B. – 1985
This paper reports a multi-method approach for examining the cognitive level of multiple-choice items used in a medical pathology course at a large midwestern medical school. Analysis of the standard item-analysis data and think-out-loud reports from a sample of students completing a 66-item examination were used to test assumptions related to the…
Descriptors: Abstract Reasoning, Cognitive Objectives, Difficulty Level, Graduate Medical Education
Tanner, David E. – 1986
A multiple choice achievement test was constructed in which both cognitive level and degree of abstractness were controlled. Subjects were 75 students from a major university in the Southwest. A group of 13 judges, also university students, classified the concepts for degree of abstractness. Results indicated that both cognitive level and degree…
Descriptors: Abstract Reasoning, Achievement Tests, Analysis of Variance, Cognitive Processes
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
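Monte Carlo studies like the one above generate binary responses from an item response model and then examine how well estimation recovers the parameters. As a hedged sketch of the data-generation step only, the following simulates responses under the three-parameter logistic (3PL) model; the item parameters and sample size are illustrative, and the study itself generated data from the normal ogive model rather than the 3PL.

```python
import math
import random

def p_correct(theta, a, b, c):
    """3PL probability of a correct response at ability theta:
    c + (1 - c) / (1 + exp(-1.7 * a * (theta - b))),
    with a = discrimination, b = difficulty, c = guessing."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def simulate_responses(items, n_examinees, rng):
    """Draw abilities from N(0, 1) and return a 0/1 response matrix."""
    data = []
    for _ in range(n_examinees):
        theta = rng.gauss(0.0, 1.0)
        data.append([1 if rng.random() < p_correct(theta, a, b, c) else 0
                     for (a, b, c) in items])
    return data

rng = random.Random(0)
items = [(1.0, -0.5, 0.2), (1.2, 0.0, 0.25), (0.8, 0.5, 0.2)]
responses = simulate_responses(items, 500, rng)
print(len(responses), len(responses[0]))  # → 500 3
```

The guessing parameter `c` floors each probability, which is one source of the estimation difficulties such simulations are designed to expose.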
Huntley, Renee M.; Carlson, James E. – 1986
This study compared student performance on language-usage test items presented in two different formats: as discrete sentences and as items embedded in passages. Experimental units of the American College Testing (ACT) Program's Assessment were constructed to present 40 items in the two formats. Results suggest item presentation may not…
Descriptors: College Entrance Examinations, Difficulty Level, Goodness of Fit, Item Analysis
Garrido, Mariquita; Payne, David A. – 1987
Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data (n=20) and without data available (n=19), and the Modified Angoff method with (n=19) and without (n=19) item data available to judges. The Angoff method required free-response percentage estimates (0-100 percent),…
Descriptors: Academic Standards, Comparative Analysis, Criterion Referenced Tests, Cutting Scores
Anderson, Paul S.; Kanzler, Eileen M. – 1985
Test scores were compared for two types of objective achievement tests--multiple choice tests and the recently developed Multi-Digit Test (MDT) procedure. MDT is an approximation of the fill-in-the-blank technique. Students select their answers from long lists of alphabetized terms, with each answer corresponding to a number from 001 to 999. The…
Descriptors: Achievement Tests, Cloze Procedure, Comparative Testing, Computer Assisted Testing
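The MDT response mechanics described above (answers chosen from a long alphabetized term list, each term keyed to a number from 001 to 999) can be sketched as a simple code lookup and match count. The term list, answer key, and function names here are hypothetical illustrations, not materials from the study.

```python
# Build 3-digit codes for an alphabetized term list, MDT-style.
terms = sorted(["mean", "median", "mode", "range", "variance"])
codes = {term: f"{i + 1:03d}" for i, term in enumerate(terms)}

def score(responses, key):
    """Count responses whose 3-digit code matches the keyed term's code."""
    return sum(1 for r, k in zip(responses, key) if r == codes[k])

# A student answers three questions by writing down codes.
key = ["mean", "variance", "mode"]
responses = [codes["mean"], codes["variance"], codes["median"]]
print(score(responses, key))  # → 2
```

Because students supply a code rather than pick from four fixed options, the format approximates fill-in-the-blank recall while remaining machine-scorable, which is the trade-off the study examines.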