Clements, Andrea D.; Rothenberg, Lori – Research in the Schools, 1996
Undergraduate psychology examinations from 48 schools were analyzed to determine the proportion of items at each level of Bloom's Taxonomy, item format, and test length. Analyses indicated significant relationships between item complexity and test length even when taking format into account. Use of higher-level items may be related to shorter tests,…
Descriptors: Classification, Difficulty Level, Educational Objectives, Higher Education
Catts, Ralph – 1978
The reliability of multiple choice tests--containing different numbers of response options--was investigated for 260 students enrolled in technical college economics courses. Four test forms, constructed from previously used four-option items, were administered, consisting of (1) 60 two-option items--two distractors randomly discarded; (2) 40…
Descriptors: Answer Sheets, Difficulty Level, Foreign Countries, Higher Education
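
The length-versus-reliability trade-off underlying the Catts study is conventionally projected with the Spearman-Brown formula. A minimal sketch (standard psychometrics, not taken from the paper itself), assuming parallel items:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Project reliability when a test is lengthened (factor > 1) or
    shortened (factor < 1), assuming the added/removed items are
    parallel to the originals."""
    r, k = reliability, length_factor
    return k * r / (1 + (k - 1) * r)

# Halving a test (k = 0.5) that had reliability 0.80, e.g. dropping
# from 60 four-option items to a 30-item form:
projected = spearman_brown(0.80, 0.5)  # 0.4 / 0.6 ≈ 0.667
```

Note that the formula only accounts for the number of items; it cannot capture the change in guessing rate when distractors (rather than items) are discarded, which is precisely the complication the study's two- and three-option forms introduce.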
Oosterhof, Albert C.; Coats, Pamela K. – 1981
Instructors who develop classroom examinations that require students to provide a numerical response to a mathematical problem are often very concerned about the appropriateness of the multiple-choice format. The present study augments previous research relevant to this concern by comparing the difficulty and reliability of multiple-choice and…
Descriptors: Comparative Analysis, Difficulty Level, Grading, Higher Education
Hambleton, Ronald K.; And Others – 1987
The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…
Descriptors: Comparative Analysis, Content Validity, Cutting Scores, Difficulty Level
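
The "optimal" IRT method Hambleton and others compare typically means choosing the items with maximum Fisher information at the cut score. A hedged sketch of that idea under the three-parameter logistic model (parameter ranges are invented for illustration, not taken from the 1985 certification pool):

```python
import math
import random

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def info3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p3pl(theta, a, b, c)
    return a**2 * ((1 - p) / p) * ((p - c) / (1 - c))**2

def optimal_selection(pool, theta0, n):
    """Optimal item selection: the n items most informative at the
    cut score theta0."""
    return sorted(pool, key=lambda it: -info3pl(theta0, *it))[:n]

# Hypothetical 250-item pool: (discrimination, difficulty, guessing)
random.seed(0)
pool = [(random.uniform(0.5, 2.0),
         random.uniform(-2.0, 2.0),
         random.uniform(0.0, 0.25)) for _ in range(250)]
exam = optimal_selection(pool, theta0=0.0, n=20)
```

Random and classical selection, by contrast, ignore where on the ability scale the exam needs to discriminate, which is what makes the comparison at a fixed cut score informative.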
Livingston, Samuel A. – 1987
The effect of increased writing or planning time on a test of basic college level writing ability was studied. The essay portion of the New Jersey College Basic Skills Placement Test was given to students in nine New Jersey public colleges and three New Jersey public high schools. Each student wrote two essays on two different topics. The first…
Descriptors: Academic Ability, Difficulty Level, Essay Tests, High Schools

Plake, Barbara S.; Melican, Gerald J. – Educational and Psychological Measurement, 1989
The impact of overall test length and difficulty on expert judgments of item performance made with the Nedelsky method was studied. Five university-level instructors predicting the performance of minimally competent candidates on a mathematics examination were fairly consistent in their assessments regardless of the length or difficulty of the test.…
Descriptors: Difficulty Level, Estimation (Mathematics), Evaluators, Higher Education

Bergstrom, Betty A.; And Others – Applied Measurement in Education, 1992
Effects of altering test difficulty on examinee ability measures and test length in a computer adaptive test were studied for 225 medical technology students in 3 test difficulty conditions. Results suggest that, with an item pool of sufficient depth and breadth, acceptable targeting to test difficulty is possible. (SLD)
Descriptors: Ability, Adaptive Testing, Change, College Students
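
"Targeting to test difficulty" in the Bergstrom record means choosing items whose success probability for the current examinee matches a chosen level. Under the Rasch model this has a closed form; a sketch of one selection step (function names and the item pool are illustrative, not from the study):

```python
import math

def target_difficulty(theta_hat, p_success=0.5):
    """Rasch difficulty b giving success probability p at theta_hat:
    p = 1 / (1 + exp(b - theta))  =>  b = theta - ln(p / (1 - p))."""
    return theta_hat - math.log(p_success / (1 - p_success))

def next_item(pool_difficulties, theta_hat, p_success=0.5):
    """Pick the remaining item closest to the target difficulty.
    p_success = 0.5 maximizes information; an easier target such as
    0.6 deliberately off-targets the test, as in the study's
    manipulated conditions."""
    b_star = target_difficulty(theta_hat, p_success)
    return min(pool_difficulties, key=lambda b: abs(b - b_star))
```

The study's finding that acceptable targeting requires a pool of sufficient depth and breadth follows directly from this rule: if no remaining item lies near `b_star`, every selection is off-target.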
Hisama, Kay K.; And Others – 1977
The optimal test length, using predictive validity as a criterion, depends on two major factors: appropriate item difficulty rather than the total number of items, and the method used to score the test. These conclusions were reached when responses to a 100-item multi-level test of reading comprehension from 136 non-native speakers of…
Descriptors: College Students, Difficulty Level, English (Second Language), Foreign Students