Lutkus, Anthony D.; Laskaris, George – 1981
Analyses of student responses to Introductory Psychology test questions were discussed. The publisher supplied a two-thousand-item test bank on computer tape, from which instructors selected questions for fifteen-item tests. The test questions were labeled by the publisher as factual or conceptual. The semester course used a mastery learning format in which…
Descriptors: Difficulty Level, Higher Education, Item Analysis, Item Banks

Richichi, Rudolph V. – 1996
An item analysis study was conducted for two multiple-choice tests of introductory psychology created from a publisher's test bank. A 47-item multiple-choice test was administered to 247 students, and 433 students took a different 42-item multiple-choice test. Participants were from a large northeastern suburban community college. The difficulty…
Descriptors: College Students, Community Colleges, Difficulty Level, Higher Education
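The classical item statistics this kind of study reports can be sketched briefly: difficulty as the proportion of examinees answering correctly, and discrimination as the correlation of each item with the rest-of-test score. This is an illustrative sketch of the standard formulas, not code from the study itself:

```python
# Classical item analysis for a 0/1-scored response matrix.
# Illustrative sketch; names and data are hypothetical.
import statistics

def item_difficulty(responses):
    """Proportion correct per item; responses is a list of 0/1 rows."""
    n = len(responses)
    return [sum(row[j] for row in responses) / n
            for j in range(len(responses[0]))]

def item_discrimination(responses):
    """Correlation of each item with the rest-of-test total score."""
    n = len(responses)
    stats = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        rest = [sum(row) - row[j] for row in responses]  # exclude item j
        mi, mr = statistics.mean(item), statistics.mean(rest)
        si, sr = statistics.pstdev(item), statistics.pstdev(rest)
        cov = sum((a - mi) * (b - mr) for a, b in zip(item, rest)) / n
        stats.append(cov / (si * sr) if si and sr else 0.0)
    return stats
```

On a small matrix, `item_difficulty([[1,0,1],[1,0,0],[0,0,1],[1,1,1]])` yields `[0.75, 0.25, 0.75]`; items with discrimination near zero or negative are the usual flags for revision.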
Dodds, Jeffrey – 1999
Basic precepts for test development are described and explained as they are presented in measurement textbooks commonly used in the fields of education and psychology. The five building blocks discussed as the foundation of well-constructed tests are: (1) specification of purpose; (2) standard conditions; (3) consistency; (4) validity; and (5)…
Descriptors: Difficulty Level, Educational Research, Grading, Higher Education
Tollefson, Nona; Tripp, Alice – 1983
This study compared the item difficulty and item discrimination of three multiple-choice item formats: a complex alternative ("none of the above") as the correct answer, a complex alternative as a foil, and the one-correct-answer format. One hundred four graduate students were randomly assigned to complete…
Descriptors: Analysis of Variance, Difficulty Level, Graduate Students, Higher Education

Bejar, Isaac I. – Journal of Educational Measurement, 1980
Two procedures are presented for detecting violations of the unidimensionality assumption made by latent trait models without requiring factor analysis of inter-item correlation matrices. Both procedures require that departures from unidimensionality be hypothesized beforehand. This is usually possible in achievement tests where several content…
Descriptors: Achievement Tests, Bayesian Statistics, Cluster Grouping, Content Analysis
Wise, Steven L.; And Others – 1991
According to item response theory (IRT), examinee ability estimation is independent of the particular set of test items administered from a calibrated pool. Although the most popular application of this feature of IRT is computerized adaptive (CA) testing, a recently proposed alternative is self-adapted (SA) testing, in which examinees choose the…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
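The IRT property the abstract invokes — that ability estimates do not depend on which calibrated items an examinee happens to receive — can be illustrated with maximum-likelihood estimation under the Rasch model. A minimal sketch, assuming calibrated item difficulties and a mix of correct and incorrect answers (all names here are illustrative):

```python
# Rasch-model ability estimation by Newton-Raphson MLE.
# Illustrative sketch, not the procedure of any cited study.
import math

def rasch_prob(theta, b):
    """P(correct) for ability theta on an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=50):
    """MLE of ability from 0/1 responses to calibrated items.

    Assumes the response pattern is neither all-correct nor all-wrong
    (otherwise the MLE is unbounded).
    """
    theta = 0.0
    for _ in range(iters):
        p = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - pi for x, pi in zip(responses, p))   # score function
        info = sum(pi * (1 - pi) for pi in p)               # Fisher information
        theta += grad / info                                # Newton step
    return theta
```

Because each item's contribution enters only through its own difficulty, any adequately calibrated subset of the pool yields a comparable estimate — the feature that makes both computerized-adaptive and self-adapted testing workable.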
Johnson, Phillip L.; And Others – 1991
The strategies examinees employ when making item difficulty level choices in self-adapted computerized testing were investigated. Subjects were 148 college students (88 females and 60 males) in an introductory statistics course. The primary instrument was a self-adapted computerized algebra test used to measure student readiness for the statistics…
Descriptors: Adaptive Testing, Algebra, College Students, Computer Assisted Testing
O'Brien, Michael; Hampilos, John P. – 1984
The feasibility of creating an item bank from a teacher-made test was examined in two comparable sections of a graduate-level introductory measurement course. The 67-item midterm examination contained multiple-choice and matching items, which required higher-level cognitive processes such as application and analysis. The feasibility of…
Descriptors: Computer Assisted Testing, Criterion Referenced Tests, Difficulty Level, Higher Education

Plake, Barbara S.; Melican, Gerald J. – Educational and Psychological Measurement, 1989
The impact of overall test length and difficulty on the expert judgments of item performance by the Nedelsky method were studied. Five university-level instructors predicting the performance of minimally competent candidates on a mathematics examination were fairly consistent in their assessments regardless of length or difficulty of the test.…
Descriptors: Difficulty Level, Estimation (Mathematics), Evaluators, Higher Education
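The Nedelsky method referenced in this study has a simple computational core: for each multiple-choice item, a judge marks the options a minimally competent examinee would recognize as wrong, the item's expected score is the reciprocal of the number of options remaining, and the cutoff score is the sum over items. A sketch of that arithmetic (function and data names are hypothetical):

```python
# Nedelsky standard-setting cutoff: expected score of a minimally
# competent examinee guessing among the options not eliminated.
# Illustrative sketch only.

def nedelsky_cutoff(options_per_item, eliminated_per_item):
    """Sum of 1 / (remaining options) across items.

    options_per_item: total answer options on each item.
    eliminated_per_item: options the judge says a minimally
    competent examinee would rule out on each item.
    """
    cut = 0.0
    for total, ruled_out in zip(options_per_item, eliminated_per_item):
        remaining = total - ruled_out
        cut += 1.0 / remaining  # random guess among what is left
    return cut
```

For three four-option items where a judge eliminates 2, 3, and 0 distractors, the cutoff is 1/2 + 1/1 + 1/4 = 1.75 points. In practice judgments are averaged over several judges, which is why their consistency across test lengths and difficulties mattered to this study.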

Bergstrom, Betty A.; And Others – Applied Measurement in Education, 1992
Effects of altering test difficulty on examinee ability measures and test length in a computer adaptive test were studied for 225 medical technology students in 3 test difficulty conditions. Results suggest that, with an item pool of sufficient depth and breadth, acceptable targeting to test difficulty is possible. (SLD)
Descriptors: Ability, Adaptive Testing, Change, College Students
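Targeting item difficulty to the examinee, as in the computer-adaptive design described above, is commonly implemented by selecting the unused pool item with maximum Fisher information at the current ability estimate; under the Rasch model this is the item whose difficulty is closest to that estimate. A minimal sketch with a hypothetical pool (not the cited study's algorithm):

```python
# Maximum-information item selection for a Rasch-model adaptive test.
# Illustrative sketch; pool contents are hypothetical.
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch item of difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, pool, administered):
    """Index of the unused pool item most informative at theta."""
    best, best_info = None, -1.0
    for i, b in enumerate(pool):
        if i in administered:
            continue
        info = rasch_information(theta, b)
        if info > best_info:
            best, best_info = i, info
    return best
```

With pool difficulties `[-2, -1, 0, 1, 2]` and a current estimate of 0.3, the selector returns the index of difficulty 0; the abstract's point is that such targeting only works when the pool has enough depth near every ability level.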
Maihoff, N. A.; Mehrens, Wm. A. – 1985
A comparison is presented of alternate-choice and true-false item forms used in an undergraduate natural science course. The alternate-choice item is a modified two-choice multiple-choice item in which the two responses are included within the question stem. This study (1) compared the difficulty level, discrimination level, reliability, and…
Descriptors: Classroom Environment, College Freshmen, Comparative Analysis, Comparative Testing
Stansfield, Charles W., Ed. – 1986
This collection of essays on measurement theory and language testing includes: "Computerized Adaptive Testing: Implications for Language Test Developers" (Peter Tung); "The Promise and Threat of Computerized Adaptive Assessment of Reading Comprehension" (Michael Canale); "Computerized Rasch Analysis of Item Bias in ESL…
Descriptors: Chinese, Cloze Procedure, Computer Assisted Testing, Computer Software
International Association for Development of the Information Society, 2012
The IADIS CELDA 2012 Conference was intended to address the main issues concerning evolving learning processes and the pedagogies and applications that support them in the digital age. There have been advances in both cognitive psychology and computing that have affected the educational arena. The convergence of these two disciplines is increasing at a…
Descriptors: Academic Achievement, Academic Persistence, Academic Support Services, Access to Computers