Green, Donald Ross; And Others – Applied Measurement in Education, 1989 (peer reviewed)
Potential benefits of using item response theory in test construction are evaluated in light of nine years of experience and evidence from applying a three-parameter model in the development of major achievement batteries. Topics addressed include error of measurement, test equating, item bias, and item difficulty. (TJH)
Descriptors: Achievement Tests, Computer Assisted Testing, Difficulty Level, Equated Scores
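The three-parameter model referenced in this abstract is the three-parameter logistic (3PL) model, which gives the probability of a correct response as a function of examinee ability theta and item parameters a (discrimination), b (difficulty), and c (pseudo-guessing). A minimal sketch of the response function; the item parameter values below are illustrative assumptions, not values from the article:

```python
import math

def p_correct_3pl(theta, a, b, c, D=1.7):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b))),
    where D = 1.7 is the conventional scaling constant."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# Illustrative item: moderate discrimination, average difficulty, and a
# guessing floor typical of a four-option multiple-choice item.
print(p_correct_3pl(theta=0.0, a=1.0, b=0.0, c=0.25))  # prints 0.625
```

At theta equal to the item difficulty (theta = b), the exponential term is 1, so the probability is c plus half of (1 - c), which is where the lower asymptote c makes its effect on difficulty most visible.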
Bergstrom, Betty A.; And Others – Applied Measurement in Education, 1992 (peer reviewed)
Effects of altering test difficulty on examinee ability measures and test length in a computer adaptive test were studied for 225 medical technology students assigned to three test difficulty conditions. Results suggest that, with an item pool of sufficient depth and breadth, acceptable targeting to test difficulty is possible. (SLD)
Descriptors: Ability, Adaptive Testing, Change, College Students
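Computer adaptive tests of the kind studied here commonly select each next item to maximize Fisher information at the current ability estimate, which is what makes targeting to a chosen difficulty level possible. A hedged sketch of maximum-information selection under the 3PL model; the item pool, ability estimate, and selection routine below are hypothetical illustrations, not the procedure used in the study:

```python
import math

def p_correct_3pl(theta, a, b, c, D=1.7):
    """3PL probability of a correct response (see the sketch above)."""
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

def item_information_3pl(theta, a, b, c, D=1.7):
    """Fisher information of a 3PL item at ability theta:
    I(theta) = D^2 * a^2 * (1 - P)/P * ((P - c)/(1 - c))^2."""
    p = p_correct_3pl(theta, a, b, c, D)
    return (D ** 2) * (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def select_next_item(theta_hat, pool, administered):
    """Return the index of the unadministered item whose information
    is highest at the current ability estimate theta_hat."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates,
               key=lambda i: item_information_3pl(theta_hat, *pool[i]))

# Hypothetical pool of (a, b, c) triples; none of these values come
# from the article, and item 0 has already been administered.
pool = [(1.2, -1.0, 0.20), (0.8, 0.0, 0.25), (1.5, 0.5, 0.20), (1.0, 1.2, 0.15)]
print(select_next_item(theta_hat=0.4, pool=pool, administered={0}))  # prints 2
```

Because information peaks near theta = b (shifted slightly by the guessing parameter), this rule keeps administering items whose difficulty tracks the examinee's estimated ability, which is why pool depth and breadth, as the abstract notes, determine how well the test can be targeted.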


