| Descriptor | Count |
| --- | --- |
| Comparative Testing | 2 |
| Computer Assisted Testing | 2 |
| Computer Simulation | 2 |
| Test Construction | 2 |
| Adaptive Testing | 1 |
| Elementary Secondary Education | 1 |
| Item Banks | 1 |
| Microcomputers | 1 |
| Problem Solving | 1 |
| Science Tests | 1 |
| Statistical Distributions | 1 |
| Source | Count |
| --- | --- |
| Journal of Educational… | 1 |
| Author | Count |
| --- | --- |
| Collins, Allan | 1 |
| Wainer, Howard | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 1 |
| Opinion Papers | 1 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
Collins, Allan; And Others – 1991
The use of paper and pencil, videotape recordings, and microcomputers in student testing provides three very different views of student achievement. Paper and pencil tests can record how students compose texts and documents, and how they critique documents or performances. Video recordings can record how students explain ideas, answer questions,…
Descriptors: Comparative Testing, Computer Assisted Testing, Computer Simulation, Elementary Secondary Education
Peer reviewed. Wainer, Howard; And Others – Journal of Educational Measurement, 1992
Computer simulations were run to measure the relationship between testlet validity and factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Making a testlet adaptive yields only modest increases in aggregate validity because of the peakedness of the typical proficiency distribution. (Author/SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
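
The kind of simulation the Wainer abstract describes can be sketched in a few lines: draw examinee proficiencies from a peaked (standard normal) distribution, assemble either a fixed linear testlet or one matched to the examinee, score responses under an item response model, and use the score–proficiency correlation as a rough validity proxy. The sketch below is illustrative only, not the authors' design: the Rasch item model, the pool size, the testlet length, and the difficulty range are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(adaptive, pool_size=100, testlet_len=10, n_examinees=2000):
    """Monte Carlo sketch: correlation of testlet score with true proficiency.

    All parameter values are illustrative, not taken from the study.
    """
    theta = rng.normal(0.0, 1.0, n_examinees)   # peaked proficiency distribution
    b = rng.uniform(-2.0, 2.0, pool_size)       # item difficulty pool

    scores = np.zeros(n_examinees)
    for i, th in enumerate(theta):
        if adaptive:
            # idealized adaptive testlet: items whose difficulty best matches
            # the examinee (true theta used here, so this is an upper bound
            # on what adaptivity could buy; no provisional estimation)
            items = np.argsort(np.abs(b - th))[:testlet_len]
        else:
            # linear testlet: a fixed set of items centered on difficulty 0
            items = np.argsort(np.abs(b))[:testlet_len]
        p = 1.0 / (1.0 + np.exp(-(th - b[items])))       # Rasch response probabilities
        scores[i] = (rng.random(testlet_len) < p).sum()  # number-correct score

    # crude "validity" proxy: score vs. true proficiency correlation
    return np.corrcoef(scores, theta)[0, 1]

print("linear  :", round(simulate(adaptive=False), 3))
print("adaptive:", round(simulate(adaptive=True), 3))
```

Because most simulated examinees sit near the center of the peaked distribution, a fixed testlet targeted at middle difficulty is already well matched to them, which is the intuition behind the modest adaptive gain the abstract reports; varying `pool_size` and `testlet_len` in this sketch is one way to explore that relationship.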


