Gonzalez-Tamayo, Eulogio – 1984
An item content criterion for classifying items as biased, independent of the test's psychometric characteristics, is described. It was used with a sample of female adults in a training program for administrative secretaries. The minority group in the study was Hispanic immigrants. The majority group was a mixture of Blacks, English speaking…
Descriptors: Difficulty Level, Hispanic Americans, Item Analysis, Language Dominance
The Extent, Causes and Importance of Context Effects on Item Parameters for Two Latent Trait Models.

Yen, Wendy M. – Journal of Educational Measurement, 1980
A study of context effects on item parameters for one- and three-parameter latent trait models showed that: (1) changes in context affected item difficulties; and (2) context effects were more important in making predictions for single items than for groups of items. (Author/RL)
Descriptors: Achievement Tests, Context Effect, Difficulty Level, Grade 4

Garrison, Wayne; And Others – American Annals of the Deaf, 1992
This study examined characteristics of multiple-choice reading comprehension tasks suspected of influencing their difficulty, through administration of the California Achievement Tests to 158 deaf college students. Problem components evaluated included manifest content, psychologically salient features, and processing demands. Variation in item…
Descriptors: Cognitive Processes, College Students, Deafness, Difficulty Level
Mislevy, Robert J. – 1987
Standard procedures for estimating item parameters in Item Response Theory models make no use of auxiliary information about test items, such as their format or content, or the skills they require for solution. This paper describes a framework for exploiting this information, thereby enhancing the precision and stability of item parameter…
Descriptors: Bayesian Statistics, Difficulty Level, Estimation (Mathematics), Intermediate Grades