Showing all 4 results
Parshall, Cynthia G.; Kromrey, Jeffrey D.; Chason, Walter M. – 1996
The benefits of item response theory (IRT) will only accrue to a testing program to the extent that model assumptions are met. Obtaining accurate item parameter estimates is a critical first step. However, the sample sizes required for stable parameter estimation are often difficult to obtain in practice, particularly for the more complex models.…
Descriptors: Comparative Analysis, Estimation (Mathematics), Item Response Theory, Models
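The abstract above notes that stable IRT item parameter estimates require large samples, especially for more complex models. As an illustration only (not the authors' method), the sketch below simulates responses under a two-parameter logistic (2PL) model and recovers one item's discrimination and difficulty by maximum likelihood, treating examinee abilities as known — a simplifying assumption made purely so the example stays short. All names (`simulate_2pl`, `estimate_item`) and the true parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_2pl(theta, a, b, rng):
    # 2PL model: P(correct) = 1 / (1 + exp(-a * (theta - b)))
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(int)

def estimate_item(theta, responses):
    # MLE of (a, b) for a single item, with abilities theta treated as known
    def nll(params):
        a, b = params
        z = a * (theta - b)
        # negative Bernoulli log-likelihood with logit z
        return np.sum(np.log1p(np.exp(z)) - responses * z)
    return minimize(nll, x0=[1.0, 0.0], method="Nelder-Mead").x

a_true, b_true = np.array([1.2]), np.array([0.5])
for n in (100, 5000):  # small vs. large calibration sample
    theta = rng.standard_normal(n)
    resp = simulate_2pl(theta, a_true, b_true, rng)
    a_hat, b_hat = estimate_item(theta, resp[:, 0])
    print(f"n={n}: a_hat={a_hat:.2f}, b_hat={b_hat:.2f}")
```

Running the loop at the two sample sizes shows the usual pattern: the large-sample estimates sit much closer to the generating values, echoing the abstract's point that small calibration samples yield unstable parameter estimates.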
Doolittle, Allen E. – 1983
The stability of selected indices for detecting differential item performance (item bias), from one randomly equivalent sample to another, is addressed. Some recent research has criticized these indices as too unreliable to be useful for measuring bias in achievement test items. Using data from a national testing of the ACT Assessment, however, this…
Descriptors: Black Students, Item Analysis, Racial Factors, Reliability
Reckase, Mark D. – 1996
The American College Testing Program (ACT) is field testing a portfolio assessment model. The field test is designed to determine whether a portfolio assessment model can be implemented on a national level with scores sufficiently reliable and valid to be used for decisions at the student level.…
Descriptors: College Entrance Examinations, Cooperation, Field Tests, High Schools
Nichols, Teresa M.; And Others – 1994
Validity evidence was gathered for an instrument used at a state university to measure students' perceptions of the professional traits the institution stresses and their own performance of those traits. Subjects were 87 preservice teachers at the end of student teaching. Scores from the institutionally stressed importance…
Descriptors: Correlation, Evaluation Methods, Higher Education, Measurement Techniques