Showing all 4 results
Peer reviewed
Taylor, Catherine S.; Lee, Yoonsun – Applied Measurement in Education, 2010
Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…
Descriptors: Measures (Individuals), Item Response Theory, Robustness (Statistics), Item Analysis
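As background only (not taken from the article itself), a minimal sketch of what dichotomous versus polytomous IRT scoring means: the dichotomous Rasch model and one common polytomous extension, the partial credit model. The function names rasch_prob and pcm_probs and all parameter values are illustrative assumptions.

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of a correct response
    for ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def pcm_probs(theta, thresholds):
    """Partial credit model (a common polytomous Rasch extension):
    category probabilities for an item with the given step thresholds."""
    # Cumulative sums of (theta - threshold) give the category logits;
    # category 0 has a logit of 0 by convention.
    logits = np.concatenate(([0.0], np.cumsum(theta - np.asarray(thresholds))))
    expl = np.exp(logits - logits.max())  # stabilized softmax
    return expl / expl.sum()

# Example: an examinee of average ability on a dichotomous item
# and on a 3-category (0/1/2) polytomous item.
print(rasch_prob(theta=0.0, b=-0.5))
print(pcm_probs(theta=0.0, thresholds=[-1.0, 0.5]))
```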
Peer reviewed
Wainer, Howard; Thissen, David – Applied Measurement in Education, 1993
Because assessment instruments of the future may well be composed of a combination of question types, a way to combine the resulting scores effectively is discussed. Two new graphic tools are presented that show it may not be practical to equalize the reliability of different components. (SLD)
Descriptors: Constructed Response, Educational Assessment, Graphs, Item Response Theory
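For context only, a hedged sketch of how a weighted composite's reliability depends on the components' reliabilities, weights, and correlations. This is the standard composite-reliability formula under uncorrelated errors, not necessarily the graphic tools the article presents; composite_reliability and the example numbers are assumptions.

```python
import numpy as np

def composite_reliability(weights, sds, reliabilities, corr):
    """Reliability of a weighted composite of component scores.

    weights, sds, reliabilities: per-component values.
    corr: observed-score correlation matrix among components.
    Assumes measurement errors are uncorrelated across components.
    """
    w = np.asarray(weights, float)
    s = np.asarray(sds, float)
    r = np.asarray(reliabilities, float)
    cov = np.asarray(corr, float) * np.outer(s, s)     # observed covariance matrix
    total_var = w @ cov @ w                            # variance of the composite
    error_var = np.sum((w * s) ** 2 * (1.0 - r))       # summed weighted error variance
    return 1.0 - error_var / total_var

# Example: a multiple-choice section (high reliability) combined with a
# constructed-response section (lower reliability), correlated 0.6.
print(composite_reliability(weights=[0.7, 0.3],
                            sds=[10.0, 8.0],
                            reliabilities=[0.90, 0.70],
                            corr=[[1.0, 0.6], [0.6, 1.0]]))
```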
Peer reviewed
Angoff, William H. – Applied Measurement in Education, 1988
Suggestions are provided for future research in item bias detection, reduction of essay-reader variation in setting cut-score levels, and limitations of equating theory. (TJH)
Descriptors: College Entrance Examinations, Cutting Scores, Equated Scores, Essay Tests
Peer reviewed
Schiel, Jeffrey L.; Shaw, Dale G. – Applied Measurement in Education, 1992
Changes in information retention resulting from changes in reliability and in the number of intervals used in scale construction were studied to provide quantitative guidance for decisions about the choice of intervals. Information retention reached a maximum when the number of intervals was about 8 or more and reliability was near 1.0. (SLD)
Descriptors: Decision Making, Knowledge Level, Mathematical Models, Monte Carlo Methods
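As an illustrative sketch only (the article's own information measure may differ), a small Monte Carlo simulation, in the spirit of the Monte Carlo Methods descriptor, of how coarsening observed scores into k intervals affects a correlation-based proxy for information retention at a given reliability. The retention function and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def retention(reliability, n_intervals, n=200_000):
    """Proxy for information retention: squared correlation between true
    scores and interval-coded observed scores, relative to the squared
    correlation with the continuous observed scores."""
    true = rng.normal(size=n)
    error_sd = np.sqrt(1.0 / reliability - 1.0)   # chosen so corr(true, obs)^2 = reliability
    observed = true + rng.normal(scale=error_sd, size=n)
    # Cut the observed scores into equal-width intervals, coded 0..k-1.
    edges = np.linspace(observed.min(), observed.max(), n_intervals + 1)
    coded = np.digitize(observed, edges[1:-1])
    r_cont = np.corrcoef(true, observed)[0, 1] ** 2
    r_binned = np.corrcoef(true, coded)[0, 1] ** 2
    return r_binned / r_cont

for k in (2, 4, 8, 16):
    print(k, round(retention(reliability=0.9, n_intervals=k), 3))
```

Under these assumptions the proxy rises with the number of intervals and levels off around eight or more, which is at least consistent with the pattern the abstract reports.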