Showing all 4 results
Peer reviewed
Garg, Rashmi; And Others – Journal of Educational Measurement, 1986
To obtain data for use in test development, multiple matrix sampling plans were compared to examinee sampling plans. Data were simulated for examinees, sampled from a population with a normal distribution of ability, responding to items selected from an item universe. (Author/LMO)
Descriptors: Difficulty Level, Monte Carlo Methods, Sampling, Statistical Studies
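The simulation design Garg and others describe can be sketched as follows; the sample sizes, subtest length, and use of a Rasch-style response model here are assumptions for illustration, not details from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_items, subtest_size = 2000, 40, 10  # hypothetical sizes

# Abilities drawn from a normal population; item difficulties on a logit scale.
theta = rng.normal(0.0, 1.0, n_examinees)
b = rng.normal(0.0, 1.0, n_items)

# Simulate 0/1 responses under a Rasch-style model (an assumed choice).
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((n_examinees, n_items)) < p).astype(float)

# Examinee sampling plan: every examinee answers every item.
p_full = responses.mean(axis=0)

# Matrix sampling plan: each examinee answers a random subset of items,
# so each item's difficulty is estimated from only part of the sample.
mask = np.zeros((n_examinees, n_items), dtype=bool)
for i in range(n_examinees):
    mask[i, rng.choice(n_items, size=subtest_size, replace=False)] = True
p_matrix = np.nanmean(np.where(mask, responses, np.nan), axis=0)
```

Comparing `p_full` and `p_matrix` shows how closely the matrix-sampled difficulty estimates track the full-data estimates, which is the kind of comparison such a study would rest on.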
Peer reviewed
Mehrens, William A.; Phillips, S. E. – Journal of Educational Measurement, 1987
A taxonomic matrix classification was used to assess the curricular validity of the Stanford Achievement Tests for the mathematics textbooks used in a school district's fifth and sixth grades. Rasch item difficulty was also examined. Results indicated only small differences between textbooks. (GDC)
Descriptors: Difficulty Level, Elementary School Mathematics, Intermediate Grades, Item Analysis
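Rasch item difficulty, which Mehrens and Phillips examined alongside the content match, can be approximated from proportion-correct data with a simple log-odds transformation; the proportions below are hypothetical, and this one-step logit estimate is only a rough stand-in for a full Rasch calibration:

```python
import numpy as np

# Hypothetical proportion-correct values for five items.
p_correct = np.array([0.85, 0.70, 0.55, 0.40, 0.25])

# Log-odds of an incorrect response: harder items get larger values.
logits = np.log((1.0 - p_correct) / p_correct)

# Center so the mean item difficulty is zero, the usual Rasch convention.
rasch_difficulty = logits - logits.mean()
```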
Peer reviewed
Forsyth, Robert A.; Spratt, Kevin F. – Journal of Educational Measurement, 1980
The effects of two item formats on item difficulty and item discrimination indices for mathematics problem-solving multiple-choice tests were investigated. One format required identifying the proper "set-up" for the item; the other required solving the item completely. (Author/JKS)
Descriptors: Difficulty Level, Junior High Schools, Multiple Choice Tests, Problem Solving
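The two indices Forsyth and Spratt compared across formats are standard classical statistics; a minimal sketch with hypothetical 0/1 scored data, using the corrected item-total (point-biserial) correlation as the discrimination index:

```python
import numpy as np

# Hypothetical scored responses: rows = examinees, columns = items.
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
], dtype=float)

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = scores.mean(axis=0)

# Item discrimination: correlation of each item with the total score on
# the remaining items (corrected item-total correlation).
n_items = scores.shape[1]
discrimination = np.empty(n_items)
for j in range(n_items):
    rest_total = scores.sum(axis=1) - scores[:, j]
    discrimination[j] = np.corrcoef(scores[:, j], rest_total)[0, 1]
```

A format effect would show up as a systematic shift in these indices between "set-up" items and fully solved items.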
Peer reviewed
Kirsch, Irwin S. – Journal of Educational Measurement, 1980
The construct validity of reading comprehension test items was studied in a two-stage process. Five characteristics of task difficulty were defined, and a heterogeneous set of 52 items was rated for these characteristics. Then correlations were obtained between the ratings and item difficulty data. (CTM)
Descriptors: Adults, Cognitive Processes, Difficulty Level, Evaluation Criteria
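The second stage of Kirsch's analysis, correlating characteristic ratings with observed difficulty, amounts to a Pearson correlation; the ratings and difficulty values below are invented for illustration:

```python
import numpy as np

# Hypothetical ratings of one task characteristic for eight items,
# paired with observed item difficulty (proportion incorrect).
ratings = np.array([1, 2, 2, 3, 3, 4, 4, 5], dtype=float)
difficulty = np.array([0.10, 0.20, 0.25, 0.35, 0.40, 0.55, 0.60, 0.70])

# Pearson correlation between the rated characteristic and difficulty.
r = np.corrcoef(ratings, difficulty)[0, 1]
```

A strong positive `r` would support the claim that the rated characteristic captures a real source of item difficulty.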