Showing all 8 results
Peer reviewed
Slinde, Jefferey A.; Linn, Robert L. – Journal of Educational Measurement, 1978
Use of the Rasch model for vertical equating of tests is discussed. Although use of the model is promising, empirical results raise questions about the adequacy of the Rasch model. Latent trait models with more parameters may be necessary. (JKS)
Descriptors: Achievement Tests, Difficulty Level, Equated Scores, Higher Education
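For context, the Rasch model referenced in the Slinde and Linn abstract above gives the probability of a correct response as a function of a single examinee ability parameter and a single item difficulty parameter (a standard statement of the model, not reproduced from the article):

\[
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{e^{\theta_i - b_j}}{1 + e^{\theta_i - b_j}}
\]

Vertical equating across test levels presumes the item difficulties b_j stay invariant (up to a common shift) across groups of differing ability; the empirical questions the abstract mentions concern whether that assumption holds well enough in practice.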
Peer reviewed
Beuchert, A. Kent; Mendoza, Jorge L. – Journal of Educational Measurement, 1979
Ten item discrimination indices were compared across a variety of item analysis situations, based on the validities of tests constructed by using each index to select 40 items from a 100-item pool. Item score data were generated by a computer program and included a simulation of guessing. (Author/CTM)
Descriptors: Item Analysis, Simulation, Statistical Analysis, Test Construction
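As an illustration of the kind of index compared in the Beuchert and Mendoza study above, the sketch below computes one widely used discrimination index, the corrected item-total (point-biserial) correlation, on simulated 0/1 item scores and uses it to pick 40 items from a 100-item pool. The data generator and choice of index are assumptions for illustration; the article compared ten indices under its own simulation design.

import numpy as np

rng = np.random.default_rng(0)

# Simulated 0/1 scores for 500 examinees on a 100-item pool.
# (Illustrative generator; the article's simulation, which included
# guessing, is not reproduced here.)
n_examinees, n_items = 500, 100
ability = rng.normal(size=n_examinees)
difficulty = rng.normal(size=n_items)
p = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
scores = (rng.random((n_examinees, n_items)) < p).astype(int)

total = scores.sum(axis=1)

def corrected_item_total(j):
    """Point-biserial correlation of item j with the rest score
    (total minus the item itself, to avoid inflating the index)."""
    rest = total - scores[:, j]
    return np.corrcoef(scores[:, j], rest)[0, 1]

index = np.array([corrected_item_total(j) for j in range(n_items)])

# Keep the 40 most discriminating items, mirroring the study's design.
selected = np.argsort(index)[::-1][:40]
print(np.sort(selected))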
Peer reviewed
Ebel, Robert L. – Journal of Educational Measurement, 1982
Reasonable and practical solutions to two major problems confronting the developer of any test of educational achievement (what to measure and how to measure it) are proposed, defended, and defined. (Author/PN)
Descriptors: Measurement Techniques, Objective Tests, Test Construction, Test Items
Peer reviewed
Hambleton, Ronald K.; Eignor, Daniel R. – Journal of Educational Measurement, 1978
A set of guidelines for evaluating criterion-referenced tests is presented. Additionally, 11 sets of extant criterion-referenced tests are evaluated using these guidelines. (JKS)
Descriptors: Achievement Tests, Criterion Referenced Tests, Evaluation Criteria, Guidelines
Peer reviewed
Leinhardt, Gaea; Seewald, Andrea Mar – Journal of Educational Measurement, 1981
In studying the effectiveness of different instructional programs, a criterion measure can favor one program because there is greater overlap between the content covered on the test and the content covered in that program. This overlap can be measured using teacher estimates, or teacher estimates combined with curriculum analysis. (Author/BW)
Descriptors: Criterion Referenced Tests, Curriculum, Elementary School Mathematics, Learning Disabilities
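A minimal sketch of the overlap measurement the Leinhardt and Seewald abstract describes, under an assumed scoring scheme: each test item is flagged as covered or not by a teacher estimate, optionally cross-checked against a curriculum analysis, and the overlap index is the fraction of items covered. The flags and the combination rule are hypothetical; the article's actual procedure is not reproduced here.

def overlap_index(teacher_flags, curriculum_flags=None):
    """Fraction of test items covered by the instructional program.
    teacher_flags: one 0/1 flag per item from a teacher estimate.
    curriculum_flags: optional flags from a curriculum analysis; when
    supplied, an item counts as covered only if both sources agree.
    (Hypothetical combination rule, for illustration only.)"""
    if curriculum_flags is None:
        flags = teacher_flags
    else:
        flags = [t and c for t, c in zip(teacher_flags, curriculum_flags)]
    return sum(flags) / len(flags)

# A criterion test overlapping more with one program than another:
print(overlap_index([1, 1, 1, 0, 1, 1, 0, 1, 1, 1]))  # 0.8 for program A
print(overlap_index([1, 0, 1, 0, 1, 0, 0, 1, 0, 1]))  # 0.5 for program B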
Peer reviewed
Chen, Shu-Ying; Ankenman, Robert D. – Journal of Educational Measurement, 2004
The purpose of this study was to compare the effects of four item selection rules, (1) Fisher information (F), (2) Fisher information with a posterior distribution (FP), (3) Kullback-Leibler information with a posterior distribution (KP), and (4) completely randomized item selection (RN), with respect to the precision of trait estimation and the…
Descriptors: Test Length, Adaptive Testing, Computer Assisted Testing, Test Selection
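Two of the four rules compared by Chen and Ankenman can be sketched briefly: maximum Fisher information (F) and completely randomized selection (RN). The sketch below assumes a 2PL item bank with discrimination a and difficulty b; the article's actual bank, model, and the posterior-weighted rules (FP, KP) are not reproduced.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2PL item bank: discrimination a_j, difficulty b_j.
a = rng.uniform(0.5, 2.0, size=200)
b = rng.normal(size=200)

def fisher_information(theta):
    """2PL Fisher information I_j(theta) = a_j^2 * P_j * (1 - P_j),
    with P_j the probability of a correct response at theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_f(theta, administered):
    """Rule F: the unadministered item with maximum information at
    the current trait estimate."""
    info = fisher_information(theta)
    if administered:
        info[list(administered)] = -np.inf
    return int(np.argmax(info))

def select_rn(administered):
    """Rule RN: completely randomized selection among remaining items."""
    remaining = [j for j in range(len(a)) if j not in administered]
    return int(rng.choice(remaining))

print(select_f(0.0, set()))   # most informative item at theta = 0
print(select_rn(set()))       # a randomly chosen item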
Peer reviewed
Yoshida, Roland K. – Journal of Educational Measurement, 1976
Teacher-selected test level resulted in: (1) most Educable Mentally Retarded students tested responding above chance levels on all subtests of the Metropolitan Achievement Test, (2) reliability coefficients comparable to those of the standardization sample, and (3) moderate to high positive point-biserial correlations for all subtest-level…
Descriptors: Achievement Tests, Age Grade Placement, Elementary Secondary Education, Item Analysis
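For dichotomously scored subtests like those in the Yoshida study above, reliability coefficients are conventionally computed with the Kuder-Richardson formula 20 (a standard formula, not taken from the article):

\[
\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{j=1}^{k} p_j q_j}{\sigma_X^2}\right)
\]

where k is the number of items, p_j is the proportion answering item j correctly, q_j = 1 - p_j, and σ_X² is the variance of total scores.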
Peer reviewed
Plake, Barbara S.; Hoover, H. D. – Journal of Educational Measurement, 1979
An experiment investigated the extent to which the results of out-of-level testing may be biased because a child given an out-of-level test may have had a significantly different curriculum from children given in-level tests. Item analysis data suggested this was unlikely. (CTM)
Descriptors: Achievement Tests, Elementary Education, Elementary School Curriculum, Grade Equivalent Scores