Showing all 9 results
Lorié, William A. – Online Submission, 2013
A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…
Descriptors: Numeracy, Mathematical Concepts, Mathematical Logic, Difficulty Level
Peer reviewed
Slinde, Jeffrey A.; Linn, Robert L. – Journal of Educational Measurement, 1979
The Rasch model was used to equate reading comprehension tests of widely different difficulty for three groups of fifth grade students of widely different ability. Under these extreme circumstances, the Rasch model equating was unsatisfactory. (Author/CTM)
Descriptors: Academic Ability, Bias, Difficulty Level, Equated Scores
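The equating the Slinde and Linn abstract refers to exploits a property of the Rasch model: two separately calibrated forms share a common logit scale up to a constant, so equating reduces to estimating a single difficulty shift from common items. A minimal sketch of that mean-shift idea, with all item names and difficulty values invented for illustration:

```python
# Hypothetical illustration of Rasch mean-shift equating: under the
# one-parameter model, two forms calibrated on different samples are
# placed on a common scale by shifting one form's item difficulties
# by a constant. All values below are made up for the sketch.

form_x = {"item1": -0.50, "item2": 0.20, "item3": 1.10}  # difficulties (logits)
form_y = {"item1": -0.20, "item2": 0.55, "item3": 1.40}  # same items, new sample

common = form_x.keys() & form_y.keys()
shift = sum(form_y[i] - form_x[i] for i in common) / len(common)

# Re-express form Y difficulties on form X's scale.
form_y_on_x = {i: b - shift for i, b in form_y.items()}
print(round(shift, 3))  # average calibration shift between the two samples
```

The Slinde and Linn finding is that this single-constant adjustment can break down when groups differ widely in ability, since real item-by-group interactions violate the model's invariance assumption.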
Forster, Fred; And Others – 1978
Research on the Rasch model of test and item analysis was applied to tests constructed from item banks for reading and mathematics with respect to five practical problems for scaling items and equating test forms. The questions were: (1) Does the Rasch model yield the same scale value regardless of the student sample? (2) How many students are…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores
Douglass, James B. – 1980
The three-, two-, and one-parameter (Rasch) logistic item characteristic curve models are compared for use in a large multi-section college course. Only the three-parameter model produced clearly unacceptable parameter estimates for 100-item tests with examinee samples ranging from 594 to 1082. The Rasch and two-parameter models were compared for…
Descriptors: Academic Ability, Achievement Tests, Course Content, Difficulty Level
Engelhard, George, Jr. – 1980
The Rasch model is described as a latent trait model which meets the five criteria that characterize reasonable and objective measurements of an individual's ability independent of the test items used. The criteria are: (1) calibration of test items must be independent of particular norming groups; (2) measurement of individuals must be…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores
Kreines, David C.; Mead, Ronald J. – 1979
An explanation is given of what is meant by "sample-free" item calibration and by "item-free" person measurement as these terms are applied to the one-parameter logistic test theory model of Georg Rasch. When the difficulty of an item is calibrated separately for two different samples the results may differ; but, according to the…
Descriptors: Difficulty Level, Equated Scores, Goodness of Fit, Item Analysis
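The "sample-free" property the Kreines and Mead abstract describes follows directly from the form of the one-parameter model: the log-odds of success depend only on the difference between ability and difficulty, so the comparison between any two items is the same at every ability level. A minimal sketch, with illustrative difficulty values chosen here:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the one-parameter (Rasch)
    logistic model: P = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def logit(p):
    return math.log(p / (1.0 - p))

b1, b2 = -0.4, 0.8  # illustrative item difficulties (logits)
for theta in (-1.0, 0.0, 2.0):
    # The log-odds difference between the items equals b2 - b1
    # no matter which ability level (sample) we evaluate it at.
    print(round(logit(rasch_p(theta, b1)) - logit(rasch_p(theta, b2)), 6))
```

This invariance of item comparisons across ability levels is what licenses calibrating items on one sample and measuring persons with any calibrated subset, at least when the model fits.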
Curry, Allen R.; And Others – 1978
The efficacy of employing subsets of items from a calibrated item pool to estimate the Rasch model person parameters was investigated. Specifically, the degree of invariance of Rasch model ability-parameter estimates was examined across differing collections of simulated items. The ability-parameter estimates were obtained from a simulation of…
Descriptors: Career Development, Difficulty Level, Equated Scores, Error of Measurement
Legg, Sue M.; Algina, James – 1986
This paper focuses on the questions which arise as test practitioners monitor score scales derived from latent trait theory. Large scale assessment programs are dynamic and constantly challenge the assumptions and limits of latent trait models. Even though testing programs evolve, test scores must remain reliable indicators of progress.…
Descriptors: Difficulty Level, Educational Assessment, Elementary Secondary Education, Equated Scores
Nassif, Paula M.; And Others – 1979
A procedure which employs a method of item substitution based on item difficulty is recommended for developing parallel criterion referenced test forms. This procedure is currently being used in the Florida functional literacy testing program and the Georgia teacher certification testing program. Reasons for developing parallel test forms involve…
Descriptors: Criterion Referenced Tests, Difficulty Level, Equated Scores, Functional Literacy
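The item-substitution procedure the Nassif abstract recommends can be sketched as a matching problem: for each slot in the base form, substitute the unused pool item whose calibrated difficulty is closest to the target. All item labels and difficulty values below are invented for illustration, not drawn from the Florida or Georgia programs:

```python
# Hypothetical sketch of building a parallel test form by
# difficulty-matched item substitution. Difficulties are in logits
# and are made up for the example.

pool = {"A": -0.9, "B": -0.3, "C": 0.1, "D": 0.6, "E": 1.2}
base_form = [-1.0, 0.0, 1.0]  # target difficulties from the original form

parallel, used = [], set()
for target in base_form:
    # Greedily pick the closest remaining pool item for each target slot.
    best = min((i for i in pool if i not in used),
               key=lambda i: abs(pool[i] - target))
    used.add(best)
    parallel.append(best)
print(parallel)  # → ['A', 'C', 'E']
```

A greedy closest-match pass like this keeps the forms similar in overall difficulty, which is the stated rationale for substitution-based parallel forms in criterion-referenced programs.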