Showing all 10 results
Samejima, Fumiko; Changas, Paul S. – 1981
Methods and approaches for estimating the operating characteristics of discrete item responses without assuming any mathematical form have been developed and expanded. It is now possible to use a given test as the Old Test even when its test information function is not constant over the ability interval of interest.…
Descriptors: Adaptive Testing, Latent Trait Theory, Mathematical Models, Methods
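The entry above turns on the test information function, which in IRT is the sum of the item information functions evaluated at a given ability level; the "Old Test" is the anchor whose information need not be constant. A minimal sketch under common two-parameter logistic (2PL) assumptions (the model, parameter values, and grid below are illustrative, not taken from the paper):

    import numpy as np

    def p_2pl(theta, a, b):
        """2PL probability of a correct response (D = 1.7 scaling)."""
        return 1.0 / (1.0 + np.exp(-1.7 * a * (theta - b)))

    def test_information(theta, a, b):
        """Test information: sum over items of D^2 * a_i^2 * P_i * (1 - P_i)."""
        p = p_2pl(theta[:, None], a, b)            # shape (n_theta, n_items)
        item_info = (1.7 * a) ** 2 * p * (1 - p)
        return item_info.sum(axis=1)

    # Illustrative parameters: the information curve is clearly not constant over theta.
    a = np.array([0.8, 1.2, 1.5, 0.9])
    b = np.array([-1.0, 0.0, 0.5, 1.5])
    theta_grid = np.linspace(-3, 3, 7)
    print(np.round(test_information(theta_grid, a, b), 2))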
Hwang, Chi-en; Cleary, T. Anne – 1986
Results from two basic types of test pre-equating were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3,000 examinees on two 72-item…
Descriptors: Computer Simulation, Equated Scores, Latent Trait Theory, Mathematical Models
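The simulation design above generates responses from a three-parameter logistic (3PL) model whose guessing parameter is the same for every item. A minimal sketch of generating such data; the item and sample sizes follow the abstract, while the guessing value, parameter ranges, and seed are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)

    def p_3pl(theta, a, b, c):
        """3PL: P = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))."""
        return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))

    n_items, n_examinees, c_const = 72, 3000, 0.2   # constant guessing across items (0.2 is illustrative)
    a = rng.uniform(0.5, 2.0, n_items)
    b = rng.normal(0.0, 1.0, n_items)
    theta = rng.normal(0.0, 1.0, n_examinees)

    probs = p_3pl(theta[:, None], a, b, c_const)    # (n_examinees, n_items)
    responses = (rng.random(probs.shape) < probs).astype(int)
    print(responses.shape, responses.mean())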
Peer reviewed
Hambleton, Ronald K.; De Gruijter, Dato N. M. – Journal of Educational Measurement, 1983
Addressing the shortcomings of classical item statistics for selecting criterion-referenced test items, this paper describes an optimal item selection procedure utilizing item response theory (IRT) and offers examples in which random selection and optimal item selection methods are compared. Theoretical advantages of optimal selection based upon…
Descriptors: Criterion Referenced Tests, Cutting Scores, Item Banks, Latent Trait Theory
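Optimal item selection for a criterion-referenced test usually means drawing from an item bank the items that contribute the most information at the cut score, rather than sampling at random. A minimal sketch under 2PL assumptions (bank size, parameters, cut score, and test length below are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    def item_information(theta, a, b):
        """2PL item information at ability theta."""
        p = 1.0 / (1.0 + np.exp(-1.7 * a * (theta - b)))
        return (1.7 * a) ** 2 * p * (1 - p)

    a = rng.uniform(0.4, 2.0, 200)      # item bank discriminations
    b = rng.normal(0.0, 1.0, 200)       # item bank difficulties
    cut_score = 0.5                     # ability cut-off theta_c
    n_select = 20

    info_at_cut = item_information(cut_score, a, b)
    optimal = np.argsort(info_at_cut)[::-1][:n_select]     # most informative at theta_c
    random_pick = rng.choice(200, n_select, replace=False)

    print("optimal test info at cut:", info_at_cut[optimal].sum().round(2))
    print("random test info at cut: ", info_at_cut[random_pick].sum().round(2))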
Cohen, Allan S.; Kim, Seock-Ho – 1993
Equating tests from different calibrations under item response theory (IRT) requires calculation of the slope and intercept of the appropriate linear transformation. Two methods have been proposed recently for equating graded response items under IRT, a test characteristic curve method and a minimum chi-square method. These two methods are…
Descriptors: Chi Square, Comparative Analysis, Computer Simulation, Equated Scores
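Placing two calibrations on a common scale comes down to finding the slope A and intercept B of a linear transformation theta* = A * theta + B, with discriminations rescaled as a* = a / A and difficulties as b* = A * b + B. The characteristic-curve and minimum chi-square methods compared above estimate A and B in different ways; the sketch below uses the simpler mean/sigma approach instead, with illustrative common-item difficulties:

    import numpy as np

    def mean_sigma_transform(b_new, b_old):
        """Mean/sigma estimates of the equating slope A and intercept B
        from common-item difficulty estimates on two calibrations."""
        A = np.std(b_old, ddof=1) / np.std(b_new, ddof=1)
        B = np.mean(b_old) - A * np.mean(b_new)
        return A, B

    # Illustrative common-item difficulties from two separate calibrations.
    b_new = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
    b_old = np.array([-0.9, -0.1, 0.4, 1.2, 2.0])

    A, B = mean_sigma_transform(b_new, b_old)
    b_rescaled = A * b_new + B       # difficulties on the old scale
    # Discriminations would be rescaled as a / A.
    print(round(A, 3), round(B, 3), np.round(b_rescaled, 2))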
De Ayala, R. J. – 1992
One important and promising application of item response theory (IRT) is computerized adaptive testing (CAT). The implementation of a nominal response model-based CAT (NRCAT) was studied. Item pool characteristics for the NRCAT as well as the comparative performance of the NRCAT and a CAT based on the three-parameter logistic (3PL) model were…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
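In a nominal response model, every response category of an item has its own slope and intercept, and the category probabilities follow a multinomial logit, so an NRCAT can draw information from all categories rather than only from correct/incorrect scoring. A minimal sketch of the category probabilities under Bock's nominal response model (the parameter values below are illustrative, not from the study):

    import numpy as np

    def nominal_response_probs(theta, a, c):
        """Bock's nominal response model: P_k(theta) proportional to exp(a_k * theta + c_k)."""
        z = np.exp(a * theta + c)
        return z / z.sum()

    # One four-category item; the category with the largest slope dominates at high theta.
    a = np.array([0.0, 0.4, 0.9, 1.6])   # category slopes
    c = np.array([0.0, 0.5, 0.2, -0.8])  # category intercepts
    for theta in (-2.0, 0.0, 2.0):
        print(theta, np.round(nominal_response_probs(theta, a, c), 3))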
Rudner, Lawrence M. – 1978
Tailored testing provides the same information as group-administered standardized tests, but can do so using fewer items because the items administered are selected for the ability of the individual student. Thus, tailored testing offers several advantages over traditional methods. Because individual tailored tests are not timed, anxiety is…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
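Under the normal ogive model the probability of a correct response is a cumulative normal in ability, P_i(theta) = Phi(a_i * (theta - b_i)); fitting a 3PL to data generated this way is where the problems examined above arise. A minimal sketch of simulating normal ogive responses; the item and sample sizes follow one of the designs in the abstract, while parameter ranges and the seed are placeholders:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)

    def p_normal_ogive(theta, a, b):
        """Normal ogive model: P = Phi(a * (theta - b))."""
        return norm.cdf(a * (theta - b))

    n_items, n_examinees = 35, 2000
    a = rng.uniform(0.5, 2.0, n_items)
    b = rng.normal(0.0, 1.0, n_items)
    theta = rng.normal(0.0, 1.0, n_examinees)

    probs = p_normal_ogive(theta[:, None], a, b)
    responses = (rng.random(probs.shape) < probs).astype(int)
    print(responses.shape, responses.mean().round(3))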
McKinley, Robert L.; Reckase, Mark D. – 1981
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
Descriptors: Academic Ability, Adaptive Testing, Bayesian Statistics, Comparative Analysis
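The Bayesian procedure described above maintains a posterior distribution over ability, updating it after each response and choosing the next item to minimize the expected posterior variance; the maximum likelihood procedure instead maximizes the likelihood of the observed response string. A minimal sketch of one Bayesian update on a discrete ability grid under 2PL assumptions (the prior, items, and responses below are illustrative):

    import numpy as np

    def p_2pl(theta, a, b):
        return 1.0 / (1.0 + np.exp(-1.7 * a * (theta - b)))

    theta_grid = np.linspace(-4, 4, 161)
    posterior = np.exp(-0.5 * theta_grid ** 2)      # standard normal prior
    posterior /= posterior.sum()

    # Administer two illustrative items (a, b, response) and update the posterior.
    for a, b, u in [(1.2, 0.0, 1), (0.9, 0.8, 0)]:
        p = p_2pl(theta_grid, a, b)
        posterior *= p ** u * (1 - p) ** (1 - u)    # likelihood of the observed response
        posterior /= posterior.sum()

    mean = (theta_grid * posterior).sum()
    var = ((theta_grid - mean) ** 2 * posterior).sum()
    print(round(mean, 3), round(var, 3))            # posterior mean and variance of ability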
Wilcox, Rand R. – 1979
Mastery tests are analyzed in terms of the number of skills to be mastered and the number of items per skill, so that correct decisions of mastery or nonmastery are made with a desired probability. It is assumed that a random sample of skills will be selected for measurement, that each skill will be measured by the same number of…
Descriptors: Achievement Tests, Cutting Scores, Decision Making, Equivalency Tests
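The analysis above asks how many skills, and how many items per skill, are needed for mastery decisions to be correct with a desired probability. A minimal worked sketch: if each item on a skill is answered correctly with probability p and the skill is declared mastered when at least k of n items are correct, the per-skill decision probability is a binomial tail (the values below are illustrative, not the paper's):

    from math import comb

    def prob_declared_mastered(n_items, cutoff, p_correct):
        """P(at least `cutoff` of `n_items` correct) under a binomial model."""
        return sum(comb(n_items, r) * p_correct ** r * (1 - p_correct) ** (n_items - r)
                   for r in range(cutoff, n_items + 1))

    # A true master (p = 0.8) vs. a true nonmaster (p = 0.4), 5 items per skill, cutoff 4.
    print(round(prob_declared_mastered(5, 4, 0.8), 3))   # probability of a correct "mastery" decision
    print(round(prob_declared_mastered(5, 4, 0.4), 3))   # probability of a false "mastery" decision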
Cliff, Norman; And Others – 1977
TAILOR is a computer program that uses the implied orders concept as the basis for computerized adaptive testing. The basic characteristics of TAILOR, which does not involve pretesting, are reviewed here and two studies of it are reported. One is a Monte Carlo simulation based on the four-parameter Birnbaum model and the other uses a matrix of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Difficulty Level
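The implied orders idea behind TAILOR is that, once items can be ordered by difficulty from the responses seen so far, a correct answer to a harder item implies correct answers to the easier ones, and an incorrect answer to an easier item implies incorrect answers to the harder ones, so those items need not be administered. A minimal sketch of that inference, assuming a known item ordering (the ordering and responses are illustrative; the actual program estimates the order from the data without pretesting):

    # Items indexed from easiest ("i0") to hardest ("i4"); unlisted items are not yet administered.
    order = ["i0", "i1", "i2", "i3", "i4"]
    observed = {"i1": 1, "i3": 0}           # answered i1 correctly, i3 incorrectly

    def implied_responses(order, observed):
        """Fill in responses implied by each observed response:
        correct -> easier items correct; incorrect -> harder items incorrect."""
        implied = dict(observed)
        for item, resp in observed.items():
            idx = order.index(item)
            if resp == 1:
                for easier in order[:idx]:
                    implied.setdefault(easier, 1)
            else:
                for harder in order[idx + 1:]:
                    implied.setdefault(harder, 0)
        return implied

    print(implied_responses(order, observed))
    # {'i1': 1, 'i3': 0, 'i0': 1, 'i4': 0}  -> only i2 still needs to be administered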