Showing all 8 results
Peer reviewed
Sinharay, Sandip; Holland, Paul – ETS Research Report Series, 2006
It is a widely held belief that anchor tests should be miniature versions (i.e., minitests) of the tests being equated with respect to content and statistical characteristics. This paper examines the foundations for this belief. It examines the requirement of statistical representativeness of anchor tests that are content representative. The…
Descriptors: Test Items, Equated Scores, Evaluation Methods, Difficulty Level
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT-based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
PDF pending restoration
Green, Bert F. – 2002
Maximum likelihood and Bayesian estimates of proficiency, typically used in adaptive testing, use item weights that depend on test taker proficiency to estimate test taker proficiency. In this study, several methods were explored through computer simulation using fixed item weights, which depend mainly on the item's difficulty. The simpler scores…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
Peer reviewed
Zhang, Jinming – ETS Research Report Series, 2005
Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…
Descriptors: Statistical Bias, Maximum Likelihood Statistics, Computation, Ability
Linn, Bob; McLaughlin, Don; Jiang, Tao; Gallagher, Larry – American Institutes for Research, 2004
The purpose of this simulation was to assess the improvements in estimates of standard errors that could be expected if students participating in NAEP were pre-assigned to test booklets that were adapted to their level of performance based on their state assessment scores. Students in extreme quartiles would receive one regular NAEP block and…
Descriptors: Educational Improvement, Educational Assessment, Error of Measurement, Educational Testing
Samejima, Fumiko – 1984
In order to evaluate our methods and approaches for estimating the operating characteristics of discrete item responses, it is necessary to try other comparable methods on similar sets of data. LOGIST 5 was chosen for this reason and was applied to hypothetical test items, which follow the normal ogive model and were used frequently in…
Descriptors: Computer Simulation, Computer Software, Estimation (Mathematics), Item Analysis
Weiss, David J.; Suhadolnik, Debra – 1982
The present Monte Carlo simulation study was designed to examine the effects of multidimensionality during the administration of computerized adaptive testing (CAT). It was assumed that multidimensionality existed in the individuals to whom test items were being administered, i.e., that the correct or incorrect responses given by an individual…
Descriptors: Adaptive Testing, Computer Assisted Testing, Factor Structure, Latent Trait Theory
Rizavi, Saba; Way, Walter D.; Lu, Ying; Pitoniak, Mary; Steffen, Manfred – Online Submission, 2004
The purpose of this study was to use realistically simulated data to evaluate various CAT designs for use with the verbal reasoning measure of the Medical College Admissions Test (MCAT). Factors such as item pool depth, content constraints, and item formats often cause repeated adaptive administrations of an item at ability levels that are not…
Descriptors: Test Items, Test Bias, Item Banks, College Admission