Showing all 15 results
Peer reviewed
Ebru Dogruöz; Hülya Kelecioglu – International Journal of Assessment Tools in Education, 2024
In this research, multistage adaptive tests (MSTs) were compared with respect to sample size, panel pattern, and module length for top-down and bottom-up test assembly methods. Within the scope of the research, data from PISA 2015 were used, and simulation studies were conducted according to the parameters estimated from these data. Analysis results for…
Descriptors: Adaptive Testing, Test Construction, Foreign Countries, Achievement Tests
Peer reviewed
Kang, Hyeon-Ah; Zheng, Yi; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2020
With the widespread use of computers in modern assessment, online calibration has become increasingly popular as a way of replenishing an item pool. The present study discusses online calibration strategies for a joint model of responses and response times. The study proposes likelihood inference methods for item parameter estimation and evaluates…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Reaction Time
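The abstract does not spell out the joint model, but a common choice for pairing responses with response times is a hierarchical framework of the kind proposed by van der Linden: an IRT model for the response and a lognormal model for the response time. The display below is a hedged sketch of that combination; all symbols (discrimination $a_i$, difficulty $b_i$, time discrimination $\alpha_i$, time intensity $\beta_i$, speed $\tau_j$) are illustrative assumptions rather than the article's notation.

\[
P(U_{ij}=1 \mid \theta_j) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}},
\qquad
\ln T_{ij} \mid \tau_j \sim N\!\left(\beta_i - \tau_j,\; \alpha_i^{-2}\right).
\]

Online calibration of item $i$ would then maximize, over $(a_i, b_i, \alpha_i, \beta_i)$, the joint likelihood

\[
L_i = \prod_{j} P_i(\theta_j)^{u_{ij}} \left[1 - P_i(\theta_j)\right]^{1-u_{ij}} f\!\left(t_{ij};\, \alpha_i, \beta_i, \tau_j\right),
\]

with the examinee parameters $(\theta_j, \tau_j)$ taken from the operational adaptive test.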
Peer reviewed
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
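As a rough illustration of real-time posterior sampling, the Python sketch below runs a random-walk Metropolis sampler for the ability parameter only, with item parameters treated as known. The article's algorithm samples the joint posterior of all parameters and is optimized for speed, so this is a much-simplified, hypothetical example; the prior, step size, and item parameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def log_posterior(theta, u, a, b, c):
    """Log posterior of ability: Bernoulli likelihood times a standard-normal prior."""
    p = p_3pl(theta, a, b, c)
    loglik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    return loglik - 0.5 * theta**2          # assumed N(0, 1) prior on theta

def metropolis_theta(u, a, b, c, n_draws=2000, step=0.5):
    """Random-walk Metropolis sampler for the ability posterior."""
    theta = 0.0
    draws = np.empty(n_draws)
    for t in range(n_draws):
        prop = theta + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_posterior(prop, u, a, b, c) - log_posterior(theta, u, a, b, c):
            theta = prop
        draws[t] = theta
    return draws

# Hypothetical 10-item stage: invented item parameters and simulated responses.
a = rng.uniform(0.8, 2.0, 10); b = rng.normal(0, 1, 10); c = np.full(10, 0.2)
u = (rng.uniform(size=10) < p_3pl(0.5, a, b, c)).astype(int)
draws = metropolis_theta(u, a, b, c)
print(draws[500:].mean(), draws[500:].std())   # posterior mean as score, SD as its error
```

The posterior mean of the retained draws plays the role of the ability score and the posterior standard deviation the role of its error of measurement.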
Peer reviewed
Cappaert, Kevin J.; Wen, Yao; Chang, Yu-Feng – Measurement: Interdisciplinary Research and Perspectives, 2018
Events such as curriculum changes or practice effects can lead to item parameter drift (IPD) in computer adaptive testing (CAT). The current investigation introduced a point- and weight-adjusted D² method for IPD detection for use in a CAT environment when items are suspected of drifting across test administrations. Type I error and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Identification
OECD Publishing, 2013
The Programme for the International Assessment of Adult Competencies (PIAAC) has been planned as an ongoing program of assessment. The first cycle of the assessment has involved two "rounds." The first round, which is covered by this report, took place over the period of January 2008-October 2013. The main features of the first cycle of…
Descriptors: International Assessment, Adults, Skills, Test Construction
Peer reviewed
Rudner, Lawrence M.; Guo, Fanmin – Journal of Applied Testing Technology, 2011
This study investigates measurement decision theory (MDT) as an underlying model for computer adaptive testing when the goal is to classify examinees into one of a finite number of groups. The first analysis compares MDT with a popular item response theory model and finds little difference in terms of the percentage of correct classifications. The…
Descriptors: Adaptive Testing, Instructional Systems, Item Response Theory, Computer Assisted Testing
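Measurement decision theory classifies an examinee into one of a small number of groups by Bayes' theorem, using the per-group probability of a correct response on each item. The Python sketch below is a minimal illustration of that idea; the group definitions, item probabilities, and priors are invented for the example, and the sequential item selection used in an adaptive setting is omitted.

```python
import numpy as np

def mdt_classify(responses, p_correct, priors):
    """Classify an examinee into one of K groups via Bayes' theorem.

    responses : 0/1 vector of item scores
    p_correct : K x n_items matrix, P(correct on item i | group k), e.g. from pretest data
    priors    : length-K vector of base rates for the groups
    """
    responses = np.asarray(responses)
    like = np.prod(np.where(responses == 1, p_correct, 1.0 - p_correct), axis=1)
    post = priors * like
    post /= post.sum()
    return post.argmax(), post

# Hypothetical 3-group (non-master / partial / master), 5-item example.
p_correct = np.array([[0.2, 0.3, 0.25, 0.4, 0.3],
                      [0.5, 0.6, 0.55, 0.6, 0.5],
                      [0.8, 0.9, 0.85, 0.9, 0.8]])
priors = np.array([0.3, 0.4, 0.3])
group, post = mdt_classify([1, 1, 0, 1, 1], p_correct, priors)
print(group, np.round(post, 3))
```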
Evans, Josiah Jeremiah – ProQuest LLC, 2010
In measurement research, data simulations are a commonly used analytical technique. While simulation designs have many benefits, it is unclear if these artificially generated datasets are able to accurately capture real examinee item response behaviors. This potential lack of comparability may have important implications for administration of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Educational Testing, Admission (School)
Peer reviewed
Belov, Dmitry I.; Armstrong, Ronald D. – Applied Psychological Measurement, 2008
This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
Descriptors: Monte Carlo Methods, Adaptive Testing, Sampling, Item Response Theory
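A toy way to see the uniform-sampling idea is rejection sampling: draw item subsets at random from the pool and keep only those that satisfy the assembly constraints, so every feasible module is equally likely to be returned. The Python sketch below does this for a single module with one information constraint; the method in the article assembles full MST panels and paths under richer content constraints, so the pool, the constraint, and the parameters here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_info(theta, a, b):
    """2PL Fisher information of each item at theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def sample_module(a, b, size, theta0, info_lo, info_hi, max_tries=100_000):
    """Rejection sampling: draw random item subsets until one meets the constraints.

    Accepted modules are (approximately) a uniform draw from all feasible modules,
    which is the property the Monte Carlo assembly approach exploits.
    """
    n = len(a)
    for _ in range(max_tries):
        items = rng.choice(n, size=size, replace=False)
        info = fisher_info(theta0, a[items], b[items]).sum()
        if info_lo <= info <= info_hi:
            return np.sort(items)
    raise RuntimeError("no feasible module found; relax the constraints")

# Hypothetical 200-item pool; assemble a 10-item routing module targeted at theta = 0.
a = rng.uniform(0.7, 2.0, 200); b = rng.normal(0.0, 1.0, 200)
print(sample_module(a, b, size=10, theta0=0.0, info_lo=4.0, info_hi=5.0))
```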
Peer reviewed
Bradlow, Eric T. – Journal of Educational and Behavioral Statistics, 1996
The three-parameter logistic (3-PL) model is described and a derivation of the 3-PL observed information function is presented for a single binary response from one examinee with known item parameters. Formulas are presented for the probability of negative information and for the expected information (always nonnegative). (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
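For context, the standard 3-PL quantities involved are sketched below (notation assumed, not taken from the article). The item response function and the expected (Fisher) information are

\[
P(\theta) = c + \frac{1-c}{1 + e^{-a(\theta - b)}},
\qquad
I(\theta) = a^2 \,\frac{Q(\theta)}{P(\theta)} \left[\frac{P(\theta) - c}{1 - c}\right]^2,
\quad Q(\theta) = 1 - P(\theta),
\]

while the observed information for a single binary response $u$ is the negative second derivative of the log-likelihood,

\[
J(\theta) = -\frac{\partial^2}{\partial \theta^2}\Bigl[u \ln P(\theta) + (1-u)\ln Q(\theta)\Bigr],
\]

which, unlike $I(\theta)$, can be negative under the 3-PL; the article gives a formula for the probability of that event.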
Peer reviewed
Raiche, Gilles; Blais, Jean-Guy – Applied Psychological Measurement, 2006
Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods eliminate real situational constraints. However, these Monte Carlo methodologies are not currently supported by the available software programs, and when these programs are available, their…
Descriptors: Computer Assisted Instruction, Computer Software, Sampling, Adaptive Testing
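For readers who want to reproduce this kind of study, the Python sketch below simulates repeated adaptive administrations under the Rasch model and inspects the empirical sampling distribution of the proficiency estimate (bias and empirical standard error). The pool, the closest-difficulty selection rule, and the grid-search MLE are simplifying assumptions, not the authors' software.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(-4, 4, 321)          # grid for a simple maximum-likelihood search

def p_rasch(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def simulate_cat(theta_true, b_pool, test_len=20):
    """One adaptive administration: pick the unused item whose difficulty is closest
    to the current estimate, score it, and return the final grid-search MLE."""
    available = np.ones(len(b_pool), dtype=bool)
    used, u = [], []
    theta_hat = 0.0
    for _ in range(test_len):
        idx = np.where(available)[0]
        item = idx[np.argmin(np.abs(b_pool[idx] - theta_hat))]
        available[item] = False
        used.append(item)
        u.append(rng.uniform() < p_rasch(theta_true, b_pool[item]))
        p = p_rasch(grid[:, None], b_pool[used])
        loglik = np.sum(np.where(u, np.log(p), np.log(1 - p)), axis=1)
        theta_hat = grid[np.argmax(loglik)]
    return theta_hat

# Hypothetical 300-item Rasch pool; 500 replications at a true proficiency of 0.5.
b_pool = rng.normal(0.0, 1.2, 300)
estimates = np.array([simulate_cat(0.5, b_pool) for _ in range(500)])
print("bias:", estimates.mean() - 0.5, "empirical SE:", estimates.std())
```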
Blais, Jean-Guy; Raiche, Gilles – 2002
This paper examines some characteristics of the statistics associated with the sampling distribution of the proficiency level estimate when the Rasch model is used. These characteristics make it possible to judge what meaning can be given to the proficiency level estimate obtained in adaptive testing, and as a consequence, they can illustrate the…
Descriptors: Ability, Adaptive Testing, Error of Measurement, Estimation (Mathematics)
Zwick, Rebecca – 1995
This paper describes a study, now in progress, of new methods for representing the sampling variability of Mantel-Haenszel differential item functioning (DIF) results, based on the system for categorizing the severity of DIF that is now in place at the Educational Testing Service. The methods, which involve a Bayesian elaboration of procedures…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
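The Mantel-Haenszel DIF machinery behind the ETS severity categories can be summarized as follows (standard formulas, not specific to this paper). With examinees stratified by total score $k$, and $A_k$, $B_k$ ($C_k$, $D_k$) the counts of correct and incorrect responses in the reference (focal) group within a stratum of size $T_k$, the common odds ratio and its delta-scale transform are

\[
\hat\alpha_{MH} = \frac{\sum_k A_k D_k / T_k}{\sum_k B_k C_k / T_k},
\qquad
\text{MH D-DIF} = -2.35 \,\ln \hat\alpha_{MH}.
\]

Roughly, items with $|\text{MH D-DIF}| < 1$ (or a nonsignificant statistic) fall in ETS category A, items with $|\text{MH D-DIF}| \ge 1.5$ that are also significantly greater than 1 fall in category C, and the remainder in category B; the methods described in this paper put Bayesian uncertainty around that classification.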
Peer reviewed
Haladyna, Thomas M.; Roid, Gale H. – Journal of Educational Measurement, 1983
The present study showed that Rasch-based adaptive tests--when item domains were finite and specifiable--had greater precision in domain score estimation than test forms created by random sampling of items. Results were replicated across four data sources representing a variety of criterion-referenced, domain-based tests varying in length.…
Descriptors: Adaptive Testing, Criterion Referenced Tests, Error of Measurement, Estimation (Mathematics)
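One common way to connect a Rasch ability estimate to a domain score, when the domain consists of $N$ specifiable items, is sketched below; this is a generic formulation, not necessarily the exact estimator used in the study.

\[
P_i(\theta) = \frac{e^{\theta - b_i}}{1 + e^{\theta - b_i}},
\qquad
\hat\pi = \frac{1}{N} \sum_{i=1}^{N} P_i(\hat\theta),
\]

so the adaptive test needs only enough items to pin down $\hat\theta$, and the precision of the domain score estimate $\hat\pi$ follows from the precision of $\hat\theta$.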
Lord, Frederic M. – 1971
Some stochastic approximation procedures are considered in relation to the problem of choosing a sequence of test questions to accurately estimate a given examinee's standing on a psychological dimension. Illustrations are given evaluating certain procedures in a specific context. (Author/CK)
Descriptors: Academic Ability, Adaptive Testing, Computer Programs, Difficulty Level
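A classical stochastic approximation scheme of the kind considered here is the Robbins-Monro update: after the examinee answers item $n$ (scored $u_n \in \{0,1\}$), the difficulty of the next item is moved by a shrinking step toward the level at which the success probability equals a target $p^*$,

\[
b_{n+1} = b_n + d_n\,(u_n - p^*),
\qquad
d_n \propto 1/n,\quad \sum_n d_n = \infty,\ \sum_n d_n^2 < \infty,
\]

so $b_n$ converges to the difficulty at which the examinee answers correctly with probability $p^*$; with $p^* = 0.5$ this is the examinee's location under a Rasch-type model. Whether the procedures evaluated here are exactly this scheme is not stated in the abstract, so treat the display as an assumed illustration.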
Reckase, Mark D. – 1977
Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…
Descriptors: Achievement Tests, Adaptive Testing, Aptitude Tests, Comparative Analysis