Showing 1 to 15 of 56 results
Peer reviewed
Ravand, Hamdollah; Baghaei, Purya – International Journal of Testing, 2020
More than three decades after their introduction, diagnostic classification models (DCM) do not seem to have been implemented in educational systems for the purposes for which they were devised. Most DCM research is either methodological, aimed at model development and refinement, or retrofitting to existing nondiagnostic tests and, in the latter case, basically…
Descriptors: Classification, Models, Diagnostic Tests, Test Construction
Peer reviewed
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan – Educational and Psychological Measurement, 2012
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
Descriptors: Test Items, Selection, Test Construction, Item Response Theory
Peer reviewed
Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose – Applied Psychological Measurement, 2010
In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…
Descriptors: Test Items, Simulation, Adaptive Testing, Item Analysis
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai – 2001
It is widely believed that item selection methods using the maximum information approach (MI) can maintain high efficiency in trait estimation by repeatedly choosing highly discriminating (high-alpha) items. However, the consequence is that they lead to an extremely skewed item exposure distribution in which items with high alpha values become overly…
Descriptors: Item Banks, Selection, Test Construction, Test Items
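The maximum-information rule discussed in this entry can be sketched for the 2PL model, where item information at ability θ is I(θ) = a²·P(θ)·(1 − P(θ)). The pool, parameter values, and function names below are illustrative, not from the paper; the toy run shows high-a items being selected first, which is the exposure skew the authors describe.

```python
import math

def p_2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_max_info(pool, theta, administered):
    """Pick the unadministered item with maximum information at theta."""
    return max((i for i in range(len(pool)) if i not in administered),
               key=lambda i: fisher_info(theta, *pool[i]))

# Toy pool of (a, b) items; high-a items near theta dominate selection.
pool = [(0.5, 0.0), (1.0, 0.1), (2.0, -0.1), (2.5, 0.0)]
administered = set()
order = []
for _ in range(3):
    i = select_max_info(pool, theta=0.0, administered=administered)
    administered.add(i)
    order.append(i)
print(order)  # → [3, 2, 1]: the two high-a items are chosen first
```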
Veldkamp, Bernard P. – 2002
This paper discusses optimal test construction, which deals with the selection of items from a pool to construct a test that performs optimally with respect to the objective of the test and simultaneously meets all test specifications. Optimal test construction problems can be formulated as mathematical decision models. Algorithms and heuristics…
Descriptors: Algorithms, Item Banks, Selection, Test Construction
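The decision-model view above can be illustrated with a minimal greedy heuristic, a stand-in sketch rather than the mixed-integer programming formulations the paper treats: repeatedly add the item that most reduces the shortfall between the test information function and a target. All names, parameters, and the target values are hypothetical.

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def greedy_assemble(pool, theta_points, target, length):
    """Greedy heuristic: at each step add the item that most reduces the
    summed shortfall between the test information function (TIF) and the
    target, evaluated over a grid of theta points."""
    chosen, tif = [], [0.0] * len(theta_points)
    for _ in range(length):
        def shortfall_after(i):
            return sum(max(0.0, target[k] - (tif[k] + info_2pl(t, *pool[i])))
                       for k, t in enumerate(theta_points))
        i = min((j for j in range(len(pool)) if j not in chosen),
                key=shortfall_after)
        chosen.append(i)
        tif = [tif[k] + info_2pl(t, *pool[i]) for k, t in enumerate(theta_points)]
    return chosen, tif

# Toy pool of (a, b) items and a flat target information function.
pool = [(0.8, -1.0), (1.2, 0.0), (1.5, 1.0), (0.6, 0.5), (1.0, -0.5)]
chosen, tif = greedy_assemble(pool, [-1.0, 0.0, 1.0], [0.5, 0.5, 0.5], 3)
```

A real solver would instead minimize deviation from the target subject to the full set of content constraints; the greedy pass only conveys the objective.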
van der Linden, Wim J. – 1998
Six methods for assembling tests from a pool with an item-set structure are presented. All methods are computational and based on the technique of mixed integer programming. The methods are evaluated using such criteria as the feasibility of their linear programming problems and their expected solution times. The methods are illustrated for two…
Descriptors: Higher Education, Item Banks, Selection, Test Construction
Peer reviewed
Gierl, Mark J.; Henderson, Diane; Jodoin, Michael; Klinger, Don – Journal of Experimental Education, 2001
Examined the influence of item parameter estimation errors across three item selection methods using the two- and three-parameter logistic item response theory (IRT) model. Tests created with the maximum no target and maximum target item selection procedures consistently overestimated the test information function. Tests created using the theta…
Descriptors: Estimation (Mathematics), Item Response Theory, Selection, Test Construction
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Information based item selection methods in computerized adaptive tests (CATs) tend to choose the item that provides maximum information at an examinee's estimated trait level. As a result, these methods can yield extremely skewed item exposure distributions in which items with high "a" values may be overexposed, while those with low…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
Peer reviewed
Veldkamp, Bernard P. – Applied Psychological Measurement, 2002
Presents two mathematical programming approaches for the assembly of ability tests from item pools calibrated under a multidimensional item response theory model. Item selection is based on the Fisher information matrix. Illustrates the method through empirical examples for a two-dimensional mathematics item pool. (SLD)
Descriptors: Ability, Item Banks, Item Response Theory, Selection
Peer reviewed
Bradlow, Eric T.; Thomas, Neal – Journal of Educational and Behavioral Statistics, 1998
A set of conditions is presented for the validity of inference for Item Response Theory (IRT) models applied to data collected from examinations that allow students to choose a subset of items. Common low-dimensional IRT models estimated by standard methods do not resolve the difficult problems posed by choice-based data. (SLD)
Descriptors: Inferences, Item Response Theory, Models, Selection
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Item selection methods in computerized adaptive testing (CAT) can yield extremely skewed item exposure distribution in which items with high "a" values may be over-exposed while those with low "a" values may never be selected. H. Chang and Z. Ying (1999) proposed the a-stratified design (ASTR) that attempts to equalize item…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Test Construction
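A sketch of the a-stratified idea, assuming the common description of Chang and Ying's ASTR design: partition the pool into strata by ascending discrimination a, spend low-a strata early in the test, and within the current stratum pick the item whose difficulty b is closest to the provisional ability estimate. The pool and all names below are illustrative.

```python
def stratify_by_a(pool, n_strata):
    """Sort items by discrimination a and split into equal-sized strata
    (low-a strata are used early in the test, high-a strata late)."""
    order = sorted(range(len(pool)), key=lambda i: pool[i][0])
    size = len(order) // n_strata
    return [order[s * size:(s + 1) * size] for s in range(n_strata)]

def astr_select(pool, strata, stage, theta_hat, administered):
    """Within the current stratum, pick the item whose difficulty b is
    closest to the provisional ability estimate theta_hat."""
    candidates = [i for i in strata[stage] if i not in administered]
    return min(candidates, key=lambda i: abs(pool[i][1] - theta_hat))

# Toy pool of (a, b) items.
pool = [(0.4, 0.2), (0.5, -1.0), (1.6, 0.1), (2.0, 0.9),
        (0.9, 0.0), (1.1, -0.3)]
strata = stratify_by_a(pool, n_strata=3)
# First stage draws only from the low-a stratum, protecting high-a items.
first = astr_select(pool, strata, stage=0, theta_hat=0.0, administered=set())
```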
Peer reviewed
Chang, Hua-Hua; Zhang, Jinming – Psychometrika, 2002
Demonstrates mathematically that if every item in an item pool has an equal possibility to be selected from the pool in a fixed-length computerized adaptive test, the number of overlapping items among α randomly sampled examinees follows the hypergeometric distribution family for α ≥ 1. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
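The hypergeometric claim can be checked numerically for the simplest case of two examinees: if each test is L items drawn at random from an N-item pool, the overlap between the two tests is hypergeometric with mean L²/N. The pool and test sizes below are illustrative only.

```python
from math import comb

def overlap_pmf(N, L, x):
    """P(two random L-item tests from an N-item pool share exactly x items):
    hypergeometric with population N, L 'successes', sample size L."""
    return comb(L, x) * comb(N - L, L - x) / comb(N, L)

N, L = 100, 20
mean_overlap = sum(x * overlap_pmf(N, L, x) for x in range(L + 1))
# With purely random selection the expected overlap is L^2 / N items.
print(round(mean_overlap, 6), L * L / N)  # → 4.0 4.0
```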
Peer reviewed
Revuelta, Javier; Ponsoda, Vicente – Journal of Educational Measurement, 1998
Proposes two new methods for item-exposure control, the Progressive method and the Restricted Maximum Information method. Compares both methods with six other item-selection methods. Discusses advantages of the two new methods and the usefulness of combining them. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Selection
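One common formulation of the Progressive method (an assumption here, not quoted from the paper) scores each candidate item as a weighted sum of a random component and its information, with the information weight growing over the test so early picks are near-random and late picks near-maximum-information. All names and the exact weighting below are illustrative.

```python
import math
import random

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def progressive_select(pool, theta_hat, position, test_length,
                       administered, rng):
    """Progressive rule (one common formulation, assumed here):
    score = (1 - s) * random + s * information, where s grows from 0 to 1
    over the test."""
    s = position / (test_length - 1) if test_length > 1 else 1.0
    candidates = [i for i in range(len(pool)) if i not in administered]
    infos = {i: info_2pl(theta_hat, *pool[i]) for i in candidates}
    max_info = max(infos.values())
    def score(i):
        return (1 - s) * rng.uniform(0, max_info) + s * infos[i]
    return max(candidates, key=score)

rng = random.Random(0)
pool = [(0.5, 0.0), (1.0, 0.2), (1.8, -0.1), (2.2, 0.0)]
# At the final position s = 1, so the rule reduces to maximum information.
last = progressive_select(pool, 0.0, position=9, test_length=10,
                          administered=set(), rng=rng)
```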
Peer reviewed
Chen, Shu-Ying; Ankenmann, Robert D.; Chang, Hua-Hua – Applied Psychological Measurement, 2000
Compared five item selection rules with respect to the efficiency and precision of trait (theta) estimation at the early stages of computerized adaptive testing (CAT). The Fisher interval information, Fisher information with a posterior distribution, Kullback-Leibler information, and Kullback-Leibler information with a posterior distribution…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Selection
Peer reviewed
Meijer, Rob R.; Nering, Michael L. – Applied Psychological Measurement, 1999
Provides an overview of computerized adaptive testing (CAT) and introduces contributions to this special issue. CAT elements discussed include item selection, estimation of the latent trait, item exposure, measurement precision, and item-bank development. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection