Ravand, Hamdollah; Baghaei, Purya – International Journal of Testing, 2020
More than three decades after their introduction, diagnostic classification models (DCMs) do not seem to have been implemented in educational systems for the purposes for which they were devised. Most DCM research is either methodological, devoted to model development and refinement, or retrofits models to existing nondiagnostic tests and, in the latter case, basically…
Descriptors: Classification, Models, Diagnostic Tests, Test Construction
Kopf, Julia; Zeileis, Achim; Strobl, Carolin – Educational and Psychological Measurement, 2015
Differential item functioning (DIF) indicates the violation of the invariance assumption, for instance, in models based on item response theory (IRT). For item-wise DIF analysis using IRT, a common metric for the item parameters of the groups that are to be compared (e.g., for the reference and the focal group) is necessary. In the Rasch model,…
Descriptors: Test Items, Equated Scores, Test Bias, Item Response Theory
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying – Applied Psychological Measurement, 2013
Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
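The fixed-precision idea described above can be sketched in a few lines: items are selected for maximum Fisher information at the ability estimate, and testing stops once the standard error falls below a target, so test length varies while precision stays constant. This is a minimal illustration under the Rasch model with a hypothetical item pool, not the authors' implementation.

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def fixed_precision_cat(theta, pool, se_target=0.4):
    """Administer max-information items until the standard error
    1/sqrt(total information) drops to se_target or below."""
    remaining = list(pool)
    total_info = 0.0
    administered = []
    while remaining:
        best = max(remaining, key=lambda b: rasch_info(theta, b))
        remaining.remove(best)
        administered.append(best)
        total_info += rasch_info(theta, best)
        if 1.0 / math.sqrt(total_info) <= se_target:
            break
    return administered, 1.0 / math.sqrt(total_info)

# Hypothetical pool of 61 Rasch items with difficulties spread over [-3, 3].
pool = [-3.0 + 0.1 * i for i in range(61)]
items, se = fixed_precision_cat(0.0, pool)
```

Because each item contributes at most 0.25 information under the Rasch model, an examinee well targeted by the pool reaches the precision target well before exhausting it, which is the contrast with a fixed-length rule.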
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei – Journal of Educational Measurement, 2012
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
Descriptors: Item Response Theory, Test Items, Selection, Models
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
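The abstract does not name the six criteria compared, but information criteria such as AIC and BIC are standard members of this family. As an illustrative sketch (with made-up log-likelihoods, not the study's data), selection amounts to scoring each fitted model's maximized log-likelihood against its parameter count and taking the minimum:

```python
import math

def aic(log_lik, n_params):
    """Akaike information criterion: lower is better."""
    return -2.0 * log_lik + 2.0 * n_params

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: penalizes parameters more
    heavily than AIC whenever log(n_obs) > 2."""
    return -2.0 * log_lik + n_params * math.log(n_obs)

# Hypothetical fits: (model name, maximized log-likelihood, parameter count).
fits = [("1PL", -5210.4, 41), ("2PL", -5150.2, 80), ("3PL", -5148.9, 120)]
n_obs = 1000

best_aic = min(fits, key=lambda m: aic(m[1], m[2]))
best_bic = min(fits, key=lambda m: bic(m[1], m[2], n_obs))
```

With these numbers AIC and BIC disagree (AIC favors the 2PL, BIC the sparser 1PL), which is exactly why simulation studies of criterion performance, like the one above, are needed.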
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
Bradlow, Eric T.; Thomas, Neal – Journal of Educational and Behavioral Statistics, 1998
A set of conditions is presented for the validity of inference for Item Response Theory (IRT) models applied to data collected from examinations that allow students to choose a subset of items. Common low-dimensional IRT models estimated by standard methods do not resolve the difficult problems posed by choice-based data. (SLD)
Descriptors: Inferences, Item Response Theory, Models, Selection
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – 1998
An item-selection algorithm to neutralize the differential effects of time limits on scores on computerized adaptive tests is proposed. The method is based on a statistical model for the response-time distributions of the examinees on items in the pool that is updated each time a new item has been administered. Predictions from the model are used…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Foreign Countries
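The abstract is cut off before saying how the predictions are used; one plausible sketch (an assumption for illustration, not necessarily the authors' rule) is to select, among informative items, one whose predicted response time fits the examinee's remaining time budget. Under a simple lognormal response-time model:

```python
import math

def expected_time(beta, tau, sigma=0.4):
    """Expected response time when log T ~ Normal(beta - tau, sigma^2),
    i.e. E[T] = exp(beta - tau + sigma^2 / 2); beta is the item's
    time intensity, tau the examinee's speed."""
    return math.exp(beta - tau + 0.5 * sigma ** 2)

def pick_item(pool, tau, time_left, items_left):
    """From (information, beta) pairs, pick the most informative item
    whose predicted time fits the per-item share of the time budget."""
    budget = time_left / items_left
    feasible = [it for it in pool if expected_time(it[1], tau) <= budget]
    candidates = feasible or pool  # fall back if nothing fits the budget
    return max(candidates, key=lambda it: it[0])

# Hypothetical pool: (Fisher information, time intensity in log-seconds).
pool = [(0.25, 4.6), (0.22, 3.9), (0.18, 3.5)]
choice = pick_item(pool, tau=0.0, time_left=500.0, items_left=10)
```

Here the most informative item is predicted to take too long for the remaining budget, so a slightly less informative but faster item is chosen, neutralizing the speededness effect the entry describes.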
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
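The result can be illustrated numerically with the standard three-parameter logistic (3PL) information function: when guessing is possible and the item is harder than the examinee is able (theta < b), information peaks at a finite discrimination value instead of growing without bound in a. A sketch under those assumptions:

```python
import math

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta, with
    discrimination a, difficulty b, and guessing parameter c."""
    p = c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
    return a * a * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

# Guessing parameter 0.2; examinee one logit below the item difficulty.
theta, b, c = 0.0, 1.0, 0.2
grid = [0.1 * k for k in range(1, 61)]  # discrimination a from 0.1 to 6.0
best_a = max(grid, key=lambda a: info_3pl(theta, a, b, c))
# best_a lands in the interior of the grid: past it, higher discrimination
# drives the response probability toward the guessing floor and the
# information back down.
```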
van der Linden, Wim J., Ed. – 1987
Four discussions of test construction based on item response theory (IRT) are presented. The first discussion, "Test Design as Model Building in Mathematical Programming" (T. J. J. M. Theunissen), presents test design as a decision process under certainty. A natural way of modeling this process leads to mathematical programming. General…
Descriptors: Algorithms, Computer Assisted Testing, Decision Making, Foreign Countries
Wainer, Howard; And Others – 1991
When an examination consists, in whole or in part, of constructed response items, it is a common practice to allow the examinee to choose among a variety of questions. This procedure is usually adopted so that the limited number of items that can be completed in the allotted time does not unfairly affect the examinee. This results in the de facto…
Descriptors: Adaptive Testing, Chemistry, Comparative Analysis, Computer Assisted Testing
Fan, Xitao; And Others – 1994
The hypothesis that faulty classical psychometric and sampling procedures in test construction could generate systematic bias against ethnic groups with smaller representation in the test construction sample was studied empirically. Two test construction models were developed: one with differential representation of ethnic groups (White, African…
Descriptors: Ethnic Groups, Genetics, High School Students, High Schools
Pine, Steven M.; Weiss, David J. – 1976
This report examines how selection fairness is influenced by the item characteristics of a selection instrument in terms of its distribution of item difficulties, level of item discrimination, and degree of item bias. Computer simulation was used in the administration of conventional ability tests to a hypothetical target population consisting of…
Descriptors: Aptitude Tests, Bias, Computer Programs, Culture Fair Tests