Wang, Chun; Chang, Hua-Hua; Boughton, Keith A. – Applied Psychological Measurement, 2013
Multidimensional computerized adaptive testing (MCAT) can provide a vector of ability estimates for each examinee, which could be used to build a more informative profile of an examinee's performance. The current literature on MCAT focuses on fixed-length tests, which can generate less accurate results for those examinees whose…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Item Banks
Choi, Seung W.; Podrabsky, Tracy; McKinney, Natalie – Applied Psychological Measurement, 2012
Computerized adaptive testing (CAT) enables efficient and flexible measurement of latent constructs. The majority of educational and cognitive measurement constructs are based on dichotomous item response theory (IRT) models. An integral part of developing various components of a CAT system is conducting simulations using both known and empirical…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computer Software, Item Response Theory
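The Choi, Podrabsky, and McKinney record describes simulating a CAT system built on dichotomous IRT models. A minimal sketch of one simulated CAT under the two-parameter logistic (2PL) model — maximum-information item selection plus an EAP ability update — with an invented item bank (all parameters here are illustrative, not from the article):

```python
import math
import random

# 2PL item response function: probability of a correct response.
def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Fisher information of a 2PL item at ability theta: I = a^2 * P * (1 - P).
def item_info(theta, a, b):
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# EAP ability estimate over a coarse quadrature grid with a N(0,1) prior.
def eap(responses, items):
    grid = [g / 10.0 for g in range(-40, 41)]
    posts = []
    for g in grid:
        like = math.exp(-0.5 * g * g)  # standard normal prior (unnormalized)
        for u, (a, b) in zip(responses, items):
            p = p_correct(g, a, b)
            like *= p if u == 1 else (1.0 - p)
        posts.append(like)
    total = sum(posts)
    return sum(g * w for g, w in zip(grid, posts)) / total

# One simulated CAT: pick the most informative unused item, score it, update theta.
def run_cat(bank, true_theta, test_length, rng):
    theta, used, responses, items = 0.0, set(), [], []
    for _ in range(test_length):
        j = max((i for i in range(len(bank)) if i not in used),
                key=lambda i: item_info(theta, *bank[i]))
        used.add(j)
        a, b = bank[j]
        u = 1 if rng.random() < p_correct(true_theta, a, b) else 0
        responses.append(u)
        items.append((a, b))
        theta = eap(responses, items)
    return theta

rng = random.Random(7)
bank = [(rng.uniform(0.8, 2.0), rng.uniform(-2.5, 2.5)) for _ in range(60)]
estimate = run_cat(bank, true_theta=1.0, test_length=20, rng=rng)
```

Running many such replications against known ("true") abilities is the usual way to compare selection methods and score recovery before fielding an operational CAT.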
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G. – Applied Psychological Measurement, 2013
Item response theory parameters have to be estimated, and the estimation process leaves uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Descriptors: Test Construction, Test Items, Item Banks, Automation
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien – Applied Psychological Measurement, 2013
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Bayesian Statistics
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
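The Yao record compares MCAT stopping rules, one of them based on the standard error of the ability estimate. A sketch of such a rule for a unidimensional 2PL test, using the asymptotic relation SE = 1/sqrt(total information); the SE target, item parameters, and maximum length are illustrative choices, not the article's:

```python
import math

# 2PL probability and Fisher information.
def p2(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    p = p2(theta, a, b)
    return a * a * p * (1.0 - p)

# Standard-error stopping rule: halt once the asymptotic SE of the ability
# estimate, 1/sqrt(total test information), falls below a target, or when a
# maximum test length is reached.
def should_stop(theta, administered, se_target=0.3, max_items=30):
    total_info = sum(info(theta, a, b) for a, b in administered)
    if total_info == 0.0:
        return len(administered) >= max_items
    se = 1.0 / math.sqrt(total_info)
    return se <= se_target or len(administered) >= max_items
```

With well-targeted items (a = 1.5, b = theta), each item contributes 0.5625 units of information, so the rule keeps testing at 10 items (SE ≈ 0.42) but stops by 20 items (SE ≈ 0.30).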
Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung – Applied Psychological Measurement, 2012
In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on these kinds of models were implemented, and their performance under a variety of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Simulation
Doebler, Anna – Applied Psychological Measurement, 2012
It is shown that deviations of estimated from true values of item difficulty parameters, caused for example by item calibration errors, the neglect of randomness of item difficulty parameters, testlet effects, or rule-based item generation, can lead to systematic bias in point estimation of person parameters in the context of adaptive testing.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computation, Item Response Theory
Finkelman, Matthew D.; Smits, Niels; Kim, Wonsuk; Riley, Barth – Applied Psychological Measurement, 2012
The Center for Epidemiologic Studies-Depression (CES-D) scale is a well-known self-report instrument that is used to measure depressive symptomatology. Respondents who take the full-length version of the CES-D are administered a total of 20 items. This article investigates the use of curtailment and stochastic curtailment (SC), two sequential…
Descriptors: Measures (Individuals), Depression (Psychology), Test Length, Computer Assisted Testing
Green, Bert F. – Applied Psychological Measurement, 2011
This article refutes a recent claim that computer-based tests produce biased scores for very proficient test takers who make mistakes on one or two initial items and that the "bias" can be reduced by using a four-parameter IRT model. Because the same effect occurs with pattern scores on nonadaptive tests, the effect results from IRT scoring, not…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Bias, Item Response Theory
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2012
This article extends the work by Armstrong and Shi on CUmulative SUM (CUSUM) person-fit methodology. The authors present new theoretical considerations concerning the use of CUSUM person-fit statistics based on likelihood ratios for the purpose of detecting cheating and random guessing by individual test takers. According to the Neyman-Pearson…
Descriptors: Cheating, Individual Testing, Adaptive Testing, Statistics
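The Tendeiro and Meijer record concerns CUSUM person-fit statistics built from likelihood ratios. A sketch of the general idea, not the article's specific statistic: accumulate per-item log-likelihood ratios between the fitted model and an aberrant alternative (here, blind guessing with a constant success probability of 0.25, a stand-in choice) in a one-sided CUSUM, and flag the examinee when the sum crosses a threshold:

```python
import math

# 2PL item response function (null model for a fitting examinee).
def p2(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# One-sided CUSUM of per-item log-likelihood ratios, alternative = guessing.
# Returns the final CUSUM value and the first item index at which the
# statistic crossed the threshold (None if it never did).
def cusum_guessing(responses, items, theta, threshold=3.0):
    c, flagged_at = 0.0, None
    for t, (u, (a, b)) in enumerate(zip(responses, items), start=1):
        p_null = p2(theta, a, b)       # model-implied success probability
        p_alt = 0.25                   # assumed guessing probability
        lr = (math.log(p_alt / p_null) if u == 1
              else math.log((1 - p_alt) / (1 - p_null)))
        c = max(0.0, c + lr)           # CUSUM resets at zero, drifts up under H1
        if flagged_at is None and c >= threshold:
            flagged_at = t
    return c, flagged_at
```

A high-ability examinee (theta = 2) who misses easy items is flagged within a couple of responses, while model-consistent responses keep the statistic pinned at zero; threshold choice would be calibrated to a target false-alarm rate, as in the Neyman-Pearson framing the abstract mentions.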
Yen, Yung-Chin; Ho, Rong-Guey; Laio, Wen-Wei; Chen, Li-Ju; Kuo, Ching-Chin – Applied Psychological Measurement, 2012
In a selected response test, aberrant responses such as careless errors and lucky guesses might cause error in ability estimation because these responses do not actually reflect the knowledge that examinees possess. In a computerized adaptive test (CAT), these aberrant responses could further cause serious estimation error due to dynamic item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Response Style (Tests)
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K. – Applied Psychological Measurement, 2010
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
Descriptors: Simulation, Adaptive Testing, Item Analysis, Item Response Theory
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT has the capability to use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
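The Finkelman record describes stochastic curtailment in sequential mastery testing. A sketch of the idea under a simple number-correct rule (the scoring model, per-item success probability, and cutoffs are illustrative assumptions, not the article's procedure): stop early not only when the pass/fail decision is already forced, but also when the probability of the undecided outcome flipping is below a small tolerance gamma:

```python
import math

# P(X >= k) for X ~ Binomial(n, p): chance of getting at least k of the
# remaining n items correct.
def binom_tail(n, p, k):
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Number-correct sequential mastery decision with stochastic curtailment.
# Deterministic curtailment: stop when the decision is already forced.
# Stochastic curtailment: also stop when the other outcome is too unlikely,
# modeling the remaining items as Bernoulli(p_success) trials.
def sc_decision(correct, answered, max_items, pass_score, p_success, gamma=0.05):
    remaining = max_items - answered
    needed = pass_score - correct
    if needed <= 0:
        return "pass"                  # cutoff already reached
    if needed > remaining:
        return "fail"                  # cutoff out of reach
    p_pass = binom_tail(remaining, p_success, needed)
    if p_pass <= gamma:
        return "fail"                  # SC: passing is too unlikely to wait for
    if 1.0 - p_pass <= gamma:
        return "pass"                  # SC: failing is too unlikely to wait for
    return "continue"
```

For example, with a cutoff of 12 correct out of 20 and p_success = 0.5, an examinee with 5 correct after 12 items still has the cutoff mathematically in reach, but passing requires 7 of the last 8 items (probability ≈ 0.035), so stochastic curtailment fails them early.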
Choi, Seung W.; Swartz, Richard J. – Applied Psychological Measurement, 2009
Item selection is a core component in computerized adaptive testing (CAT). Several studies have evaluated new and classical selection methods; however, the few that have applied such methods to the use of polytomous items have reported conflicting results. To clarify these discrepancies and further investigate selection method properties, six…
Descriptors: Adaptive Testing, Item Analysis, Comparative Analysis, Test Items
Rulison, Kelly L.; Loken, Eric – Applied Psychological Measurement, 2009
A difficult result to interpret in Computerized Adaptive Tests (CATs) occurs when an ability estimate initially drops and then ascends continuously until the test ends, suggesting that the true ability may be higher than implied by the final estimate. This study explains why this asymmetry occurs and shows that early mistakes by high-ability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Academic Ability