Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 4 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 7 |
Descriptor
| Adaptive Testing | 21 |
| Computer Assisted Testing | 21 |
| Sample Size | 21 |
| Simulation | 13 |
| Test Items | 10 |
| Item Response Theory | 9 |
| Item Banks | 8 |
| Estimation (Mathematics) | 5 |
| Ability | 4 |
| Correlation | 4 |
| Statistical Distributions | 4 |
Author
| Chen, Ping | 2 |
| Dodd, Barbara G. | 2 |
| Ito, Kyoko | 2 |
| Nandakumar, Ratna | 2 |
| Roussos, Louis | 2 |
| Sykes, Robert C. | 2 |
| Ansley, Timothy N. | 1 |
| Ban, Jae-Chun | 1 |
| Chang, Shun-Wen | 1 |
| Chen, Shu-Ying | 1 |
| Chuah, Siang Chee | 1 |
Publication Type
| Reports - Evaluative | 12 |
| Journal Articles | 10 |
| Speeches/Meeting Papers | 9 |
| Reports - Research | 8 |
| Dissertations/Theses -… | 1 |
Audience
| Practitioners | 1 |
Location
| Taiwan | 1 |
Assessments and Surveys
| Law School Admission Test | 1 |
Ersen, Rabia Karatoprak; Lee, Won-Chan – Journal of Educational Measurement, 2023
The purpose of this study was to compare calibration and linking methods for placing pretest item parameter estimates on the item pool scale in a 1-3 computerized multistage adaptive testing design in terms of item parameter recovery. Two models were used: embedded-section, in which pretest items were administered within a separate module, and…
Descriptors: Pretesting, Test Items, Computer Assisted Testing, Adaptive Testing
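As background for readers outside psychometrics (and not the specific calibration or linking procedures compared in this study), the sketch below shows mean/sigma linking, one standard way to place separately estimated item parameters on an existing pool scale; the anchor-item values are hypothetical.

```python
import numpy as np

def mean_sigma_linking(b_new, b_ref):
    """Estimate mean/sigma linking constants A, B from the difficulty
    estimates of anchor items calibrated on the new scale (b_new) and
    on the reference/pool scale (b_ref), so that b_pool = A*b_new + B."""
    b_new, b_ref = np.asarray(b_new, float), np.asarray(b_ref, float)
    A = b_ref.std(ddof=1) / b_new.std(ddof=1)
    B = b_ref.mean() - A * b_new.mean()
    return A, B

def to_pool_scale(a, b, A, B):
    """Place 2PL pretest item parameters on the pool scale."""
    return np.asarray(a, float) / A, A * np.asarray(b, float) + B

# Hypothetical anchor-item difficulties estimated on the two scales
A, B = mean_sigma_linking(b_new=[-0.8, 0.1, 0.9], b_ref=[-0.5, 0.4, 1.3])
a_pool, b_pool = to_pool_scale(a=[1.2, 0.9], b=[0.0, 1.1], A=A, B=B)
```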
Yuan, Lu; Huang, Yingshi; Li, Shuhang; Chen, Ping – Journal of Educational Measurement, 2023
Online calibration is a key technology for item calibration in computerized adaptive testing (CAT) and has been widely used in various forms of CAT, including unidimensional CAT, multidimensional CAT (MCAT), CAT with polytomously scored items, and cognitive diagnostic CAT. However, as multidimensional and polytomous assessment data become more…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computation, Test Items
Yu Wang – ProQuest LLC, 2024
The multiple-choice (MC) item format has been widely used in educational assessments across diverse content domains. MC items purportedly allow for collecting richer diagnostic information. The effectiveness and economy of administering MC items may have further contributed to their popularity not just in educational assessment. The MC item format…
Descriptors: Multiple Choice Tests, Cognitive Tests, Cognitive Measurement, Educational Diagnosis
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of a correct response is p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
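For orientation only, and not the article's closed formula: under the Rasch model, medium difficulty is exactly the point where an item is most informative, which ties the standard error of ability directly to test length L.

```latex
% Rasch model: a correct response has probability 1/2 exactly when
% ability theta equals item difficulty b (medium difficulty).
P(X_{ij}=1 \mid \theta_i, b_j) = \frac{1}{1 + e^{-(\theta_i - b_j)}},
\qquad P = \tfrac{1}{2} \iff \theta_i = b_j .

% Item information peaks there, so for a test of L medium-difficulty items
I_j(\theta) = P(1-P) \le \tfrac{1}{4},
\qquad
\mathrm{SE}(\hat{\theta}) \approx \frac{1}{\sqrt{\sum_{j=1}^{L} I_j(\theta)}}
  \approx \frac{2}{\sqrt{L}} \quad \text{at } P = \tfrac{1}{2}.
```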
Chen, Ping – Journal of Educational and Behavioral Statistics, 2017
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Descriptors: Test Items, Item Response Theory, Test Construction, Adaptive Testing
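A minimal sketch of the idea behind OEM-style online calibration, reduced to the unidimensional 2PL case: the operational examinees' ability estimates are treated as fixed and the new item's parameters are fit by maximum likelihood. The actual M-OEM and M-MEM methods are multidimensional and iterate EM cycles; the function below is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_new_item_2pl(theta_hat, responses):
    """Fit a new item's 2PL parameters (a, b) by maximum likelihood,
    treating the operational ability estimates theta_hat as known."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    responses = np.asarray(responses, dtype=float)

    def neg_log_lik(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
        p = np.clip(p, 1e-9, 1 - 1e-9)  # numerical safety
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

    result = minimize(neg_log_lik, x0=[1.0, 0.0],
                      bounds=[(0.2, 3.0), (-4.0, 4.0)], method="L-BFGS-B")
    return result.x  # (a_hat, b_hat)

# Example with simulated responses from examinees who saw the new item
rng = np.random.default_rng(1)
theta = rng.normal(size=1000)
y = rng.binomial(1, 1 / (1 + np.exp(-1.3 * (theta - 0.5))))
a_hat, b_hat = calibrate_new_item_2pl(theta, y)
```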
Sahin, Alper; Weiss, David J. – Educational Sciences: Theory and Practice, 2015
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Sample Size, Item Banks
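The kind of simulation such studies rest on can be sketched briefly: generating dichotomous responses from the three-parameter logistic (3PL) model for a calibration sample of a given size. The parameter distributions below are assumptions for illustration, not those used in the study.

```python
import numpy as np

def simulate_3pl(a, b, c, theta, rng=None):
    """Simulate 0/1 responses under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    rng = rng or np.random.default_rng()
    a, b, c = (np.asarray(x, float) for x in (a, b, c))
    theta = np.asarray(theta, float)
    p = c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))
    return rng.binomial(1, p)  # examinees x items response matrix

rng = np.random.default_rng(0)
n_items, n_examinees = 500, 150          # e.g., the smallest calibration sample
a = rng.lognormal(0.0, 0.3, n_items)     # assumed discrimination distribution
b = rng.normal(0.0, 1.0, n_items)        # assumed difficulty distribution
c = rng.uniform(0.1, 0.25, n_items)      # assumed pseudo-guessing range
data = simulate_3pl(a, b, c, rng.normal(size=n_examinees), rng)
```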
Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min – Educational Technology & Society, 2012
In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…
Descriptors: Adaptive Testing, Test Items, Computer Assisted Testing, Mathematics
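The four algorithms compared in the study are not named in this excerpt; as a point of reference, the sketch below shows the standard maximum-information selection rule for a 2PL item bank, which most CAT designs build on.

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unadministered item with maximum information at the
    current ability estimate (maximum-information selection)."""
    info = item_information_2pl(theta_hat, np.asarray(a, float), np.asarray(b, float))
    info[list(administered)] = -np.inf   # block items already given
    return int(np.argmax(info))

# Example: choose the third item after two have been administered
a = [1.0, 1.4, 0.8, 1.2]
b = [-1.0, 0.0, 0.5, 1.5]
next_item = select_next_item(theta_hat=0.3, a=a, b=b, administered={0, 2})
```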
Performance of Item Exposure Control Methods in Computerized Adaptive Testing: Further Explorations.
Chang, Shun-Wen; Ansley, Timothy N.; Lin, Sieh-Hwa – 2000
This study examined the effectiveness of the Sympson and Hetter conditional procedure (SHC), a modification of the Sympson and Hetter (1985) algorithm, in controlling the exposure rates of items in a computerized adaptive testing (CAT) environment. The properties of the procedure were compared with those of the Davey and Parshall (1995) and the…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Banks
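For orientation, a minimal sketch of the basic (unconditional) Sympson and Hetter administration step: a selected item is actually administered with probability equal to its exposure-control parameter K, and the K values are obtained beforehand through iterative simulations (not shown here). The conditional variant examined in the study additionally applies this control conditional on ability level.

```python
import numpy as np

def sympson_hetter_administer(ranked_items, K, rng=None):
    """Basic Sympson-Hetter step: walk down the items ranked by
    information and administer item i with probability K[i]; items
    that fail the draw are skipped for this examinee."""
    rng = rng or np.random.default_rng()
    for i in ranked_items:
        if rng.random() < K[i]:
            return i
    return ranked_items[-1]  # fallback: administer the last candidate

# Example: item 7 is most informative but tightly controlled (K = 0.2)
K = {7: 0.2, 3: 0.9, 12: 1.0}
chosen = sympson_hetter_administer([7, 3, 12], K, rng=np.random.default_rng(42))
```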
Dodd, Barbara G.; And Others – Applied Psychological Measurement, 1989
General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; stepsize used along the trait continuum until maximum likelihood estimation could be calculated; and stopping rule…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks
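The graded response model these guidelines build on is Samejima's; its standard form, for an item j with discrimination a_j and ordered boundary parameters b_jk, is stated below (this is the general model, not the specific guidelines derived in the article).

```latex
% Samejima's graded response model for item j with ordered categories
% k = 0, 1, ..., m: boundary (cumulative) probabilities ...
P^{*}_{jk}(\theta) = \frac{1}{1 + e^{-a_j(\theta - b_{jk})}},
\qquad P^{*}_{j0}(\theta) \equiv 1, \quad P^{*}_{j,m+1}(\theta) \equiv 0,

% ... and the probability of a response in category k
P_{jk}(\theta) = P^{*}_{jk}(\theta) - P^{*}_{j,k+1}(\theta).
```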
Roussos, Louis; Nandakumar, Ratna; Cwikla, Julie – 2000
CATSIB is a differential item functioning (DIF) assessment methodology for computerized adaptive test (CAT) data. Kernel smoothing (KS) is a technique for nonparametric estimation of item response functions. In this study an attempt has been made to develop a more efficient DIF procedure for CAT data, KS-CATSIB, by combining CATSIB with kernel…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Bias, Item Response Theory
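Kernel smoothing here refers to nonparametric estimation of an item response function; a minimal Nadaraya-Watson sketch with a Gaussian kernel is shown below. The bandwidth and the way KS-CATSIB combines this estimate with CATSIB are not taken from the paper.

```python
import numpy as np

def kernel_smoothed_irf(theta_obs, responses, theta_grid, bandwidth=0.3):
    """Nonparametric (Nadaraya-Watson) estimate of an item response
    function: a Gaussian-kernel weighted average of 0/1 responses."""
    theta_obs = np.asarray(theta_obs, dtype=float)
    responses = np.asarray(responses, dtype=float)
    z = (np.asarray(theta_grid, float)[:, None] - theta_obs[None, :]) / bandwidth
    w = np.exp(-0.5 * z**2)                        # Gaussian kernel weights
    return (w * responses).sum(axis=1) / w.sum(axis=1)

# Example: estimate P(correct | theta) on a grid from simulated data
rng = np.random.default_rng(3)
theta = rng.normal(size=2000)
y = rng.binomial(1, 1 / (1 + np.exp(-(theta - 0.2))))
irf_hat = kernel_smoothed_irf(theta, y, theta_grid=np.linspace(-3, 3, 61))
```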
Patsula, Liane N.; Pashley, Peter J. – 1997
Many large-scale testing programs routinely pretest new items alongside operational (or scored) items to determine their empirical characteristics. If these pretest items pass certain statistical criteria, they are placed into an operational item pool; otherwise they are edited and re-pretested or simply discarded. In these situations, reliable…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Banks
Koch, William R.; Dodd, Barbara G. – Educational and Psychological Measurement, 1995
Basic procedures for performing computerized adaptive testing based on the successive intervals (SI) Rasch model were evaluated. The SI model was applied to simulated and real attitude data sets. Item pools as small as 30 items performed well, and the model appeared practical for Likert-type data. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
Nandakumar, Ratna; Roussos, Louis – 1997
This paper investigates the performance of CATSIB (a modified version of the SIBTEST computer program) to assess differential item functioning (DIF) in the context of computerized adaptive testing (CAT). One of the distinguishing features of CATSIB is its theoretically built-in regression correction to control for the Type I error rates when the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Bias, Power (Statistics)
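For context, the SIBTEST family that CATSIB builds on measures DIF as a weighted difference in expected scores on the studied item between reference (R) and focal (F) examinees matched on ability; in simplified form it can be written as below. CATSIB's regression correction replaces the raw conditional means with regression-adjusted ones to offset measurement error in the matching variable; the exact form is given in the cited work.

```latex
% Simplified SIBTEST-type DIF index (without CATSIB's regression correction):
% k indexes strata of the matching ability estimate, p_k is the proportion
% of examinees in stratum k, and the Y-bars are mean studied-item scores.
\hat{\beta} = \sum_{k} \hat{p}_k \,\bigl(\bar{Y}_{Rk} - \bar{Y}_{Fk}\bigr),
\qquad
z = \frac{\hat{\beta}}{\widehat{\mathrm{SE}}(\hat{\beta})}.
```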
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard – Applied Measurement in Education, 2006
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
Descriptors: Test Length, Sample Size, Adaptive Testing, Item Response Theory
Zwick, Rebecca – 1995
This paper describes a study, now in progress, of new methods for representing the sampling variability of Mantel-Haenszel differential item functioning (DIF) results, based on the system for categorizing the severity of DIF that is now in place at the Educational Testing Service. The methods, which involve a Bayesian elaboration of procedures…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
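The Mantel-Haenszel statistics behind the ETS severity categories are standard and can be stated compactly: the common odds ratio across matched score strata is converted to the delta scale, and categories A/B/C are then assigned roughly according to whether |MH D-DIF| is below 1, between 1 and 1.5, or at least 1.5, with significance tests attached.

```latex
% Mantel-Haenszel common odds ratio across matched score strata k
% (R = reference group, F = focal group; subscripts 1/0 = right/wrong;
% N_k = stratum size), and the ETS delta-scale effect size:
\hat{\alpha}_{\mathrm{MH}}
  = \frac{\sum_k R_{1k} F_{0k} / N_k}{\sum_k R_{0k} F_{1k} / N_k},
\qquad
\text{MH D-DIF} = -2.35 \,\ln \hat{\alpha}_{\mathrm{MH}}.
```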
