Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Test security has often been a problem in computerized adaptive testing (CAT) because conventional item selection overexposes highly discriminating items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which administers less discriminating items in the earlier stages of testing, has been shown to be very…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
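The stratified selection idea in the abstract above can be sketched in a few lines. This is a minimal sketch, assuming a 2PL item model; the random pool, the field names `a`/`b`, and the four-way split are illustrative choices, not the authors' code.

```python
import math
import random

def a_stratified_select(pool, theta, stage, n_strata=4):
    """Restrict selection to the stratum for the current stage: items
    are ranked by ascending discrimination a, so early stages draw
    low-a items and later stages high-a items. Within the stratum,
    administer the item whose difficulty b is closest to theta."""
    ranked = sorted(pool, key=lambda it: it["a"])
    size = math.ceil(len(ranked) / n_strata)
    stratum = ranked[stage * size:(stage + 1) * size]
    return min(stratum, key=lambda it: abs(it["b"] - theta))

# Hypothetical 40-item pool with 2PL parameters (a, b).
random.seed(1)
pool = [{"a": random.uniform(0.3, 2.0), "b": random.uniform(-2.0, 2.0)}
        for _ in range(40)]
first_item = a_stratified_select(pool, theta=0.0, stage=0)
```

Because the first stage sees only the lowest-discrimination quarter of the pool, high-a items are held back for later stages, which is the exposure-control mechanism the abstract describes.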
Hau, Kit-Tai; Wen, Jian-Bing; Chang, Hua-Hua – 2002
In the a-stratified method, a popular and efficient item-exposure control strategy for computerized adaptive testing (CAT) proposed by H. Chang (H. Chang and Z. Ying, 1999; K. Hau and H. Chang, 2001), the item pool and the item selection process are usually divided into four strata and four corresponding stages. In a series of simulation…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed: Chen, Shu-Ying; Ankenmann, Robert D.; Chang, Hua-Hua – Applied Psychological Measurement, 2000
Compared five item selection rules with respect to the efficiency and precision of trait (theta) estimation at the early stages of computerized adaptive testing (CAT). The Fisher interval information, Fisher information with a posterior distribution, Kullback-Leibler information, and Kullback-Leibler information with a posterior distribution…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Selection
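The baseline against which the rules above are compared is maximum Fisher information selection at the interim trait estimate. A minimal sketch under a 2PL model (the pool and field names are hypothetical):

```python
import math

def prob(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = prob(theta, a, b)
    return a * a * p * (1.0 - p)

def max_info_select(pool, theta_hat):
    """Classic rule: administer the item with maximum Fisher
    information at the current ability estimate."""
    return max(pool, key=lambda it: fisher_info(theta_hat, it["a"], it["b"]))
```

Early in a CAT the interim theta estimate is noisy, which is why the alternatives in the abstract replace the point evaluation with interval, posterior-weighted, or Kullback-Leibler versions.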
Tang, K. Linda – 1996
The average Kullback-Leibler (K-L) information index (H. Chang and Z. Ying, in press) is a newly proposed statistic for item selection in Computerized Adaptive Testing (CAT), based on the global information function. The objectives of this study were to improve understanding of the K-L index with various parameters and to compare the performance of the…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
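The global-information idea behind the K-L index can be sketched as an average of item-level KL divergence over an interval around the interim estimate. This is a simplified illustration under a 2PL model; the published index lets the interval shrink with test length, which is omitted here.

```python
import math

def prob(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def kl_divergence(theta0, theta, a, b):
    """KL divergence between the item's Bernoulli response
    distributions at theta0 and at theta."""
    p0, p = prob(theta0, a, b), prob(theta, a, b)
    return p0 * math.log(p0 / p) + (1.0 - p0) * math.log((1.0 - p0) / (1.0 - p))

def kl_index(theta_hat, a, b, delta=1.0, steps=200):
    """Average KL information over [theta_hat - delta, theta_hat + delta],
    approximated by the midpoint rule."""
    h = 2.0 * delta / steps
    return sum(kl_divergence(theta_hat, theta_hat - delta + (i + 0.5) * h, a, b)
               for i in range(steps)) / steps
```

Unlike Fisher information, which is local to a single theta value, this index rewards items that discriminate well across a whole neighborhood of the (possibly poor) interim estimate.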
Peer reviewed: Berger, Martijn P. F.; Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 1997
Some alternative criteria for item selection in adaptive testing are proposed that take into account uncertainty in the ability estimates. A simulation study shows that the likelihood weighted information criterion is a good alternative to the maximum information criterion. Another good alternative uses a Bayesian expected a posteriori estimator.…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
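The likelihood-weighted criterion named in the abstract can be sketched as Fisher information integrated against the response likelihood over a theta grid, rather than evaluated at a single point estimate. A minimal sketch under a 2PL model; the grid limits and pool are illustrative assumptions.

```python
import math

def prob(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = prob(theta, a, b)
    return a * a * p * (1.0 - p)

def likelihood(theta, responses):
    """Likelihood of observed 0/1 responses (item, u) given theta."""
    L = 1.0
    for item, u in responses:
        p = prob(theta, item["a"], item["b"])
        L *= p if u else (1.0 - p)
    return L

def weighted_info(item, responses, lo=-4.0, hi=4.0, steps=161):
    """Likelihood-weighted information: midpoint-rule integral of
    Fisher information against the likelihood over [lo, hi]."""
    h = (hi - lo) / steps
    return sum(likelihood(lo + (i + 0.5) * h, responses)
               * fisher_info(lo + (i + 0.5) * h, item["a"], item["b"])
               for i in range(steps)) * h

def select(pool, responses):
    return max(pool, key=lambda it: weighted_info(it, responses))
```

Weighting by the likelihood spreads credit over all plausible theta values, so early-test uncertainty in the ability estimate no longer dictates a single evaluation point.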
Peer reviewed: Hetter, Rebecca D.; And Others – Applied Psychological Measurement, 1994
Effects on computerized adaptive test scores of using a paper-and-pencil (P&P) calibration to select items and estimate scores were compared with the effects of using a computer-based calibration. Results with 2,999 Navy recruits support the use of item parameters calibrated from either P&P or computer administrations. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
van der Linden, Wim J. – 1996
R. J. Owen (1975) proposed an approximate empirical Bayes procedure for item selection in adaptive testing. The procedure replaces the true posterior by a normal approximation with closed-form expressions for its first two moments. This approximation was necessary to minimize the computational complexity involved in a fully Bayesian approach, but…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computation
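For context, the fully Bayesian computation that Owen's normal approximation was designed to avoid is a direct numerical evaluation of the posterior's first two moments. A grid-based sketch under a 2PL model and a standard normal prior (the grid bounds and item format are assumptions, and Owen's own closed-form update formulas are not reproduced here):

```python
import math

def prob(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def posterior_moments(responses, prior_mean=0.0, prior_sd=1.0,
                      lo=-4.0, hi=4.0, steps=201):
    """Posterior mean (EAP) and SD of theta on a midpoint-rule grid,
    given a normal prior and observed 0/1 responses (item, u)."""
    h = (hi - lo) / steps
    m0 = m1 = m2 = 0.0
    for i in range(steps):
        theta = lo + (i + 0.5) * h
        w = math.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
        for item, u in responses:
            p = prob(theta, item["a"], item["b"])
            w *= p if u else (1.0 - p)
        m0 += w
        m1 += w * theta
        m2 += w * theta * theta
    mean = m1 / m0
    var = m2 / m0 - mean * mean
    return mean, math.sqrt(var)
```

On modern hardware this grid evaluation is cheap, which is why, as the abstract suggests, the computational rationale for Owen's 1975 approximation deserves re-examination.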
Bizot, Elizabeth B.; Goldman, Steven H. – 1994
A study was conducted to evaluate the effects of choice of item response theory (IRT) model, parameter calibration group, starting ability estimate, and stopping criterion on the conversion of an 80-item vocabulary test to computer adaptive format. Three parameter calibration groups were tested: (1) a group of 1,000 high school seniors, (2) a…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
van der Linden, Wim J.; Reese, Lynda M. – 1997
A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum information at the current ability estimate fixing…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
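The shadow-test idea above (assemble a full constrained test at each step, administer its best free item, repeat) can be sketched with a greedy assembly pass. This greedy stand-in is a deliberate simplification: the actual proposal solves the assembly step exactly with 0-1 linear programming, and the content-area quota here is a hypothetical constraint for illustration.

```python
import math

def prob(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = prob(theta, a, b)
    return a * a * p * (1.0 - p)

def assemble_shadow_test(pool, theta, length, quota):
    """Greedily build a full-length test maximizing information at
    theta, subject to per-content-area quotas (e.g. {"A": 1, "B": 1})."""
    chosen, counts = [], {}
    for it in sorted(pool, key=lambda it: -fisher_info(theta, it["a"], it["b"])):
        if len(chosen) == length:
            break
        if counts.get(it["area"], 0) < quota[it["area"]]:
            chosen.append(it)
            counts[it["area"]] = counts.get(it["area"], 0) + 1
    return chosen

def next_item(free_pool, theta, remaining_length, quota):
    """Administer the most informative item from the shadow test;
    a real CAT reassembles the shadow test after every response."""
    shadow = assemble_shadow_test(free_pool, theta, remaining_length, quota)
    return max(shadow, key=lambda it: fisher_info(theta, it["a"], it["b"]))
```

Because every administered item comes from a test that already satisfies all constraints, the completed adaptive test is guaranteed to satisfy them too, which is the point of the construction.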
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
In this study some alternative item selection criteria for adaptive testing are proposed. These criteria take into account the uncertainty of the ability estimates. A general weighted information criterion is suggested of which the usual maximum information criterion and the suggested alternative criteria are special cases. A simulation study was…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Spray, Judith A.; Reckase, Mark D. – 1994
The issue of test-item selection in support of decision making in adaptive testing is considered. The number of items needed to make a decision is compared for two approaches: selecting items from an item pool that are most informative at the decision point or selecting items that are most informative at the examinee's ability level. The first…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
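The contrast in the abstract (items maximally informative at the decision point versus at the examinee's ability level) reduces to where the information function is evaluated. A minimal sketch under a 2PL model; the cut score, ability estimate, and two-item pool are hypothetical values for illustration.

```python
import math

def prob(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    p = prob(theta, a, b)
    return a * a * p * (1.0 - p)

def select_at(point, pool):
    """Most informative item at a fixed point: the cut score for
    decision-oriented CAT, or the interim ability estimate for
    ability-oriented CAT."""
    return max(pool, key=lambda it: fisher_info(point, it["a"], it["b"]))

cut_score = 0.5   # hypothetical pass/fail decision point
theta_hat = -1.0  # hypothetical current ability estimate
pool = [{"a": 1.0, "b": -1.0}, {"a": 1.0, "b": 0.5}]
```

When the examinee is far from the cut score, the two rules pick different items, and the study compares how many items each rule needs before a classification decision can be made.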


