Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 7
Descriptor
Adaptive Testing: 17
Simulation: 17
Test Bias: 17
Computer Assisted Testing: 14
Test Items: 10
Item Response Theory: 7
Scores: 6
Comparative Analysis: 5
Ability: 3
Bayesian Statistics: 3
Item Banks: 3
Source
Journal of Educational…: 2
Applied Psychological…: 1
ETS Research Report Series: 1
EURASIA Journal of…: 1
Educational and Psychological…: 1
International Journal of…: 1
Journal of Educational and…: 1
Author
Weiss, David J.: 3
McBride, James R.: 2
Reckase, Mark D.: 2
Stocking, Martha L.: 2
Ali, Usama S.: 1
Capar, Nilufer K.: 1
Chang, Hua-Hua: 1
Chen, Shu-Ying: 1
Daud, Muslem: 1
Kuo, Bor-Chen: 1
Lei, Pui-Wa: 1
Publication Type
Reports - Research: 10
Journal Articles: 8
Reports - Evaluative: 7
Speeches/Meeting Papers: 3
Education Level
Grade 9: 1
High Schools: 1
Junior High Schools: 1
Middle Schools: 1
Secondary Education: 1
Location
Indonesia: 1
Assessments and Surveys
Graduate Record Examinations: 1
Pohl, Steffi – Journal of Educational Measurement, 2013
This article introduces longitudinal multistage testing (lMST), a special form of multistage testing (MST), as a method for adaptive testing in longitudinal large-scale studies. In lMST designs, test forms of different difficulty levels are used, whereas the values on a pretest determine the routing to these test forms. Since lMST allows for…
Descriptors: Adaptive Testing, Longitudinal Studies, Difficulty Level, Comparative Analysis
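To make the routing idea in the Pohl (2013) entry concrete, here is a minimal Python sketch of pretest-based routing to test forms of different difficulty. The cut scores and form labels are assumed for illustration and are not taken from the article.

# Hedged sketch of lMST-style routing: a pretest score routes each examinee
# to an easier or harder test form at the next measurement occasion.
# Thresholds and form names below are hypothetical.
def route_to_form(pretest_score, cut_low=10, cut_high=20):
    if pretest_score < cut_low:
        return "easy form"
    if pretest_score < cut_high:
        return "medium form"
    return "hard form"

for score in (5, 14, 27):
    print(score, "->", route_to_form(score))
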
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
Kuo, Bor-Chen; Daud, Muslem; Yang, Chih-Wei – EURASIA Journal of Mathematics, Science & Technology Education, 2015
This paper describes a curriculum-based multidimensional computerized adaptive test that was developed for Indonesian junior high school Biology. In adherence to the different Biology dimensions of the Indonesian curriculum, 300 items were constructed and then tested with 2,238 students. A multidimensional random coefficients multinomial logit model was…
Descriptors: Secondary School Science, Science Education, Science Tests, Computer Assisted Testing
Rulison, Kelly L.; Loken, Eric – Applied Psychological Measurement, 2009
A difficult result to interpret in Computerized Adaptive Tests (CATs) occurs when an ability estimate initially drops and then ascends continuously until the test ends, suggesting that the true ability may be higher than implied by the final estimate. This study explains why this asymmetry occurs and shows that early mistakes by high-ability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Academic Ability
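The asymmetry described in the Rulison and Loken (2009) entry can be reproduced with a small simulation. The sketch below is not the authors' code: it runs a Rasch-based CAT with EAP scoring in which a high-ability simulee misses the first two items, so the provisional estimate drops sharply and then climbs for the rest of the test.

# Minimal CAT simulation (assumed setup): Rasch items, maximum-information
# selection, EAP scoring, and two forced early errors for a high-ability simulee.
import numpy as np

rng = np.random.default_rng(0)
true_theta = 2.0                      # high-ability simulee (assumed value)
bank_b = rng.uniform(-3, 3, 500)      # hypothetical Rasch item bank

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap(responses, bs, grid=np.linspace(-4, 4, 81)):
    # Expected a posteriori estimate under a standard normal prior.
    prior = np.exp(-0.5 * grid**2)
    like = np.ones_like(grid)
    for u, b in zip(responses, bs):
        p = p_correct(grid, b)
        like *= p**u * (1 - p)**(1 - u)
    post = prior * like
    return np.sum(grid * post) / np.sum(post)

used, responses, trace = [], [], []
theta_hat = 0.0
for step in range(30):
    # For the Rasch model, maximum information means picking the unused item
    # whose difficulty is closest to the current estimate.
    candidates = [i for i in range(len(bank_b)) if i not in used]
    item = min(candidates, key=lambda i: abs(bank_b[i] - theta_hat))
    # Force errors on the first two items; answer probabilistically afterwards.
    if step < 2:
        u = 0
    else:
        u = int(rng.random() < p_correct(true_theta, bank_b[item]))
    used.append(item)
    responses.append(u)
    theta_hat = eap(responses, bank_b[used])
    trace.append(theta_hat)

print(np.round(trace, 2))   # estimate drops, then ascends toward true_theta
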
Papanastasiou, Elena C.; Reckase, Mark D. – International Journal of Testing, 2007
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Descriptors: Simulation, Adaptive Testing, Computer Assisted Testing, Test Items
Penfield, Randall D. – Educational and Psychological Measurement, 2007
The standard error of the maximum likelihood ability estimator is commonly estimated by evaluating the test information function at an examinee's current maximum likelihood estimate (a point estimate) of ability. Because the test information function evaluated at the point estimate may differ from the test information function evaluated at an…
Descriptors: Simulation, Adaptive Testing, Computation, Maximum Likelihood Statistics
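The Penfield (2007) entry refers to the common practice of approximating the standard error of the maximum likelihood ability estimate by the reciprocal square root of the test information evaluated at the point estimate. The sketch below illustrates only that conventional computation for a two-parameter logistic model; the item parameters are made up, and nothing here reflects the alternative approach the article itself investigates.

# SE(theta_hat) = 1 / sqrt(I(theta_hat)), with test information evaluated at the
# current point estimate. 2PL parameters below are hypothetical.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations (hypothetical)
b = np.array([-0.5, 0.0, 0.7, 1.2])  # difficulties (hypothetical)

def test_information(theta):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(a**2 * p * (1 - p))

theta_hat = 0.4                       # current maximum likelihood point estimate
se = 1 / np.sqrt(test_information(theta_hat))
print(round(se, 3))
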
Nandakumar, Ratna; Roussos, Louis – Journal of Educational and Behavioral Statistics, 2004
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type 1 error inflation by employing a CAT version of the SIBTEST "regression correction." The…
Descriptors: Evaluation, Adaptive Testing, Computer Assisted Testing, Pretesting
Thomasson, Gary L. – 1997
Score comparability is important to those who take tests and those who use them. One important concept related to test score comparability is that of "equity," which is defined as existing when examinees are indifferent as to which of two alternate forms of a test they would prefer to take. By their nature, computerized adaptive tests…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Lei, Pui-Wa; Chen, Shu-Ying; Yu, Lan – Journal of Educational Measurement, 2006
Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional…
Descriptors: Evaluation Methods, Test Bias, Computer Assisted Testing, Multiple Regression Analysis
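For readers unfamiliar with the logistic regression DIF screen that the Lei, Chen, and Yu (2006) entry adapts to CAT, the sketch below shows the conventional (non-CAT) version on simulated data: the item response is regressed on an ability measure, group membership, and their interaction, where the group term flags uniform DIF and the interaction term flags non-uniform DIF. The simulated data, variable names, and use of an ability estimate as the matching variable are assumptions for illustration, not the authors' implementation.

# Conventional logistic-regression DIF screen on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)            # 0 = reference group, 1 = focal group
theta = rng.normal(0, 1, n)              # stand-in for the ability estimate
# Simulate an item with non-uniform DIF: discrimination differs by group.
a = np.where(group == 1, 0.7, 1.3)
p = 1 / (1 + np.exp(-a * (theta - 0.2)))
y = rng.binomial(1, p)

# Predictors: ability, group, and their interaction.
X = sm.add_constant(np.column_stack([theta, group, theta * group]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.summary(xname=["const", "theta", "group", "theta_x_group"]))
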
McBride, James R.; Weiss, David J. – 1976
Four Monte Carlo simulation studies of Owen's Bayesian sequential procedure for adaptive mental testing were conducted. Whereas previous simulation studies of this procedure have concentrated on evaluating it in terms of the correlation of its test scores with simulated ability in a normal population, these four studies explored a number of…
Descriptors: Adaptive Testing, Bayesian Statistics, Branching, Computer Oriented Programs

Capar, Nilufer K. – 2000
This study investigated specific conditions under which out-of-scale information improves measurement precision and the factors that influence the degree of reliability gains and the amount of bias induced in the reported scores when out-of-scale information is used. In-scale information is information that an item provides for a composite trait…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Potenza, Maria T.; Stocking, Martha L. – 1994
A multiple choice test item is identified as flawed if it has no single best answer. In spite of extensive quality control procedures, the administration of flawed items to test-takers is inevitable. Common strategies for dealing with flawed items in conventional testing, grounded in the principle of fairness to test-takers, are reexamined in the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Multiple Choice Tests, Scoring
Stocking, Martha L. – 1996
The interest in the application of large-scale computerized adaptive testing has served to focus attention on issues that arise when theoretical advances are made operational. Some of these issues stem less from changes in testing conditions and more from changes in testing paradigms. One such issue is that of the order in which questions are…
Descriptors: Adaptive Testing, Cognitive Processes, Comparative Analysis, Computer Assisted Testing
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
The Effect of Item Choice on Ability Estimation When Using a Simple Logistic Tailored Testing Model.
Reckase, Mark D. – 1975
This paper explores the effects of item choice on ability estimation when using a tailored testing procedure based on the Rasch simple logistic model. Most studies of the simple logistic model imply that ability estimates are totally independent of the items used, regardless of the testing procedure. This paper shows that the ability estimate is…
Descriptors: Ability, Achievement Tests, Adaptive Testing, Individual Differences
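A small illustration of the point raised in the Reckase (1975) entry, assuming only the standard Rasch maximum likelihood equations rather than anything specific to the paper: for a fixed set of administered items the raw score is sufficient for ability, but the same raw score earned on differently targeted item sets, as happens in tailored testing, yields different ability estimates, so item choice does affect the estimate.

# Rasch maximum likelihood ability estimation via Newton-Raphson.
import numpy as np

def rasch_mle(responses, b, theta0=0.0, iters=25):
    # Solves sum(u_j - P_j(theta)) = 0 for theta.
    theta = theta0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(theta - b)))
        grad = np.sum(responses - p)
        hess = -np.sum(p * (1 - p))
        theta -= grad / hess
    return theta

u = np.array([1, 1, 1, 0, 0])                     # raw score of 3 in both cases
easy_items = np.array([-1.5, -1.0, -0.5, 0.0, 0.5])
hard_items = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
print(rasch_mle(u, easy_items))   # lower estimate from the easy item set
print(rasch_mle(u, hard_items))   # higher estimate from the hard item set
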