Showing 121 to 135 of 223 results
Peer reviewed
Direct link
Nandakumar, Ratna; Roussos, Louis – Journal of Educational and Behavioral Statistics, 2004
A new procedure, CATSIB, for assessing differential item functioning (DIF) on computerized adaptive tests (CATs) is proposed. CATSIB, a modified SIBTEST procedure, matches test takers on estimated ability and controls for impact-induced Type I error inflation by employing a CAT version of the SIBTEST "regression correction." The…
Descriptors: Evaluation, Adaptive Testing, Computer Assisted Testing, Pretesting
Peer reviewed
Direct link
Hol, A. Michiel; Vorst, Harrie C. M.; Mellenbergh, Gideon J. – Applied Psychological Measurement, 2007
In a randomized experiment (n = 515), a conventional computerized test and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although the items were carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible…
Descriptors: Student Motivation, Simulation, Adaptive Testing, Computer Assisted Testing
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
Pommerich, Mary; Segall, Daniel O. – 2003
Research discussed in this paper was conducted as part of an ongoing large-scale simulation study to evaluate methods of calibrating pretest items for computerized adaptive testing (CAT) pools. The simulation was designed to mimic the operational CAT Armed Services Vocational Aptitude Battery (ASVAB) testing program, in which a single pretest item…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Maximum Likelihood Statistics
Peer reviewed
Wang, Tianyou; Vispoel, Walter P. – Journal of Educational Measurement, 1998
Used simulations of computerized adaptive tests to evaluate results yielded by four commonly used ability estimation methods: maximum likelihood estimation (MLE) and three Bayesian approaches. Results show clear distinctions between MLE and Bayesian methods. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
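The MLE-versus-Bayesian contrast examined in the study above can be illustrated with a small grid-based sketch under a 2PL IRT model. This is not the authors' code; the item parameters and grid are invented, and a standard normal prior is assumed for the Bayesian (EAP) estimator:

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle_theta(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Maximum likelihood ability estimate via a grid search."""
    p = p_2pl(grid[:, None], a, b)
    loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p), axis=1)
    return grid[np.argmax(loglik)]

def eap_theta(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Expected a posteriori (Bayesian) estimate with a N(0, 1) prior."""
    p = p_2pl(grid[:, None], a, b)
    lik = np.prod(np.where(responses, p, 1 - p), axis=1)
    post = lik * np.exp(-grid**2 / 2)  # unnormalized posterior
    return np.sum(grid * post) / np.sum(post)
```

With an all-correct response pattern, the MLE runs to the edge of the grid (it diverges without a prior), while the EAP estimate is pulled back toward the prior mean, which is one of the clearest distinctions between the two families of estimators.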
Peer reviewed
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob – Applied Psychological Measurement, 1999
Theoretical null distributions of several fit statistics have been derived for paper-and-pencil tests. Examined through simulation whether these distributions also hold for computerized adaptive tests. Error rates for the two statistics studied were found to be similar in most cases. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
Peer reviewed
Nering, Michael L. – Applied Psychological Measurement, 1997
Evaluated the distribution of person fit within the computerized-adaptive testing (CAT) environment through simulation. Found that, within the CAT environment, these indexes tend not to follow a standard normal distribution. Person fit indexes had means and standard deviations that were quite different from those expected. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
Peer reviewed
Stocking, Martha L. – Applied Psychological Measurement, 1997
Investigated three models that permit restricted examinee control over revising previous answers in the context of adaptive testing, using simulation. Two models permitting item revisions worked well in preserving test fairness and accuracy, and one model may preserve some cognitive processing styles developed by examinees for a linear testing…
Descriptors: Adaptive Testing, Cognitive Processes, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Download full text (PDF on ERIC)
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua – ETS Research Report Series, 2006
Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated potential damage caused…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Computer Security
Parshall, Cynthia G.; Davey, Tim; Nering, Mike L. – 1998
When items are selected during a computerized adaptive test (CAT) solely with regard to their measurement properties, it is commonly found that certain items are administered to nearly every examinee, and that a small number of the available items will account for a large proportion of the item administrations. This presents a clear security risk…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Efficiency
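The overexposure pattern described above is easy to reproduce in simulation: under purely maximum-information selection from a 2PL pool, a handful of highly discriminating items absorb most administrations. A minimal sketch with invented item parameters (and, for brevity, selection at the simulated true ability rather than a running estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_info(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

n_items, n_examinees, test_len = 100, 500, 10
a = rng.uniform(0.5, 2.0, n_items)   # discriminations
b = rng.normal(0.0, 1.0, n_items)    # difficulties
counts = np.zeros(n_items)

for _ in range(n_examinees):
    theta = rng.normal()             # examinee ability
    used = np.zeros(n_items, dtype=bool)
    for _ in range(test_len):
        info = fisher_info(theta, a, b)
        info[used] = -np.inf         # never readminister an item
        pick = np.argmax(info)       # pure maximum-information selection
        used[pick] = True
        counts[pick] += 1

rates = counts / n_examinees         # per-item exposure rates
```

Inspecting `rates` shows the security problem the paper raises: a small subset of items accounts for a large share of all administrations, while low-discrimination items are rarely or never selected.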
Weissman, Alexander – 2003
This study investigated the efficiency of item selection in a computerized adaptive test (CAT), where efficiency was defined in terms of the accumulated test information at an examinee's true ability level. A simulation methodology compared the efficiency of 2 item selection procedures with 5 ability estimation procedures for CATs of 5, 10, 15,…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Maximum Likelihood Statistics
Reese, Lynda M.; Schnipke, Deborah L. – 1999
A two-stage design provides a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and based on their scores, they are routed to tests of different difficulty levels in the second stage. This design provides some of the benefits of standard computer adaptive testing (CAT), such as increased…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
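The routing step of a two-stage design like the one described above reduces to a cut-score lookup on the stage-one score. A sketch with invented cut scores and form labels (the study's actual routing rules and form difficulties differ):

```python
def route_second_stage(stage_one_score, n_items=20, cuts=(0.4, 0.7)):
    """Route a test taker to an easy, medium, or hard second-stage
    form based on proportion correct on the stage-one test."""
    prop = stage_one_score / n_items
    if prop < cuts[0]:
        return "easy"
    elif prop < cuts[1]:
        return "medium"
    return "hard"
```

For example, 5 of 20 correct routes to the easy form, 10 of 20 to the medium form, and 18 of 20 to the hard form. This captures the "roughly adaptive" character of the design: difficulty adapts once, between stages, rather than after every item as in a full CAT.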
van der Linden, Wim J.; Reese, Lynda M. – 2001
A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum information at the current ability estimate fixing…
Descriptors: Ability, Adaptive Testing, College Entrance Examinations, Computer Assisted Testing
Roussos, Louis; Nandakumar, Ratna; Cwikla, Julie – 2000
CATSIB is a differential item functioning (DIF) assessment methodology for computerized adaptive test (CAT) data. Kernel smoothing (KS) is a technique for nonparametric estimation of item response functions. In this study an attempt has been made to develop a more efficient DIF procedure for CAT data, KS-CATSIB, by combining CATSIB with kernel…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Bias, Item Response Theory
Peer reviewed
Dodd, Barbara G.; Koch, William R. – Educational and Psychological Measurement, 1994
Simulated data were used to investigate the impact of characteristics of threshold values (number, symmetry, and distance between adjacent threshold values) and delta values on the distribution of item information in the successive intervals Rasch model. Implications for computerized adaptive attitude measurement are discussed. (SLD)
Descriptors: Adaptive Testing, Attitude Measures, Computer Assisted Testing, Item Response Theory