Showing all 7 results
Peer reviewed
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations in which an item performs differently depending on the position in which it is administered within a test. Most previous research has investigated IPE under linear testing; little work has examined it under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
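A common way to formalize IPE (an illustrative model, not necessarily the one used in this study) is to let an item's difficulty drift with its administration position:

b_{jp} = b_j + \gamma_j \, (p - 1)

where b_j is item j's baseline difficulty, p is its position, and \gamma_j is a drift parameter; \gamma_j = 0 corresponds to no position effect, while a nonzero \gamma_j violates the IRT assumption of position-invariant item parameters.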
Peer reviewed
Hula, William D.; Kellough, Stacey; Fergadiotis, Gerasimos – Journal of Speech, Language, and Hearing Research, 2015
Purpose: The purpose of this study was to develop a computerized adaptive test (CAT) version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996), to reduce test length while maximizing measurement precision. This article is a direct extension of a companion article (Fergadiotis, Kellough, & Hula, 2015),…
Descriptors: Computer Assisted Testing, Adaptive Testing, Naming, Test Construction
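For context, adaptive tests such as a PNT-CAT typically shorten test length by selecting, at each step, the item that is most informative at the current ability estimate. A minimal sketch of maximum-information selection under the 2PL model (illustrative code, not the authors' implementation):

import math

def p_2pl(theta, a, b):
    # Probability of a correct response under the 2PL model
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_hat, item_bank, administered):
    # Choose the unused item with maximum information at theta_hat
    candidates = [(i, item_information(theta_hat, a, b))
                  for i, (a, b) in enumerate(item_bank) if i not in administered]
    return max(candidates, key=lambda t: t[1])[0]

# Hypothetical three-item bank of (a, b) pairs; item 1 already administered
bank = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]
print(next_item(0.2, bank, administered={1}))  # -> 2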
Peer reviewed
Han, Kyung T. – Applied Psychological Measurement, 2013
Multistage testing, or MST, was developed as an alternative to computerized adaptive testing (CAT) for applications in which it is preferable to administer a test at the level of item sets (i.e., modules). As with CAT, the simulation technique in MST plays a critical role in the development and maintenance of tests. "MSTGen," a new MST…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computer Software, Simulation
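Unlike CAT, MST adapts at the module level: after each stage, a routing rule sends the examinee to an easier or harder next-stage module. A minimal sketch of number-correct routing (hypothetical cutoffs; MSTGen's actual routing options are richer):

def route(number_correct, cutoffs):
    # Route to the next-stage module by number-correct score;
    # cutoffs are ascending thresholds between modules (easy < medium < hard)
    for module, cut in enumerate(cutoffs):
        if number_correct < cut:
            return module
    return len(cutoffs)

# Example: below 4 correct -> module 0 (easy), 4-7 -> 1 (medium), 8+ -> 2 (hard)
print(route(6, cutoffs=[4, 8]))  # -> 1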
Peer reviewed
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
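One simple way to run such a sequential check (a CUSUM-style sketch over assumed standardized item residuals, not the authors' exact statistic) is to accumulate drift between observed and model-predicted responses and flag the item once the cumulative statistic crosses a threshold:

def cusum_monitor(residuals, k=0.5, h=5.0):
    # One-sided CUSUM on standardized residuals for a single item;
    # k is the drift allowance, h the decision threshold
    s = 0.0
    for t, r in enumerate(residuals, start=1):
        s = max(0.0, s + r - k)  # accumulate evidence of upward drift
        if s > h:
            return t  # administration index at which the item is flagged
    return None  # no significant change detected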
Peer reviewed
Han, Kyung T. – Applied Psychological Measurement, 2013
Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses, because doing so can seriously degrade measurement efficiency and leave tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Testing
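A typical restriction (a hypothetical sketch of one such policy, not a method from this article) is to permit review only within the current block of items, which bounds how far an answer change can propagate through the adaptive item selection:

def review_allowed(item_index, current_block, blocks):
    # Permit revisiting an item only if it belongs to the block now open;
    # blocks maps block id -> set of item indices
    return item_index in blocks[current_block]

print(review_allowed(3, current_block=1, blocks={0: {0, 1, 2}, 1: {3, 4}}))  # -> True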
Peer reviewed
Zwick, Rebecca – Educational and Psychological Measurement, 1997
Recent simulations have shown that, for a given sample size, the Mantel-Haenszel (MH) variances tend to be larger when items are administered to randomly selected examinees than when they are administered adaptively. Results suggest that adaptive testing may lead to more efficient application of MH differential item functioning analyses. (SLD)
Descriptors: Adaptive Testing, Item Bias, Sample Size, Simulation
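For reference, the Mantel-Haenszel statistic at issue is the common odds-ratio estimator across matched score levels k,

\hat{\alpha}_{\mathrm{MH}} = \frac{\sum_k A_k D_k / N_k}{\sum_k B_k C_k / N_k},

where A_k and B_k count correct and incorrect responses in the reference group, C_k and D_k the same in the focal group, and N_k is the total at level k; the variances compared in the study are those of \ln \hat{\alpha}_{\mathrm{MH}}.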
McKinley, Robert L.; Reckase, Mark D. – 1983
A two-stage study was conducted to compare the ability estimates yielded by tailored testing procedures based on the one-parameter logistic (1PL) and three-parameter logistic (3PL) models. The first stage of the study employed real data, while the second stage employed simulated data. In the first stage, response data for 3,000 examinees were…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Banks
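For reference, the two item response functions being compared are, in their standard forms (the study's scaling constant may differ),

P_j(\theta) = \frac{\exp(\theta - b_j)}{1 + \exp(\theta - b_j)} \quad \text{(1PL)}

P_j(\theta) = c_j + (1 - c_j)\,\frac{\exp\!\big(a_j(\theta - b_j)\big)}{1 + \exp\!\big(a_j(\theta - b_j)\big)} \quad \text{(3PL)}

where b_j is difficulty, a_j discrimination, and c_j the lower asymptote (guessing) parameter; the 3PL's extra parameters are what the two-stage comparison evaluates.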