Showing all 10 results
Peer reviewed
Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven – Applied Measurement in Education, 2013
This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
Descriptors: Computer Assisted Testing, Item Response Theory, Test Construction, Models
Peer reviewed
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
Peer reviewed
Edwards, Michael C.; Flora, David B.; Thissen, David – Applied Measurement in Education, 2012
This article describes a computerized adaptive test (CAT) based on the uniform item exposure multi-form structure (uMFS). The uMFS is a specialization of the multi-form structure (MFS) idea described by Armstrong, Jones, Berliner, and Pashley (1998). In an MFS CAT, the examinee first responds to a small fixed block of items. The items comprising…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Format, Test Items
Peer reviewed
Jodoin, Michael G.; Zenisky, April; Hambleton, Ronald K. – Applied Measurement in Education, 2006
Many credentialing agencies today are either administering their examinations by computer or are likely to be doing so in the coming years. Unfortunately, although several promising computer-based test designs are available, little is known about how well they function in examination settings. The goal of this study was to compare fixed-length…
Descriptors: Computers, Test Results, Psychometrics, Computer Simulation
Peer reviewed
Wise, Steven L.; Kong, Xiaojing – Applied Measurement in Education, 2005
When low-stakes assessments are administered, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method, based on item response time, for measuring examinee test-taking effort on computer-based test items. This measure, termed…
Descriptors: Psychometrics, Validity, Reaction Time, Test Items
Peer reviewed
Pomplun, Mark; Custer, Michael – Applied Measurement in Education, 2005
In this study, we investigated possible context effects when students chose to defer items and answer those items later during a computerized test. In 4 primary school reading tests, 126 items were studied. Logistic regression analyses identified 4 items across 4 grade levels as statistically significant. However, follow-up analyses indicated that…
Descriptors: Psychometrics, Reading Tests, Effect Size, Test Items
Peer reviewed
Moshinsky, Avital; Kazin, Cathrael – Applied Measurement in Education, 2005
In recent years, there has been a large increase in the number of university applicants requesting special accommodations for university entrance exams. The Israeli National Institute for Testing and Evaluation (NITE) administers a Psychometric Entrance Test (comparable to the Scholastic Assessment Test in the United States) to assist universities…
Descriptors: Foreign Countries, Psychometrics, Disabilities, Testing Accommodations
Peer reviewed
Stone, Gregory Ethan; Lunz, Mary E. – Applied Measurement in Education, 1994
The effects of reviewing items and altering responses on examinee ability estimates, test precision, test information, decision confidence, and pass/fail status were studied for 376 examinees taking 2 certification tests. Test precision was only slightly affected by review, and the average information loss could be recovered by the addition of one item. (SLD)
Descriptors: Ability, Adaptive Testing, Certification, Change
Peer reviewed
Green, Bert F. – Applied Measurement in Education, 1988
Emerging areas and critical problems related to computer-based testing are identified. Topics covered include adaptive testing; calibration; item selection; multidimensional items; uses of information processing theory; relation to cognitive psychology; and tests of short-term and spatial memory, perceptual speed and accuracy, and movement…
Descriptors: Cognitive Tests, Computer Assisted Testing, Content Validity, Information Processing
Peer reviewed
Vispoel, Walter P.; Coffman, Don D. – Applied Measurement in Education, 1994
Computerized-adaptive (CAT) and self-adapted (SAT) music listening tests were compared for efficiency, reliability, validity, and motivational benefits with 53 junior high school students. Results demonstrate trade-offs: greater potential motivational benefits for SAT and greater efficiency for CAT. SAT elicited more favorable responses from…
Descriptors: Adaptive Testing, Computer Assisted Testing, Efficiency, Item Response Theory