Showing all 12 results
Peer reviewed
Harold Doran; Tetsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
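The snippet above does not spell out the authors' generalized objective function. As a generic illustration of the kind of item-selection step a CAT engine performs, the sketch below chooses the next item by maximum Fisher information under a 2PL model; the item bank, item names, and parameter values are invented for the example and are not from the article.

```python
import math

# Hypothetical item bank: (item_id, discrimination a, difficulty b) under a 2PL model.
ITEM_BANK = [
    ("item01", 1.2, -0.5),
    ("item02", 0.8, 0.0),
    ("item03", 1.5, 0.7),
    ("item04", 1.0, 1.2),
]

def fisher_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta: float, administered: set) -> str:
    """Pick the unadministered item with maximum information at the current theta."""
    candidates = [item for item in ITEM_BANK if item[0] not in administered]
    return max(candidates, key=lambda item: fisher_information(theta, item[1], item[2]))[0]

if __name__ == "__main__":
    print(select_next_item(theta=0.3, administered={"item01"}))
```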
Peer reviewed
Albano, Anthony D.; Cai, Liuhan; Lease, Erin M.; McConnell, Scott R. – Journal of Educational Measurement, 2019
Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Difficulty Level
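The abstract does not state how the authors modeled position effects. One common way to represent the idea, assumed here purely for illustration, is to let an item's effective difficulty drift with its serial position in the form, mimicking practice or fatigue; the Rasch parameterization and drift value below are invented examples.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def prob_with_position(theta: float, b: float, position: int, drift: float = 0.02) -> float:
    """Illustrative position-adjusted difficulty: b + drift * (position - 1).

    A positive drift mimics a fatigue effect (items get harder late in the form);
    the drift value is an arbitrary example, not an estimate from the study.
    """
    return rasch_prob(theta, b + drift * (position - 1))

if __name__ == "__main__":
    # The same item administered early vs. late in a form.
    print(round(prob_with_position(0.0, 0.0, position=1), 3))
    print(round(prob_with_position(0.0, 0.0, position=40), 3))
```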
Peer reviewed
Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A. – Journal of Educational Measurement, 2009
Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…
Descriptors: Identification, Genetics, Test Construction, Mathematics
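The snippet names a genetic algorithm for automated test assembly without giving its details. The sketch below shows the generic GA loop such a procedure typically follows (selection, crossover, mutation over candidate forms); the pool size, form length, and "diagnostic value" fitness function are invented stand-ins for a CDM-based objective, not the authors' formulation.

```python
import random

POOL_SIZE = 50      # items in the (hypothetical) pool
FORM_LENGTH = 10    # items per assembled form
# Invented per-item "diagnostic value" standing in for a CDM-based criterion.
ITEM_VALUE = [random.random() for _ in range(POOL_SIZE)]

def fitness(form: list) -> float:
    """Toy objective: total diagnostic value of the selected items."""
    return sum(ITEM_VALUE[i] for i in form)

def crossover(parent_a: list, parent_b: list) -> list:
    """Combine two candidate forms, then trim back to the target length."""
    pool = list(dict.fromkeys(parent_a + parent_b))  # order-preserving union
    random.shuffle(pool)
    return sorted(pool[:FORM_LENGTH])

def mutate(form: list, rate: float = 0.1) -> list:
    """Randomly swap items in and out of the form."""
    out = set(form)
    for i in list(out):
        if random.random() < rate:
            out.discard(i)
            out.add(random.randrange(POOL_SIZE))
    while len(out) < FORM_LENGTH:
        out.add(random.randrange(POOL_SIZE))
    return sorted(out)[:FORM_LENGTH]

def assemble(generations: int = 200, pop_size: int = 30) -> list:
    """Evolve a population of candidate forms and return the fittest one."""
    population = [sorted(random.sample(range(POOL_SIZE), FORM_LENGTH)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2))) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print(assemble())
```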
Peer reviewed
Weiner, John A.; Gibson, Wade M. – Journal of Educational Measurement, 1998
Describes a procedure for automated test form assembly based on Classical Test Theory (CTT). The procedure uses stratified random content sampling and test form preequating to ensure both content and psychometric equivalence in generating virtually unlimited parallel forms. Extends the usefulness of CTT in automated test construction. (Author/SLD)
Descriptors: Automation, Computer Assisted Testing, Equated Scores, Psychometrics
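As a rough illustration of stratified random content sampling of the kind this abstract describes, the sketch below draws a fixed number of items from each content-by-difficulty stratum of a hypothetical item bank, so every assembled form matches the same blueprint; the bank, strata, and counts are invented and the preequating step is omitted.

```python
import random
from collections import defaultdict

# Hypothetical calibrated bank: (item_id, content_area, difficulty_stratum).
BANK = [(f"v{i:03d}", ["algebra", "geometry"][i % 2], ["easy", "medium", "hard"][i % 3])
        for i in range(120)]

# Blueprint: items to draw per (content, difficulty) stratum -- invented numbers.
BLUEPRINT = {("algebra", "easy"): 3, ("algebra", "medium"): 4, ("algebra", "hard"): 3,
             ("geometry", "easy"): 3, ("geometry", "medium"): 4, ("geometry", "hard"): 3}

def assemble_parallel_form(bank, blueprint):
    """Draw items at random within each stratum so each form matches the blueprint."""
    strata = defaultdict(list)
    for item_id, content, level in bank:
        strata[(content, level)].append(item_id)
    form = []
    for stratum, n in blueprint.items():
        form.extend(random.sample(strata[stratum], n))
    return form

if __name__ == "__main__":
    form_a = assemble_parallel_form(BANK, BLUEPRINT)
    form_b = assemble_parallel_form(BANK, BLUEPRINT)
    print(len(form_a), len(form_b))  # both forms satisfy the same blueprint
```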
Peer reviewed
Almond, Russell G.; DiBello, Louis V.; Moulder, Brad; Zapata-Rivera, Juan-Diego – Journal of Educational Measurement, 2007
This paper defines Bayesian network models and examines their applications to IRT-based cognitive diagnostic modeling. These models are especially suited to building inference engines designed to be synchronous with the finer grained student models that arise in skills diagnostic assessment. Aspects of the theory and use of Bayesian network models…
Descriptors: Inferences, Models, Item Response Theory, Cognitive Measurement
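The abstract names Bayesian network models only in general terms. The sketch below shows the basic inference step such models support in skills diagnosis: computing the posterior over a single binary skill from two observed item responses by direct enumeration. The prior, slip, and guess probabilities are made up for illustration and are not taken from the paper.

```python
# Prior probability that the examinee has mastered the skill (invented value).
P_SKILL = 0.5

# Conditional probabilities of a correct response given mastery / non-mastery
# (roughly "1 - slip" and "guess"; values are illustrative only).
P_CORRECT = {
    "item1": {True: 0.85, False: 0.20},
    "item2": {True: 0.90, False: 0.25},
}

def posterior_mastery(responses: dict) -> float:
    """P(mastery | observed responses) by enumerating the two skill states."""
    joint = {}
    for mastered in (True, False):
        p = P_SKILL if mastered else 1.0 - P_SKILL
        for item, correct in responses.items():
            p_item = P_CORRECT[item][mastered]
            p *= p_item if correct else 1.0 - p_item
        joint[mastered] = p
    return joint[True] / (joint[True] + joint[False])

if __name__ == "__main__":
    print(round(posterior_mastery({"item1": True, "item2": False}), 3))
```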
Peer reviewed
Vispoel, Walter P.; Hendrickson, Amy B.; Bleiler, Timothy – Journal of Educational Measurement, 2000
Evaluated the effectiveness of vocabulary computerized adaptive tests (CATs) with restricted review in a live testing setting involving 242 college students in which special efforts were made to increase test efficiency and reduce the possibility of obtaining positively biased proficiency estimates. Results suggest the efficacy of allowing limited…
Descriptors: Adaptive Testing, Attitudes, College Students, Computer Assisted Testing
Peer reviewed
Vispoel, Walter P. – Journal of Educational Measurement, 1998
Studied effects of administration mode [computer adaptive test (CAT) versus self-adaptive test (SAT)], item-by-item answer feedback, and test anxiety on results from computerized vocabulary tests taken by 293 college students. CATs were more reliable than SATs, and administration time was less when feedback was provided. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Feedback
Peer reviewed
Bejar, Isaac I. – Journal of Educational Measurement, 1984
Approaches proposed for educational diagnostic assessment are reviewed and identified as deficit assessment and error analysis. The development of diagnostic instruments may require a reexamination of existing psychometric models and development of alternative ones. The psychometric and content demands of diagnostic assessment all but require test…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Criterion Referenced Tests, Diagnostic Tests
Peer reviewed
Wang, Tianyou; Kolen, Michael J. – Journal of Educational Measurement, 2001
Reviews research literature on comparability issues in computerized adaptive testing (CAT) and synthesizes issues specific to comparability and test security. Develops a framework for evaluating comparability that contains three categories of criteria: (1) validity; (2) psychometric property/reliability; and (3) statistical assumption/test…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Criteria
Peer reviewed
Gitomer, Drew H.; Yamamoto, Kentaro – Journal of Educational Measurement, 1991
A model integrating latent trait and latent class theories in characterizing individual performance on the basis of qualitative understanding is presented. This HYBRID model is illustrated through experiments with 119 Air Force technicians taking a paper-and-pencil test and 136 Air Force technicians taking a computerized test. (SLD)
Descriptors: Comparative Testing, Computer Assisted Testing, Educational Assessment, Item Response Theory
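The HYBRID model itself is not spelled out in the snippet above. As a loose illustration of combining latent trait and latent class ideas, the sketch below writes the likelihood of a response pattern as a mixture of an IRT (2PL) component and a small set of fixed latent classes; all item parameters, class response probabilities, and mixing weights are invented and do not represent Gitomer and Yamamoto's specification.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Invented item parameters (a, b) and two illustrative latent classes whose
# members respond with fixed, item-specific probabilities instead of via theta.
ITEMS = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.6)]
CLASS_PROBS = {"misconception": [0.9, 0.2, 0.1], "random": [0.5, 0.5, 0.5]}
MIX = {"irt": 0.7, "misconception": 0.2, "random": 0.1}  # mixing weights (invented)

def pattern_likelihood(responses, theta=0.0):
    """Mixture likelihood of a response pattern: IRT component plus latent classes."""
    def product(probs):
        out = 1.0
        for p, x in zip(probs, responses):
            out *= p if x else 1.0 - p
        return out

    total = MIX["irt"] * product([p_2pl(theta, a, b) for a, b in ITEMS])
    for name, probs in CLASS_PROBS.items():
        total += MIX[name] * product(probs)
    return total

if __name__ == "__main__":
    print(round(pattern_likelihood([1, 0, 0]), 4))
```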
Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Rock, Donald A.; Singley, Mark K.; Katz, Irvin R.; Nhouyvanisvong, Adisack – Journal of Educational Measurement, 1999
Evaluated a computer-delivered response type for measuring quantitative skill, the "Generating Examples" (GE) response type, which presents under-determined problems that can have many right answers. Results from 257 graduate students and applicants indicate that GE scores are reasonably reliable, but only moderately related to Graduate…
Descriptors: College Applicants, Computer Assisted Testing, Graduate Students, Graduate Study
Peer reviewed
Wainer, Howard; Lewis, Charles – Journal of Educational Measurement, 1990
Three different applications of the testlet concept are presented, and the psychometric models most suitable for each application are described. Difficulties that testlets can help overcome include (1) context effects; (2) item ordering; and (3) content balancing. Implications for test construction are discussed. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Elementary Secondary Education, Item Response Theory
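As a minimal illustration of one way testlets are handled psychometrically (not necessarily the models Wainer and Lewis describe), the sketch below collapses each testlet into a single polytomous score, the number correct within the testlet, so that dependence among its items is absorbed at the testlet level; the passage names and response data are invented.

```python
# Responses keyed by testlet: each testlet shares a common passage or context.
# Item groupings and data are invented for the example.
RESPONSES = {
    "passage_A": [1, 1, 0, 1],  # four items bound to passage A
    "passage_B": [0, 1, 1],     # three items bound to passage B
}

def testlet_scores(responses_by_testlet: dict) -> dict:
    """Collapse each testlet to a single polytomous score (number correct)."""
    return {testlet: sum(items) for testlet, items in responses_by_testlet.items()}

if __name__ == "__main__":
    # These testlet-level scores would then be fit with a polytomous IRT model
    # (e.g., partial credit) instead of treating the items as locally independent.
    print(testlet_scores(RESPONSES))
```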