Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Morris, Scott B.; Bass, Michael; Howard, Elizabeth; Neapolitan, Richard E. – International Journal of Testing, 2020
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the SE falls below a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Item Response Theory
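The SE stopping rule summarized above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a 2PL information function, a fixed (not re-estimated) trait level, and made-up item parameters. It also shows the failure mode the abstract describes, where a pool targeting the wrong end of the trait continuum never drives the SE below the threshold:

```python
import math

def item_information(a, b, theta):
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def se_stopping_rule(items, theta, se_threshold=0.3, max_items=30):
    """Administer items until SE(theta) < threshold or the limit is hit.

    `items` is a hypothetical list of (a, b) parameter pairs; a real CAT
    would also re-estimate theta after each response and select the next
    item adaptively.
    """
    info = 0.0
    administered = 0
    for a, b in items[:max_items]:
        info += item_information(a, b, theta)
        administered += 1
        se = 1.0 / math.sqrt(info)  # SE of the trait estimate
        if se < se_threshold:
            break  # the SE stopping rule fires
    return administered, se

# Informative pool (b matches theta): the rule terminates early.
n_good, se_good = se_stopping_rule([(2.0, 0.0)] * 30, theta=0.0)

# Off-target pool (all b far above theta): SE never reaches the threshold.
n_bad, se_bad = se_stopping_rule([(2.0, 3.0)] * 30, theta=0.0)
```

With the well-targeted pool the test stops after a handful of items; with the off-target pool every available item is administered and the SE stays large, which is exactly the situation motivating alternative stopping rules.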
Peer reviewed
Direct link
LaFlair, Geoffrey T.; Langenfeld, Thomas; Baig, Basim; Horie, André Kenji; Attali, Yigal; von Davier, Alina A. – Journal of Computer Assisted Learning, 2022
Background: Digital-first assessments leverage the affordances of technology in all elements of the assessment process, from design and development to score reporting and evaluation, to create test-taker-centric assessments. Objectives: The goal of this paper is to describe the engineering, machine learning, and psychometric processes and…
Descriptors: Computer Assisted Testing, Affordances, Scoring, Engineering
Qunbar, Sa'ed Ali – ProQuest LLC, 2019
This work presents a study that used distributed language representations of test items to model test item difficulty. Distributed language representations are low-dimensional numeric representations of written language inspired and generated by artificial neural network architecture. The research begins with a discussion of the importance of item…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Models
Peer reviewed
PDF on ERIC
Sahin, Melek Gulsah – International Journal of Assessment Tools in Education, 2020
Computer Adaptive Multistage Testing (ca-MST), which takes advantage of computer technology and adaptive test forms, is widely used and is now a popular topic in assessment and evaluation. This study aims at analyzing the effect of different panel designs, module lengths, different sequences of parameter values across stages, and change in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Response Theory
Peer reviewed
PDF on ERIC
Koch, Marco; Spinath, Frank M.; Greiff, Samuel; Becker, Nicolas – Journal of Intelligence, 2022
Figural matrices tasks are one of the most prominent item formats used in intelligence tests, and their relevance for the assessment of cognitive abilities is unquestionable. However, despite endeavors of the open science movement to make scientific research accessible on all levels, there is a lack of royalty-free figural matrices tests. The Open…
Descriptors: Intelligence, Intelligence Tests, Computer Assisted Testing, Test Items
Peer reviewed
Direct link
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Mao, Xiuzhen; Ozdemir, Burhanettin; Wang, Yating; Xiu, Tao – Online Submission, 2016
Four item selection indices with and without exposure control are evaluated and compared in multidimensional computerized adaptive testing (CAT). The four item selection indices are D-optimality, posterior expected Kullback-Leibler information (KLP), the minimized error variance of the equally weighted linear combination score (V1), and the…
Descriptors: Comparative Analysis, Adaptive Testing, Computer Assisted Testing, Test Items
Peer reviewed
Direct link
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
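The two-step process Gierl and Lai describe, where an item model acts as a template whose slots are filled by computer to generate items, can be sketched as follows. The template text, slot names, and values are invented for illustration and are not drawn from the cited article:

```python
import itertools

def generate_items(template, slots):
    """Fill a hypothetical item-model template with every combination
    of slot values, yielding one generated item per combination."""
    keys = list(slots)
    for values in itertools.product(*(slots[k] for k in keys)):
        yield template.format(**dict(zip(keys, values)))

# Step 1: a specialist authors the item model (template with slots).
template = "A train travels {speed} km/h for {hours} hours. How far does it go?"

# Step 2: the computer instantiates the model across all slot values.
items = list(generate_items(template, {"speed": [60, 80], "hours": [2, 3]}))
# 2 speeds x 2 durations -> 4 generated items
```

Real item-generation systems constrain slot combinations so that every generated item is well-formed and of predictable difficulty; this sketch enumerates the full cross-product for brevity.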
He, Wei – ProQuest LLC, 2010
Item pool quality has been regarded as one important factor to help realize enhanced measurement quality for the computerized adaptive test (CAT) (e.g., Flaugher, 2000; Jensema, 1977; McBride & Wise, 1976; Reckase, 1976; 2003; van der Linden, Ariel, & Veldkamp, 2006; Veldkamp & van der Linden, 2000; Xing & Hambleton, 2004). However, studies are…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Test Construction
Peer reviewed
Direct link
Huebner, Alan – Practical Assessment, Research & Evaluation, 2010
Cognitive diagnostic modeling has become an exciting new field of psychometric research. These models aim to diagnose examinees' mastery status of a group of discretely defined skills, or attributes, thereby providing them with detailed information regarding their specific strengths and weaknesses. Combining cognitive diagnosis with computer…
Descriptors: Cognitive Tests, Diagnostic Tests, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Direct link
Makransky, Guido; Glas, Cees A. W. – Journal of Applied Testing Technology, 2010
An accurately calibrated item bank is essential for a valid computerized adaptive test. However, in some settings, such as occupational testing, there is limited access to test takers for calibration. As a result of the limited access to possible test takers, collecting data to accurately calibrate an item bank in an occupational setting is…
Descriptors: Foreign Countries, Simulation, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Direct link
Glas, Cees A. W.; Geerlings, Hanneke – Studies in Educational Evaluation, 2009
Pupil monitoring systems support the teacher in tailoring teaching to the individual level of a student and in comparing the progress and results of teaching with national standards. The systems are based on the availability of an item bank calibrated using item response theory. The assessment of the students' progress and results can be further…
Descriptors: Item Banks, Adaptive Testing, National Standards, Psychometrics
Peer reviewed
Direct link
Ramon Barrada, Juan; Veldkamp, Bernard P.; Olea, Julio – Applied Psychological Measurement, 2009
Computerized adaptive testing is subject to security problems, as the item bank content remains operative over long periods and administration time is flexible for examinees. Spreading the content of a part of the item bank could lead to an overestimation of the examinees' trait level. The most common way of reducing this risk is to impose a…
Descriptors: Item Banks, Adaptive Testing, Item Analysis, Psychometrics
Peer reviewed
Direct link
Xing, Dehui; Hambleton, Ronald K. – Educational and Psychological Measurement, 2004
Computer-based testing by credentialing agencies has become common; however, selecting a test design is difficult because several good ones are available: parallel forms, computer adaptive (CAT), and multistage (MST). In this study, three computer-based test designs under some common examination conditions were investigated. Item bank size and…
Descriptors: Test Construction, Psychometrics, Item Banks, Computer Assisted Testing
Peer reviewed
Direct link
Ariel, Adelaide; Veldkamp, Bernard P.; Breithaupt, Krista – Applied Psychological Measurement, 2006
Computerized multistage testing (MST) designs require sets of test questions (testlets) to be assembled to meet strict, often competing criteria. Rules that govern testlet assembly may dictate the number of questions on a particular subject or may describe desirable statistical properties for the test, such as measurement precision. In an MST…
Descriptors: Item Response Theory, Item Banks, Psychometrics, Test Items