Showing 1 to 15 of 42 results
Peer reviewed
Kezer, Fatih – International Journal of Assessment Tools in Education, 2021
Item response theory (IRT) offers important advantages for exams that are, or will be, administered digitally. For computerized adaptive tests to make valid and reliable estimates under IRT, good-quality item pools are needed. This study examines how adaptive test applications vary across item pools consisting of items…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Peer reviewed
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2016
The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the "GRE"® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as much as 10% of the test items…
Descriptors: Item Response Theory, Computation, Robustness (Statistics), Response Style (Tests)
Peer reviewed
Wang, Chun; Chang, Hua-Hua; Boughton, Keith A. – Applied Psychological Measurement, 2013
Multidimensional computerized adaptive testing (MCAT) is able to provide a vector of ability estimates for each examinee, which can be used to build a more informative profile of an examinee's performance. The current literature on MCAT focuses on fixed-length tests, which can generate less accurate results for those examinees whose…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Item Banks
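A minimal sketch of the kind of variable-length termination rule the abstract contrasts with fixed-length testing: stop once the standard error of the ability estimate (in the unidimensional case, one over the square root of accumulated Fisher information) reaches a target, subject to a maximum test length. The function names and thresholds here are illustrative assumptions, not the study's actual procedure.

```python
import math

def standard_error(total_info):
    """SE of a unidimensional ability estimate given accumulated Fisher information."""
    return 1.0 / math.sqrt(total_info)

def should_stop(item_infos, se_target=0.3, max_items=30):
    """Variable-length stopping rule: stop when the SE reaches the target
    or the maximum test length is hit. `item_infos` holds the Fisher
    information of each administered item at the current theta estimate."""
    if len(item_infos) >= max_items:
        return True
    total = sum(item_infos)
    if total <= 0.0:
        return False
    return standard_error(total) <= se_target
```

Under a rule like this, high-ability or low-ability examinees (whose pools offer less information near their theta) simply take more items until the precision target is met, rather than finishing a fixed-length test with a noisier estimate.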
Peer reviewed
Lee, HwaYoung; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
This study investigated item exposure control procedures under various combinations of item pool characteristics and ability distributions in computerized adaptive testing based on the partial credit model. Three variables were manipulated: item pool characteristics (120 items for each of easy, medium, and hard item pools), two ability…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Ability
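One common exposure-control idea in this literature is randomesque selection: choose at random among the k most informative eligible items instead of always taking the single maximum-information item. The sketch below is illustrative only — it uses the 2-PL model and a hypothetical item pool, not the partial credit model or the specific procedures compared in the study.

```python
import math
import random

def info_2pl(theta, a, b):
    """Fisher information of a 2-PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_randomesque(theta, pool, administered, k=3, rng=random):
    """Randomesque exposure control: rank eligible items by information
    at the current theta estimate, then pick at random among the top k."""
    available = [i for i in range(len(pool)) if i not in administered]
    ranked = sorted(available, key=lambda i: -info_2pl(theta, *pool[i]))
    return rng.choice(ranked[:k])

# hypothetical item pool of (a, b) pairs
pool = [(1.0, 0.0), (1.5, 0.2), (0.5, -1.0), (2.0, 2.5)]
```

Spreading selections over the top k items trades a small amount of measurement efficiency for much lower exposure rates on the most informative items.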
Peer reviewed
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
Ho, Tsung-Han – ProQuest LLC, 2010
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT can not only shorten test length and administration time but also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Item Response Theory
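Maximum information (MI) selection administers, at each step, the unused item with the largest Fisher information at the current ability estimate. A sketch under the 3-PL model, using the standard information formula; the mini item pool of (a, b, c) parameters is hypothetical, purely for illustration.

```python
import math

def p_3pl(theta, a, b, c):
    """3-PL probability of a correct response."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3-PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return a * a * (q / p) * ((p - c) / (1.0 - c)) ** 2

def select_mi(theta, pool, administered):
    """Return the index of the not-yet-administered item with maximum
    information at theta, or None if the pool is exhausted."""
    best, best_info = None, -1.0
    for i, (a, b, c) in enumerate(pool):
        if i in administered:
            continue
        inf = info_3pl(theta, a, b, c)
        if inf > best_info:
            best, best_info = i, inf
    return best

# hypothetical mini pool of (a, b, c) triples
pool = [(1.2, -1.0, 0.2), (0.8, 0.0, 0.25), (1.5, 0.5, 0.2), (1.0, 1.2, 0.15)]
```

Because MI greedily maximizes information, it tends to overexpose the best items — which is exactly why the exposure-control and stratification alternatives compared in studies like this one exist.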
Peer reviewed
Ozyurt, Hacer; Ozyurt, Ozcan; Baki, Adnan – Turkish Online Journal of Distance Education, 2012
Assessment is one of the methods used to evaluate learning outcomes. Adaptive assessment systems, which estimate students' ability levels, are increasingly replacing traditional assessment systems. An adaptive assessment system evaluates students not only according to the marks they earn on a test…
Descriptors: Computer System Design, Intelligent Tutoring Systems, Computer Software, Adaptive Testing
Peer reviewed
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Bradlow, Eric T. – Journal of Educational and Behavioral Statistics, 1996
The three-parameter logistic (3-PL) model is described and a derivation of the 3-PL observed information function is presented for a single binary response from one examinee with known item parameters. Formulas are presented for the probability of negative information and for the expected information (always nonnegative). (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
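In standard notation (mine, not necessarily the paper's), the quantities the abstract refers to are:

```latex
% 3-PL response probability with discrimination a, difficulty b, guessing c
P(\theta) = c + \frac{1 - c}{1 + e^{-a(\theta - b)}}

% Observed information from a single binary response u, with log-likelihood \ell:
J(\theta) = -\frac{\partial^2 \ell(u \mid \theta)}{\partial \theta^2}

% Expected (Fisher) information averages over responses and is nonnegative:
I(\theta) = \mathrm{E}\left[ J(\theta) \right] \ge 0
```

When c > 0, J(θ) can be negative — for example, for a correct response from a low-θ examinee, where guessing is the likelier explanation — which is the "probability of negative information" the paper derives.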
Peer reviewed
Segall, Daniel O. – Psychometrika, 2001
Proposed and evaluated two new methods of improving the measurement precision of a general test factor. One provides a multidimensional item response theory estimate based on administrations of multiple-choice test items that span general and nuisance dimensions, and the other chooses items adaptively to maximize the precision of the general…
Descriptors: Ability, Adaptive Testing, Item Response Theory, Measurement Techniques
Samejima, Fumiko – 1998
Item response theory (IRT) has been adopted as the theoretical foundation of computerized adaptive testing (CAT) for several decades. In applying IRT to CAT, certain considerations are essential yet tend to be neglected. These essential issues are addressed in this paper, and then several ways of eliminating noise and bias in…
Descriptors: Ability, Adaptive Testing, Estimation (Mathematics), Item Response Theory
Peer reviewed
May, Kim; Nicewander, W. Alan – Educational and Psychological Measurement, 1998
The degree to which scale distortion in the ordinary difference score can be removed by using differences based on estimated examinee proficiency (theta) in either conventional or adaptive testing situations was studied using item response theory. Using estimated thetas removed much of the scale distortion for both conventional and adaptive tests. (SLD)
Descriptors: Ability, Achievement Gains, Adaptive Testing, Estimation (Mathematics)
van der Linden, Wim J. – 1997
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Stocking, Martha L. – 1988
The relationship between examinee ability and the accuracy of maximum likelihood item parameter estimation is explored in terms of the expected (Fisher) information. Information functions are used to find the optimum ability levels and maximum contributions to information for estimating item parameters in three commonly used logistic item response…
Descriptors: Ability, Adaptive Testing, Estimation (Mathematics), Item Response Theory
Bergstrom, Betty A.; Lunz, Mary E. – 1991
This study explored the equivalence of paper-and-pencil Rasch item calibrations when used in a computerized adaptive test administration. Items (n=726) were precalibrated with paper-and-pencil test administrations. A computerized adaptive test was administered to 321 medical technology students using the paper-and-pencil precalibrations in the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing