Showing all 13 results
Peer reviewed
Direct link
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using the simulation study protocol put forth by Han, K. T. (2018a). For the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
Peer reviewed
PDF on ERIC Download full text
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
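The estimation bias and error criteria this report evaluates can be illustrated with a minimal simulation sketch. This is not the report's MST design; it is a hedged toy example assuming a fixed Rasch-calibrated form, with item count, difficulty range, and sample size chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_rasch(theta, b):
    """Rasch probability of a correct response to items with difficulties b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mle_theta(resp, b, iters=25):
    """Newton-Raphson maximum-likelihood ability estimate from one response vector."""
    theta = 0.0
    for _ in range(iters):
        p = p_rasch(theta, b)
        grad = np.sum(resp - p)       # score function
        info = np.sum(p * (1 - p))    # Fisher information
        theta = np.clip(theta + grad / info, -6.0, 6.0)
    return theta

b = rng.uniform(-2, 2, size=30)       # hypothetical 30-item calibrated form
true_theta = rng.normal(size=2000)
est = np.full_like(true_theta, np.nan)
for i, t in enumerate(true_theta):
    resp = (rng.random(b.size) < p_rasch(t, b)).astype(float)
    if 0 < resp.sum() < b.size:       # skip patterns with no finite MLE
        est[i] = mle_theta(resp, b)

bias = np.nanmean(est - true_theta)                    # estimation bias
rmse = np.sqrt(np.nanmean((est - true_theta) ** 2))    # total estimation error
```

Bias and RMSE computed this way are the usual recovery criteria for comparing proficiency estimators in simulation.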
Peer reviewed
Direct link
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
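A standard-error stopping rule of the kind compared in this study can be sketched with a minimal variable-length CAT loop (Rasch items, grid-based Bayesian scoring). The pool size, stopping threshold, and maximum length below are arbitrary assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-4, 4, 161)
prior = np.exp(-0.5 * grid**2)        # standard-normal ability prior (unnormalized)

def administer_cat(true_theta, b, se_stop=0.35, max_len=50):
    """Variable-length Rasch CAT that stops once the posterior SE falls below se_stop."""
    used = np.zeros(b.size, bool)
    post = prior.copy()
    theta, se, n = 0.0, np.inf, 0
    while n < max_len and se > se_stop:
        # maximum-information selection: for the Rasch model, the unused
        # item whose difficulty is closest to the current ability estimate
        cand = np.where(~used)[0]
        item = cand[np.argmin(np.abs(b[cand] - theta))]
        used[item] = True
        p_true = 1 / (1 + np.exp(-(true_theta - b[item])))
        x = float(rng.random() < p_true)          # simulated response
        # posterior (EAP) update on the grid
        p_grid = 1 / (1 + np.exp(-(grid - b[item])))
        post *= p_grid if x == 1.0 else 1 - p_grid
        post /= post.sum()
        theta = np.sum(grid * post)
        se = np.sqrt(np.sum((grid - theta) ** 2 * post))
        n += 1
    return theta, se, n

b = rng.uniform(-3, 3, size=300)      # hypothetical 300-item pool
theta_hat, se, length = administer_cat(0.5, b)
```

Swapping the `se_stop`/`max_len` pair for other rules (fixed length, minimum-information cutoffs) is what produces the design comparisons examined in studies like this one.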
Peer reviewed
Direct link
Kuo, Bor-Chen; Daud, Muslem; Yang, Chih-Wei – EURASIA Journal of Mathematics, Science & Technology Education, 2015
This paper describes a curriculum-based multidimensional computerized adaptive test that was developed for Indonesian junior high school Biology. In adherence to the different Biology dimensions of the Indonesian curriculum, 300 items were constructed and then administered to 2,238 students. A multidimensional random coefficients multinomial logit model was…
Descriptors: Secondary School Science, Science Education, Science Tests, Computer Assisted Testing
Peer reviewed
Direct link
Lane, Suzanne; Leventhal, Brian – Review of Research in Education, 2015
This chapter addresses the psychometric challenges in assessing English language learners (ELLs) and students with disabilities (SWDs). The first section addresses some general considerations in the assessment of ELLs and SWDs, including the prevalence of ELLs and SWDs in the student population, federal and state legislation that requires the…
Descriptors: Psychometrics, Evaluation Problems, English Language Learners, Disabilities
Peer reviewed
Direct link
Reise, Steven P.; Ventura, Joseph; Keefe, Richard S. E.; Baade, Lyle E.; Gold, James M.; Green, Michael F.; Kern, Robert S.; Mesholam-Gately, Raquelle; Nuechterlein, Keith H.; Seidman, Larry J.; Bilder, Robert – Psychological Assessment, 2011
A psychometric analysis of 2 interview-based measures of cognitive deficits was conducted: the 21-item Clinical Global Impression of Cognition in Schizophrenia (CGI-CogS; Ventura et al., 2008), and the 20-item Schizophrenia Cognition Rating Scale (SCoRS; Keefe et al., 2006), which were administered on 2 occasions to a sample of people with…
Descriptors: Schizophrenia, Adaptive Testing, Rating Scales, Social Cognition
Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David – Online Submission, 2006
Two IRT-based procedures to estimate test reliability for a certification exam that used both an adaptive (via an MST model) and a non-adaptive design were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…
Descriptors: Individual Testing, Test Reliability, Programming, Error of Measurement
Peer reviewed
Haladyna, Thomas M.; Roid, Gale H. – Journal of Educational Measurement, 1983
The present study showed that Rasch-based adaptive tests, when item domains were finite and specifiable, had greater precision in domain score estimation than test forms created by random sampling of items. Results were replicated across four data sources representing a variety of criterion-referenced, domain-based tests varying in length.…
Descriptors: Adaptive Testing, Criterion Referenced Tests, Error of Measurement, Estimation (Mathematics)
Peer reviewed
Zwick, Rebecca; And Others – Applied Psychological Measurement, 1994
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Error of Measurement
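The Mantel-Haenszel DIF statistic modified in this study starts from per-stratum 2x2 tables of correct/incorrect counts for the reference and focal groups, matched on a score variable. A minimal sketch with entirely hypothetical counts:

```python
import numpy as np

def mantel_haenszel_odds(strata):
    """Mantel-Haenszel common odds ratio from per-stratum 2x2 tables.

    Each stratum is (a, b, c, d):
      a = reference correct, b = reference incorrect,
      c = focal correct,     d = focal incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# hypothetical counts in three matching strata (e.g., low/mid/high scores)
strata = [(40, 10, 35, 15), (30, 20, 25, 25), (15, 35, 10, 40)]
alpha_mh = mantel_haenszel_odds(strata)
# ETS delta metric; values beyond about |1.5| flag sizable DIF
delta_mh = -2.35 * np.log(alpha_mh)
```

In the CAT setting studied here, the matching variable behind the strata is an IRT-based ability estimate rather than an observed total score.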
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect
Gustafsson, Jan-Eric – 1977
The Rasch model for test analysis is described and compared with two-parameter and three-parameter latent-trait models. Conditional maximum likelihood equations for estimating item parameters are derived, and estimates of person parameters are described together with their confidence intervals. Goodness of fit tests are discussed, including a…
Descriptors: Adaptive Testing, Computer Programs, Equated Scores, Error of Measurement
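The three models this report compares differ only in which item parameters the response function carries, which can be stated compactly (a minimal sketch; parameter values in any application come from calibration):

```python
import numpy as np

def p_rasch(theta, b):
    """Rasch (1PL): only difficulty b varies across items."""
    return 1 / (1 + np.exp(-(theta - b)))

def p_2pl(theta, a, b):
    """Two-parameter model: adds a discrimination parameter a."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """Three-parameter model: adds a lower asymptote c (pseudo-guessing)."""
    return c + (1 - c) * p_2pl(theta, a, b)
```

The Rasch restriction (all discriminations equal) is what makes the raw score a sufficient statistic for ability and hence makes the conditional maximum likelihood equations derived in the report possible.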
Legg, Sue M.; Buhr, Dianne C. – 1990
Possible causes of a 16-point mean score increase for the computer adaptive form of the College Level Academic Skills Test (CLAST) in reading over the paper-and-pencil test (PPT) in reading are examined. The adaptive form of the CLAST was used in a state-wide field test in which reading, writing, and computation scores for approximately 1,000…
Descriptors: Adaptive Testing, College Entrance Examinations, Community Colleges, Comparative Testing
College Entrance Examination Board, Princeton, NJ. – 1990
This guide is designed to provide essential background material about the College Board's Computerized Placement Tests (CPTs). It is recommended for administrators and staff alike. It contains the theory on which the tests are based, information concerning how to administer them, and discussions of the reports produced and how to interpret the…
Descriptors: Adaptive Testing, Algebra, Arithmetic, College Entrance Examinations