Showing 1 to 15 of 18 results
Peer reviewed
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff – Journal of Educational Measurement, 2016
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Sequential Approach
Peer reviewed
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: Standardized Tests, Career Readiness, College Readiness, Test Validity
Peer reviewed
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed
Lane, Suzanne; Leventhal, Brian – Review of Research in Education, 2015
This chapter addresses the psychometric challenges in assessing English language learners (ELLs) and students with disabilities (SWDs). The first section addresses some general considerations in the assessment of ELLs and SWDs, including the prevalence of ELLs and SWDs in the student population, federal and state legislation that requires the…
Descriptors: Psychometrics, Evaluation Problems, English Language Learners, Disabilities
Peer reviewed
Kettler, Ryan J. – Review of Research in Education, 2015
This chapter introduces theory that undergirds the role of testing adaptations in assessment, provides examples of item modifications and testing accommodations, reviews research relevant to each, and introduces a new paradigm that incorporates opportunity to learn (OTL), academic enablers, testing adaptations, and inferences that can be made from…
Descriptors: Meta Analysis, Literature Reviews, Testing, Testing Accommodations
Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David – Online Submission, 2006
Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via an MST model) and non-adaptive designs were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…
Descriptors: Individual Testing, Test Reliability, Programming, Error of Measurement
Peer reviewed
Divgi, D. R. – Applied Psychological Measurement, 1989
Two methods for estimating the reliability of a computerized adaptive test (CAT) without using item response theory are presented. The data consist of CAT and paper-and-pencil scores from identical or equivalent samples, and scores for all examinees on one or more covariates, using the Armed Services Vocational Aptitude Battery. (TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Predictive Validity
Parshall, Cynthia G. – Journal of Instruction Delivery Systems, 1995
Summarizes the benefits of computerized assessment and provides a review of some practical issues concerning measurement, item and examinee characteristics, hardware, and software. Adequate measures of reliability and validity have been established for many computer-based tests, and the benefits of computer testing have been realized in applied…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computers, Test Items
Peer reviewed
McKinley, Robert L.; Reckase, Mark D. – AEDS Journal, 1980
Describes tailored testing (in which a computer selects appropriate items from an item bank while an examinee is taking a test) and shows it to be superior to paper-and-pencil tests in such areas as reliability, security, and appropriateness of items. (IRT)
Descriptors: Adaptive Testing, Computer Assisted Testing, Higher Education, Program Evaluation
Peer reviewed
Bennett, Randy Elliot; And Others – Special Services in the Schools, 1988
A study of Scholastic Aptitude Test scores for nine groups of students with disabilities taking special test administrations found differences in score levels among disability groups but no significant differences of measurement precision and no evidence of disadvantage for disabled students. (Author/MSE)
Descriptors: Adaptive Testing, College Entrance Examinations, Comparative Analysis, Disabilities
Samejima, Fumiko – 1990
Test validity is a concept that has often been ignored in the context of latent trait models and in modern test theory, particularly as it relates to computerized adaptive testing. Some considerations about the validity of a test and of a single item are proposed. This paper focuses on measures that are population-free and that will provide local…
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Item Response Theory
Peer reviewed
Luecht, Richard M. – Applied Psychological Measurement, 1996
The example of a medical licensure test is used to demonstrate situations in which complex, integrated content must be balanced at the total test level for validity reasons, but items assigned to reportable subscore categories may be used under a multidimensional item response theory adaptive paradigm to improve subscore reliability. (SLD)
Descriptors: Adaptive Testing, Certification, Computer Assisted Testing, Licensing Examinations (Professions)
Larson, Jerry W. – 1987
The development of a Spanish computerized adaptive placement test at Brigham Young University (Utah) is described. The test is based on an approach that locates an examinee's ability along a continuum of latent ability rather than determining the degree to which an individual's ability differs from the abilities of others taking the test, as in…
Descriptors: Adaptive Testing, Computer Assisted Testing, Higher Education, Language Tests
Stone, Gregory Ethan; Lunz, Mary E. – 1994
This paper explores the comparability of item calibrations for three types of items: (1) text only; (2) text with photographs; and (3) text plus graphics when items are presented on written tests and computerized adaptive tests. Data are from five different medical technology certification examinations administered nationwide in 1993. The Rasch…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Diagrams