Showing all 15 results
Peer reviewed
Süleyman Demir; Derya Çobanoglu Aktan; Nese Güler – International Journal of Assessment Tools in Education, 2023
This study has two main purposes: first, to compare different item selection methods and stopping rules used in Computerized Adaptive Testing (CAT) applications, using simulated data generated from the item parameters of the Vocational Maturity Scale; and second, to test the validity of the CAT application scores. For the first purpose,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Vocational Maturity, Measures (Individuals)
Peer reviewed
Wise, Steven L. – Education Inquiry, 2019
The decision whether to move from paper-and-pencil to computer-based tests rests largely on a careful weighing of the potential benefits of a change against its costs, disadvantages, and challenges. This paper briefly discusses the trade-offs involved in making such a transition, and then focuses on a relatively unexplored benefit of…
Descriptors: Computer Assisted Testing, Cheating, Test Wiseness, Scores
Peer reviewed
Aviad-Levitzky, Tami; Laufer, Batia; Goldstein, Zahava – Language Assessment Quarterly, 2019
This article describes the development and validation of the new CATSS (Computer Adaptive Test of Size and Strength), which measures vocabulary knowledge in four modalities -- productive recall, receptive recall, productive recognition, and receptive recognition. In the first part of the paper we present the assumptions that underlie the test --…
Descriptors: Foreign Countries, Test Construction, Test Validity, Test Reliability
Peer reviewed
Anderson, Daniel; Farley, Dan; Tindal, Gerald – Journal of Special Education, 2015
Students with significant cognitive disabilities present an assessment dilemma that centers on access and validity in large-scale testing programs. Typically, access is improved by eliminating construct-irrelevant barriers, while validity is improved, in part, through test standardization. In this article, one state's alternate assessment data…
Descriptors: Mental Retardation, Evaluation Methods, Student Evaluation, Standardized Tests
Peer reviewed
Yen, Yung-Chin; Ho, Rong-Guey; Liao, Wen-Wei; Chen, Li-Ju – Educational Technology & Society, 2012
In a test, the score would be closer to the examinee's actual ability if careless mistakes were corrected. In a CAT, however, changing the answer to one item might render the following items no longer appropriate for estimating the examinee's ability. These inappropriate items in a reviewable CAT might in turn introduce bias into the ability…
Descriptors: Foreign Countries, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The FAIR-FS consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the reading comprehension subtest of the Stanford Achievement Test (SAT-10) in the…
Descriptors: Reading Instruction, Screening Tests, Reading Comprehension, Oral Language
Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: Standardized Tests, Career Readiness, College Readiness, Test Validity
Peer reviewed
Lane, Suzanne; Leventhal, Brian – Review of Research in Education, 2015
This chapter addresses the psychometric challenges in assessing English language learners (ELLs) and students with disabilities (SWDs). The first section addresses some general considerations in the assessment of ELLs and SWDs, including the prevalence of ELLs and SWDs in the student population, federal and state legislation that requires the…
Descriptors: Psychometrics, Evaluation Problems, English Language Learners, Disabilities
Peer reviewed
Kettler, Ryan J. – Review of Research in Education, 2015
This chapter introduces theory that undergirds the role of testing adaptations in assessment, provides examples of item modifications and testing accommodations, reviews research relevant to each, and introduces a new paradigm that incorporates opportunity to learn (OTL), academic enablers, testing adaptations, and inferences that can be made from…
Descriptors: Meta Analysis, Literature Reviews, Testing, Testing Accommodations
Peer reviewed
Forbey, Johnathan D.; Ben-Porath, Yossef S. – Psychological Assessment, 2007
Computerized adaptive testing in personality assessment can improve efficiency by significantly reducing the number of items administered to answer an assessment question. Two approaches have been explored for adaptive testing in computerized personality assessment: item response theory and the countdown method. In this article, the authors…
Descriptors: Personality Traits, Computer Assisted Testing, Test Validity, Personality Assessment
Wheeler, Patricia H. – 1995
When individuals are given tests that are too hard or too easy, the resulting scores are likely to be poor estimates of their performance. To get valid and accurate test scores that provide meaningful results, one should use functional-level testing (FLT). FLT is the practice of administering to an individual a version of a test with a difficulty…
Descriptors: Adaptive Testing, Difficulty Level, Educational Assessment, Performance
Peer reviewed
Luecht, Richard M. – Applied Psychological Measurement, 1996
The example of a medical licensure test is used to demonstrate situations in which complex, integrated content must be balanced at the total test level for validity reasons, but items assigned to reportable subscore categories may be used under a multidimensional item response theory adaptive paradigm to improve subscore reliability. (SLD)
Descriptors: Adaptive Testing, Certification, Computer Assisted Testing, Licensing Examinations (Professions)
Bock, R. Darrell; Zimowski, Michele F. – National Center for Education Statistics, 2003
This report examines the potential of adaptive testing, two-stage testing in particular, for improving the data quality of the National Assessment of Educational Progress (NAEP). Following a discussion of the rationale for adaptive testing in assessment and a review of previous studies of two-stage testing, this report describes a 1993 Ohio…
Descriptors: National Competency Tests, Test Validity, Feasibility Studies, Educational Assessment
Hambleton, Ronald K.; And Others – 1975
The success of objectives-based programs depends to a considerable extent on how effectively students and teachers assess mastery of objectives and make decisions for future instruction. While educators disagree on the usefulness of criterion-referenced tests, the position taken in this monograph is that criterion-referenced tests are useful, and…
Descriptors: Adaptive Testing, Course Objectives, Criterion Referenced Tests, Individualized Instruction
Educational Testing Service, Princeton, NJ. – 1977
The 1976 Educational Testing Service (ETS) Invitational Conference served as a platform for individuals prominent in educational measurement and research to present their views on issues surrounding the testing controversy. The 1976 conference, "The Testing Scene: Chaos and Controversy," presents a historical review of events surrounding the…
Descriptors: Achievement Tests, Adaptive Testing, Awards, Career Development