Showing all 4 results
Peer reviewed
Smith, Russell W.; Davis-Becker, Susan L.; O'Leary, Lisa S. – Journal of Applied Testing Technology, 2014
This article describes a hybrid standard setting method that combines characteristics of the Angoff (1971) and Bookmark (Mitzel, Lewis, Patz & Green, 2001) methods. The proposed approach utilizes strengths of each method while addressing weaknesses. An ordered item booklet, with items sorted based on item difficulty, is used in combination…
Descriptors: Standard Setting, Difficulty Level, Test Items, Rating Scales
Peer reviewed
Camara, Wayne – Journal of Applied Testing Technology, 2009
The five papers in this special issue of the "Journal of Applied Testing Technology" address fundamental issues of validity when tests are modified or accommodations are provided to English Language Learners (ELL) or students with disabilities. Three papers employed differential item functioning (DIF) and factor analysis and found the…
Descriptors: Second Language Learning, Factor Analysis, English (Second Language), Cognitive Ability
Peer reviewed
Steinberg, Jonathan; Cline, Frederick; Ling, Guangming; Cook, Linda; Tognatta, Namrata – Journal of Applied Testing Technology, 2009
This study examines the appropriateness of a large-scale state standards-based English-Language Arts (ELA) assessment for students who are deaf or hard of hearing by comparing the internal test structures for these students to students without disabilities. The Grade 4 and 8 ELA assessments were analyzed via a series of parcel-level exploratory…
Descriptors: Test Bias, Language Arts, State Standards, Partial Hearing
Peer reviewed
Yang, Yongwei; Buckendahl, Chad W.; Juszkiewicz, Piotr J.; Bhola, Dennison S. – Journal of Applied Testing Technology, 2005
With the continual progress of computer technologies, computer automated scoring (CAS) has become a popular tool for evaluating writing assessments. Research of applications of these methodologies to new types of performance assessments is still emerging. While research has generally shown a high agreement of CAS system generated scores with those…
Descriptors: Scoring, Validity, Interrater Reliability, Comparative Analysis