Peer reviewed
PDF on ERIC
Hastedt, Dirk; Desa, Deana – Practical Assessment, Research & Evaluation, 2015
This simulation study was prompted by growing interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments, on the premise that the average…
Descriptors: Case Studies, Simulation, International Programs, Testing Programs
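Common-item linking of the kind this study simulates is often carried out with a linear transformation estimated from the embedded items. Below is a minimal sketch of the mean/sigma method under a Rasch model; the item difficulties, function names, and values are hypothetical, and the study itself may use a different linking procedure.

```python
# A minimal sketch of mean/sigma common-item linking, assuming hypothetical
# Rasch difficulty estimates for the embedded anchor items on both scales.
import statistics

def mean_sigma_link(b_national, b_international):
    """Return (A, B) so that theta_intl = A * theta_natl + B."""
    A = statistics.stdev(b_international) / statistics.stdev(b_national)
    B = statistics.mean(b_international) - A * statistics.mean(b_national)
    return A, B

# Hypothetical difficulty estimates for the same anchor items on each scale.
b_natl = [-1.2, -0.4, 0.1, 0.8, 1.5]
b_intl = [-1.0, -0.3, 0.2, 0.9, 1.6]

A, B = mean_sigma_link(b_natl, b_intl)
theta_national = 0.5                   # a national-scale ability estimate
theta_linked = A * theta_national + B  # expressed on the international scale
print(f"A={A:.3f}, B={B:.3f}, linked theta={theta_linked:.3f}")
```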
Peer reviewed
Direct link
Wyse, Adam E.; Albano, Anthony D. – Applied Measurement in Education, 2015
This article used several data sets from a large-scale state testing program to examine the feasibility of combining general and modified assessment items in computerized adaptive testing (CAT) for different groups of students. Results suggested that several of the assumptions made when employing this type of mixed-item CAT may not be met for…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Testing Programs
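For context, a CAT of the kind examined here typically selects the next item by maximizing Fisher information at the examinee's current ability estimate. The sketch below assumes a hypothetical 2PL pool mixing general and modified items; exposure control, content balancing, and stopping rules are omitted, and the study's actual design may differ.

```python
# A minimal sketch of maximum-information item selection in a mixed-item CAT,
# assuming hypothetical 2PL item parameters.
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# (item_id, a, b, item_type) -- item_type marks general vs. modified items.
pool = [
    ("G1", 1.2, -0.5, "general"),
    ("G2", 0.9,  0.3, "general"),
    ("M1", 0.7, -1.0, "modified"),
    ("M2", 1.1,  0.1, "modified"),
]

def next_item(theta, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [it for it in pool if it[0] not in administered]
    return max(candidates, key=lambda it: info_2pl(theta, it[1], it[2]))

print(next_item(theta=0.0, administered={"G1"}))
```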
Peer reviewed
Direct link
Ferrara, Steve; Svetina, Dubravka; Skucha, Sylvia; Davidson, Anne H. – Educational Measurement: Issues and Practice, 2011
Items on test score scales located at and below the Proficient cut score define the content-area knowledge and skills required to achieve proficiency. Put differently, examinees who perform at the Proficient level on a test can be expected to demonstrate that they have mastered most of the knowledge and skills represented by the items at…
Descriptors: Knowledge Level, Mathematics Tests, Program Effectiveness, Inferences
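Inferences like these are commonly made through item mapping: locating each item at the scale point where examinees have a fixed probability of success, then comparing that location to the cut score. Below is a minimal sketch under a Rasch model with an assumed RP67 criterion; the paper's actual model and criterion may differ, and all values are hypothetical.

```python
# A minimal sketch of mapping items relative to a Proficient cut score,
# assuming a Rasch model and an RP67 response-probability criterion.
import math

RP = 0.67  # assumed response-probability criterion

def mapped_location(b, rp=RP):
    """Scale location where P(correct) = rp under the Rasch model."""
    return b + math.log(rp / (1.0 - rp))

cut_proficient = 0.6                 # hypothetical Proficient cut score
item_difficulties = {"item1": -0.8, "item2": 0.1, "item3": 0.9}

# Items mapped at or below the cut describe what Proficient examinees
# can be expected to have mastered.
at_or_below = {k: v for k, v in item_difficulties.items()
               if mapped_location(v) <= cut_proficient}
print(at_or_below)
```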
Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg – Online Submission, 2005
Year-to-year rater variation may result in changes to constructed-response (CR) item parameters, making CR items unsuitable for use in anchor sets for linking or equating. This study demonstrates how rater severity affected the writing and reading scores. Rater adjustments were made to statewide results using an item response theory (IRT) methodology…
Descriptors: Test Items, Writing Tests, Reading Tests, Measures (Individuals)
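Adjustments of this kind typically follow a many-facet Rasch logic: estimate each rater's severity in logits and remove it from examinees' observed measures. Below is a minimal sketch with hypothetical severity values; in practice the severities would come from a facets model calibrated on the full rating data, and the study's actual IRT methodology may differ.

```python
# A minimal sketch of a many-facet Rasch style rater-severity adjustment,
# assuming hypothetical severity estimates in logits.
rater_severity = {"R1": 0.30, "R2": -0.15}   # positive = harsher rater

def adjusted_measure(observed_logit, rater_id):
    """Remove the rater's severity from an examinee's observed measure."""
    return observed_logit + rater_severity[rater_id]

# An essay scored by the harsh rater R1 has its measure adjusted upward.
print(adjusted_measure(observed_logit=-0.2, rater_id="R1"))  # -> 0.1
```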