Showing all 12 results
Peer reviewed
Nikola Ebenbeck; Markus Gebhardt – Journal of Special Education Technology, 2024
Technologies that enable individualization for students have significant potential in special education. Computerized Adaptive Testing (CAT) refers to digital assessments that automatically adjust their difficulty level based on students' abilities, allowing for personalized, efficient, and accurate measurement. This article examines whether CAT…
Descriptors: Computer Assisted Testing, Students with Disabilities, Special Education, Grade 3
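As background for entries like the one above, the sketch below shows how a computerized adaptive test of the kind described might adjust difficulty to ability: a minimal item-selection loop under a Rasch (1PL) model. The item bank, ability update, and stopping rule are illustrative assumptions only, not the procedure used in the study.

```python
import math
import random

# Minimal sketch of an adaptive item-selection loop under a Rasch (1PL) model.
# Item difficulties, the ability update, and the stopping rule are illustrative.

def prob_correct(theta, b):
    """Rasch probability of a correct response at ability theta for difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = prob_correct(theta, b)
    return p * (1.0 - p)

def update_theta(theta, responses, steps=10):
    """Newton-style maximum-likelihood update of theta from (difficulty, score) pairs."""
    for _ in range(steps):
        grad = sum(score - prob_correct(theta, b) for b, score in responses)
        info = sum(item_information(theta, b) for b, score in responses)
        theta = max(-4.0, min(4.0, theta + grad / max(info, 1e-6)))
    return theta

def run_cat(item_bank, answer_fn, max_items=10):
    """Administer up to max_items, always picking the most informative unused item."""
    theta, administered, responses = 0.0, set(), []
    for _ in range(max_items):
        candidates = [i for i in range(len(item_bank)) if i not in administered]
        if not candidates:
            break
        nxt = max(candidates, key=lambda i: item_information(theta, item_bank[i]))
        administered.add(nxt)
        responses.append((item_bank[nxt], answer_fn(item_bank[nxt])))
        theta = update_theta(theta, responses)
    return theta

# Example: a simulated examinee with true ability 1.0 answering stochastically.
random.seed(0)
bank = [-2 + 0.1 * i for i in range(40)]            # difficulties from -2.0 to +1.9
examinee = lambda b: int(random.random() < prob_correct(1.0, b))
print(f"estimated ability: {run_cat(bank, examinee, max_items=15):.2f}")
```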
Peer reviewed
Gilbert, Joshua B.; Kim, James S.; Miratrix, Luke W. – Journal of Educational and Behavioral Statistics, 2023
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing heterogeneous treatment effects (HTE) fail to address the HTE that may exist "within" outcome measures. In…
Descriptors: Test Items, Item Response Theory, Computer Assisted Testing, Program Effectiveness
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
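A minimal sketch of the kind of analysis the two entries above describe, assuming a plain logistic regression with item-by-treatment interactions as a stand-in for the explanatory item response models the authors use; the simulated data, model formulas, and likelihood-ratio comparison are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Simulate long-format item responses: 400 students x 6 items, half treated.
# The treatment effect is larger on the last two items (HTE within the outcome measure).
n_students, n_items = 400, 6
treated = rng.integers(0, 2, n_students)
ability = rng.normal(0, 1, n_students)
item_boost = np.array([0.0, 0.0, 0.0, 0.0, 0.6, 0.6])   # extra effect when treated

rows = []
for s in range(n_students):
    for i in range(n_items):
        logit = ability[s] - 0.2 * i + treated[s] * (0.3 + item_boost[i])
        correct = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append({"student": s, "item": i, "treated": treated[s], "correct": int(correct)})
df = pd.DataFrame(rows)

# Constant-effect model vs. a model that lets the treatment effect differ by item.
m0 = smf.logit("correct ~ treated + C(item)", data=df).fit(disp=0)
m1 = smf.logit("correct ~ treated * C(item)", data=df).fit(disp=0)

# Likelihood-ratio test for item-by-treatment interaction (within-outcome HTE).
lr = 2 * (m1.llf - m0.llf)
df_diff = m1.df_model - m0.df_model
print(f"LR = {lr:.2f}, df = {df_diff:.0f}, p = {chi2.sf(lr, df_diff):.4f}")
```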
Peer reviewed
Solano-Flores, Guillermo; Shyyan, Vitaliy; Chía, Magda; Kachchaf, Rachel – International Multilingual Research Journal, 2023
We examined semiotic exchangeability in pop-up glossary translations and illustrations used as supports for second language learners (SLLs) in computer-administered mathematics tests. In a sample of 516 mathematics items, Grades 3-8 and 11, from a large-scale assessment program in the US, test developers identified terms that could be translated…
Descriptors: Mathematics Tests, Testing Accommodations, Test Items, Semiotics
Li, Sylvia; Meyer, Patrick – NWEA, 2019
This simulation study examines the measurement precision, item exposure rates, and the depth of the MAP® Growth™ item pools under various grade-level restrictions. Unlike most summative assessments, MAP Growth allows examinees to see items from any grade level, regardless of the examinee's actual grade level. It does not limit the test to items…
Descriptors: Achievement Tests, Item Banks, Test Items, Instructional Program Divisions
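A rough sketch of how an item-exposure simulation like the one above might be set up, assuming a simple closest-difficulty selection rule, a made-up pool, and arbitrary grade bands; NWEA's actual MAP Growth pools and selection algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: item exposure rates in a simulated fixed-length adaptive test,
# with and without restricting selection to the examinee's own grade band.
n_items, n_examinees, test_length = 300, 1000, 20
item_difficulty = rng.normal(0, 1.2, n_items)
item_grade = rng.integers(3, 9, n_items)           # items tagged grades 3-8
examinee_grade = rng.integers(3, 9, n_examinees)
examinee_theta = rng.normal(0, 1, n_examinees)

def simulate_exposure(restrict_to_grade):
    counts = np.zeros(n_items)
    for g, theta in zip(examinee_grade, examinee_theta):
        eligible = np.arange(n_items)
        if restrict_to_grade:
            eligible = eligible[item_grade[eligible] == g]
        # Greedy selection: the test_length items whose difficulty is closest to theta.
        order = eligible[np.argsort(np.abs(item_difficulty[eligible] - theta))]
        counts[order[:test_length]] += 1
    return counts / n_examinees          # exposure rate per item

for restricted in (False, True):
    rates = simulate_exposure(restricted)
    print(f"restricted={restricted}: max exposure={rates.max():.2f}, "
          f"items never used={np.mean(rates == 0):.0%}")
```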
Peer reviewed
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that measure student progress toward college and career readiness more accurately than previous assessments. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Steedle, Jeffrey; McBride, Malena; Johnson, Marc; Keng, Leslie – Partnership for Assessment of Readiness for College and Careers, 2016
The first operational administration of the Partnership for Assessment of Readiness for College and Careers (PARCC) took place during the 2014-2015 school year. In addition to the traditional paper-and-pencil format, the assessments were available for administration on a variety of electronic devices, including desktop computers, laptop computers,…
Descriptors: Computer Assisted Testing, Difficulty Level, Test Items, Scores
Peer reviewed
Wyse, Adam E.; Albano, Anthony D. – Applied Measurement in Education, 2015
This article used several data sets from a large-scale state testing program to examine the feasibility of combining general and modified assessment items in computerized adaptive testing (CAT) for different groups of students. Results suggested that several of the assumptions made when employing this type of mixed-item CAT may not be met for…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Testing Programs
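One assumption behind a mixed-item CAT is that items function comparably across the groups being combined. The sketch below shows a crude between-group difficulty comparison on simulated data; the response matrices, centering step, and flagging threshold are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch: flag items whose relative difficulty differs markedly between two
# student groups, a rough screen for one mixed-item CAT assumption.
n_items = 40
true_b = rng.normal(0, 1, n_items)
shift = np.zeros(n_items)
shift[:5] = 0.8                      # a few items behave differently for group B

def simulate_group(n, b_offsets):
    """Simulate an n x n_items 0/1 response matrix under a Rasch model."""
    theta = rng.normal(0, 1, n)[:, None]
    p = 1 / (1 + np.exp(-(theta - (true_b + b_offsets))))
    return (rng.random(p.shape) < p).astype(int)

def logit_difficulty(responses):
    """Logit of proportion incorrect, a crude stand-in for item difficulty."""
    p = responses.mean(axis=0).clip(0.02, 0.98)
    return np.log((1 - p) / p)

b_a = logit_difficulty(simulate_group(800, np.zeros(n_items)))
b_b = logit_difficulty(simulate_group(800, shift))

# Center each group's difficulties, then flag large between-group gaps.
gap = (b_b - b_b.mean()) - (b_a - b_a.mean())
flagged = np.where(np.abs(gap) > 0.5)[0]
print("items with a large between-group difficulty gap:", flagged)
```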
Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald – Behavioral Research and Teaching, 2012
In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…
Descriptors: Curriculum Based Assessment, Vocabulary, Language Tests, Test Construction
Peer reviewed
Flowers, Claudia; Kim, Do-Hong; Lewis, Preston; Davis, Violeta Carmen – Journal of Special Education Technology, 2011
This study examined the academic performance and preference of students with disabilities for two types of test administration conditions, computer-based testing (CBT) and pencil-and-paper testing (PPT). Data from a large-scale assessment program were used to examine differences between CBT and PPT academic performance for third to eleventh grade…
Descriptors: Testing, Test Items, Effect Size, Computer Assisted Testing
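A minimal sketch of the effect-size comparison such mode studies report, assuming simulated CBT and PPT scale scores and a pooled-standard-deviation Cohen's d; the numbers are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical CBT and PPT scale scores for two groups of examinees.
cbt = rng.normal(200, 15, 500)
ppt = rng.normal(198, 15, 500)

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(f"Cohen's d (CBT - PPT): {cohens_d(cbt, ppt):.3f}")
```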
Alonzo, Julie; Lai, Cheng Fei; Tindal, Gerald – Behavioral Research and Teaching, 2009
In this technical report, we describe the development and piloting of a series of mathematics progress monitoring measures intended for use with students in kindergarten through grade eight. These measures, available as part of easyCBM[TM], an online progress monitoring assessment system, were developed in 2007 and 2008 and administered to…
Descriptors: Grade 3, General Education, Response to Intervention, Access to Education