Peer reviewed
Zehner, Fabian; Goldhammer, Frank; Lubaway, Emily; Sälzer, Christine – Education Inquiry, 2019
In 2015, the "Programme for International Student Assessment" (PISA) introduced multiple changes in its study design, the most extensive being the transition from paper-based to computer-based assessment. We investigated the differences between German students' text responses to eight reading items from the paper-based study in 2012 to text…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that more accurately, compared to previous assessments, measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Peer reviewed
Flowers, Claudia; Kim, Do-Hong; Lewis, Preston; Davis, Violeta Carmen – Journal of Special Education Technology, 2011
This study examined the academic performance and preference of students with disabilities for two types of test administration conditions, computer-based testing (CBT) and pencil-and-paper testing (PPT). Data from a large-scale assessment program were used to examine differences between CBT and PPT academic performance for third to eleventh grade…
Descriptors: Testing, Test Items, Effect Size, Computer Assisted Testing
Peer reviewed
Kim, Do-Hong; Huynh, Huynh – Educational Assessment, 2010
This study investigated whether scores obtained from the online and paper-and-pencil administrations of the statewide end-of-course English test were equivalent for students with and without disabilities. Score comparability was evaluated by examining equivalence of factor structure (measurement invariance) and differential item and bundle…
Descriptors: Computer Assisted Testing, Language Tests, English, Scores
Peer reviewed
Chen, Li-Ju; Ho, Rong-Guey; Yen, Yung-Chin – Educational Technology & Society, 2010
This study aimed to explore the effects of marking and metacognition-evaluated feedback (MEF) in computer-based testing (CBT) on student performance and review behavior. Marking is a strategy in which students place a question mark next to a test item to indicate an uncertain answer. The MEF provided students with feedback on test results…
Descriptors: Feedback (Response), Test Results, Test Items, Testing