Showing all 6 results
Peer reviewed
Wildy, Helen; Styles, Irene – Australian Journal of Early Childhood, 2008
This paper reports an analysis of 2006-2007 on-entry assessment data from the Performance Indicators in Primary Schools Baseline Assessment (PIPS-BLA) of random samples of students in England, Scotland, New Zealand and Australia. The analysis aimed, first, to investigate the validity and reliability of that instrument across countries and sexes, and,…
Descriptors: National Competency Tests, Foreign Countries, Student Evaluation, Comparative Education
Peer reviewed
Threlfall, John; Pool, Peter; Homer, Matthew; Swinnerton, Bronwen – Educational Studies in Mathematics, 2007
This article explores the effect on assessment of "translating" paper-and-pencil test items into their computer equivalents. Computer versions of a set of mathematics questions derived from the paper-based end of key stage 2 and 3 assessments in England were administered to age-appropriate pupil samples, and the outcomes compared.…
Descriptors: Test Items, Student Evaluation, Foreign Countries, Test Validity
Peer reviewed
Thin, Alasdair G. – Bioscience Education e-Journal, 2006
It is not what is taught that has the most influence on students' study behaviour, but rather what is assessed. Computer-assisted assessment offers the possibility of widening the scope of the material that is assessed, without placing excessive burdens on either staff or students. This article describes a computer-assisted assessment scheme…
Descriptors: Physiology, Anatomy, Teaching Methods, Computer Assisted Testing
Peer reviewed (PDF full text on ERIC)
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed (PDF full text on ERIC)
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
Peer reviewed (PDF full text on ERIC)
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness