Showing all 9 results
Peer reviewed
Park, Jooyong – British Journal of Educational Technology, 2010
This article introduces the newly developed computerized Constructive Multiple-choice Testing system, which combines the short answer (SA) and multiple-choice (MC) formats by asking examinees to respond to the same question twice, first in the SA format and then in the MC format. This manipulation was employed to collect information about the two…
Descriptors: Grade 5, Evaluation Methods, Multiple Choice Tests, Scores
Peer reviewed
Hardre, Patricia L.; Crowson, H. Michael; Xie, Kui – Journal of Educational Computing Research, 2010
Questionnaire instruments are routinely translated to digital administration systems; however, few studies have compared the differential effects of these administration methods, and fewer still in authentic contexts of use. In this study, 326 university students were randomly assigned to one of two administration conditions, paper-based (PBA) or…
Descriptors: Internet, Computer Assisted Testing, Questionnaires, College Students
Peer reviewed
Bottge, Brian A.; Rueda, Enrique; Kwon, Jung Min; Grant, Timothy; LaRoque, Perry – Educational Technology Research and Development, 2009
The purpose of this randomized experiment was to compare the performance of high-, average-, and low-achieving middle school students who were assessed with parallel versions of a computer-based test (CBT) or a paper-pencil test (PPT). Tests delivered in interactive, immersive environments like the CBT may have the advantage of providing teachers…
Descriptors: Teaching Methods, Problem Solving, Middle School Students, Mathematics Instruction
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
PDF on ERIC
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE General Test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
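Gu, Drake, and Wolfe's question, whether item difficulty shifts between delivery media, is the classic differential item functioning setup. As a generic illustration only, not the authors' procedure, the Python sketch below computes the Mantel-Haenszel common odds ratio for a single item, matching paper and computer examinees on total score; the function names are invented and the data are assumed to be 0/1 response arrays.

```python
import numpy as np

def mantel_haenszel_or(correct, group, total_score):
    """Mantel-Haenszel common odds ratio for one item.

    correct     -- 0/1 per examinee (item right/wrong)
    group       -- 0 = reference (paper), 1 = focal (computer)
    total_score -- matching variable used to form ability strata
    """
    num = den = 0.0
    for s in np.unique(total_score):
        m = total_score == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # paper, correct
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # paper, incorrect
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # computer, correct
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # computer, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den

def mh_delta(odds_ratio):
    """ETS delta metric; |delta| >= 1.5 is conventionally flagged as large DIF."""
    return -2.35 * np.log(odds_ratio)
```

An odds ratio above 1 (negative delta) means the item was easier for the paper group at matched ability, which is the kind of medium effect an item-feature analysis like this study's would then try to explain.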
Peer reviewed
PDF on ERIC
Poggio, John; Glasnapp, Douglas R.; Yang, Xiangdong; Poggio, Andrew J. – Journal of Technology, Learning, and Assessment, 2005
The present study reports results from a quasi-controlled empirical investigation of the impact on student test scores of using fixed-form computer-based testing (CBT) versus paper-and-pencil (P&P) testing as the delivery mode for assessing student mathematics achievement in a state's large-scale assessment program. Grade 7 students…
Descriptors: Mathematics Achievement, Measures (Individuals), Program Effectiveness, Measurement
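For a mode comparison like Poggio et al.'s, the basic quantity is the difference in mean scores between the CBT and P&P groups. The sketch below is a generic illustration, not the study's actual analysis: it runs a Welch t-test and reports a standardized effect size, with hypothetical score arrays standing in for real data.

```python
import numpy as np
from scipy import stats

def mode_effect(cbt_scores, pp_scores):
    """Compare CBT vs. P&P score distributions (Welch t-test + Cohen's d)."""
    t, p = stats.ttest_ind(cbt_scores, pp_scores, equal_var=False)
    pooled_sd = np.sqrt((np.var(cbt_scores, ddof=1) +
                         np.var(pp_scores, ddof=1)) / 2)
    d = (np.mean(cbt_scores) - np.mean(pp_scores)) / pooled_sd
    return t, p, d

# Hypothetical scores on a 0-60 scale, one array per delivery mode.
rng = np.random.default_rng(0)
cbt = rng.normal(42, 8, size=300)
pp = rng.normal(43, 8, size=300)
print(mode_effect(cbt, pp))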
Peer reviewed
PDF on ERIC
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question-answering strategies. In this project, 104 eleven-year-olds were given two sets of matched mathematics questions, one set online and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
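A facility value is simply the proportion of examinees answering an item correctly, so Johnson and Green's mode comparison reduces to differencing per-item proportions. A minimal sketch, assuming 0/1 response matrices (examinees x items) per mode, with hypothetical data:

```python
import numpy as np

def facility_values(responses):
    """Facility (p) value per item: proportion answering correctly."""
    return responses.mean(axis=0)

rng = np.random.default_rng(1)
paper = rng.binomial(1, 0.70, size=(104, 20))    # hypothetical 0/1 scores
online = rng.binomial(1, 0.64, size=(104, 20))
gap = facility_values(paper) - facility_values(online)
print(np.round(gap, 2))  # positive entries: item easier on paper
```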
Peer reviewed
PDF on ERIC
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness
Peer reviewed
PDF on ERIC
Dolan, Robert P.; Hall, Tracey E.; Banerjee, Manju; Chun, Euljung; Strangman, Nicole – Journal of Technology, Learning, and Assessment, 2005
Standards-based reform efforts are highly dependent on accurate assessment of all students, including those with disabilities. The accuracy of current large-scale assessments is undermined by construct-irrelevant factors, including access barriers, which pose a particular problem for students with disabilities. Testing accommodations such as the read-aloud…
Descriptors: United States History, Testing Accommodations, Test Content, Learning Disabilities