Showing all 11 results
Peer reviewed
Laprise, Shari L. – College Teaching, 2012
Successful exam composition can be a difficult task. Exams should not only assess student comprehension but also serve as learning tools in and of themselves. In a biotechnology course delivered to nonmajors at a business college, objective multiple-choice test questions often require students to choose the exception or "not true" choice. Anecdotal student…
Descriptors: Feedback (Response), Test Items, Multiple Choice Tests, Biotechnology
Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed. – International Association for the Evaluation of Educational Achievement, 2013
The TIMSS 2011 International Database includes data for all questionnaires administered as part of the TIMSS 2011 assessment. This supplement contains the international version of the TIMSS 2011 background questionnaires and curriculum questionnaires in the following 10 sections: (1) Fourth Grade Student Questionnaire; (2) Fourth Grade Home…
Descriptors: Background, Questionnaires, Test Items, Grade 4
Peer reviewed
Kato, Kentaro; Moen, Ross E.; Thurlow, Martha L. – Educational Measurement: Issues and Practice, 2009
Large data sets from a state reading assessment for third and fifth graders were analyzed to examine differential item functioning (DIF), differential distractor functioning (DDF), and differential omission frequency (DOF) between students with particular categories of disabilities (speech/language impairments, learning disabilities, and emotional…
Descriptors: Learning Disabilities, Language Impairments, Behavior Disorders, Affective Behavior
Peer reviewed
Threlfall, John; Pool, Peter; Homer, Matthew; Swinnerton, Bronwen – Educational Studies in Mathematics, 2007
This article explores the effect on assessment of "translating" paper-and-pencil test items into their computer equivalents. Computer versions of a set of mathematics questions derived from the paper-based end of Key Stage 2 and 3 assessments in England were administered to age-appropriate pupil samples, and the outcomes compared.…
Descriptors: Test Items, Student Evaluation, Foreign Countries, Test Validity
Peer reviewed
Beaton, Albert E.; Allen, Nancy L. – Journal of Educational Statistics, 1992
The National Assessment of Educational Progress (NAEP) makes possible comparison of groups of students and provides information about what these groups know and can do. The scale anchoring techniques described in this chapter address the latter purpose. The direct method and the smoothing method of scale anchoring are discussed. (SLD)
Descriptors: Comparative Testing, Educational Assessment, Elementary Secondary Education, Knowledge Level
Peer reviewed
Yamamoto, Kentaro; Mazzeo, John – Journal of Educational Statistics, 1992
The need for scale linking in the National Assessment of Educational Progress (NAEP) is discussed, and the specific procedures used to carry out the linking in the context of the major analyses of the 1990 NAEP mathematics assessment are described. Issues remaining to be addressed are outlined. (SLD)
Descriptors: Comparative Testing, Educational Assessment, Elementary Secondary Education, Equated Scores
Lett, Nancy J.; Kamphaus, Randy W. – 1992
Results of the Behavior Assessment System for Children (BASC) Student Observation Scale (SOS), a measure of classroom behavior, were correlated with results of the BASC Teacher Rating Scale (TRS). Two classroom observations were made of each of 30 students (21 males and 9 females) aged 5 to 11 years. Teachers of those students completed the TRS.…
Descriptors: Children, Classroom Observation Techniques, Classroom Research, Comparative Testing
Peer reviewed
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
Peer reviewed
Jones, Allan – Journal of Geography in Higher Education, 1997
Examines the increase in popularity of objective testing in the United Kingdom and addresses some of the accompanying academic issues. Reports on a case study of test production and implementation to illustrate issues of time costs and benefits. Discusses question styles, marking schemes, and the problem of guesswork. (MJP)
Descriptors: Comparative Testing, Educational Practices, Educational Trends, Foreign Countries
Peer reviewed
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness