Showing 1 to 15 of 19 results
Santi Lestari – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain paper-based. Insufficient and unequal digital provision across schools is often identified as a major barrier to the full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Peer reviewed
Direct link
Kieftenbeld, Vincent; Boyer, Michelle – Applied Measurement in Education, 2017
Automated scoring systems are typically evaluated by comparing the performance of a single automated rater item-by-item to human raters. This presents a challenge when the performance of multiple raters needs to be compared across multiple items. Rankings could depend on specifics of the ranking procedure; observed differences could be due to…
Descriptors: Automation, Scoring, Comparative Analysis, Test Items
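The Kieftenbeld and Boyer (2017) entry turns on the point that rankings of automated raters can depend on the specifics of the ranking procedure. The sketch below uses entirely hypothetical engines and simulated scores, not the authors' data or method, to illustrate how aggregating per-item agreement by mean value versus by mean within-item rank can yield different orderings.

```python
# Illustrative sketch (not the authors' method): comparing several automated
# raters to human scores across items, and showing that the final ranking can
# depend on how per-item agreement is aggregated. All data and names here are
# hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n_items, n_responses = 5, 200
raters = ["engine_A", "engine_B", "engine_C"]   # hypothetical automated raters

# Simulate human scores (0-4 scale) and automated scores with rater- and
# item-specific noise so that no engine dominates on every item.
human = rng.integers(0, 5, size=(n_items, n_responses)).astype(float)
auto = {}
for name in raters:
    noise_sd = 0.5 + 0.3 * rng.random(n_items)          # varies by item
    auto[name] = np.clip(human + rng.normal(0, noise_sd[:, None],
                                            size=human.shape), 0, 4)

# Per-item agreement: Pearson correlation between automated and human scores.
agreement = {name: np.array([np.corrcoef(auto[name][i], human[i])[0, 1]
                             for i in range(n_items)])
             for name in raters}

# Aggregation procedure 1: rank raters by mean agreement across items.
by_mean = sorted(raters, key=lambda r: -agreement[r].mean())

# Aggregation procedure 2: rank raters by average within-item rank.
per_item_rank = np.zeros((len(raters), n_items))
for i in range(n_items):
    order = np.argsort([-agreement[r][i] for r in raters])
    for rank, idx in enumerate(order):
        per_item_rank[idx, i] = rank + 1
by_rank = sorted(raters, key=lambda r: per_item_rank[raters.index(r)].mean())

print("ranking by mean agreement:", by_mean)
print("ranking by mean item rank:", by_rank)   # may differ from the above
```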
Peer reviewed
Direct link
King, Rosemary; Blayney, Paul; Sweller, John – Accounting Education, 2021
This study offers evidence of the impact of language background on the performance of students enrolled in an accounting study unit. It aims to quantify the effects of language background on performance in essay questions, compared with calculation questions requiring the application of procedures. Marks were collected from 2,850 students. The results…
Descriptors: Cognitive Ability, Accounting, Native Language, Second Language Learning
Peer reviewed
Download full text (PDF on ERIC)
Tarun, Prashant; Krueger, Dale – Journal of Learning in Higher Education, 2016
In the United States system of education, the use of student evaluations grew from 29% to 86% between 1973 and 1993, which in turn has increased the importance of student evaluations in faculty retention, tenure, and promotion. However, the impact student evaluations have had on student academic development generates complex educational…
Descriptors: Critical Thinking, Teaching Methods, Multiple Choice Tests, Essay Tests
Peer reviewed
Download full text (PDF on ERIC)
Kadir, Abdul; Ardi, Muhammad; Nurhayati, B.; Dirawan, Gufran Darma – International Education Studies, 2016
The objective of this study was to examine the relationship of formative tests to the early learning ability of students in the science learning style. This research used an experimental method with a 2 x 2 factorial design. The participants comprised all the students in class VII of the State Islamic Junior High School of Kolaka, a total of 343…
Descriptors: Foreign Countries, Junior High School Students, Formative Evaluation, Academic Ability
Peer reviewed
Direct link
Tetteh, Godson Ayertei; Sarpong, Frederick Asafo-Adjei – Journal of International Education in Business, 2015
Purpose: The purpose of this paper is to explore the influence of constructivism on assessment approach, where the type of question (true or false, multiple-choice, calculation or essay) is used productively. Although the student's approach to learning and the teacher's approach to teaching are concepts that have been widely researched, few…
Descriptors: Foreign Countries, Outcomes of Education, Student Evaluation, Test Format
Peer reviewed
Direct link
Pepple, Dagogo J.; Young, Lauriann E.; Carroll, Robert G. – Advances in Physiology Education, 2010
This retrospective study compared the performance of preclinical medical students in the multiple-choice question (MCQ) and long essay question components of a comprehensive physiology final examination. During the 3 yr analyzed, 307 students had an average score of 47% (SD 9.9) in the long essay questions and 64% (SD 9.9) in the MCQs. Regression…
Descriptors: Essay Tests, Multiple Choice Tests, Physiology, Comparative Analysis
Peer reviewed
Direct link
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that such a choice might not be "fair": that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Peer reviewed
Direct link
Narloch, Rodger; Garbin, Calvin P.; Turnage, Kimberly D. – Teaching of Psychology, 2006
We investigated the use of quizzes administered prior to lecture (i.e., prelecture quizzes) and compared them to no-quiz control groups. In previous research, the success of administering quizzes after covering a topic (i.e., postlecture quizzes) was contingent on the quizzes and the subsequent exams being of similar level and content. However,…
Descriptors: Test Format, Lecture Method, Multiple Choice Tests, Essay Tests
Svinicki, Marilla; Koch, Bill – Innovation Abstracts, 1984
The decision of whether to use essay tests or multiple choice tests depends on several qualifiers related to the different characteristics of the tests and the needs of the situation. The most important qualifier involves matching the type of test to the instructional objectives being tested, with multiple choice tests being used to measure a…
Descriptors: Comparative Analysis, Essay Tests, Multiple Choice Tests, Test Format
Peer reviewed
Direct link
Swartz, Stephen M. – Journal of Education for Business, 2006
The confidence-level (information-referenced testing, or IRT) design attempts to improve on the multiple-choice format by allowing students to express a level of confidence in the answers they choose. In this study, the author evaluated student perceptions of ease of use, accuracy, and general preference for traditional multiple…
Descriptors: Multiple Choice Tests, Essay Tests, Graduate Students, Student Attitudes
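The Swartz (2006) entry describes a confidence-level design in which students attach a confidence level to each answer they choose. The abstract does not specify the scoring rule, so the sketch below uses an assumed payoff table purely to illustrate how such confidence-weighted scoring might work.

```python
# Hypothetical confidence-weighted scoring rule, illustrating the general idea
# behind confidence-level ("information-referenced") testing described in the
# Swartz (2006) abstract. The actual scoring scheme used in that study is not
# given here; the weights below are assumptions for illustration only.

# (answer_correct, stated_confidence) pairs for one student; confidence is
# "high", "medium", or "low".
responses = [(True, "high"), (True, "low"), (False, "high"), (False, "medium")]

# Assumed payoff table: confident correct answers earn more, confident wrong
# answers are penalised more.
PAYOFF = {
    ("high",   True):  1.00, ("high",   False): -0.50,
    ("medium", True):  0.75, ("medium", False): -0.25,
    ("low",    True):  0.50, ("low",    False):  0.00,
}

def confidence_weighted_score(responses):
    """Sum the payoff for each (confidence, correctness) pair."""
    return sum(PAYOFF[(conf, correct)] for correct, conf in responses)

print(confidence_weighted_score(responses))   # 1.0 + 0.5 - 0.5 - 0.25 = 0.75
```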
Zopp, William B. – 1998
The purpose of this study was to compare two different testing styles to see how they relate to retention of information. The study was conducted during the first semester of the 1996-97 school year at Greenbriar East High School in Lewisburg, West Virginia. The students in the study were in two Biology II classes, one with an enrollment of 26…
Descriptors: Academic Achievement, Biology, Comparative Analysis, Essay Tests
Peer reviewed
Frary, Robert B. – Journal of Educational Measurement, 1985
Responses to a sample test were simulated for examinees under free-response and multiple-choice formats. Test score sets were correlated with randomly generated sets of unit-normal measures. The superiority of free-response tests was sufficiently small that other considerations might justifiably dictate format choice. (Author/DWH)
Descriptors: Comparative Analysis, Computer Simulation, Essay Tests, Guessing (Tests)
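The Frary (1985) entry describes a simulation comparing the validity of free-response and multiple-choice score sets against unit-normal criterion measures. The sketch below is a loose illustration of that kind of simulation; the item model, test length, and guessing rate are assumptions, not Frary's actual design.

```python
# A loose sketch of the kind of simulation the Frary (1985) abstract describes:
# comparing how strongly free-response and multiple-choice total scores
# correlate with an external unit-normal criterion. The item model, guessing
# rate, and test length below are assumptions, not Frary's actual design.
import numpy as np

rng = np.random.default_rng(42)
n_examinees, n_items = 2000, 40
guess_p = 0.25                    # assumed chance of a lucky guess (4 options)

theta = rng.normal(size=n_examinees)                     # latent ability
criterion = 0.7 * theta + np.sqrt(1 - 0.7**2) * rng.normal(size=n_examinees)

difficulty = rng.normal(size=n_items)
p_know = 1 / (1 + np.exp(-(theta[:, None] - difficulty[None, :])))  # Rasch-like

# Free-response: an item is answered correctly only if the examinee "knows" it.
fr_correct = rng.random((n_examinees, n_items)) < p_know
fr_score = fr_correct.sum(axis=1)

# Multiple-choice: unknown items can still be guessed correctly.
mc_correct = fr_correct | (rng.random((n_examinees, n_items)) < guess_p)
mc_score = mc_correct.sum(axis=1)

print("free-response validity  :", np.corrcoef(fr_score, criterion)[0, 1])
print("multiple-choice validity:", np.corrcoef(mc_score, criterion)[0, 1])
```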
Kemerer, Richard; Wahlstrom, Merlin – Performance and Instruction, 1985
Compares the features, learning outcomes tested, reliability, viability, and cost effectiveness of essay tests with those of interpretive tests used in training programs. A case study illustrating how an essay test was converted to an interpretive test and pilot tested is included to illustrate the advantages of interpretive testing. (MBR)
Descriptors: Case Studies, Comparative Analysis, Cost Effectiveness, Essay Tests
Peer reviewed
Cirn, John T. – College Teaching, 1986
The actual final grade distribution is compared with what the distributions would have been had only the true/false components or only the short-answer components been used. Responses to a course evaluation survey asking students to compare the two types of questions are also summarized. (MLW)
Descriptors: College Students, Comparative Analysis, Constructed Response, Course Evaluation
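The Cirn (1986) entry compares the actual grade distribution with the distributions that would have resulted from using only one question type. The sketch below recreates that kind of counterfactual comparison with hypothetical component scores and assumed grade cut-offs.

```python
# Illustrative sketch of the comparison the Cirn (1986) abstract describes:
# recomputing letter-grade distributions as if only the true/false component
# or only the short-answer component had counted. The students, component
# scores, and grade cut-offs below are hypothetical.
import random
from collections import Counter

random.seed(1)
CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (float("-inf"), "F")]

def letter(percent):
    """Map a percentage score to an assumed letter-grade scale."""
    return next(grade for cut, grade in CUTOFFS if percent >= cut)

# Hypothetical per-student percentage scores on each exam component.
students = [{"tf": random.gauss(78, 10), "sa": random.gauss(72, 14)}
            for _ in range(120)]

actual  = Counter(letter(0.5 * s["tf"] + 0.5 * s["sa"]) for s in students)
tf_only = Counter(letter(s["tf"]) for s in students)
sa_only = Counter(letter(s["sa"]) for s in students)

for name, dist in [("actual (50/50 weight)", actual),
                   ("true/false only", tf_only),
                   ("short answer only", sa_only)]:
    print(f"{name:22s}", dict(sorted(dist.items())))
```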