Showing all 7 results
Peer reviewed
Jancarík, Antonín; Kostelecká, Yvona – Electronic Journal of e-Learning, 2015
Electronic testing has become a regular part of online courses. Most learning management systems offer a wide range of tools that can be used in electronic tests. With respect to time demands, the most efficient tools are those that allow automatic assessment. The presented paper focuses on one of these tools: matching questions in which one…
Descriptors: Online Courses, Computer Assisted Testing, Test Items, Scoring Formulas
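The abstract breaks off before the scoring formula itself is described; purely as an illustration, a matching question under automatic assessment is often given proportional credit, one share per correctly matched pair. A minimal sketch under that assumption (the function name and the credit rule are mine, not the paper's):

```python
def score_matching_item(response: dict, key: dict) -> float:
    """Proportional credit for a matching question: the fraction of
    left-hand items paired with their keyed right-hand partner."""
    correct = sum(1 for item, match in response.items() if key.get(item) == match)
    return correct / len(key)

# Example: three of four pairs matched correctly -> 0.75
key = {"A": 1, "B": 2, "C": 3, "D": 4}
response = {"A": 1, "B": 2, "C": 4, "D": 3}
print(score_matching_item(response, key))  # 0.75
```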
Plake, Barbara S.; And Others – 1980
Number-right and elimination scores were analyzed on a 48-item college-level mathematics test that was assembled from pretest data in three forms by varying the item orderings: easy-hard, uniform, or random. Half of the forms contained information explaining the item arrangement and suggesting strategies for taking the test. Several anxiety…
Descriptors: Difficulty Level, Higher Education, Multiple Choice Tests, Quantitative Tests
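For readers unfamiliar with the two scoring rules being compared: number-right scoring simply counts keyed answers, while elimination scoring credits each incorrect option a student crosses out and penalizes eliminating the keyed option. A sketch under common textbook weightings (the study's exact weights are not given in the abstract):

```python
def number_right(choices, key):
    """Number-right score: one point per item answered with the keyed option."""
    return sum(1 for c, k in zip(choices, key) if c == k)

def elimination_score(eliminations, key, n_options=4):
    """Elimination score: +1 per incorrect option eliminated,
    -(n_options - 1) if the keyed option is eliminated (a common weighting)."""
    score = 0
    for crossed_out, k in zip(eliminations, key):
        for option in crossed_out:
            score += -(n_options - 1) if option == k else 1
    return score

key = ["b", "d"]
print(number_right(["b", "c"], key))                          # 1
print(elimination_score([{"a", "c"}, {"a", "b", "d"}], key))  # 2 - 1 = 1
```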
Peer reviewed
Burton, Richard F. – Assessment & Evaluation in Higher Education, 2004
The standard error of measurement usefully provides confidence limits for scores in a given test, but is it possible to quantify the reliability of a test with just a single number that allows comparison of tests of different format? Reliability coefficients do not do this, being dependent on the spread of examinee attainment. Better in this…
Descriptors: Multiple Choice Tests, Error of Measurement, Test Reliability, Test Items
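For context, using standard psychometric definitions rather than anything specific to Burton's paper: the standard error of measurement is SEM = SD * sqrt(1 - r), where r is a reliability coefficient, and an approximate 95% band for an observed score x is x ± 1.96 * SEM. A small sketch:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def confidence_limits(score: float, sd: float, reliability: float, z: float = 1.96):
    """Approximate 95% confidence band for an observed score."""
    e = sem(sd, reliability)
    return score - z * e, score + z * e

# Example: SD = 10, reliability 0.84 -> SEM = 4.0, band 42.16 to 57.84 around 50
print(confidence_limits(50, 10, 0.84))
```

Note that r, and with it any comparison of tests based on reliability coefficients, depends on the spread of attainment in the examinee group, which is the limitation the abstract raises.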
Lowry, Stephen R. – 1979
A specially designed answer format was used for three tests in a college-level agriculture class of 19 students to record three responses for each item: (1) the student's choice of the best answer; (2) the degree of certainty with which the answer was chosen; and (3) all the answer choices which the student was certain were incorrect.…
Descriptors: Achievement Tests, Confidence Testing, Guessing (Tests), Higher Education
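The abstract records what was captured but not how it was scored; as a purely hypothetical illustration of how such three-part responses might be represented and combined, assuming certainty-weighted credit plus partial credit for correctly eliminated options (none of these weights come from the study):

```python
from dataclasses import dataclass

@dataclass
class ItemResponse:
    best_answer: str   # the option chosen as best
    certainty: float   # 0.0 (guessing) to 1.0 (certain)
    eliminated: set    # options the student was certain were wrong

def composite_score(r: ItemResponse, key: str, options: set) -> float:
    """Hypothetical scoring: certainty-weighted credit for the chosen answer,
    partial credit per distractor correctly ruled out, and a penalty for
    ruling out the key. Not the weighting used in the study."""
    score = r.certainty if r.best_answer == key else -0.5 * r.certainty
    for opt in r.eliminated:
        score += -1.0 if opt == key else 1.0 / (len(options) - 1)
    return score

resp = ItemResponse(best_answer="c", certainty=0.8, eliminated={"a", "d"})
print(composite_score(resp, key="c", options={"a", "b", "c", "d"}))  # ~1.47
```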
Lawrence, Ida M.; Schmidt, Amy Elizabeth – College Entrance Examination Board, 2001
The SAT® I: Reasoning Test is administered seven times a year. Primarily for security purposes, several different test forms are given at each administration. How is it possible to compare scores obtained from different test forms and from different test administrations? The purpose of this paper is to provide an overview of the statistical…
Descriptors: Scores, Comparative Analysis, Standardized Tests, College Entrance Examinations
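The abstract stops before the statistics, but comparability across forms is classically achieved by equating; one textbook method is linear (mean-sigma) equating, which maps a form-X score onto the form-Y scale by matching means and standard deviations. A sketch of that standard method, not necessarily the procedure the paper reviews:

```python
from statistics import mean, stdev

def linear_equate(x: float, form_x_scores, form_y_scores) -> float:
    """Mean-sigma linear equating: place score x from form X onto the
    form-Y scale so the two forms' means and SDs line up."""
    mx, sx = mean(form_x_scores), stdev(form_x_scores)
    my, sy = mean(form_y_scores), stdev(form_y_scores)
    return my + (sy / sx) * (x - mx)

# Toy example: form X is slightly harder (lower mean), so a raw 30 on X
# maps to a higher equivalent score on the form-Y scale.
form_x = [22, 25, 28, 30, 33, 36]
form_y = [25, 28, 31, 33, 36, 39]
print(linear_equate(30, form_x, form_y))  # 33.0
```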
Lenel, Julia C.; Gilmer, Jerry S. – 1986
In some testing programs an early item analysis is performed before final scoring in order to validate the intended keys. As a result, some items which are flawed and do not discriminate well may be keyed so as to give credit to examinees no matter which answer was chosen. This is referred to as all-keying. This research examined how varying the…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Licensing Examinations (Professions)
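As a rough sketch of the screening step described, with assumed details (the point-biserial discrimination index and the cutoff value are mine): a preliminary item analysis can flag non-discriminating items so that they are all-keyed, i.e. every response receives credit:

```python
def point_biserial(item_scores, total_scores):
    """Correlation between a 0/1 item score and the total test score,
    used as a discrimination index."""
    n = len(item_scores)
    mean_t = sum(total_scores) / n
    sd_t = (sum((t - mean_t) ** 2 for t in total_scores) / n) ** 0.5
    p = sum(item_scores) / n
    if p in (0.0, 1.0) or sd_t == 0:
        return 0.0
    mean_right = sum(t for i, t in zip(item_scores, total_scores) if i) / sum(item_scores)
    return (mean_right - mean_t) / sd_t * (p / (1 - p)) ** 0.5

def flag_for_all_keying(item_scores, total_scores, cutoff=0.10):
    """Flag an item for all-keying (credit regardless of answer)
    when its discrimination falls below a chosen cutoff."""
    return point_biserial(item_scores, total_scores) < cutoff

item = [1, 0, 1, 1, 0, 0]
totals = [40, 22, 38, 35, 30, 25]
print(flag_for_all_keying(item, totals))  # False: this item discriminates well
```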
Rippey, Robert M. – 1971
Technical improvements that may be made in the reliability and validity of tests through confidence scoring are discussed. However, studies indicate that subjects do not handle their confidence uniformly. (MS)
Descriptors: Computer Programs, Confidence Testing, Correlation, Difficulty Level