Showing 1 to 15 of 62 results
Peer reviewed
Horák, Tania; Gandini, Elena – Research-publishing.net, 2019
This paper reports on the proposed transfer of a paper-based English proficiency exam to an online platform. We discuss both the anticipated advantages, which were the impetus for the project, and some emergent benefits, which prompted an in-depth analysis and reconceptualisation of the exam's role, which in turn we hope will…
Descriptors: Second Language Learning, Second Language Instruction, Feedback (Response), Computer Assisted Testing
Wielicki, Tom – International Association for Development of the Information Society, 2016
This paper reports on a longitudinal study of the integrity of testing in an online format as used by e-learning platforms. Specifically, this study examines whether online testing, which implies an open-book format, compromises the integrity of assessment by encouraging cheating among students. A statistical experiment designed for this study…
Descriptors: Integrity, Online Courses, Statistical Surveys, Longitudinal Studies
Hambleton, Ronald K.; Simon, Robert A. – 1980
The construction of criterion-referenced tests is often researched, but many technical problems remain to be satisfactorily resolved. Foremost, criterion-referenced test developers need a comprehensive set of steps for test construction. In this paper, 14 logical steps for building criterion-referenced tests that refer to several different…
Descriptors: Criterion Referenced Tests, Cutting Scores, Guidelines, Scoring
Ferguson, William F. – 1983
College undergraduates (n=38) were administered identical multiple-choice tests with randomly presented answer sheets numbered either vertically or horizontally. Of the four tests originally scheduled during the semester, tests one and three were retested with entirely different test questions, also multiple choice, resulting in scores from tests,…
Descriptors: Answer Sheets, Cheating, Higher Education, Multiple Choice Tests
Plake, Barbara S.; And Others – 1983
Differential test performance by undergraduate males and females enrolled in a developmental educational psychology course (n=167) was reported on a quantitative examination as a function of item arrangement. Males were expected to perform better than females on tests whose items were arranged easy to hard. Plake and Ansorge (1982) speculated this may…
Descriptors: Difficulty Level, Feedback, Higher Education, Scoring
Peer reviewed
Woodard, John L.; Axelrod, Bradley N. – Psychological Assessment, 1995
Using 308 patients referred for neuropsychological evaluation, 2 regression equations were developed to predict weighted raw score sums for General Memory and Delayed Recall using the Wechsler Memory Scale-Revised (WMS-R) analogs of 5 subtests from the original WMS. The equations may help reduce WMS-R administration time. (SLD)
Descriptors: Equations (Mathematics), Memory, Neuropsychology, Patients
Edwards, John; McCombie, Randy – 1983
The major purpose of the three studies reported here was to investigate possible differences in agreement/disagreement with attitude statements as a function of their type (with regard to positivity/negativity) and personalism. In the first study, 90 students completed scales on energy conservation and on having good study habits. Agreement varied…
Descriptors: Attitude Measures, Higher Education, Response Style (Tests), Semantic Differential
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine whether an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level
Rubin, Lois S.; Mott, David E. W. – 1984
The effect of an item's position within a test on its difficulty value was investigated. Using a 60-item operational test comprised of 5 subtests, 60 items were placed as experimental items on a number of spiralled test forms in three different positions (first, middle, last) within the subtest composed of like items.…
Descriptors: Difficulty Level, Item Analysis, Minimum Competency Testing, Reading Tests
Peer reviewed
Downing, Steven M. – Educational Measurement: Issues and Practice, 1992
Research on true-false (TF), multiple-choice, and alternate-choice (AC) tests is reviewed, discussing the strengths, weaknesses, and usefulness of each in classroom and large-scale testing. Recommendations are made for improving the use of AC items to overcome some of the problems associated with TF items. (SLD)
Descriptors: Comparative Analysis, Educational Research, Multiple Choice Tests, Objective Tests
Doss, David; Ligon, Glynn – 1985
Upon learning that a form of the Sequential Tests of Educational Progress was incorrectly distributed to an unidentified number of high school students along with an answer sheet pregridded with an alternate test form, the Austin Independent School District performed the following research analyses: (1) scored the tests using the key for each…
Descriptors: Educational Testing, Error of Measurement, Latent Trait Theory, Measurement Techniques
Slem, Charles M. – 1981
Over the years, many criticisms have been leveled against the multiple-choice test format: such tests can be ambiguous, emphasize isolated information, and are among the most difficult objective tests to construct. Over-interpretation is also a danger of multiple-choice examinations, with students picking subtle answers the test makers consider incorrect. Yet, the…
Descriptors: Constructed Response, Essay Tests, Higher Education, Multiple Choice Tests
Siskind, Theresa G.; Anderson, Lorin W. – 1982
The study was designed to examine the similarity of response options generated by different item writers using a systematic approach to item writing. The similarity of those response options to student responses to the same item stems presented in an open-ended format was also examined. A non-systematic (subject matter expertise) approach and a…
Descriptors: Algorithms, Item Analysis, Multiple Choice Tests, Quality Control
Brittain, Mary M.; Brittain, Clay V. – 1981
A behavioral domain is well-defined when it is clear to both test developers and test users which categories of performance should or should not be considered for potential test items. Only those tests that are keyed to well-defined domains meet the definition of criterion-referenced tests. The greatest proliferation of criterion-referenced tests…
Descriptors: Criterion Referenced Tests, Reading Achievement, Reading Tests, Test Construction
Brown, Pamela J.; Augustine, Andy – 2001
Whether assessment items administered using screen-reading software measure student learning better than assessment items in a paper-and-pencil format was studied. Using a computer to present a test orally controls for standardization of administration and allows each student to complete the assessment at his or her own pace. In this study, 96…
Descriptors: Academic Accommodations (Disabilities), Computer Software, Educational Testing, High School Students