Showing all 15 results
Peer reviewed
Karagöl, Efecan – Journal of Language and Linguistic Studies, 2020
Turkish and Foreign Languages Research and Application Center (TÖMER) is one of the important institutions for learning Turkish as a foreign language. In these institutions, proficiency tests are applied at the end of each level. However, test practices vary from one TÖMER to another, as there is no shared program in teaching Turkish as a…
Descriptors: Language Tests, Turkish, Language Proficiency, Second Language Learning
Peer reviewed
Camilli, Gregory – Educational Research and Evaluation, 2013
In the attempt to identify or prevent unfair tests, both quantitative analyses and logical evaluation are often used. For the most part, fairness evaluation is a pragmatic attempt at determining whether procedural or substantive due process has been accorded to either a group of test takers or an individual. In both the individual and comparative…
Descriptors: Alternative Assessment, Test Bias, Test Content, Test Format
Rubin, Lois S.; Mott, David E. W. – 1984
An investigation was made of the effect of an item's position within a test on its difficulty value. Using a 60-item operational test composed of 5 subtests, 60 items were placed as experimental items on a number of spiralled test forms in three different positions (first, middle, last) within the subtest composed of like items.…
Descriptors: Difficulty Level, Item Analysis, Minimum Competency Testing, Reading Tests
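The abstract stops short of the analysis itself; as a rough illustration of the kind of tabulation a position study like this involves, the Python sketch below computes classical item difficulty (the proportion answering correctly) for a single item under each placement. The data layout, sample responses, and position labels are hypothetical, not taken from the Rubin and Mott report.

    # Minimal sketch (hypothetical data): classical item difficulty, i.e. the
    # proportion of examinees answering correctly, for one experimental item
    # administered in three different positions within a subtest.
    from collections import defaultdict

    # Each record is (position_label, answered_correctly) for one examinee.
    responses = [
        ("first", 1), ("first", 0), ("first", 1),
        ("middle", 1), ("middle", 1), ("middle", 0),
        ("last", 0), ("last", 1), ("last", 0),
    ]

    totals = defaultdict(lambda: [0, 0])   # position -> [n_correct, n_total]
    for position, correct in responses:
        totals[position][0] += correct
        totals[position][1] += 1

    for position in ("first", "middle", "last"):
        n_correct, n_total = totals[position]
        print(f"{position:>6}: p = {n_correct / n_total:.2f} (n = {n_total})")

Comparing the p-values across the three placements is the simplest way to see whether position alone shifts an item's apparent difficulty.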
Siskind, Theresa G.; Anderson, Lorin W. – 1982
The study was designed to examine the similarity of response options generated by different item writers using a systematic approach to item writing. The similarity of response options to student responses for the same item stems presented in an open-ended format was also examined. A non-systematic (subject matter expertise) approach and a…
Descriptors: Algorithms, Item Analysis, Multiple Choice Tests, Quality Control
Klein, Stephen P.; Bolus, Roger – 1983
One way to reduce the likelihood of one examinee copying another's answers on large-scale tests, in which all examinees answer the same set of questions, is to use multiple test forms that differ in item ordering. This study was conducted to determine whether varying the sequence in which blocks of items were presented to…
Descriptors: Adults, Cheating, Cost Effectiveness, Item Analysis
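As a minimal sketch of the kind of form assembly the abstract describes, the code below builds several test forms that contain the same blocks of items but rotate their order, so neighboring examinees can be handed different forms. The block structure, the helper rotated_forms, and the sample item labels are assumptions for illustration, not the study's actual procedure.

    # Minimal sketch (assumed design): generate multiple test forms that differ
    # only in the order of item blocks, a simple way to deter answer copying.
    def rotated_forms(blocks, n_forms):
        """Return n_forms item orderings made by cyclically rotating the blocks."""
        forms = []
        for f in range(n_forms):
            shift = f % len(blocks)
            order = blocks[shift:] + blocks[:shift]
            forms.append([item for block in order for item in block])
        return forms

    # Hypothetical blocks of items (e.g., groups of items of the same type).
    blocks = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]

    for form_id, items in enumerate(rotated_forms(blocks, 3), start=1):
        print(f"Form {form_id}: {items}")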
Peer reviewed
Gohmann, Stephan F.; Spector, Lee C. – Journal of Economic Education, 1989
Compares the effects of content ordering and scrambled ordering on examinations in courses, such as economics, that require quantitative skills. Empirical results suggest that students do no better on a content-ordered examination than on a scrambled one, as student performance is not adversely affected by scrambled ordered…
Descriptors: Cheating, Economics Education, Educational Research, Grading
Wood, Robert – Evaluation in Education: International Progress, 1977
The author surveys literature and practice, primarily in Great Britain and the United States, about multiple-choice testing, comments on criticisms, and defends the state of the art. Various item types, item writing, test instructions and scoring formulas, item analysis, and test construction are discussed. An extensive bibliography is appended.…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Scoring Formulas
Peer reviewed
Bresnock, Anne E.; And Others – Journal of Economic Education, 1989
Investigates the effects on multiple choice test performance of altering the order and placement of questions and responses. Shows that changing the response pattern appears to alter significantly the apparent degree of difficulty. Response patterns become more dissimilar under certain types of response alterations. (LS)
Descriptors: Cheating, Economics Education, Educational Research, Grading
Lance, Charles E.; Moomaw, Michael E. – 1983
Direct assessments of the accuracy with which raters can use a rating instrument are presented. This study demonstrated how surplus behavioral incidents scaled during the development of Behaviorally Anchored Rating Scales (BARS) can be used effectively in the evaluation of the newly developed scales. Construction of scenarios of hypothetical…
Descriptors: Behavior Rating Scales, Comparative Analysis, Error of Measurement, Evaluation Criteria
Jacobs, Lucy Cheser; Chase, Clinton I. – 1992
This book offers specific how-to advice to college faculty on every stage of the testing process, including planning the test and classifying objectives to be measured, ensuring the validity and reliability of the test, and grading in such a way as to arrive at fair grades based on relevant data. The book examines the strengths and weaknesses of…
Descriptors: Cheating, College Faculty, Comparative Analysis, Computer Assisted Testing
Oaster, T. R. F.; And Others – 1986
This study hypothesized that items in the one-question-per-passage format would be less easily answered when administered without their associated contexts than conventional reading comprehension items. A total of 256 seventh- and eighth-grade students were administered both Forms 3A and 3B of the Sequential Tests of Educational Progress (STEP II).…
Descriptors: Context Effect, Difficulty Level, Grade 7, Grade 8
O'Neill, Kathleen A. – 1986
When test questions are not intended to measure language skills, it is important to know if language is an extraneous characteristic that affects item performance. This study investigates whether certain stylistic changes in the way items are presented affect item performance on examinations for a health profession. The subjects were medical…
Descriptors: Abbreviations, Analysis of Variance, Drug Education, Graduate Medical Students
Huntley, Renee M.; Welch, Catherine J. – 1993
Writers of mathematics test items, especially those who write for standardized tests, are often advised to arrange the answer options in logical order, usually ascending or descending numerical order. In this study, 32 mathematics items were selected for inclusion in four experimental pretest units, each consisting of 16 items. Two versions…
Descriptors: Ability, College Entrance Examinations, Comparative Testing, Distractors (Tests)
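To make the ordering convention concrete, the sketch below sorts a hypothetical item's numeric answer options into ascending order while keeping track of which option is the key; the function order_options and the sample values are illustrative only, not material from the study's pretest units.

    # Minimal sketch (hypothetical item): arrange numeric answer options in
    # ascending order, the convention item writers are advised to follow,
    # and report where the keyed answer ends up.
    def order_options(options, key_value):
        """Sort options ascending; return (sorted_options, index_of_key)."""
        ordered = sorted(options)
        return ordered, ordered.index(key_value)

    options = [12, 3, 48, 7, 21]   # answer choices as drafted
    key = 21                       # correct answer

    ordered, key_index = order_options(options, key)
    for letter, value in zip("ABCDE", ordered):
        marker = "  <- key" if value == key else ""
        print(f"{letter}. {value}{marker}")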
Owen, K. – 1989
Sources of item bias located in characteristics of the test item were studied in a reasoning test developed in South Africa. Subjects were 1,056 White, 1,063 Indian, and 1,093 Black students from standard 7 in Afrikaans and English schools. Format and content of the 85-item Reasoning Test were manipulated to obtain information about bias or…
Descriptors: Afrikaans, Black Students, Cognitive Tests, Comparative Testing
Scheuneman, Janice – 1985
A number of hypotheses were tested concerning elements of Graduate Record Examinations (GRE) items that might affect the performance of blacks and whites differently. These elements were characteristics common to several items that otherwise measured different concepts. Seven general hypotheses were tested in the form of sixteen specific…
Descriptors: Black Students, College Entrance Examinations, Graduate Study, Higher Education
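Differential performance of this kind is often summarized with the Mantel-Haenszel statistic; the sketch below computes it for hypothetical counts, purely as an illustration of the general technique, and is not presented as the method used in this GRE study.

    # Minimal sketch (hypothetical counts): Mantel-Haenszel odds ratio for one
    # item, a common index of differential item functioning. Examinees are
    # stratified by total score; each stratum tallies correct/incorrect counts
    # for the reference and focal groups.
    import math

    # Each stratum: (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
    strata = [
        (40, 10, 30, 20),
        (35, 15, 28, 22),
        (25, 25, 18, 32),
    ]

    num = den = 0.0
    for a, b, c, d in strata:
        t = a + b + c + d
        num += a * d / t
        den += b * c / t

    alpha_mh = num / den                   # common odds ratio across strata
    mh_d_dif = -2.35 * math.log(alpha_mh)  # ETS delta-scale transformation
    print(f"MH odds ratio = {alpha_mh:.2f}, MH D-DIF = {mh_d_dif:.2f}")

Odds ratios far from 1 (equivalently, D-DIF values far from 0) flag an item for closer review against content hypotheses like those examined in the abstract.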