Showing 3,766 to 3,780 of 4,790 results
Peer reviewed
Fitzgerald, Thomas P.; Fitzgerald, Ellen F. – Educational Research Quarterly, 1978
This study investigated the differential performance of subjects across cultures (U.S. and Ireland); grade levels (grades 2, 3, and 4); and three test formats (multiple-choice-cloze, maze, and cloze). Recognition test formats produced higher scores than the cloze format. Cultural influences were also reported. (Author/GDC)
Descriptors: Cloze Procedure, Cross Cultural Studies, Cultural Influences, Elementary Education
Peer reviewed
Cross, Lawrence; Frary, Robert – Journal of Educational Measurement, 1977
Corrected-for-guessing scores on multiple-choice tests depend upon the ability and willingness of examinees to guess when they have some basis for answering, and to avoid guessing when they have no basis. The present study determined the extent to which college students were able and willing to comply with formula-scoring directions. (Author/CTM)
Descriptors: Guessing (Tests), Higher Education, Individual Characteristics, Multiple Choice Tests
Peer reviewed
Traub, Ross E.; Fisher, Charles W. – Applied Psychological Measurement, 1977
Two sets of mathematical reasoning and two sets of verbal comprehension items were cast into each of three formats--constructed response, standard multiple-choice, and Coombs multiple-choice--in order to assess whether tests with identical content but different formats measure the same attribute. (Author/CTM)
Descriptors: Comparative Testing, Confidence Testing, Constructed Response, Factor Analysis
Newsom, Robert S.; And Others – Evaluation Quarterly, 1978
For the training and placement of professional workers, multiple-choice instruments are the norm for wide-scale measurement and evaluation efforts. These instruments contain fundamental problems. Computer-based management simulations may provide solutions to these problems, appear scoreable and reliable, offer increased validity, and are better…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Occupational Tests, Personnel Evaluation
Peer reviewed
Huynh, Huynh; Casteel, Jim – Journal of Experimental Education, 1987
In the context of pass/fail decisions, using the Bock multinomial latent trait model for moderate-length tests does not produce decisions that differ substantially from those based on the raw scores. The Bock decisions appear to relate less strongly to outside criteria than those based on the raw scores. (Author/JAZ)
Descriptors: Cutting Scores, Error Patterns, Grade 6, Intermediate Grades
Peer reviewed
Joycey, E. – System, 1987
Among techniques which can be used by foreign language teachers to help learners use multiple-choice tests (after reading a text) to become better readers are: have students attempt to answer questions before reading the text; rearrange the order of the questions; and have students make up multiple choice questions. (CB)
Descriptors: Classroom Techniques, Language Teachers, Multiple Choice Tests, Reading Comprehension
Peer reviewed
Lederman, Marie Jean – Journal of Basic Writing, 1988
Explores the history of testing, motivations for testing, testing procedures, and the inevitable limitations of testing. Argues that writing program faculty and administrators must clarify and profess their values, decide what they want students to know and what sort of thinkers they should be, and develop tests reflecting those needs. (SR)
Descriptors: Educational Objectives, Educational Testing, Essay Tests, Multiple Choice Tests
Peer reviewed
Norcini, John J.; And Others – Evaluation and the Health Professions, 1986
This study compares physician performance on the Computer-Aided Simulation of the Clinical Encounter with peer ratings and performance on multiple choice questions and patient management problems. Results indicate that all formats are equally valid, although multiple choice is the most reliable method of assessment per unit of testing time.…
Descriptors: Certification, Competence, Computer Assisted Testing, Computer Simulation
Peer reviewed
Grosse, Martin E. – Evaluation and the Health Professions, 1986
Scores based on the number of correct answers were compared with scores based on dangerous responses to items in the same multiple choice test developed by the American Board of Orthopaedic Surgery. Results showed construct validity for both sets of scores; however, correlation analysis indicated the two sets of scores were redundant. (Author/JAZ)
Descriptors: Certification, Construct Validity, Correlation, Foreign Countries
Brightman, Harvey J.; And Others – Educational Technology, 1984
Describes the development and evaluation of interactive computer-based formative tests containing multiple choice questions based on Bloom's taxonomy. The tests were used in a core-level higher education business statistics course prior to graded examinations to determine where students were experiencing difficulties. (MBR)
Descriptors: Cognitive Objectives, Computer Assisted Testing, Computer Software, Diagnostic Tests
Choppin, Bruce – Evaluation in Education: An International Review Series, 1985
During 1969 the International Association for the Evaluation of Educational Achievement began a series of cross-cultural studies to investigate the workings of multiple-choice achievement tests and student guessing behaviors. Empirical models to correct for guessing are discussed in terms of test item difficulty, number of response choices,…
Descriptors: Achievement Tests, Cross Cultural Studies, Educational Testing, Guessing (Tests)
Peer reviewed
Mason, Victor W. – System, 1984
Discusses the effectiveness of Kuwait University Language Center Placement Tests in promoting homogeneity of class ability levels. Looks at the appropriateness of multiple-choice tests of grammar, vocabulary, and reading comprehension for placement and diagnostic purposes in large programs. Concludes that carefully designed and written…
Descriptors: Ability Grouping, English (Second Language), English for Special Purposes, Grouping (Instructional Purposes)
Peer reviewed
Hammerly, Hector; Colhoun, Edward R. – Hispania, 1984
Results of a rational, multiple-choice, cloze Spanish achievement test indicate that such a test is a valid measure of achievement. It is suggested that with only minor adaptations, such a test could be used to measure macro- and microprogress with specific materials at any point in a course or program where reading may be tested. (SL)
Descriptors: Academic Achievement, Achievement Tests, Cloze Procedure, Higher Education
Bryce, Jennifer; And Others – Programmed Learning and Educational Technology, 1983
Describes development of a test using slides and corresponding multiple-choice questions for second-year occupational therapy students in child studies course. Reasons for choosing the test format are discussed and an outline of test construction procedures is given. An evaluation of the test indicates problems encountered and benefits gained.…
Descriptors: Child Development, Criterion Referenced Tests, Foreign Countries, Higher Education
Peer reviewed
Hirvonen, P. A. – System, 1977
Defends the use of multiple-choice language tests against Pickering's criticism in a previous issue of this journal. (CHK)
Descriptors: Aptitude Tests, Language Instruction, Language Tests, Multiple Choice Tests