Joseph, Dane Christian – Journal of Effective Teaching in Higher Education, 2019
Multiple-choice testing is a staple within the U.S. higher education system. From classroom assessments to standardized entrance exams such as the GRE, GMAT, or LSAT, test developers utilize a variety of validated and heuristic-driven item-writing guidelines. One such guideline that has been given recent attention is to randomize the position of…
Descriptors: Test Construction, Multiple Choice Tests, Guessing (Tests), Test Wiseness
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Ozuru, Yasuhiro; Best, Rachel; Bell, Courtney; Witherspoon, Amy; McNamara, Danielle S. – Cognition and Instruction, 2007
This study examines how passage availability and reading comprehension question format (open-ended vs. multiple-choice) influence question answering. In two experiments, college undergraduates read an expository passage and answered open-ended and multiple-choice versions of text-based, local, and global bridging inference questions. Half the…
Descriptors: Reading Comprehension, Expository Writing, Test Format, Questioning Techniques
Childs, Ruth Axman – 1989
Commercial achievement tests often provide limited instructional guidance and seldom provide feedback specific to any given classroom. The most instructionally relevant achievement tests are those developed by an individual teacher for use with a particular class. This digest describes the steps of test construction and presents suggestions for…
Descriptors: Achievement Tests, Behavioral Objectives, Classroom Techniques, Multiple Choice Tests

Plake, Barbara S.; Ansorge, Charles J. – Educational and Psychological Measurement, 1984
Scores representing number of items right and self-perceptions were analyzed for a nonquantitative examination that was assembled into three forms. Multivariate ANCOVA revealed no significant effects for the cognitive measure. However, significant sex and sex × order effects were found for perceptions scores not parallel to those reported…
Descriptors: Analysis of Covariance, Higher Education, Multiple Choice Tests, Scores
Bolden, Bernadine J.; Stoddard, Ann – 1980
This study examined the effect of two ways of question phrasing, as related to three styles of expository writing, on the test performance of elementary school children. Multiple-choice questions were developed for sets of passages that were written using three different syntactic structures and that had different levels of difficulty. The…
Descriptors: Difficulty Level, Elementary Education, Kernel Sentences, Multiple Choice Tests
Curren, Randall R. – Theory and Research in Education, 2004
This article addresses the capacity of high stakes tests to measure the most significant kinds of learning. It begins by examining a set of philosophical arguments pertaining to construct validity and alleged conceptual obstacles to attributing specific knowledge and skills to learners. The arguments invoke philosophical doctrines of holism and…
Descriptors: Test Items, Educational Testing, Construct Validity, High Stakes Tests
Melancon, Janet G.; Thompson, Bruce – 1990
Latent trait measurement theory was used to investigate the measurement characteristics of both parts of a multiple-choice measure of field-independence, the Finding Embedded Figures Test (FEFT). Analysis was based on data provided by 1,528 students enrolled in one of two middle schools located in the southern United States. Of the subjects, 731…
Descriptors: Cognitive Processes, Comparative Testing, Field Dependence Independence, Item Response Theory
Velanoff, John – 1987
This report describes courseware for comprehensive computer-assisted testing and instruction. With this program, a personal computer can be used to: (1) generate multiple test versions to meet test objectives; (2) create study guides for self-directed learning; and (3) evaluate student and teacher performance. Numerous multiple-choice examples,…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Computer Uses in Education, Courseware
Ory, John C.; Ryan, Katherine E. – 1993
This book for college faculty provides a resource for developing, using, and grading classroom exams. The first chapter addresses ways to determine what content should be included on an exam. The second chapter identifies testing considerations such as number of exams, difficulty level of items, and test length. Chapters 3 and 4 provide guidelines…
Descriptors: Classroom Techniques, Codes of Ethics, Essay Tests, Evaluation Methods