Showing 1 to 15 of 18 results
Peer reviewed
Chin, Huan; Chew, Cheng Meng; Lim, Hooi Lian; Thien, Lei Mee – International Journal of Science and Mathematics Education, 2022
Cognitive Diagnostic Assessment (CDA) is an alternative assessment that can give education stakeholders a clear picture of pupils' learning processes and cognitive structures so that appropriate instructional strategies can be designed to meet pupils' needs. In line with this function, the Ordered Multiple-Choice (OMC) items were…
Descriptors: Mathematics Instruction, Mathematics Tests, Multiple Choice Tests, Diagnostic Tests
Peer reviewed
Relkin, Emily; de Ruiter, Laura; Bers, Marina Umaschi – Journal of Science Education and Technology, 2020
There is a need for developmentally appropriate Computational Thinking (CT) assessments that can be implemented in early childhood classrooms. We developed a new instrument called "TechCheck" for assessing CT skills in young children that does not require prior knowledge of computer programming. "TechCheck" is based on…
Descriptors: Developmentally Appropriate Practices, Computation, Thinking Skills, Early Childhood Education
Peer reviewed
Williamson, Kathryn E.; Willoughby, Shannon; Prather, Edward E. – Astronomy Education Review, 2013
We introduce the Newtonian Gravity Concept Inventory (NGCI), a 26-item multiple-choice instrument to assess introductory general education college astronomy ("Astro 101") student understanding of Newtonian gravity. This paper describes the development of the NGCI through four phases: Planning, Construction, Quantitative Analysis, and…
Descriptors: Science Instruction, Scientific Concepts, Astronomy, College Science
Peer reviewed
Keller, Christopher M.; Kros, John F. – Marketing Education Review, 2011
Measures of survey reliability are commonly addressed in marketing courses. One statistic of reliability is "Cronbach's alpha." This paper presents a reflexive application of survey reliability to multiple-choice exam validation. The application provides an interactive decision support system that incorporates survey item…
Descriptors: Test Validity, Marketing, Test Reliability, Multiple Choice Tests
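The abstract above names Cronbach's alpha without defining it. As a point of reference (the standard textbook form, not a formula reproduced from the cited paper), for a test of k items the coefficient is

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right), \]

where \(\sigma_i^{2}\) is the variance of item i and \(\sigma_X^{2}\) is the variance of the total test score; values closer to 1 indicate higher internal-consistency reliability.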
Herman, Geoffrey Lindsay – ProQuest LLC, 2011
Instructors in electrical and computer engineering and in computer science have developed innovative methods to teach digital logic circuits. These methods attempt to increase student learning, satisfaction, and retention. Although there are readily accessible and accepted means for measuring satisfaction and retention, there are no widely…
Descriptors: Grounded Theory, Delphi Technique, Concept Formation, Misconceptions
Duncan, R. Eric – Measurement and Evaluation in Guidance, 1983
Reanalyzes data provided by Swanson (1976) and Straton and Catts (1980) to test claims of superiority for the three-alternative multiple-choice item test and to present possible oversights made by these researchers. Results suggest it is doubtful that three-alternative test items are better than four-alternative items. (PAS)
Descriptors: Achievement Tests, Adults, Guidance Personnel, Multiple Choice Tests
Peer reviewed
Fagley, N. S. – Journal of Educational Psychology, 1987
This article investigates positional response bias, testwiseness, and guessing strategy as components of variance in test responses on multiple-choice tests. University students responded to two content exams, a testwiseness measure, and a guessing strategy measure. The proportion of variance in test scores accounted for by positional response…
Descriptors: Achievement Tests, Guessing (Tests), Higher Education, Multiple Choice Tests
Mislevy, Robert J. – 1991
This paper lays out a framework for comparing the qualities and the quantities of information about student competence provided by multiple-choice and free-response test items. After discussing the origins of multiple-choice testing and recent influences for change, the paper outlines an "inference network" approach to test theory, in…
Descriptors: Cognitive Psychology, Competence, Elementary Secondary Education, Inferences
Norris, Stephen P. – 1988
The problems of validity and fairness involved in multiple-choice critical thinking tests can be lessened by using verbal reports of examinees' thinking during the process of developing such tests in order to retain only those items which rely on critical thinking skills to obtain the correct answer. Multiple-choice testing can lead to unfair…
Descriptors: Critical Thinking, High School Students, High Schools, Multiple Choice Tests
Peer reviewed
Jaradat, Derar; Sawaged, Sari – Journal of Educational Measurement, 1986
The impact of the Subset Selection Technique (SST) for multiple-choice items on certain properties of a test was compared with that of two other methods, the Number Right and the Correction for Guessing Formula. Results indicated that SST outperformed the other two, producing higher reliability and validity without favoring high risk takers.…
Descriptors: Foreign Countries, Grade 9, Guessing (Tests), Measurement Techniques
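For readers unfamiliar with the scoring rules compared above, the conventional Correction for Guessing (formula) score, assuming items with k answer choices, is

\[ S = R - \frac{W}{k-1}, \]

where R is the number of right answers and W the number of wrong answers (omitted items are not penalized); Number Right scoring simply takes S = R. These are the standard definitions, not formulas taken from the cited study.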
Peer reviewed
Wheeler, Patricia H. – Evaluation Practice, 1995
This volume is the fourth in a series for college faculty and advanced graduate students, "Survival Skills for Scholars." It offers practical advice for developing, using, and grading classroom examinations, focusing on traditional multiple-choice and constructed-response tests rather than alternative assessments. (SLD)
Descriptors: College Faculty, Constructed Response, Grading, Higher Education
Myers, Charles T. – 1978
The viewpoint is expressed that adding to test reliability, whether by selecting a more homogeneous set of items, by restricting the range of item difficulty as closely as possible to the most efficient level, or by increasing the number of items, will not add to test validity, and that there is considerable danger that efforts to increase reliability may…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Test Construction
Torrence, David R. – 1986
This replication study was initiated with a journeyman-level certification instrument for an international union, after industry monitors were observed advising examinees to "go with your first response." The question arose whether this was a research-based practice. If not, wouldn't this practice inject constant error…
Descriptors: Adults, Correlation, Error of Measurement, Guessing (Tests)
Norris, Stephen P. – 1988
A study examined whether the process of gathering verbal reports of subjects' thinking while taking multiple-choice critical thinking tests could be used to infer the reasoning process used and identify test items which do not require critical thinking skills. Four factors can render an inference of a subject's critical thinking skills…
Descriptors: Cognitive Processes, Critical Thinking, High School Students, High Schools
Peer reviewed
Phillips, S. E. – West's Education Law Reporter, 1990
Describes the "Golden Rule" test construction technique and its legal history. Focuses on the legal/measurement issues and considers alternative procedures for constructing racially "unbiased" tests. Concludes with an analysis of the probable reaction of the present Supreme Court to a constitutional/statutory challenge of the…
Descriptors: Certification, Court Litigation, Equal Opportunities (Jobs), Insurance Companies