Showing all 11 results
Peer reviewed
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice/multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects and has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Peer reviewed
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items
Peer reviewed
Taherbhai, Husein; Seo, Daeryong; Bowman, Trinell – British Educational Research Journal, 2012
Literature in the United States provides many examples of no difference in student achievement across modes of test administration, i.e., paper-and-pencil and online versions of the same test. However, most of this research centres on "regular" students who do not require differential teaching methods or different evaluation…
Descriptors: Learning Disabilities, Statistical Analysis, Teaching Methods, Test Format
Peer reviewed
Kim, Jungtae; Craig, Daniel A. – Computer Assisted Language Learning, 2012
Videoconferencing offers new opportunities for language testers to assess speaking ability in low-stakes diagnostic tests. To be considered a trusted testing tool in language testing, a test should be examined employing appropriate validation processes [Chapelle, C.A., Jamieson, J., & Hegelheimer, V. (2003). "Validation of a web-based ESL…
Descriptors: Speech Communication, Testing, Language Tests, Construct Validity
Peer reviewed
Judd, Wallace – Practical Assessment, Research & Evaluation, 2009
Over the past twenty years in performance testing, a specific item type with distinguishing characteristics has arisen time and time again. It has been invented independently by dozens of test development teams, and yet it is not recognized in the research literature. This article is an invitation to investigate the item type, evaluate…
Descriptors: Test Items, Test Format, Evaluation, Item Analysis
New York State Education Department, 2014
This technical report provides an overview of the New York State Alternate Assessment (NYSAA), including a description of the purpose of the NYSAA, the processes used to develop and implement the NYSAA program, and stakeholder involvement in those processes. The purpose of this report is to document the technical aspects of the 2013-14 NYSAA.…
Descriptors: Alternative Assessment, Educational Assessment, State Departments of Education, Student Evaluation
Peer reviewed
Yao, Lihua; Schwarz, Richard D. – Applied Psychological Measurement, 2006
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Descriptors: Models, Item Response Theory, Markov Processes, Monte Carlo Methods
Wood, Robert – Evaluation in Education: International Progress, 1977
The author surveys literature and practice, primarily in Great Britain and the United States, about multiple-choice testing, comments on criticisms, and defends the state of the art. Various item types, item writing, test instructions and scoring formulas, item analysis, and test construction are discussed. An extensive bibliography is appended.…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Scoring Formulas
Jacobs, Lucy Cheser; Chase, Clinton I. – 1992
This book offers specific how-to advice to college faculty on every stage of the testing process, including planning the test and classifying objectives to be measured, ensuring the validity and reliability of the test, and grading in such a way as to arrive at fair grades based on relevant data. The book examines the strengths and weaknesses of…
Descriptors: Cheating, College Faculty, Comparative Analysis, Computer Assisted Testing
Schrader, William B. – 1984
This report provides information on test development, test administration, and score interpretation for the Graduate Management Admission Test (GMAT). The GMAT, first administered in 1954, provides objective measures of an applicant's abilities for use in admissions decisions by graduate management schools. It is currently composed of five…
Descriptors: Administrator Education, Business Administration, College Entrance Examinations, Graduate Study