| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 9 |
| Descriptor | Count |
| --- | --- |
| Educational Testing | 28 |
| Test Format | 28 |
| Test Items | 28 |
| Test Construction | 17 |
| Multiple Choice Tests | 10 |
| Student Evaluation | 10 |
| Higher Education | 8 |
| Computer Assisted Testing | 7 |
| Test Bias | 7 |
| Test Content | 7 |
| Educational Assessment | 6 |
| Author | Count |
| --- | --- |
| Allen, Nancy | 1 |
| Arneson, Amy | 1 |
| Bennett, Randy Elliott | 1 |
| Calder, Peter W. | 1 |
| Check, John F. | 1 |
| Curren, Randall R. | 1 |
| Drake, Samuel | 1 |
| Erickson, Harley E. | 1 |
| Frey, Andreas | 1 |
| Gifford, Bernard | 1 |
| Green, Sylvia | 1 |
| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 8 |
| Higher Education | 4 |
| Postsecondary Education | 4 |
| Elementary Education | 3 |
| Secondary Education | 2 |
| Grade 8 | 1 |
| High Schools | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 3 |
| Administrators | 1 |
| Practitioners | 1 |
| Teachers | 1 |
| Location | Count |
| --- | --- |
| Canada | 1 |
| Nebraska | 1 |
| United Kingdom | 1 |
| United States | 1 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| Individuals with Disabilities… | 1 |
| No Child Left Behind Act 2001 | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| Advanced Placement… | 1 |
| Graduate Record Examinations | 1 |
| National Assessment of… | 1 |
| SAT (College Admission Test) | 1 |
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Testlets are very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing in which test forms are assembled in real time, tailored to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
Nebraska Department of Education, 2024
The Nebraska Student-Centered Assessment System (NSCAS) is a statewide assessment system that embodies Nebraska's holistic view of students and helps them prepare for success in postsecondary education, career, and civic life. It uses multiple measures throughout the year to provide educators and decision-makers at all levels with the insights…
Descriptors: Student Evaluation, Evaluation Methods, Elementary School Students, Middle School Students
Arneson, Amy – ProQuest LLC, 2019
This three-paper dissertation explores item cluster-based assessments, first in general as they relate to modeling, and then specific issues surrounding a particular item cluster-based assessment design. There should be a reasonable analogy between the structure of a psychometric model and the cognitive theory that the assessment is based upon.…
Descriptors: Item Response Theory, Test Items, Critical Thinking, Cognitive Tests
Tian, Feng – ProQuest LLC, 2011
There has been a steady increase in the use of mixed-format tests, that is, tests consisting of both multiple-choice items and constructed-response items in both classroom and large-scale assessments. This calls for appropriate equating methods for such tests. As Item Response Theory (IRT) has rapidly become mainstream as the theoretical basis for…
Descriptors: Item Response Theory, Comparative Analysis, Equated Scores, Statistical Analysis
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Hagge, Sarah Lynn – ProQuest LLC, 2010
Mixed-format tests containing both multiple-choice and constructed-response items are widely used on educational tests. Such tests combine the broad content coverage and efficient scoring of multiple-choice items with the assessment of higher-order thinking skills thought to be provided by constructed-response items. However, the combination of…
Descriptors: Test Format, True Scores, Equated Scores, Psychometrics
Frey, Andreas; Hartig, Johannes; Rupp, Andre A. – Educational Measurement: Issues and Practice, 2009
In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…
Descriptors: Measures (Individuals), Test Construction, Theory Practice Relationship, Design
Huang, Yi-Min; Trevisan, Mike; Storfer, Andrew – International Journal for the Scholarship of Teaching and Learning, 2007
Despite the prevalence of multiple choice items in educational testing, there is a dearth of empirical evidence for multiple choice item writing rules. The purpose of this study was to expand the base of empirical evidence by examining the use of the "all-of-the-above" option in a multiple choice examination in order to assess how…
Descriptors: Multiple Choice Tests, Educational Testing, Ability Grouping, Test Format
Weiten, Wayne – Journal of Experimental Education, 1982 (peer reviewed)
A comparison of double as opposed to single multiple-choice questions yielded significant differences in regard to item difficulty, item discrimination, and internal reliability, but not concurrent validity. (Author/PN)
Descriptors: Difficulty Level, Educational Testing, Higher Education, Multiple Choice Tests
Mentzer, Thomas L. – Educational and Psychological Measurement, 1982 (peer reviewed)
Evidence of biases in the correct answers in multiple-choice test item files was found to include an "all of the above" bias, in which that answer was correct more than 25 percent of the time, and a bias in which the longest answer was correct too frequently. Seven bias types were studied. (Author/CM)
Descriptors: Educational Testing, Higher Education, Multiple Choice Tests, Psychology
Sarvela, Paul D.; Noonan, John V. – 1987
Although there are many advantages to using computer-based tests (CBTs) linked to computer-based instruction (CBI), there are also several difficulties. In certain instructional settings, it is difficult to conduct psychometric analyses of test results. Several measurement issues surface when CBT programs are linked to CBI. Testing guidelines…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Educational Testing, Equated Scores
Marshall, Sandra P. – 1986
Open-ended individually administered questions can ascertain whether students can reason about arithmetic problems. Free response test items are useful in assessing students' thought processes as they solve mathematics story problems. Since story problems do not state explicitly which arithmetic operations are required for solution, students must…
Descriptors: Arithmetic, Cognitive Measurement, Educational Assessment, Educational Testing
Milton, Ohmer – 1982
Educators are called upon to improve the quality of classroom tests to enhance the learning of content. Less faculty concern for tests than for other features of instruction, compounded by not knowing how to assess different levels of learning with test questions that measure complex processes, appears to generate poor-quality classroom…
Descriptors: Educational Testing, Evaluation Methods, Higher Education, Learning Activities
Wainer, Howard; Thissen, David – 1994
When an examination consists, in whole or in part, of constructed-response test items, it is common practice to allow the examinee to choose a subset of the constructed-response questions from a larger pool. It is sometimes argued that, if choice were not allowed, the limitations on domain coverage forced by the small number of items might unfairly…
Descriptors: Constructed Response, Difficulty Level, Educational Testing, Equated Scores
Terwilliger, James S. – 1991
This paper clarifies important distinctions in item writing and item scoring and considers the implications of these distinctions for developing guidelines related to test construction for training teachers. The terminology used to describe and classify paper and pencil test questions frequently confuses two distinct features of questions:…
Descriptors: Classroom Techniques, Educational Testing, Higher Education, Measurement Techniques
