Showing 1 to 15 of 85 results
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Use of testlets is very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing in which test forms are created in real time, tailored to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
Nebraska Department of Education, 2024
The Nebraska Student-Centered Assessment System (NSCAS) is a statewide assessment system that embodies Nebraska's holistic view of students and helps them prepare for success in postsecondary education, career, and civic life. It uses multiple measures throughout the year to provide educators and decision-makers at all levels with the insights…
Descriptors: Student Evaluation, Evaluation Methods, Elementary School Students, Middle School Students
Peer reviewed
Davis-Berg, Elizabeth C.; Minbiole, Julie – School Science Review, 2020
The completion rates were compared for long-form questions where a large blank answer space is provided and for long-form questions where the answer space has bullet-point prompts corresponding to the parts of the question. It was found that students were more likely to complete a question when bullet points were provided in the answer space.…
Descriptors: Test Format, Test Construction, Academic Achievement, Educational Testing
Arneson, Amy – ProQuest LLC, 2019
This three-paper dissertation explores item cluster-based assessments, first in general as they relate to modeling, and then with respect to specific issues surrounding a particular item cluster-based assessment design. There should be a reasonable analogy between the structure of a psychometric model and the cognitive theory that the assessment is based upon.…
Descriptors: Item Response Theory, Test Items, Critical Thinking, Cognitive Tests
Peer reviewed
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
Peer reviewed
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method with examination based on constructed-response questions (CRQs). Although MCQs have an advantage in the objectivity of the grading process and the speed with which results are produced, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
Tian, Feng – ProQuest LLC, 2011
There has been a steady increase in the use of mixed-format tests, that is, tests consisting of both multiple-choice items and constructed-response items in both classroom and large-scale assessments. This calls for appropriate equating methods for such tests. As Item Response Theory (IRT) has rapidly become mainstream as the theoretical basis for…
Descriptors: Item Response Theory, Comparative Analysis, Equated Scores, Statistical Analysis
Rhode Island Department of Elementary and Secondary Education, 2007
This handbook will assist principals and school testing coordinators in implementing the spring 2007 administration of the Developmental Reading Assessment (DRA). Information regarding administration timeline, reporting, process, online tools and contact personnel is discussed. Contents include: (1) Scheduling; (2) Identify Primary Test…
Descriptors: Testing Accommodations, Alternative Assessment, Educational Testing, Guidance Programs
Peer reviewed
Wiliam, Dylan – Review of Research in Education, 2010
The idea that validity should be considered a property of inferences, rather than of assessments, has developed slowly over the past century. In early writings about the validity of educational assessments, validity was defined as a property of an assessment. The most common definition was that an assessment was valid to the extent that it…
Descriptors: Educational Assessment, Validity, Inferences, Construct Validity
Joseph, Dane Christian – ProQuest LLC, 2010
Multiple-choice item-writing guideline research is in its infancy, and Haladyna (2004) calls for a science of item-writing guideline research. This study responds to that call by examining the impact of student ability and the method for varying the location of correct answers in classroom multiple-choice…
Descriptors: Evidence, Test Format, Guessing (Tests), Program Effectiveness
Thompson, Sandra; Thurlow, Martha; Moore, Michael – 2003
This report presents factors to consider in the design of computer-based testing for all students, including students with disabilities and students with limited English proficiency. It also provides a process for the initial transformation of paper/pencil assessments to inclusive computer-based testing. Steps include: (1) assemble a group of…
Descriptors: Computer Assisted Testing, Disabilities, Educational Testing, Elementary Secondary Education
Varnhagen, Stanley; Calder, Peter W. – 1985
The Microcomputer Diagnostic Testing Project (MDTP) was designed to broaden the use of microcomputers in student testing at the elementary and secondary school levels and across a variety of test item formats. MDTP was designed to administer tests, to allow teachers to create or revise tests, to print out test forms when desired, to score tests, to…
Descriptors: Computer Assisted Testing, Computer Software, Educational Testing, Elementary Secondary Education
Hagge, Sarah Lynn – ProQuest LLC, 2010
Mixed-format tests containing both multiple-choice and constructed-response items are widely used on educational tests. Such tests combine the broad content coverage and efficient scoring of multiple-choice items with the assessment of higher-order thinking skills thought to be provided by constructed-response items. However, the combination of…
Descriptors: Test Format, True Scores, Equated Scores, Psychometrics
Peer reviewed
Deville, Craig; O'Neill, Thomas; Wright, Benjamin D.; Woodcock, Richard W.; Munoz-Sandoval, Ana; Gershon, Richard C.; Bergstrom, Betty – Popular Measurement, 1998
Articles in this special section consider (1) flow in test taking (Craig Deville); (2) testwiseness (Thomas O'Neill); (3) test length (Benjamin Wright); (4) cross-language test equating (Richard W. Woodcock and Ana Munoz-Sandoval); (5) computer-assisted testing and testwiseness (Richard Gershon and Betty Bergstrom); and (6) Web-enhanced testing…
Descriptors: Computer Assisted Testing, Educational Testing, Equated Scores, Measurement Techniques
Sarvela, Paul D.; Noonan, John V. – 1987
Although there are many advantages to using computer-based tests (CBTs) linked to computer-based instruction (CBI), there are also several difficulties. In certain instructional settings, it is difficult to conduct psychometric analyses of test results. Several measurement issues surface when CBT programs are linked to CBI. Testing guidelines…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Educational Testing, Equated Scores