Showing 1 to 15 of 18 results
Peer reviewed
Ersan, Ozge; Berry, Yufeng – Educational Measurement: Issues and Practice, 2023
The increasing use of computerization in the testing industry and the need for items potentially measuring higher-order skills have led educational measurement communities to develop technology-enhanced (TE) items and conduct validity studies on the use of TE items. Parallel to this goal, the purpose of this study was to collect validity evidence…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Elementary Secondary Education, Accountability
Peer reviewed
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Peer reviewed
Kosh, Audra E. – Journal of Applied Testing Technology, 2021
In recent years, Automatic Item Generation (AIG) has increasingly shifted from theoretical research to operational implementation, a shift raising some unforeseen practical challenges. Specifically, generating high-quality answer choices presents several challenges, such as ensuring that answer choices blend together plausibly for all possible item…
Descriptors: Test Items, Multiple Choice Tests, Decision Making, Test Construction
Peer reviewed
Foster, Colin; Woodhead, Simon; Barton, Craig; Clark-Wilson, Alison – Educational Studies in Mathematics, 2022
In this paper, we analyse a large, opportunistic dataset of responses (N = 219,826) to online, diagnostic multiple-choice mathematics questions, provided by 6-16-year-old UK school mathematics students (N = 7302). For each response, students were invited to indicate on a 5-point Likert-type scale how confident they were that their response was…
Descriptors: Foreign Countries, Elementary School Students, Secondary School Students, Multiple Choice Tests
Alaska Department of Education & Early Development, 2021
The Performance Evaluation for Alaska's Schools (PEAKS) assessment is administered annually statewide to students in grades 3 through 9 in ELA and mathematics. It provides students the opportunity to show their understanding of "Alaska's English Language Arts (ELA) and Mathematics Standards." The assessments provide information to…
Descriptors: Student Evaluation, Elementary School Students, Secondary School Students, Summative Evaluation
Peer reviewed
Sun, Bo; Zhu, Yunzong; Xiao, Yongkang; Xiao, Rong; Wei, Yungang – IEEE Transactions on Learning Technologies, 2019
In recent years, computerized adaptive testing (CAT) has gained popularity as an important means to evaluate students' ability. Assigning tags to test questions is crucial in CAT. Manual tagging is widely used for constructing question banks; however, this approach is time-consuming and might lead to consistency issues. Automatic question tagging,…
Descriptors: Computer Assisted Testing, Student Evaluation, Test Items, Multiple Choice Tests
Peer reviewed
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Blazer, Christie – Research Services, Miami-Dade County Public Schools, 2010
This Information Capsule reviews research conducted on computer-based assessments. Advantages and disadvantages associated with computer-based testing programs are summarized and research on the comparability of computer-based and paper-and-pencil assessments is reviewed. Overall, studies suggest that for most students, there are few if any…
Descriptors: Comparative Analysis, Multiple Choice Tests, Computer Assisted Testing, Demography
Peer reviewed
Park, Jooyong – British Journal of Educational Technology, 2010
The newly developed computerized Constructive Multiple-choice Testing system is introduced. The system combines short answer (SA) and multiple-choice (MC) formats by asking examinees to respond to the same question twice, first in the SA format, and then in the MC format. This manipulation was employed to collect information about the two…
Descriptors: Grade 5, Evaluation Methods, Multiple Choice Tests, Scores
Peer reviewed
Kingston, Neal M. – Applied Measurement in Education, 2009
There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study), the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Printed Materials, Effect Size
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Wang, Tzu-Hua – Computers & Education, 2011
This research refers to the self-regulated learning strategies proposed by Pintrich (1999) in developing a multiple-choice Web-based assessment system, the Peer-Driven Assessment Module of the Web-based Assessment and Test Analysis system (PDA-WATA). The major purpose of PDA-WATA is to facilitate learner use of self-regulatory learning behaviors…
Descriptors: Learning Strategies, Student Motivation, Internet, Junior High School Students
Peer reviewed
Tucker, Bill – Educational Leadership, 2009
New technology-enabled assessments offer the potential to understand more than just whether a student answered a test question right or wrong. Using multiple forms of media that enable both visual and graphical representations, these assessments present complex, multistep problems for students to solve and collect detailed information about an…
Descriptors: Research and Development, Problem Solving, Student Characteristics, Information Technology
Jamgochian, Elisa; Park, Bitnara Jasmine; Nese, Joseph F. T.; Lai, Cheng-Fei; Saez, Leilani; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2010
In this technical report, we provide reliability and validity evidence for the easyCBM[R] Reading measures for grade 2 (word and passage reading fluency and multiple choice reading comprehension). Evidence for reliability includes internal consistency and item invariance. Evidence for validity includes concurrent, predictive, and construct…
Descriptors: Grade 2, Reading Comprehension, Testing Programs, Reading Fluency
Peer reviewed
Park, Jooyong; Choi, Byung-Chul – British Journal of Educational Technology, 2008
A new computerised testing system was used at home to promote learning and also to save classroom instruction time. The testing system combined the features of short-answer and multiple-choice formats. The questions of the multiple-choice problems were presented without the options so that students had to generate answers for themselves; they…
Descriptors: Experimental Groups, Control Groups, Computer Assisted Testing, Instructional Effectiveness