Showing 91 to 105 of 690 results
Peer reviewed
PDF on ERIC (full text available)
McKenna, Peter – International Association for Development of the Information Society, 2018
Multiple-choice questions (MCQs) present the correct answer among the options offered, and examinees may select it for reasons other than knowing it to be correct. Yet MCQs are common as summative assessments in the education of Computer Science and Information Systems students. To what extent can MCQs be answered correctly without knowing the answer; and can…
Descriptors: Multiple Choice Tests, Summative Evaluation, Student Evaluation, Evaluation Methods
Jing Lu; Chun Wang; Ningzhong Shi – Grantee Submission, 2023
In high-stakes, large-scale, standardized tests with certain time limits, examinees are likely to engage in one of three types of behavior (e.g., van der Linden & Guo, 2008; Wang & Xu, 2015): solution behavior, rapid guessing behavior, or cheating behavior. Oftentimes examinees do not solve all items due to various…
Descriptors: High Stakes Tests, Standardized Tests, Guessing (Tests), Cheating
Peer reviewed
PDF on ERIC (full text available)
Asquith, Steven – TESL-EJ, 2022
Although an accurate measure of vocabulary size is integral to understanding the proficiency of language learners, the validity of multiple-choice (M/C) vocabulary tests for determining this has been questioned because test takers guess correct answers, which inflates scores. In this paper the nature of guessing and partial knowledge used when taking the…
Descriptors: Guessing (Tests), English (Second Language), Second Language Learning, Language Tests
Peer reviewed
Direct link
Stewart, Jeffrey; McLean, Stuart; Kramer, Brandon – Language Assessment Quarterly, 2017
Stewart questioned vocabulary size estimation methods proposed by Beglar and Nation for the Vocabulary Size Test, further arguing Rasch mean square (MSQ) fit statistics cannot determine the proportion of random guesses contained in the average learner's raw score, because the average value will be near 1 by design. He illustrated this by…
Descriptors: Guessing (Tests), Item Response Theory, Language Tests, Vocabulary
Peer reviewed
Direct link
Soland, James; Kuhfeld, Megan – Educational Assessment, 2019
Considerable research has examined the use of rapid guessing measures to identify disengaged item responses. However, little is known about students who rapidly guess over the course of several tests. In this study, we use achievement test data from six administrations over three years to investigate whether rapid guessing is a stable trait-like…
Descriptors: Testing, Guessing (Tests), Reaction Time, Achievement Tests
Peer reviewed
Direct link
McKenna, Peter – Interactive Technology and Smart Education, 2019
Purpose: This paper aims to examine whether multiple choice questions (MCQs) can be answered correctly without knowing the answer and whether constructed response questions (CRQs) offer more reliable assessment. Design/methodology/approach: The paper presents a critical review of existing research on MCQs, then reports on an experimental study…
Descriptors: Multiple Choice Tests, Accuracy, Test Wiseness, Objective Tests
Peer reviewed
Direct link
Andrich, David; Marais, Ida – Journal of Educational Measurement, 2018
Even though guessing biases difficulty estimates as a function of item difficulty in the dichotomous Rasch model, assessment programs whose tests include multiple-choice items often construct scales using this model. Research has shown that when all items are multiple-choice, this bias can largely be eliminated. However, many assessments have…
Descriptors: Multiple Choice Tests, Test Items, Guessing (Tests), Test Bias
NWEA, 2017
This document describes the following two new student engagement metrics now included on NWEA™ MAP® Growth™ reports, and provides guidance on how to interpret and use these metrics: (1) Percent of Disengaged Responses; and (2) Estimated Impact of Disengagement on RIT. These metrics will inform educators about what percentage of items from a…
Descriptors: Achievement Tests, Achievement Gains, Test Interpretation, Reaction Time
Peer reviewed
Direct link
Brassil, Chad E.; Couch, Brian A. – International Journal of STEM Education, 2019
Background: Within undergraduate science courses, instructors often assess student thinking using closed-ended question formats, such as multiple-choice (MC) and multiple-true-false (MTF), where students provide answers with respect to predetermined response options. While MC and MTF questions both consist of a question stem followed by a series…
Descriptors: Multiple Choice Tests, Objective Tests, Student Evaluation, Thinking Skills
Peer reviewed
Direct link
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Direct link
Zhang, Xian; Liu, Jianda; Ai, Haiyang – Language Testing, 2020
The main purpose of this study is to investigate guessing in the Yes/No (YN) format vocabulary test. One hundred and five university students took a YN test, a translation task, and a multiple-choice vocabulary size test (MC VST). With lexical properties matched between the real words and the pseudowords, pseudowords could index guessing in the YN…
Descriptors: Vocabulary Development, Language Tests, Test Format, College Students
Peer reviewed
PDF on ERIC (full text available)
Otoyo, Lucia; Bush, Martin – Practical Assessment, Research & Evaluation, 2018
This article presents the results of an empirical study of "subset selection" tests, which are a generalisation of traditional multiple-choice tests in which test takers are able to express partial knowledge. Similar previous studies have mostly been supportive of subset selection, but the deduction of marks for incorrect responses has…
Descriptors: Multiple Choice Tests, Grading, Test Reliability, Test Format
Peer reviewed
Direct link
Soland, James – Teachers College Record, 2018
Background/Context: Achievement gaps motivate a range of practices and policies aimed at closing those gaps. Most gap studies assume that differences in observed test scores across subgroups measure differences in content mastery. For such an assumption to hold, students in the subgroups being compared need to be giving similar effort on…
Descriptors: Achievement Gap, Evaluation Methods, Student Characteristics, Racial Differences
Peer reviewed
Direct link
Whitcomb, Kyle M.; Guthrie, Matthew W.; Singh, Chandralekha; Chen, Zhongzhou – Physical Review Physics Education Research, 2021
In two earlier studies, we developed a new method to measure students' ability to transfer physics problem-solving skills to new contexts using a sequence of online learning modules, and implemented two interventions in the form of additional learning modules designed to improve transfer ability. The current paper introduces a new data analysis…
Descriptors: Accuracy, Measurement Techniques, Electronic Learning, Learning Modules
Peer reviewed
PDF on ERIC (full text available)
Chu, Wei; Pavlik, Philip I., Jr. – International Educational Data Mining Society, 2023
In adaptive learning systems, various models are employed to obtain the optimal learning schedule and review for a specific learner. Models of learning estimate the learner's current recall probability by incorporating features or predictors proposed by psychological theory or empirically relevant to learners' performance. Logistic…
Descriptors: Reaction Time, Accuracy, Models, Predictor Variables