Showing 106 to 120 of 1,398 results
OECD Publishing, 2019
Computer-based administration of large-scale assessments makes it possible to collect a rich set of information on test takers, through analysis of the log files recording interactions between the computer interface and the server. This report examines timing and engagement indicators from the Survey of Adult Skills, a product of the Programme for…
Descriptors: Adults, Surveys, International Assessment, Responses
Peer reviewed
Full text PDF available on ERIC
Babcock, Ben; Siegel, Zachary D. – Practical Assessment, Research & Evaluation, 2022
Research about repeated testing has revealed that retaking the same exam form generally does not advantage or disadvantage failing candidates on selected-response-style credentialing exams. Feinberg, Raymond, and Haist (2015) found a contributing factor to this phenomenon: people answering items incorrectly on both attempts give the same incorrect…
Descriptors: Multiple Choice Tests, Item Analysis, Test Items, Response Style (Tests)
Peer reviewed
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Peer reviewed
Thompson, James J. – Measurement: Interdisciplinary Research and Perspectives, 2022
With the use of computerized testing, ordinary assessments can capture both answer accuracy and answer response time. For the Canadian Programme for the International Assessment of Adult Competencies (PIAAC) numeracy and literacy subtests, person ability, person speed, question difficulty, question time intensity, fluency (rate), person fluency…
Descriptors: Foreign Countries, Adults, Computer Assisted Testing, Network Analysis
Peer reviewed
Rios, Joseph A.; Guo, Hongwen – Applied Measurement in Education, 2020
The objective of this study was to evaluate whether differential noneffortful responding (identified via response latencies) was present in four countries administered a low-stakes college-level critical thinking assessment. Results indicated significant differences (as large as 0.90 SD) between nearly all country pairings in the…
Descriptors: Response Style (Tests), Cultural Differences, Critical Thinking, Cognitive Tests
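The Rios and Guo abstract does not spell out their latency-based detection rule, but a common approach in this literature flags responses faster than a time threshold and then summarizes effort per examinee. A minimal sketch in Python, assuming a fixed illustrative 3-second cutoff (the function names and threshold are not from the paper):

```python
import numpy as np

def flag_noneffortful(rt_matrix, threshold_s=3.0):
    """Flag responses faster than a fixed time threshold as noneffortful.

    rt_matrix: 2-D array of response times in seconds,
               shape (n_examinees, n_items).
    Returns a boolean array of the same shape: True = rapid guess.
    """
    return rt_matrix < threshold_s

def effort_index(rt_matrix, threshold_s=3.0):
    """Proportion of effortful (non-rapid) responses per examinee,
    in the spirit of response-time-effort-style indices."""
    flags = flag_noneffortful(rt_matrix, threshold_s)
    return 1.0 - flags.mean(axis=1)

# Example: 3 examinees x 4 items
rts = np.array([[12.4, 8.9, 15.2, 10.1],
                [ 1.2, 0.8,  2.1,  1.5],   # rapid guesser
                [ 9.7, 2.4, 11.0,  7.3]])
print(effort_index(rts))  # -> [1.0, 0.0, 0.75]
```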
Peer reviewed
Ivanova, Militsa; Michaelides, Michalis; Eklöf, Hanna – Educational Research and Evaluation, 2020
Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. In this context, the number of actions students take while interacting with an item is a variable that has been gaining attention. The present study aims to investigate how the number of actions performed on…
Descriptors: Foreign Countries, Secondary School Students, Achievement Tests, International Assessment
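A "number of actions" indicator of this kind is typically derived by counting logged interface events per examinee-item pair. A rough sketch, assuming a flat event log whose field layout is entirely hypothetical:

```python
from collections import Counter

# Hypothetical flat log: one (examinee, item, event_type) record per action
log = [
    ("p1", "item1", "click"), ("p1", "item1", "keypress"),
    ("p1", "item2", "click"),
    ("p2", "item1", "click"), ("p2", "item1", "click"),
    ("p2", "item1", "drag"),  ("p2", "item2", "keypress"),
]

def actions_per_item(events):
    """Count logged actions for each (examinee, item) pair."""
    return Counter((person, item) for person, item, _ in events)

print(actions_per_item(log))
# Counter({('p2', 'item1'): 3, ('p1', 'item1'): 2, ...})
```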
Peer reviewed
Abbakumov, Dmitry; Desmet, Piet; Van den Noortgate, Wim – Applied Measurement in Education, 2020
Formative assessments are an important component of massive open online courses (MOOCs), online courses with open access and unlimited student participation. Accurate conclusions about students' proficiency via formative assessments, however, face several challenges: (a) students are typically allowed to make several attempts; and (b) student performance might…
Descriptors: Item Response Theory, Formative Evaluation, Online Courses, Response Style (Tests)
Peer reviewed
Full text PDF available on ERIC
Zehner, Fabian; Harrison, Scott; Eichmann, Beate; Deribo, Tobias; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin – International Educational Data Mining Society, 2020
The "2nd Annual WPI-UMASS-UPENN EDM Data Mining Challenge" required contestants to predict efficient testtaking based on log data. In this paper, we describe our theory-driven and psychometric modeling approach. For feature engineering, we employed the Log-Normal Response Time Model for estimating latent person speed, and the Generalized…
Descriptors: Data Analysis, Competition, Classification, Prediction
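The Log-Normal Response Time Model referenced here treats log response times as normally distributed around the item's time intensity minus the person's latent speed, ln T_ij ~ N(beta_i - tau_j, 1/alpha_i^2). A simplified sketch of person-speed estimation under that model, assuming known item parameters and equal time discriminations (a simplification for illustration, not the authors' actual feature pipeline):

```python
import numpy as np

def estimate_person_speed(log_rts, time_intensity):
    """Estimate latent person speed tau under the log-normal
    response time model: ln T_ij ~ N(beta_i - tau_j, 1/alpha_i^2).

    log_rts:        array (n_persons, n_items) of log response times.
    time_intensity: array (n_items,) of item time intensities beta_i,
                    assumed known (e.g., from a calibration sample).
    Returns tau_hat, one speed estimate per person (higher = faster).
    """
    # With beta fixed and equal alphas, the ML estimate of tau_j is
    # the mean residual beta_i - ln t_ij across items.
    return (time_intensity[np.newaxis, :] - log_rts).mean(axis=1)

# Toy example: 2 persons, 3 items
beta = np.array([3.0, 3.4, 2.8])             # item time intensities
rts = np.array([[22.0, 31.0, 18.0],          # slower person
                [ 9.0, 13.0,  7.0]])         # faster person
print(estimate_person_speed(np.log(rts), beta))
# second estimate should exceed the first
```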
Scott, Marcus W. – ProQuest LLC, 2018
One way that examinees can gain an unfair advantage on a test is by having prior access to the test questions and their answers, known as preknowledge. Determining which examinees had preknowledge can be a difficult task. Sometimes, the compromised test content that examinees use to get preknowledge has mistakes in the answer key. Examinees who…
Descriptors: Cheating, Answer Keys, Tests, Identification
Peer reviewed
Steinmann, Isa; Sánchez, Daniel; van Laar, Saskia; Braeken, Johan – Assessment in Education: Principles, Policy & Practice, 2022
Questionnaire scales that are mixed-worded, i.e. that include both positively and negatively worded items, often suffer from issues such as low reliability and more complex latent structures than intended. Part of the problem might be that some responders fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of…
Descriptors: Response Style (Tests), Test Items, Achievement Tests, Foreign Countries
Peer reviewed
Park, Minjeong; Wu, Amery D. – Educational and Psychological Measurement, 2019
Item response tree (IRTree) models were recently introduced as an approach to modeling response data from Likert-type rating scales. IRTree models are particularly useful for capturing a variety of individuals' behaviors involved in item responding. This study employed IRTree models to investigate response styles, which are individuals' tendencies to…
Descriptors: Item Response Theory, Models, Likert Scales, Response Style (Tests)
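IRTree models decompose each Likert response into a sequence of binary pseudo-items so that response styles (e.g., midpoint or extreme responding) can be modeled at separate nodes. A minimal sketch of one common three-node recoding for a 5-point scale (the specific tree shown is an assumed illustration, not necessarily the one used in this study):

```python
import numpy as np

def irtree_recode(response):
    """Recode a 5-point Likert response (1..5) into three binary
    pseudo-items for a common IRTree structure:
      node 1: midpoint (3) vs. not            -> 1 if midpoint
      node 2: agree (4,5) vs. disagree (1,2)  -> 1 if agree
      node 3: extreme (1 or 5) vs. moderate   -> 1 if extreme
    Nodes 2 and 3 are unreached (NaN) for midpoint responses.
    The pseudo-items can then be fit with ordinary binary IRT models.
    """
    if response == 3:
        return (1.0, np.nan, np.nan)          # later nodes not reached
    direction = 1.0 if response in (4, 5) else 0.0
    extreme = 1.0 if response in (1, 5) else 0.0
    return (0.0, direction, extreme)

for r in range(1, 6):
    print(r, irtree_recode(r))
# 1 -> (0, 0, 1) extreme disagree; 3 -> (1, nan, nan); 5 -> (0, 1, 1)
```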
Peer reviewed
Soland, James; Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2019
Disengaged responding is a phenomenon that often biases observed scores from achievement tests and surveys in practically and statistically significant ways. This problem has led to the development of methods to detect and correct for disengaged responses on both achievement test and survey scores. One major disadvantage when trying to detect…
Descriptors: Reaction Time, Metadata, Response Style (Tests), Student Surveys
OECD Publishing, 2019
Log files from computer-based assessment can help better understand respondents' behaviours and cognitive strategies. Analysis of timing information from Programme for the International Assessment of Adult Competencies (PIAAC) reveals large differences in the time participants take to answer assessment items, as well as large country differences…
Descriptors: Adults, Computer Assisted Testing, Test Items, Reaction Time
Peer reviewed
Adrian Adams; Lauren Barth-Cohen – CBE - Life Sciences Education, 2024
In undergraduate research settings, students are likely to encounter anomalous data, that is, data that do not meet their expectations. Most of the research that directly or indirectly captures the role of anomalous data in research settings uses post-hoc reflective interviews or surveys. These data collection approaches focus on recall of past…
Descriptors: Undergraduate Students, Physics, Science Instruction, Laboratory Experiments
Peer reviewed
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
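One widely used way to set such thresholds is the normative threshold method, which flags any response faster than a fixed percentage of the typical time examinees spend on an item. The sketch below assumes the 10% variant with a 10-second ceiling; these settings are illustrative, not necessarily the ones evaluated in the article:

```python
import numpy as np

def normative_thresholds(rt_matrix, pct=0.10, ceiling_s=10.0):
    """Per-item response time thresholds via the normative method:
    pct of the median item time, capped at ceiling_s seconds."""
    med = np.median(rt_matrix, axis=0)        # typical time per item
    return np.minimum(pct * med, ceiling_s)

def flag_rapid(rt_matrix, thresholds):
    """True where a response is faster than its item's threshold."""
    return rt_matrix < thresholds[np.newaxis, :]

rts = np.array([[40.0, 55.0, 30.0],
                [ 2.5,  3.0, 28.0],
                [38.0, 60.0,  1.0]])
th = normative_thresholds(rts)                # -> [3.8, 5.5, 2.8]
print(flag_rapid(rts, th))                    # flags the two fast cells
```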