Publication Date
  In 2025: 0
  Since 2024: 2
  Since 2021 (last 5 years): 11
  Since 2016 (last 10 years): 34
  Since 2006 (last 20 years): 44
Author
  Goldhammer, Frank: 3
  Greiff, Samuel: 3
  Yamamoto, Kentaro: 3
  Borgonovi, Francesca: 2
  Jerrim, John: 2
  Sahin, Füsun: 2
  Shin, Hyo Jeong: 2
  Sälzer, Christine: 2
  Ulitzsch, Esther: 2
  Wise, Steven L.: 2
  Zehner, Fabian: 2
Audience
  Administrators: 1
  Teachers: 1
Location
  Germany: 7
  Australia: 3
  Denmark: 3
  Norway: 3
  Sweden: 3
  Africa: 2
  Asia: 2
  Finland: 2
  France: 2
  Ireland: 2
  South America: 2
Assessments and Surveys
  Program for International…: 46
  Trends in International…: 4
  Program for the International…: 1
  Progress in International…: 1
  Raven Advanced Progressive…: 1
  Teaching and Learning…: 1
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances such as straight-lining patterns, multivariate outliers, or signs that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
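The indicator-based scans described in the abstract above can be illustrated with a minimal sketch: flag respondents who straight-line (long runs of identical answers) or whose per-item times suggest rushing. The function names and the cutoff values (`rt_threshold`, `max_run`) are illustrative assumptions, not indicators taken from the paper.

```python
import numpy as np

def longest_true_run(row):
    """Length of the longest consecutive run of True values."""
    best = cur = 0
    for v in row:
        cur = cur + 1 if v else 0
        best = max(best, cur)
    return best

def flag_careless(responses, times, rt_threshold=2.0, max_run=10):
    """Flag respondents showing straight-lining or rushed responding.

    responses: (n_respondents, n_items) Likert responses
    times:     (n_respondents, n_items) per-item response times in seconds
    The cutoffs are illustrative, not values from the literature.
    """
    responses = np.asarray(responses)
    times = np.asarray(times, dtype=float)
    # Straight-lining: longest run of identical consecutive answers
    same = responses[:, 1:] == responses[:, :-1]
    run_len = np.array([longest_true_run(row) + 1 for row in same])
    straightline = run_len >= max_run
    # Rushing: median per-item response time below a threshold
    rushed = np.median(times, axis=1) < rt_threshold
    return straightline | rushed
```

Each indicator alone is noisy (a fast, consistent honest respondent can trip either one), which is the susceptibility the abstract alludes to; combining indicators, as above, only partly mitigates this.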
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and the educational sciences. Response bias may pose an additional source of variation between respondents that threatens the validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
Han, Areum; Krieger, Florian; Borgonovi, Francesca; Greiff, Samuel – Large-scale Assessments in Education, 2023
Process data are becoming increasingly popular in education research. In the field of computer-based assessments of collaborative problem solving (ColPS), process data have been used to identify students' test-taking strategies while working on the assessment, and such data can complement data collected on accuracy and overall…
Descriptors: Behavior Patterns, Cooperative Learning, Problem Solving, Reaction Time
Wise, Steven L.; Soland, James; Bo, Yuanchao – International Journal of Testing, 2020
Disengaged test taking tends to be most prevalent with low-stakes tests. This has led to questions about the validity of aggregated scores from large-scale international assessments such as PISA and TIMSS, as previous research has found a meaningful correlation between the mean engagement and mean performance of countries. The current study, using…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students
Avvisati, Francesco; Borgonovi, Francesca – Large-scale Assessments in Education, 2023
Written instructions seldom need to be read when playing videogames. Instead, gaming often involves early information foraging and expansive exploration behaviors. We use data from the Programme for International Student Assessment (PISA) to explore whether students who regularly play videogames (daily gamers) adopt behaviors that are typical of…
Descriptors: Video Games, Science Process Skills, Problem Solving, Discovery Learning
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Baxter, Jacqueline – Management in Education, 2021
Dr. Tracey Burns is a Senior Analyst in the OECD's Centre for Educational Research and Innovation. She heads a portfolio of projects including Innovative Teaching for Effective Learning, 21st Century Children, and Trends Shaping Education. Until recently, she was also responsible for the OECD's work on Governing Complex Education Systems. Previous to…
Descriptors: COVID-19, Pandemics, Coping, Crisis Management
A Sequential Bayesian Changepoint Detection Procedure for Aberrant Behaviors in Computerized Testing
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
In statistical inference, changepoints are abrupt variations in a sequence of data. In educational and psychological assessments, it is pivotal to properly distinguish examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
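To make the changepoint idea above concrete: a respondent who switches from solution behavior to rapid guessing mid-test typically shows an abrupt drop in per-item response times. A minimal least-squares sketch of locating such a split follows; this is a generic stand-in for changepoint detection, not the sequential Bayesian procedure the paper proposes, and the function name is my own.

```python
import numpy as np

def single_changepoint(x):
    """Return the index k that best splits the sequence x into two
    constant-mean segments, by minimizing within-segment squared error.
    A simple offline stand-in for changepoint detection."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_cost = None, np.inf
    for k in range(1, n):
        left, right = x[:k], x[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

A sequential (online) procedure, as in the paper, would instead update its belief about a changepoint item by item during the test rather than scanning all splits after the fact.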
Sahin, Füsun; Colvin, Kimberly F. – Large-scale Assessments in Education, 2020
The item responses of examinees who rapid-guess, i.e., who do not spend enough time reading and engaging with an item, will not reflect their true ability on that item. Rapid disengagement refers to rapidly selecting a response to multiple-choice items (i.e., rapid-guessing), omitting items, or providing short, unrelated answers to open-ended items in an…
Descriptors: Guessing (Tests), Item Response Theory, Reaction Time, Learner Engagement
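Rapid-guessing identification of the kind described above is commonly operationalized with response-time thresholds: a response faster than a small fraction of the item's typical time is flagged, and the share of unflagged responses gives a per-examinee effort index. A minimal sketch follows; the fraction and floor values are illustrative assumptions, not parameters from this study.

```python
import numpy as np

def rapid_guess_flags(times, frac=0.10, floor=1.0):
    """Flag item responses as rapid guesses when the response time falls
    below a normative threshold: frac of the item's mean time, with a
    lower floor in seconds. Both cutoffs are illustrative.

    times: (n_examinees, n_items) response times in seconds
    Returns a boolean matrix of the same shape."""
    times = np.asarray(times, dtype=float)
    thresholds = np.maximum(frac * times.mean(axis=0), floor)
    return times < thresholds

def response_time_effort(times, frac=0.10, floor=1.0):
    """Per-examinee effort index: the proportion of items answered
    above the rapid-guessing threshold (1.0 = fully engaged)."""
    flags = rapid_guess_flags(times, frac, floor)
    return 1.0 - flags.mean(axis=1)
```

Flagged responses can then be filtered or down-weighted before estimating item parameters, which is the bias-reduction step at issue in low-stakes assessments.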
Lundgren, Erik; Eklöf, Hanna – Educational Research and Evaluation, 2020
The present study used process data from a computer-based problem-solving task as indications of behavioural level of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and…
Descriptors: Computer Assisted Testing, Problem Solving, Response Style (Tests), Test Items
Rivas, Axel; Scasso, Martín Guillermo – Journal of Education Policy, 2021
Since 2000, the PISA test implemented by the OECD has become the prime benchmark for international comparisons in education. The 2015 PISA edition introduced methodological changes that altered the nature of its results: PISA no longer treated non-reached items in the final part of the test as valid responses, assuming that those unanswered questions were more a…
Descriptors: Test Validity, Computer Assisted Testing, Foreign Countries, Achievement Tests
Kroehne, Ulf; Buerger, Sarah; Hahnel, Carolin; Goldhammer, Frank – Educational Measurement: Issues and Practice, 2019
For many years, reading comprehension in the Programme for International Student Assessment (PISA) was measured via paper-based assessment (PBA). In the 2015 cycle, computer-based assessment (CBA) was introduced, raising the question of whether central equivalence criteria required for a valid interpretation of the results are fulfilled. As an…
Descriptors: Reading Comprehension, Computer Assisted Testing, Achievement Tests, Foreign Countries
Costa, Denise Reis; Chen, Chia-Wen – Large-scale Assessments in Education, 2023
Given the ongoing development of computer-based tasks, there has been increasing interest in modelling students' behaviour indicators from log file data with contextual variables collected via questionnaires. In this work, we apply a latent regression model to analyse the relationship between latent constructs (i.e., performance, speed, and…
Descriptors: Achievement Tests, Secondary School Students, International Assessment, Foreign Countries
Yamamoto, Kentaro; Lennon, Mary Louise – Quality Assurance in Education: An International Perspective, 2018
Purpose: Fabricated data jeopardize the reliability of large-scale population surveys and reduce the comparability of such efforts by destroying the linkage between data and measurement constructs. Such data result in the loss of comparability across participating countries and, in the case of cyclical surveys, between past and present surveys.…
Descriptors: Measurement, Deception, Data, Identification
Yamamoto, Kentaro; Shin, Hyo Jeong; Khorramdel, Lale – Educational Measurement: Issues and Practice, 2018
A multistage adaptive testing (MST) design was implemented for the Programme for the International Assessment of Adult Competencies (PIAAC) starting in 2012 for about 40 countries and has been implemented for the 2018 cycle of the Programme for International Student Assessment (PISA) for more than 80 countries. Using examples from PISA and PIAAC,…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Test Validity