Showing 16 to 30 of 50 results
Peer reviewed
Kroehne, Ulf; Buerger, Sarah; Hahnel, Carolin; Goldhammer, Frank – Educational Measurement: Issues and Practice, 2019
For many years, reading comprehension in the Programme for International Student Assessment (PISA) was measured via paper-based assessment (PBA). In the 2015 cycle, computer-based assessment (CBA) was introduced, raising the question of whether central equivalence criteria required for a valid interpretation of the results are fulfilled. As an…
Descriptors: Reading Comprehension, Computer Assisted Testing, Achievement Tests, Foreign Countries
Peer reviewed
Costa, Denise Reis; Chen, Chia-Wen – Large-scale Assessments in Education, 2023
Given the ongoing development of computer-based tasks, there has been increasing interest in modelling students' behaviour indicators from log file data with contextual variables collected via questionnaires. In this work, we apply a latent regression model to analyse the relationship between latent constructs (i.e., performance, speed, and…
Descriptors: Achievement Tests, Secondary School Students, International Assessment, Foreign Countries
Peer reviewed
Yamamoto, Kentaro; Lennon, Mary Louise – Quality Assurance in Education: An International Perspective, 2018
Purpose: Fabricated data jeopardize the reliability of large-scale population surveys and reduce the comparability of such efforts by destroying the linkage between data and measurement constructs. Such data result in the loss of comparability across participating countries and, in the case of cyclical surveys, between past and present surveys.…
Descriptors: Measurement, Deception, Data, Identification
Peer reviewed
Yamamoto, Kentaro; Shin, Hyo Jeong; Khorramdel, Lale – Educational Measurement: Issues and Practice, 2018
A multistage adaptive testing (MST) design was implemented for the Programme for the International Assessment of Adult Competencies (PIAAC) starting in 2012 for about 40 countries and has been implemented for the 2018 cycle of the Programme for International Student Assessment (PISA) for more than 80 countries. Using examples from PISA and PIAAC,…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Test Validity
OECD Publishing, 2017
Solving unfamiliar problems on one's own is important, but in today's increasingly interconnected world, people are often required to collaborate in order to achieve their goals. Teamwork has numerous benefits, from a diverse range of opinions to synergies among team members, and assigning tasks to those who are best suited to them. Collaboration…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
Yang, Zhitong – ProQuest LLC, 2019
Computer-based assessments allow practitioners to collect rich process data by logging students' interactions with assessment tasks. In addition to providing final responses to test questions, computer-based assessments promise to furnish more evidence to support claims about what a student knows and can do through logging process data in log…
Descriptors: Problem Solving, Computer Assisted Testing, Data Analysis, Foreign Countries
Peer reviewed
Teig, Nani; Scherer, Ronny; Kjærnsli, Marit – Journal of Research in Science Teaching, 2020
Previous research has demonstrated the potential of examining log-file data from computer-based assessments to understand student interactions with complex inquiry tasks. Rather than solely providing information about what has been achieved or the accuracy of student responses ("product data"), students' log files offer additional…
Descriptors: Science Process Skills, Thinking Skills, Inquiry, Simulation
Peer reviewed
Scoular, Claire; Eleftheriadou, Sofia; Ramalingam, Dara; Cloney, Dan – Australian Journal of Education, 2020
Collaboration is a complex skill, comprised of multiple subskills, that is of growing interest to policy makers, educators and researchers. Several definitions and frameworks have been described in the literature to support assessment of collaboration; however, the inherent structure of the construct still needs better definition. In 2015, the…
Descriptors: Cooperative Learning, Problem Solving, Computer Assisted Testing, Comparative Analysis
Peer reviewed
Lehane, Paula; Scully, Darina; O'Leary, Michael – Irish Educational Studies, 2022
In line with the widespread proliferation of digital technology in everyday life, many countries are now beginning to use computer-based exams (CBEs) in their post-primary education systems. To ensure that these CBEs are delivered in a manner that preserves their fairness, validity, utility and credibility, several factors pertaining to their…
Descriptors: Computer Assisted Testing, Secondary School Students, Culture Fair Tests, Test Validity
Peer reviewed
Høvsgaard Maguire, Laura – Discourse: Studies in the Cultural Politics of Education, 2019
Algorithmic practices are becoming increasingly more central within educational governance. By focusing on the mechanisms of a particular algorithmic testing system in Denmark, this paper highlights how such practices are implicated in the emergence of new accountability infrastructures. It adopts an STS approach drawing specifically upon Michel…
Descriptors: Educational Administration, Foreign Countries, Accountability, National Competency Tests
Peer reviewed
Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2017
There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…
Descriptors: Test Bias, Computer Assisted Testing, Item Response Theory, Achievement Tests
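The Response Time Effort (RTE) measure mentioned in the Wise and Gao abstract has a simple published form: the proportion of an examinee's item responses whose response time meets an item-level time threshold (responses faster than the threshold are treated as rapid guessing rather than solution behavior). A minimal sketch, with illustrative timing data and thresholds that are assumptions, not values from the studies listed here:

```python
def response_time_effort(response_times, thresholds):
    """Proportion of items answered with solution behavior,
    i.e. response time at or above the item's time threshold."""
    flags = [rt >= th for rt, th in zip(response_times, thresholds)]
    return sum(flags) / len(flags)

# Illustrative data: seconds spent on five items vs. assumed thresholds.
times = [12.0, 1.5, 30.2, 2.0, 18.7]
thresholds = [5.0, 5.0, 10.0, 5.0, 10.0]
print(response_time_effort(times, thresholds))  # 0.6
```

How the thresholds are set (e.g., a fixed cutoff, a percentage of average item time, or visual inspection of response-time distributions) varies across studies and is not specified in this abstract.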
Peer reviewed
Kuo, Bor-Chen; Liao, Chen-Huei; Pai, Kai-Chih; Shih, Shu-Chuan; Li, Cheng-Hsuan; Mok, Magdalena Mo Ching – Educational Psychology, 2020
The current study explores students' collaboration and problem solving (CPS) abilities using a human-to-agent (H-A) computer-based collaborative problem solving assessment. Five CPS assessment units with 76 conversation-based items were constructed using the PISA 2015 CPS framework. In the experiment, 53,855 ninth and tenth graders in Taiwan were…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Item Response Theory
Csapó, Benő, Ed.; Funke, Joachim, Ed. – OECD Publishing, 2017
Solving non-routine problems is a key competence in a world full of changes, uncertainty and surprise where we strive to achieve so many ambitious goals. But the world is also full of solutions because of the extraordinary competences of humans who search for and find them. We must explore the world around us in a thoughtful way, acquire knowledge…
Descriptors: Problem Solving, Achievement Tests, Foreign Countries, Secondary School Students
Peer reviewed
Zehner, Fabian; Goldhammer, Frank; Lubaway, Emily; Sälzer, Christine – Education Inquiry, 2019
In 2015, the "Programme for International Student Assessment" (PISA) introduced multiple changes in its study design, the most extensive being the transition from paper- to computer-based assessment. We investigated the differences between German students' text responses to eight reading items from the paper-based study in 2012 to text…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Matthias – ETS Research Report Series, 2017
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items and require human coding (scoring). This process is time-consuming, expensive, and prone to error as often (a) humans code inconsistently, and (b) coding reliability in…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students