Showing 1 to 15 of 97 results
Peer reviewed
Gustafsson, Martin; Barakat, Bilal Fouad – Comparative Education Review, 2023
International assessments inform education policy debates, yet little is known about their floor effects: To what extent do they fail to differentiate between the lowest performers, and what are the implications of this? TIMSS, SACMEQ, and LLECE data are analyzed to answer this question. In TIMSS, floor effects have been reduced through the…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
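A floor effect of the kind this study examines can be approximated directly from raw scores: on a multiple-choice test, students scoring at or below the chance level are effectively undifferentiated. A minimal sketch of that idea (hypothetical data and function names, not the authors' method):

```python
import numpy as np

def floor_effect_share(raw_scores, n_items, n_options=4):
    """Share of test-takers scoring at or below chance level.

    On a test of n_items multiple-choice items with n_options options
    each, pure guessing yields an expected score of n_items / n_options;
    scores at or below that level barely differentiate low performers.
    """
    chance_score = n_items / n_options
    raw_scores = np.asarray(raw_scores)
    return float(np.mean(raw_scores <= chance_score))

# Hypothetical example: 40 four-option items, simulated raw scores.
rng = np.random.default_rng(0)
scores = rng.binomial(40, 0.35, size=5000)
print(f"Share at or below chance: {floor_effect_share(scores, 40):.1%}")
```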
Peer reviewed
PDF available on ERIC
Qiwei He – International Journal of Assessment Tools in Education, 2023
Collaborative problem solving (CPS) is inherently an interactive, conjoint, dual-strand process that considers how a student reasons about a problem as well as how s/he interacts with others to regulate social processes and exchange information (OECD, 2013). Measuring CPS skills presents a challenge for obtaining consistent, accurate, and reliable…
Descriptors: Cooperative Learning, Problem Solving, Test Items, International Assessment
Emma Walland – Research Matters, 2024
GCSE examinations (taken by students aged 16 years in England) are not intended to be speeded (i.e. to be partly a test of how quickly students can answer questions). However, there has been little research exploring this. The aim of this research was to explore the speededness of past GCSE written examinations, using only the data from scored…
Descriptors: Educational Change, Test Items, Item Analysis, Scoring
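One common way to quantify speededness from scored response data alone is to track how often items late in the paper go unanswered. A minimal sketch under that assumption (hypothetical data; not necessarily the study's actual index):

```python
import numpy as np

def not_reached_rate_by_position(responses):
    """Proportion of examinees with missing answers at each item position.

    responses: 2-D array (examinees x items), with np.nan marking an
    unanswered item. A sharp rise in missingness toward the final items
    is one conventional signal of speededness.
    """
    return np.mean(np.isnan(responses), axis=0)

# Hypothetical example: 1000 examinees, 20 items, missingness grows late.
rng = np.random.default_rng(1)
resp = rng.integers(0, 2, size=(1000, 20)).astype(float)
miss_prob = np.linspace(0.0, 0.3, 20)  # more omissions near the end
resp[rng.random((1000, 20)) < miss_prob] = np.nan
print(np.round(not_reached_rate_by_position(resp), 2))
```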
Peer reviewed
B. Goecke; S. Weiss; B. Barbot – Journal of Creative Behavior, 2025
The present paper questions the content validity of the eight creativity-related self-report scales available in PISA 2022's context questionnaire and provides a set of considerations for researchers interested in using these indexes. Specifically, we point out some threats to the content validity of these scales (e.g., "creative thinking…
Descriptors: Creativity, Creativity Tests, Questionnaires, Content Validity
Peer reviewed
Selcuk Acar; Yuyang Shen – Journal of Creative Behavior, 2025
Creativity tests, like creativity itself, vary widely in their structure and use. These differences include instructions, test duration, environments, prompt and response modalities, and the structure of test items. A key factor is task structure, referring to the specificity of the number of responses requested for a given prompt. Classic…
Descriptors: Creativity, Creative Thinking, Creativity Tests, Task Analysis
Peer reviewed
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
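To make the shared conceptualization concrete: a two-parameter IRT model and a Gaussian signal detection model both express the probability of a correct answer as a monotone transform of the distance between a person parameter and an item threshold. A minimal illustration of the two curves (not DeCarlo's exact parameterization):

```python
import math

def irt_2pl(theta, a, b):
    """2PL IRT: logistic probability of a correct open-ended response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sdt_choice(d_prime, criterion):
    """Gaussian SDT: probability of endorsing the correct alternative,
    the normal-ogive analogue of the logistic curve above."""
    z = d_prime - criterion
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Both probabilities rise with the person-item distance:
print(irt_2pl(theta=1.0, a=1.2, b=0.0))        # ~0.77
print(sdt_choice(d_prime=1.0, criterion=0.0))  # ~0.84
```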
Peer reviewed
Joo, Seang-Hwane; Khorramdel, Lale; Yamamoto, Kentaro; Shin, Hyo Jeong; Robin, Frederic – Educational Measurement: Issues and Practice, 2021
In Programme for International Student Assessment (PISA), item response theory (IRT) scaling is used to examine the psychometric properties of items and scales and to provide comparable test scores across participating countries and over time. To balance the comparability of IRT item parameter estimations across countries with the best possible…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students
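A standard diagnostic in this kind of cross-country scaling is the root mean square deviation (RMSD) between a country's observed item response curve and the curve implied by the international item parameters; items with large RMSD may be assigned country-specific parameters. A minimal sketch (hypothetical inputs; a simplification of operational PISA procedures):

```python
import numpy as np

def rmsd_item_fit(theta_grid, observed_p, model_p, theta_density):
    """RMSD between a country's observed proportion-correct curve and
    the internationally scaled model curve, weighted by the country's
    ability density over the theta grid."""
    w = theta_density / np.sum(theta_density)
    return float(np.sqrt(np.sum(w * (observed_p - model_p) ** 2)))

# Hypothetical example on a coarse theta grid:
theta = np.linspace(-3, 3, 13)
model = 1 / (1 + np.exp(-1.1 * (theta - 0.2)))     # international ICC
observed = 1 / (1 + np.exp(-0.7 * (theta - 0.6)))  # country's curve
density = np.exp(-0.5 * theta**2)                  # ~standard normal
print(f"RMSD: {rmsd_item_fit(theta, observed, model, density):.3f}")
```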
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
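One simple screen for such noneffortful omissions is the rate at which a student skips items that a fitted model says they had a high probability of answering correctly. A minimal sketch (hypothetical data and threshold; not the authors' model):

```python
import numpy as np

def suspicious_skip_rate(skipped, p_correct, p_threshold=0.7):
    """Per-student rate of skipping items they were likely able to solve.

    skipped:   2-D boolean array (students x items), True if omitted.
    p_correct: model-implied probability of success for each student-item
               pair (e.g., from a fitted IRT model).
    Skips on items with p_correct above p_threshold are counted as
    potentially noneffortful rather than ability-driven.
    """
    likely_solvable = p_correct >= p_threshold
    candidate = skipped & likely_solvable
    return candidate.sum(axis=1) / np.maximum(likely_solvable.sum(axis=1), 1)

# Hypothetical example: 4 students, 5 items.
skipped = np.array([[0, 1, 0, 0, 1],
                    [0, 0, 0, 0, 0],
                    [1, 1, 1, 0, 0],
                    [0, 0, 0, 1, 0]], dtype=bool)
p = np.full((4, 5), 0.8)
print(suspicious_skip_rate(skipped, p))  # [0.4, 0.0, 0.6, 0.2]
```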
Peer reviewed
Gantt, Allison L.; Paoletti, Teo; Corven, Julien – International Journal of Science and Mathematics Education, 2023
Covariational reasoning (or the coordination of two dynamically changing quantities) is central to secondary STEM subjects, but research has yet to fully explore its applicability to elementary and middle-grade levels within various STEM fields. To address this need, we selected a globally referenced STEM assessment--the Trends in International…
Descriptors: Incidence, Abstract Reasoning, Mathematics Education, Science Education
Peer reviewed
Rivas, Axel; Scasso, Martín Guillermo – Journal of Education Policy, 2021
Since 2000, the PISA test implemented by the OECD has become the prime benchmark for international comparisons in education. The 2015 PISA edition introduced methodological changes that altered the nature of its results. PISA stopped treating non-reached items in the final part of the test as valid responses, assuming that those unanswered questions were more a…
Descriptors: Test Validity, Computer Assisted Testing, Foreign Countries, Achievement Tests
Peer reviewed
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
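The IRTree idea can be illustrated by how a single 5-point Likert response decomposes into sequential binary pseudo-items (midpoint vs. not, direction, extremity), with the extremity node carrying the response-style information. A minimal sketch of that common decomposition (not the mixture model itself):

```python
def irtree_pseudo_items(category):
    """Decompose a 5-point Likert response (1-5) into the three binary
    nodes of a common IRTree: midpoint, direction, and extremity.

    Returns (midpoint, direction, extreme); None marks a node that is
    not reached on that branch (treated as missing when estimating).
    """
    midpoint = 1 if category == 3 else 0
    if midpoint:
        return (1, None, None)                # midpoint chosen; no further nodes
    direction = 1 if category > 3 else 0      # 1 = agree side, 0 = disagree side
    extreme = 1 if category in (1, 5) else 0  # extremity node: response style
    return (midpoint, direction, extreme)

for c in range(1, 6):
    print(c, irtree_pseudo_items(c))
# 1 (0, 0, 1)  2 (0, 0, 0)  3 (1, None, None)  4 (0, 1, 0)  5 (0, 1, 1)
```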
Yamamoto, Kentaro; Shin, Hyo Jeong; Khorramdel, Lale – OECD Publishing, 2019
This paper describes and evaluates a multistage adaptive testing (MSAT) design that was implemented for the Programme for International Student Assessment (PISA) 2018 main survey for the major domain of Reading. Through a simulation study, recovery of item response theory model parameters and measurement precision were examined. The PISA 2018 MSAT…
Descriptors: Adaptive Testing, Test Construction, Achievement Tests, Foreign Countries
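The core of a multistage adaptive design is routing: after a first-stage module, the examinee's provisional score determines which second-stage module (easier or harder) is administered. A minimal sketch of that routing logic (hypothetical cut scores and module names, not the operational PISA 2018 design):

```python
def route_next_module(stage1_score, low_cut=4, high_cut=8):
    """Route an examinee to a second-stage module based on the number
    of stage-1 items answered correctly (hypothetical cut scores)."""
    if stage1_score < low_cut:
        return "stage2_easy"
    if stage1_score < high_cut:
        return "stage2_medium"
    return "stage2_hard"

for score in (2, 6, 10):
    print(score, "->", route_next_module(score))
# 2 -> stage2_easy   6 -> stage2_medium   10 -> stage2_hard
```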
National Academies Press, 2022
The National Assessment of Educational Progress (NAEP) -- often called "The Nation's Report Card" -- is the largest nationally representative and continuing assessment of what students in public and private schools in the United States know and can do in various subjects and has provided policy makers and the public with invaluable…
Descriptors: Costs, Futures (of Society), National Competency Tests, Educational Trends
Peer reviewed
Jerrim, John; Parker, Philip; Choi, Alvaro; Chmielewski, Anna Katyn; Sälzer, Christine; Shure, Nikki – Educational Measurement: Issues and Practice, 2018
The Programme for International Student Assessment (PISA) is an important international study of 15-year-olds' knowledge and skills. New results are released every three years and have a substantial impact on education policy. Yet, despite its influence, the methodology underpinning PISA has received significant criticism. Much of this criticism has…
Descriptors: Educational Assessment, Comparative Education, Achievement Tests, Foreign Countries
Peer reviewed
Wise, Steven L. – Applied Measurement in Education, 2015
Whenever the purpose of measurement is to inform an inference about a student's achievement level, it is important that we be able to trust that the student's test score accurately reflects what that student knows and can do. Such trust requires the assumption that a student's test event is not unduly influenced by construct-irrelevant factors…
Descriptors: Achievement Tests, Scores, Validity, Test Items
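A widely used screen for one such construct-irrelevant factor, rapid guessing, flags responses faster than an item-specific time threshold and summarizes each examinee's response-time effort. A minimal sketch (hypothetical threshold rule; simplified from the response-time-effort literature):

```python
import numpy as np

def response_time_effort(rt, thresholds):
    """Response-time effort per examinee: the share of that examinee's
    responses slower than each item's rapid-guessing threshold.

    rt:         2-D array (examinees x items) of response times in seconds.
    thresholds: per-item thresholds below which a response is treated
                as a rapid guess (e.g., a fixed 3 s, or a fraction of
                the item's mean response time).
    """
    solution_behavior = rt >= thresholds  # True = effortful response
    return solution_behavior.mean(axis=1)

# Hypothetical example: 3 examinees, 4 items, 3-second thresholds.
rt = np.array([[12.0, 30.5, 2.1, 18.0],
               [1.2, 2.0, 1.8, 2.5],
               [25.0, 14.0, 9.5, 40.0]])
print(response_time_effort(rt, np.full(4, 3.0)))  # [0.75, 0.0, 1.0]
```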