Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 9 |
Since 2006 (last 20 years) | 19 |
Author
von Davier, Matthias | 2 |
Asil, Mustafa | 1 |
Ayan, Cansu | 1 |
Baris Pekmezci, Fulya | 1 |
Berberoglu, Giray | 1 |
Bracey, Gerald W. | 1 |
Brown, Gavin T. L. | 1 |
Buchholz, Janine | 1 |
Cai, Li | 1 |
Camminatiello, Ida | 1 |
Carstensen, Claus H. | 1 |
Publication Type
Journal Articles | 17 |
Reports - Research | 13 |
Reports - Evaluative | 5 |
Numerical/Quantitative Data | 2 |
Collected Works - Proceedings | 1 |
Opinion Papers | 1 |
Location
Finland | 3 |
Turkey | 3 |
Australia | 2 |
China (Shanghai) | 2 |
United States | 2 |
Azerbaijan | 1 |
France | 1 |
Germany | 1 |
Germany (Berlin) | 1 |
Greece | 1 |
Indonesia | 1 |
Assessments and Surveys
Program for International… | 20 |
Trends in International… | 2 |
National Assessment of… | 1 |
Soysal, Sümeyra – Participatory Educational Research, 2023
Applying a measurement instrument developed in one country to other countries raises a critical question of interest, especially in cross-cultural studies. Confirmatory factor analysis (CFA) is the most widely used method for examining the cross-cultural applicability of measurement tools. Although CFA is a sophisticated…
Descriptors: Generalization, Cross Cultural Studies, Measurement Techniques, Factor Analysis
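The entry above leans on multi-group CFA to judge whether an instrument travels across countries. A minimal sketch of that idea follows, comparing a configural model (loadings free per group) with a metric model (loadings constrained equal) for a one-factor structure; the simulated data, loading values, and maximum-likelihood fit function are illustrative assumptions, not the article's analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p = 5                                          # number of observed indicators

def simulate(n, loadings):
    """Generate n observations from a one-factor model with unit factor variance."""
    eta = rng.normal(size=n)
    eps = rng.normal(size=(n, p)) * 0.7
    return eta[:, None] * loadings + eps

def ml_discrepancy(sigma, S):
    """F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p."""
    logdet_sigma = np.linalg.slogdet(sigma)[1]
    logdet_S = np.linalg.slogdet(S)[1]
    return logdet_sigma + np.trace(S @ np.linalg.inv(sigma)) - logdet_S - p

def implied(lam, psi, phi=1.0):
    """Model-implied covariance: phi * lam lam' + diag(psi)."""
    return phi * np.outer(lam, lam) + np.diag(psi)

# two simulated groups whose loadings differ (so metric invariance should fail)
S1 = np.cov(simulate(500, np.array([0.8, 0.7, 0.6, 0.7, 0.8])), rowvar=False)
S2 = np.cov(simulate(500, np.array([0.8, 0.7, 0.6, 0.5, 0.4])), rowvar=False)

def fit_configural(S):
    """Loadings and uniquenesses free within one group."""
    def obj(theta):
        lam, psi = theta[:p], np.exp(theta[p:])
        return ml_discrepancy(implied(lam, psi), S)
    x0 = np.r_[np.full(p, 0.5), np.zeros(p)]
    return minimize(obj, x0, method="L-BFGS-B").fun

def fit_metric(S_list):
    """Loadings shared across groups; uniquenesses and group-2 factor variance free."""
    def obj(theta):
        lam = theta[:p]
        psi1, psi2 = np.exp(theta[p:2 * p]), np.exp(theta[2 * p:3 * p])
        phi2 = np.exp(theta[-1])
        return (ml_discrepancy(implied(lam, psi1), S_list[0])
                + ml_discrepancy(implied(lam, psi2, phi2), S_list[1]))
    x0 = np.r_[np.full(p, 0.5), np.zeros(2 * p + 1)]
    return minimize(obj, x0, method="L-BFGS-B").fun

f_config = fit_configural(S1) + fit_configural(S2)
f_metric = fit_metric([S1, S2])
print(f"configural F = {f_config:.3f}, metric F = {f_metric:.3f}")
# A clearly larger metric F signals that equal loadings do not hold across groups.
```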
Hodge, Kari J.; Morgan, Grant B. – Journal of Applied Testing Technology, 2020
The purpose of this study was to examine the use of a misspecified calibration model and its impact on proficiency classification. Monte Carlo simulation methods were employed to compare competing models when the true structure of the data is known (i.e., testlet conditions). The conditions used in the design (e.g., number of items, testlet to…
Descriptors: Item Response Theory, Accuracy, Decision Making, Classification
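The Hodge and Morgan entry rests on Monte Carlo simulation of testlet data scored with a model that ignores the testlet structure. The sketch below illustrates that workflow under assumed conditions (Rasch items, normal testlet effects, a single cut score); it is not the study's design or code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_testlets, items_per = 2000, 5, 4
n_items = n_testlets * items_per
b = rng.normal(0, 1, n_items)                       # item difficulties
testlet_sd = 1.0                                    # size of the ignored testlet effect
cut = 0.0                                           # proficiency cut score on theta

# generate responses from a Rasch model with testlet random effects
theta = rng.normal(size=n_persons)
gamma = rng.normal(0, testlet_sd, (n_persons, n_testlets))
testlet_of = np.repeat(np.arange(n_testlets), items_per)
logit = theta[:, None] - b[None, :] + gamma[:, testlet_of]
resp = (rng.uniform(size=logit.shape) < 1 / (1 + np.exp(-logit))).astype(int)

# EAP scoring under a plain Rasch model (testlet effects ignored), difficulties known
quad = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * quad**2)
p_correct = 1 / (1 + np.exp(-(quad[:, None] - b[None, :])))      # (quad, items)
loglik = resp @ np.log(p_correct).T + (1 - resp) @ np.log(1 - p_correct).T
post = np.exp(loglik - loglik.max(axis=1, keepdims=True)) * prior
post /= post.sum(axis=1, keepdims=True)
eap = post @ quad

# classification agreement between true ability and the misspecified scores
agreement = np.mean((theta >= cut) == (eap >= cut))
print(f"proficiency classification agreement: {agreement:.3f}")
# Repeating this over replications and conditions (testlet variance, test length,
# cut location) yields the kind of comparison such simulation studies report.
```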
Uyar, Seyma – Eurasian Journal of Educational Research, 2020
Purpose: This study aimed to compare the performance of the latent class differential item functioning (DIF) approach with that of IRT-based DIF methods that use manifest grouping, and to draw attention to the conduct of latent class DIF studies in Turkey. Specifically, the study examined DIF in the PISA 2015 science data set. Research…
Descriptors: Item Response Theory, Foreign Countries, Cross Cultural Studies, Item Analysis
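As a small illustration of the manifest-group DIF detection compared in the Uyar entry, the sketch below computes the Mantel-Haenszel common odds ratio for a simulated item with planted DIF; the data and effect size are assumptions, not the PISA 2015 science set.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_items = 4000, 20
group = rng.integers(0, 2, n)                       # 0 = reference, 1 = focal
theta = rng.normal(size=n)
b = rng.normal(0, 1, n_items)
b_focal = b.copy()
b_focal[0] += 0.6                                   # plant uniform DIF on item 0
diff = np.where(group[:, None] == 1, b_focal, b)
resp = (rng.uniform(size=(n, n_items))
        < 1 / (1 + np.exp(-(theta[:, None] - diff)))).astype(int)

def mantel_haenszel(item):
    """MH common odds ratio for one item, stratified on the rest score."""
    total = resp.sum(axis=1) - resp[:, item]        # matching variable
    num = den = 0.0
    for s in np.unique(total):
        in_stratum = total == s
        ref, foc = in_stratum & (group == 0), in_stratum & (group == 1)
        a = resp[ref, item].sum(); b_ref_wrong = ref.sum() - a
        c = resp[foc, item].sum(); d_foc_wrong = foc.sum() - c
        n_s = in_stratum.sum()
        num += a * d_foc_wrong / n_s
        den += b_ref_wrong * c / n_s
    return num / den

for item in (0, 1):
    alpha = mantel_haenszel(item)
    delta = -2.35 * np.log(alpha)                   # ETS delta metric
    print(f"item {item}: MH odds ratio = {alpha:.2f}, delta = {delta:+.2f}")
# By the usual ETS convention, |delta| >= 1.5 flags large (category C) DIF.
```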
Ayan, Cansu; Baris Pekmezci, Fulya – International Journal of Assessment Tools in Education, 2021
Testlets have well-accepted advantages in the literature, such as making it possible to measure higher-order thinking skills and saving time. For this reason, they have often been preferred in many settings, from in-class assessments to large-scale assessments. Because of the increased use of testlets, the following questions are…
Descriptors: Foreign Countries, International Assessment, Secondary School Students, Achievement Tests
Scoular, Claire; Eleftheriadou, Sofia; Ramalingam, Dara; Cloney, Dan – Australian Journal of Education, 2020
Collaboration is a complex skill, composed of multiple subskills, that is of growing interest to policy makers, educators and researchers. Several definitions and frameworks have been described in the literature to support assessment of collaboration; however, the inherent structure of the construct still needs better definition. In 2015, the…
Descriptors: Cooperative Learning, Problem Solving, Computer Assisted Testing, Comparative Analysis
Ivanova, Alina; Kardanova, Elena; Merrell, Christine; Tymms, Peter; Hawker, David – Assessment in Education: Principles, Policy & Practice, 2018
Is it possible to compare the results in assessments of mathematics across countries with different curricula, traditions and age of starting school? As part of the iPIPS project, a Russian version of the iPIPS baseline assessment was developed and trial data were available from about 300 Russian children at the start and end of their first year…
Descriptors: Mathematics Instruction, Foreign Countries, Mathematics Tests, Item Response Theory
Casabianca, Jodi M.; Lewis, Charles – Journal of Educational and Behavioral Statistics, 2015
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Descriptors: Item Response Theory, Maximum Likelihood Statistics, Computation, Comparative Analysis
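The Casabianca and Lewis entry concerns loglinear smoothing (LLS) of the latent trait distribution within marginal maximum likelihood. The sketch below shows the core move under assumed Rasch data: an E-step yields expected counts at quadrature points, and a degree-2 loglinear model is fit to those counts to give a smoothed latent distribution. It is a simplified illustration, not the article's algorithm.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(3)
n, n_items = 3000, 15
b = rng.normal(0, 1, n_items)
theta = rng.normal(0.3, 1.2, n)                     # true latent distribution
resp = (rng.uniform(size=(n, n_items))
        < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)

quad = np.linspace(-4, 4, 41)
weights = np.full(quad.size, 1 / quad.size)         # current latent weights (flat start)

# E-step: posterior weights over quadrature points, then expected counts n_k
p = 1 / (1 + np.exp(-(quad[:, None] - b[None, :])))            # (quad, items)
loglik = resp @ np.log(p).T + (1 - resp) @ np.log(1 - p).T     # (persons, quad)
post = np.exp(loglik - loglik.max(axis=1, keepdims=True)) * weights
post /= post.sum(axis=1, keepdims=True)
counts = post.sum(axis=0)

# Loglinear smoothing: log pi_k is a degree-2 polynomial in theta_k, fit by
# maximizing the multinomial likelihood of the expected counts.
X = np.column_stack([quad, quad**2])

def neg_loglik(beta):
    eta = X @ beta
    return -(counts * (eta - logsumexp(eta))).sum()

beta = minimize(neg_loglik, np.zeros(2), method="BFGS").x
smoothed = np.exp(X @ beta - logsumexp(X @ beta))
mean = smoothed @ quad
sd = np.sqrt(smoothed @ quad**2 - mean**2)
print(f"smoothed latent mean = {mean:.2f}, sd = {sd:.2f}")
# In a full MML-EM cycle the smoothed weights replace `weights`, the M-step
# updates the item parameters, and the process repeats until convergence.
```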
Chen, Haiwen H.; von Davier, Matthias; Yamamoto, Kentaro; Kong, Nan – ETS Research Report Series, 2015
One major issue with large-scale assessments is that the respondents might give no responses to many items, resulting in less accurate estimations of both assessed abilities and item parameters. This report studies how the types of items affect the item-level nonresponse rates and how different methods of treating item-level nonresponses have an…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
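To make the nonresponse issue in the Chen et al. entry concrete, the sketch below contrasts two common treatments of omitted items, scoring them as incorrect versus dropping them from the likelihood, on simulated data in which omission depends on ability. The data-generating choices are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_items = 1000, 25
b = rng.normal(0, 1, n_items)
theta = rng.normal(size=n)
resp = (rng.uniform(size=(n, n_items))
        < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(float)
omit_prob = (1 / (1 + np.exp(2 + theta)))[:, None]   # lower ability omits more often
resp[rng.uniform(size=resp.shape) < omit_prob] = np.nan

quad = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * quad**2); prior /= prior.sum()
p = 1 / (1 + np.exp(-(quad[:, None] - b[None, :])))  # (quad, items)

def eap(data):
    """EAP under the Rasch model; NaN (omitted) responses drop out of the likelihood."""
    observed = ~np.isnan(data)
    filled = np.nan_to_num(data, nan=0.0)
    ll = np.where(filled[:, None, :] == 1, np.log(p), np.log(1 - p))
    ll = (ll * observed[:, None, :]).sum(axis=2)
    post = np.exp(ll - ll.max(axis=1, keepdims=True)) * prior
    post /= post.sum(axis=1, keepdims=True)
    return post @ quad

eap_ignored = eap(resp)                              # omits treated as not administered
eap_as_wrong = eap(np.nan_to_num(resp, nan=0.0))     # omits scored as incorrect
print(f"bias, omits ignored:   {np.mean(eap_ignored - theta):+.3f}")
print(f"bias, omits incorrect: {np.mean(eap_as_wrong - theta):+.3f}")
# Neither treatment is neutral when omission depends on ability, which is why
# model-based treatments are studied for large-scale assessments.
```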
Debeer, Dries; Buchholz, Janine; Hartig, Johannes; Janssen, Rianne – Journal of Educational and Behavioral Statistics, 2014
In this article, the change in examinee effort during an assessment, which we will refer to as persistence, is modeled as an effect of item position. A multilevel extension is proposed to analyze hierarchically structured data and decompose the individual differences in persistence. Data from the 2009 Programme for International Student Assessment (PISA)…
Descriptors: Reading Tests, International Programs, Testing Programs, Individual Differences
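The Debeer et al. entry models declining effort as an item-position effect. A minimal descriptive sketch of that idea follows: with item order randomized per person in simulated data, the drop in the empirical logit of success across positions estimates an average position effect. The simulation settings are assumptions, and the article's IRT-based multilevel model is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_items = 3000, 30
b = rng.normal(0, 1, n_items)
theta = rng.normal(size=n)
position_effect = -0.03                     # true drop in logit per position step

order = np.array([rng.permutation(n_items) for _ in range(n)])  # item order per person
pos_of_item = np.argsort(order, axis=1)     # position at which each item appears
logit = theta[:, None] - b[None, :] + position_effect * pos_of_item
resp = (rng.uniform(size=logit.shape) < 1 / (1 + np.exp(-logit))).astype(int)

# proportion correct by position; randomization makes positions comparable
p_by_pos = np.array([resp[pos_of_item == k].mean() for k in range(n_items)])
emp_logit = np.log(p_by_pos / (1 - p_by_pos))
slope = np.polyfit(np.arange(n_items), emp_logit, 1)[0]
print(f"estimated marginal position effect per step: {slope:+.3f}")
# This marginal slope understates the person-level effect because it averages over
# ability; modeling position effects inside an IRT model, and letting the slope vary
# across persons and groups, is what a multilevel extension is for.
```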
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation
Yang, Ji Seung; Hansen, Mark; Cai, Li – Educational and Psychological Measurement, 2012
Traditional estimators of item response theory scale scores ignore uncertainty carried over from the item calibration process, which can lead to incorrect estimates of the standard errors of measurement (SEMs). Here, the authors review a variety of approaches that have been applied to this problem and compare them on the basis of their statistical…
Descriptors: Item Response Theory, Scores, Statistical Analysis, Comparative Analysis
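The Yang, Hansen, and Cai entry addresses standard errors of measurement that ignore item-calibration uncertainty. The sketch below shows one generic way to propagate that uncertainty, drawing item parameters from an assumed sampling distribution, re-scoring, and combining within- and between-draw variance; it is not any of the specific estimators the article reviews.

```python
import numpy as np

rng = np.random.default_rng(6)
n_items = 20
b_hat = rng.normal(0, 1, n_items)            # calibrated item difficulties
se_b = np.full(n_items, 0.15)                # their (assumed) standard errors
resp = (rng.uniform(size=n_items) < 0.5).astype(int)   # one examinee's responses

quad = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * quad**2); prior /= prior.sum()

def eap_and_var(b):
    """EAP score and posterior variance under a Rasch model with difficulties b."""
    p = 1 / (1 + np.exp(-(quad[:, None] - b[None, :])))
    ll = np.where(resp == 1, np.log(p), np.log(1 - p)).sum(axis=1)
    post = np.exp(ll - ll.max()) * prior
    post /= post.sum()
    m = post @ quad
    return m, post @ (quad - m) ** 2

# (a) conventional SEM: calibrated parameters treated as fixed and known
eap0, var0 = eap_and_var(b_hat)

# (b) propagate calibration error: re-score under draws of the item parameters
draws = [eap_and_var(b_hat + rng.normal(0, se_b)) for _ in range(200)]
means, variances = map(np.array, zip(*draws))
total_var = variances.mean() + means.var(ddof=1)   # within- plus between-draw variance
print(f"SEM ignoring calibration error:    {np.sqrt(var0):.3f}")
print(f"SEM propagating calibration error: {np.sqrt(total_var):.3f}")
```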
Stiller, Jurik; Hartmann, Stefan; Mathesius, Sabrina; Straube, Philipp; Tiemann, Rüdiger; Nordmeier, Volkhard; Krüger, Dirk; Upmeier zu Belzen, Annette – Assessment & Evaluation in Higher Education, 2016
The aim of this study was to improve the criterion-related test score interpretation of a text-based assessment of scientific reasoning competencies in higher education by evaluating factors which systematically affect item difficulty. To provide evidence about the specific demands which test items of various difficulty make on pre-service…
Descriptors: Logical Thinking, Scientific Concepts, Difficulty Level, Test Items
Asil, Mustafa; Brown, Gavin T. L. – International Journal of Testing, 2016
The use of the Programme for International Student Assessment (PISA) across nations, cultures, and languages has been criticized. The key criticisms point to the linguistic and cultural biases potentially underlying the design of reading comprehension tests, raising doubts about the legitimacy of comparisons across economies. Our research focused…
Descriptors: Comparative Analysis, Reading Achievement, Achievement Tests, Secondary School Students
Oliveri, Maria Elena; Olson, Brent F.; Ercikan, Kadriye; Zumbo, Bruno D. – International Journal of Testing, 2012
In this study, the Canadian English and French versions of the Problem-Solving Measure of the Programme for International Student Assessment 2003 were examined to investigate their degree of measurement comparability at the item- and test-levels. Three methods of differential item functioning (DIF) were compared: parametric and nonparametric item…
Descriptors: Foreign Students, Test Bias, Speech Communication, Effect Size
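As a companion to the parametric DIF methods compared in the Oliveri et al. entry, the sketch below runs a logistic-regression DIF screen on simulated data: each item is regressed on a matching score, group membership, and their interaction, and a likelihood-ratio statistic flags DIF. The data and the use of a rest score as the matching variable are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, n_items = 4000, 20
group = rng.integers(0, 2, n)                # 0 = reference, 1 = focal
theta = rng.normal(size=n)
b = rng.normal(0, 1, n_items)
dif = 0.5 * (np.arange(n_items) == 0)        # uniform DIF planted on item 0
diff = b[None, :] + np.where(group[:, None] == 1, 1.0, 0.0) * dif
resp = (rng.uniform(size=(n, n_items))
        < 1 / (1 + np.exp(-(theta[:, None] - diff)))).astype(int)

def neg_loglik(beta, X, y):
    """Negative Bernoulli log-likelihood with a logit link."""
    eta = X @ beta
    return -(y * eta - np.logaddexp(0, eta)).sum()

def lr_dif(item):
    """Likelihood-ratio test of the group and group-by-score terms (2 df)."""
    score = resp.sum(axis=1) - resp[:, item]            # rest score as matching variable
    y = resp[:, item]
    X0 = np.column_stack([np.ones(n), score])
    X1 = np.column_stack([np.ones(n), score, group, score * group])
    ll0 = -minimize(neg_loglik, np.zeros(2), args=(X0, y), method="BFGS").fun
    ll1 = -minimize(neg_loglik, np.zeros(4), args=(X1, y), method="BFGS").fun
    return 2 * (ll1 - ll0)

print(f"LR DIF chi-square, item 0 (DIF planted): {lr_dif(0):.1f}")
print(f"LR DIF chi-square, item 1 (no DIF):      {lr_dif(1):.1f}")
# Values well above the 2-df critical value (about 5.99 at alpha = .05) flag DIF.
```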
Camminatiello, Ida; Gallo, Michele; Menini, Tullio – Journal of Applied Quantitative Methods, 2010
In 1997 the Organisation for Economic Co-operation and Development (OECD) launched the OECD Programme for International Student Assessment (PISA) to collect information about 15-year-old students in participating countries. Our study analyses the PISA 2006 cognitive test to evaluate Italian students' performance in mathematics, reading…
Descriptors: Foreign Countries, Cognitive Tests, Item Response Theory, Mathematics Achievement