von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2021
This research proposes a new statistic for testing latent variable distribution fit for unidimensional item response theory (IRT) models. If the typical assumption of normality is violated, then item parameter estimates will be biased, and dependent quantities such as IRT score estimates will be adversely affected. The proposed statistic compares…
Descriptors: Item Response Theory, Simulation, Scores, Comparative Analysis
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) has many advantages over its predecessor, classical test theory (CTT), such as invariant item parameters and ability estimates that do not depend on the particular items administered. To obtain these advantages, however, several assumptions must be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
Gökçe, Semirhan; Berberoglu, Giray; Wells, Craig S.; Sireci, Stephen G. – Journal of Psychoeducational Assessment, 2021
The 2015 Trends in International Mathematics and Science Study (TIMSS) involved 57 countries and 43 different languages to assess students' achievement in mathematics and science. The purpose of this study is to evaluate whether items and test scores are affected as the differences between language families and cultures increase. Using…
Descriptors: Language Classification, Elementary Secondary Education, Mathematics Achievement, Mathematics Tests
Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education
Hastedt, Dirk; Desa, Deana – Practical Assessment, Research & Evaluation, 2015
This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average…
Descriptors: Case Studies, Simulation, International Programs, Testing Programs
Cheong, Yuk Fai; Kamata, Akihito – Applied Measurement in Education, 2013
In this article, we discuss and illustrate two centering and anchoring options available in differential item functioning (DIF) detection studies based on the hierarchical generalized linear and generalized linear mixed modeling frameworks. We compared and contrasted the assumptions of the two options, and examined the properties of their DIF…
Descriptors: Test Bias, Hierarchical Linear Modeling, Comparative Analysis, Test Items
Choi, Kyong Mi; Lee, Young-Sun; Park, Yoon Soo – EURASIA Journal of Mathematics, Science & Technology Education, 2015
International trend assessments have long attempted to provide instructional information to educational researchers and classroom teachers. Studies have shown that traditional methods of item analysis have not provided specific information that can be directly applied to improve student performance. To this end, cognitive diagnosis models…
Descriptors: International Assessment, Mathematics Tests, Grade 8, Models
Jin, Ying; Kang, Minsoo – Large-scale Assessments in Education, 2016
Background: The current study compared four differential item functioning (DIF) methods to examine how well they account for dual dependency (i.e., person and item clustering effects) simultaneously in a simulation study, a question that has not been sufficiently studied in the current DIF literature. The four methods compared are logistic…
Descriptors: Comparative Analysis, Test Bias, Simulation, Regression (Statistics)
Suter, Larry E. – Research in Comparative and International Education, 2017
The international comparative studies in 1959 were conducted by researchers at the International Association for the Evaluation of Educational Achievement (IEA), who recognized that differences in student achievement measures in mathematics across countries could be caused by differences in curricula. The measurement of opportunity to learn (OTL) grew…
Descriptors: International Studies, Cross Cultural Studies, Mathematics Achievement, Educational Opportunities
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement (IEA), is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Liu, Yan; Zumbo, Bruno D.; Gustafson, Paul; Huang, Yi; Kroc, Edward; Wu, Amery D. – Practical Assessment, Research & Evaluation, 2016
A variety of differential item functioning (DIF) methods have been proposed and used to ensure that a test is fair to all test takers in a target population in situations where, for example, a test is translated into other languages. However, once a method flags an item as exhibiting DIF, it is difficult to conclude that the grouping variable (e.g.,…
Descriptors: Test Items, Test Bias, Probability, Scores
Singer, Judith D., Ed.; Braun, Henry I., Ed.; Chudowsky, Naomi, Ed. – National Academy of Education, 2018
Results from international large-scale assessments (ILSAs) garner considerable attention in the media, academia, and among policy makers. Although there is widespread recognition that ILSAs can provide useful information, there is debate about what types of comparisons are the most meaningful and what could be done to assure more sound…
Descriptors: International Education, Educational Assessment, Educational Policy, Data Interpretation
Long, Caroline; Wendt, Heike – African Journal of Research in Mathematics, Science and Technology Education, 2017
South Africa participated in TIMSS from 1995 to 2015. Over these two decades, some positive changes have been reported on the aggregated mathematics performance patterns of South African learners. This paper focuses on the achievement patterns of South Africa's high-performing Grade 9 learners (n = 3378) in comparison with similar subsamples of…
Descriptors: Foreign Countries, Comparative Analysis, Multiplication, Comparative Education
Nixon, Ryan S.; Barth, Katie N. – School Science and Mathematics, 2014
The results of international assessments such as the Trends in International Mathematics and Science Study (TIMSS) are often reported as rankings of nations. Focusing solely on national rank can result in invalid inferences about the relative quality of educational systems that can, in turn, lead to negative consequences for teachers and students.…
Descriptors: Comparative Analysis, Test Items, Data Analysis, Inferences
Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole – Journal of Educational Measurement, 2016
Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…
Descriptors: Comparative Analysis, Measurement, Test Bias, Simulation