Showing 1 to 15 of 118 results
Peer reviewed
Atar, Burcu; Atalay Kabasakal, Kubra; Kibrislioglu Uysal, Nermin – Journal of Experimental Education, 2023
The purpose of this study was to evaluate the population invariance of equating functions across country subgroups in the TIMSS 2015 mathematics tests in relation to the raw-score distribution, differential item functioning (DIF), and differential test functioning (DTF). We used equipercentile and IRT observed-score equating methods. The results of the study indicate that there is a relationship between the…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Mathematics Tests
Peer reviewed
H. Cigdem Bulut; Okan Bulut; Ashley Clelland – Field Methods, 2025
In this study, we explored psychometric network analysis (PNA) as an alternative method for identifying item wording effects in self-report instruments. We examined the functioning of negatively worded items in the network structures of two math-related scales from the 2019 Trends in International Mathematics and Science Study (TIMSS): Students…
Descriptors: Psychometrics, Network Analysis, Identification, Test Items
Peer reviewed
Wang, Yan; Kim, Eunsook; Joo, Seang-Hwane; Chun, Seokjoon; Alamri, Abeer; Lee, Philseok; Stark, Stephen – Journal of Experimental Education, 2022
Multilevel latent class analysis (MLCA) has been increasingly used to investigate unobserved population heterogeneity while taking into account data dependency. Nonparametric MLCA has gained much popularity due to the advantage of classifying both individuals and clusters into latent classes. This study demonstrated the need to relax the…
Descriptors: Nonparametric Statistics, Hierarchical Linear Modeling, Monte Carlo Methods, Simulation
Peer reviewed
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
Peer reviewed
Saatcioglu, Fatima Munevver; Sen, Sedat – International Journal of Testing, 2023
In this study, we illustrated an application of the confirmatory mixture IRT model for multidimensional tests. We aimed to examine the differences in student performance by domains with a confirmatory mixture IRT modeling approach. A three-dimensional and three-class model was analyzed by assuming content domains as dimensions and cognitive…
Descriptors: Item Response Theory, Foreign Countries, Elementary Secondary Education, Achievement Tests
Peer reviewed
PDF full text available on ERIC
Erturk, Zafer; Oyar, Esra – International Journal of Assessment Tools in Education, 2021
Studies aiming to make cross-cultural comparisons should first establish measurement invariance across the groups to be compared, because results obtained from such comparisons may be artificial when measurement invariance cannot be established. The purpose of this study is to investigate the measurement invariance of the data obtained…
Descriptors: International Assessment, Foreign Countries, Attitude Measures, Mathematics
Peer reviewed
van Laar, Saskia; Braeken, Johan – Journal of Educational Measurement, 2022
The low-stakes character of international large-scale educational assessments implies that a participating student might at times provide unrelated answers, as if they were not even reading the items and were choosing response options randomly throughout. Depending on the severity of this invalid response behavior, interpretations of the assessment…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
Peer reviewed
Saskia van Laar; Johan Braeken – International Journal of Testing, 2024
This study examined the impact of two questionnaire characteristics, scale position and questionnaire length, on the prevalence of random responders in the TIMSS 2015 eighth-grade student questionnaire. While there was no support for an absolute effect of questionnaire length, we did find a positive effect for scale position, with an increase of…
Descriptors: Middle School Students, Grade 8, Questionnaires, Test Length
Peer reviewed
Huang, Jinyan; Dong, Yaxin; Han, Chunwei; Wang, Xiaojun – Journal of Psychoeducational Assessment, 2023
Using expert reviews and item response theory (IRT), this study evaluated the language- and culture-related construct-irrelevant variance and reliability of the 2019 TIMSS sense of school belonging scale (SSBS) for grades 4 and 8. The five items of the SSBS, which were identical for both grades, were reviewed for the language- and culture-related…
Descriptors: Construct Validity, Test Reliability, Achievement Tests, Foreign Countries
Peer reviewed
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This hypothesis is not true for the estimates of item response theory parameters, which are crucial elements in test assembly classical models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
Peer reviewed
PDF full text available on ERIC
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) has many advantages over its predecessor, classical test theory (CTT), such as invariant item parameters and ability estimates that do not depend on the particular items administered. However, to obtain these advantages, several assumptions must be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
Peer reviewed
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Peer reviewed
Yildirim, Hüseyin H. – International Journal of Mathematical Education in Science and Technology, 2022
This study explored the mathematics achievement profiles of the countries participating in TIMSS 2015 based on the TIMSS assessment framework categories. More specifically, observed and expected performances of countries in the framework categories were compared to determine the relative strengths and weaknesses of countries in the respective item…
Descriptors: Foreign Countries, Achievement Tests, Elementary Secondary Education, International Assessment
Peer reviewed
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Peer reviewed
Hole, Arne; Grønmo, Liv Sissel; Onstad, Torgeir – Large-scale Assessments in Education, 2018
Background: This paper discusses a framework for analyzing the dependence of test items on mathematical theory, that is, a framework for discussing to what extent knowledge of mathematical theory helps the student solve the item. The framework can be applied to any test in which some knowledge of mathematical theory may be useful,…
Descriptors: International Assessment, Foreign Countries, Achievement Tests, Secondary School Students