Publication Date
In 2025 | 0 |
Since 2024 | 7 |
Since 2021 (last 5 years) | 11 |
Since 2016 (last 10 years) | 24 |
Since 2006 (last 20 years) | 40 |
Descriptor
Correlation | 42 |
Error of Measurement | 42 |
Item Response Theory | 42 |
Test Items | 16 |
Comparative Analysis | 15 |
Models | 12 |
Item Analysis | 11 |
Scores | 11 |
Simulation | 10 |
Factor Analysis | 8 |
Monte Carlo Methods | 8 |
Author
Ahn, Soyeon | 2 |
Kelecioglu, Hülya | 2 |
Ailin Yuan | 1 |
Algina, James | 1 |
Andrews, Benjamin James | 1 |
Andrich, David | 1 |
Ankenmann, Robert D. | 1 |
Anwyll, Steve | 1 |
Asil, Mustafa | 1 |
Aydin, Burak | 1 |
Beidel, Deborah C. | 1 |
Publication Type
Journal Articles | 32 |
Reports - Research | 28 |
Dissertations/Theses -… | 6 |
Reports - Evaluative | 6 |
Reports - Descriptive | 2 |
Speeches/Meeting Papers | 2 |
Education Level
Elementary Education | 3 |
Elementary Secondary Education | 3 |
Grade 4 | 2 |
Grade 7 | 2 |
Higher Education | 2 |
Postsecondary Education | 2 |
Secondary Education | 2 |
Grade 5 | 1 |
Grade 6 | 1 |
Grade 8 | 1 |
Intermediate Grades | 1 |
Location
Australia | 2 |
China | 2 |
United Kingdom (England) | 1 |
Laws, Policies, & Programs
Assessments and Surveys
Program for International… | 1 |
Trends in International… | 1 |
Seyma Erbay Mermer – Pegem Journal of Education and Instruction, 2024
This study aims to compare item and student parameters of dichotomously scored multidimensional constructs estimated based on unidimensional and multidimensional Item Response Theory (IRT) under different conditions of sample size, interdimensional correlation and number of dimensions. This research, conducted with simulations, is of a basic…
Descriptors: Item Response Theory, Correlation, Error of Measurement, Comparative Analysis
Xiaowen Liu – International Journal of Testing, 2024
Differential item functioning (DIF) often arises from multiple sources. Within the context of multidimensional item response theory, this study examined DIF items with varying secondary dimensions using the three DIF methods: SIBTEST, Mantel-Haenszel, and logistic regression. The effect of the number of secondary dimensions on DIF detection rates…
Descriptors: Item Analysis, Test Items, Item Response Theory, Correlation
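The Mantel-Haenszel procedure named in this abstract is typically computed from 2x2 group-by-response tables within total-score strata. A minimal sketch of the common odds ratio and its delta-scale transformation (illustrative variable names and counts only, not taken from the study):

```python
import numpy as np

def mantel_haenszel_dif(strata):
    """Estimate the Mantel-Haenszel common odds ratio for one studied item.

    `strata` is a list of 2x2 tables, one per total-score level:
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
    Returns the MH odds ratio and the ETS delta-scale statistic.
    """
    num, den = 0.0, 0.0
    for ref_c, ref_i, foc_c, foc_i in strata:
        n = ref_c + ref_i + foc_c + foc_i
        if n == 0:
            continue
        num += ref_c * foc_i / n   # reference correct x focal incorrect
        den += ref_i * foc_c / n   # reference incorrect x focal correct
    alpha_mh = num / den
    delta_mh = -2.35 * np.log(alpha_mh)  # ETS delta metric; values near 0 indicate little DIF
    return alpha_mh, delta_mh

# Illustrative counts for three score strata (made up for the example)
tables = [(40, 10, 35, 15), (30, 20, 25, 25), (15, 35, 10, 40)]
print(mantel_haenszel_dif(tables))
```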
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
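For context, the M2PL model studied here gives the probability of a correct response as a logistic function of a linear combination of the latent dimensions. In standard notation (not quoted from the article):

$$ P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\!\left[-\left(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i + d_j\right)\right]} $$

where \(\mathbf{a}_j\) is item \(j\)'s vector of slope (discrimination) parameters, \(\boldsymbol{\theta}_i\) the examinee's latent trait vector, and \(d_j\) the intercept. Factor rotation acts on the matrix of slopes, which is where the local solutions examined in the study can arise.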
Stefanie A. Wind; Yangmeng Xu – Educational Assessment, 2024
We explored three approaches to resolving or re-scoring constructed-response items in mixed-format assessments: rater agreement, person fit, and targeted double scoring (TDS). We used a simulation study to consider how the three approaches impact the psychometric properties of student achievement estimates, with an emphasis on person fit. We found…
Descriptors: Interrater Reliability, Error of Measurement, Evaluation Methods, Examiners
Huang, Qi; Bolt, Daniel M. – Educational and Psychological Measurement, 2023
Previous studies have demonstrated evidence of latent skill continuity even in tests intentionally designed for measurement of binary skills. In addition, the assumption of binary skills when continuity is present has been shown to potentially create a lack of invariance in item and latent ability parameters that may undermine applications. In…
Descriptors: Item Response Theory, Test Items, Skill Development, Robustness (Statistics)
Yuanfang Liu; Mark H. C. Lai; Ben Kelcey – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance holds when a latent construct is measured in the same way across different levels of background variables (continuous or categorical) while controlling for the true value of that construct. Using Monte Carlo simulation, this paper compares the multiple indicators, multiple causes (MIMIC) model and MIMIC-interaction to a…
Descriptors: Classification, Accuracy, Error of Measurement, Correlation
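A generic MIMIC specification of the kind compared in this paper regresses the latent construct, and potentially individual indicators, on the background covariate; in sketch form (notation is ours, not the authors'):

$$ \eta_i = \gamma x_i + \zeta_i, \qquad y_{ij} = \lambda_j \eta_i + \beta_j x_i + \varepsilon_{ij} $$

where \(x_i\) is the background variable, \(\gamma\) captures its effect on the latent mean, and a nonzero direct effect \(\beta_j\) on indicator \(j\) signals a violation of measurement invariance. The MIMIC-interaction variant adds an \(\eta_i x_i\) product term so that loading (nonuniform) differences can also be captured.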
An Analysis of Differential Bundle Functioning in Multidimensional Tests Using the SIBTEST Procedure
Özdogan, Didem; Kelecioglu, Hülya – International Journal of Assessment Tools in Education, 2022
This study aims to analyze the differential bundle functioning in multidimensional tests with a specific purpose to detect this effect through differentiating the location of the item with DIF in the test, the correlation between the dimensions, the sample size, and the ratio of reference to focal group size. The first 10 items of the test that is…
Descriptors: Correlation, Sample Size, Test Items, Item Analysis
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
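As an illustration of the decomposition this abstract describes, a commonly used IRTree coding for a 5-point rating item splits each response into midpoint, direction, and extremity pseudo-items. The mapping below is a generic Böckenholt-style scheme, shown only as an illustration rather than the authors' exact model:

```python
# Map each 5-point response (1-5) to three binary pseudo-items:
# midpoint (chose the middle category?), direction (agree side?), extremity (endpoint?).
# None marks a sub-decision that is not reached for that response.
IRTREE_CODING = {
    1: {"midpoint": 0, "direction": 0, "extremity": 1},
    2: {"midpoint": 0, "direction": 0, "extremity": 0},
    3: {"midpoint": 1, "direction": None, "extremity": None},
    4: {"midpoint": 0, "direction": 1, "extremity": 0},
    5: {"midpoint": 0, "direction": 1, "extremity": 1},
}

def expand_responses(responses):
    """Expand raw ratings into the pseudo-item data an IRTree model is fit to."""
    return [IRTREE_CODING[r] for r in responses]

print(expand_responses([1, 3, 5, 4]))
```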
Hosseinzadeh, Mostafa – ProQuest LLC, 2021
In real-world situations, multidimensional data may appear on large-scale tests or attitudinal surveys. A simple structure, multidimensional model may be used to evaluate the items, ignoring the cross-loading of some items on the secondary dimension. The purpose of this study was to investigate the influence of structure complexity magnitude of…
Descriptors: Item Response Theory, Models, Simulation, Evaluation Methods
Factor Structure and Psychometric Properties of the Digital Stress Scale in a Chinese College Sample
Chunlei Gao; Mingqing Jian; Ailin Yuan – SAGE Open, 2024
The Digital Stress Scale (DSS) is used to measure digital stress, which is the perceived stress and anxiety associated with social media use. In this study, the Chinese version of the DSS was validated using a sample of 721 Chinese college students, 321 males and 400 females (KMO = 0.923; Bartlett = 5,058.492, p < 0.001). Confirmatory factor…
Descriptors: Factor Structure, Factor Analysis, Psychometrics, Anxiety
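The Bartlett statistic reported in this abstract tests whether the item correlation matrix departs from an identity matrix enough to warrant factor analysis. A minimal sketch of that test using the standard formula (illustrative code, not the authors' analysis):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity for an (n_subjects, n_items) data matrix."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi_square = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    p_value = stats.chi2.sf(chi_square, df)
    return chi_square, df, p_value

# Demo on random data (real item responses would be used in practice)
rng = np.random.default_rng(0)
print(bartlett_sphericity(rng.normal(size=(200, 10))))
```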
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Sinharay, Sandip – Journal of Educational Measurement, 2018
The value-added method of Haberman is arguably one of the most popular methods to evaluate the quality of subscores. The method is based on classical test theory and deems a subscore to be of added value if the subscore predicts the corresponding true subscore better than the total score does. Sinharay provided an interpretation of the added…
Descriptors: Scores, Value Added Models, Raw Scores, Item Response Theory
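Haberman's criterion, as summarized here, is usually stated in terms of proportional reduction in mean squared error (PRMSE): a subscore has added value only when it predicts the true subscore better than the total score does. In sketch form (standard notation, not Sinharay's exact derivation):

$$ \mathrm{PRMSE}_s = \rho^2\!\left(s, s_T\right), \qquad \mathrm{PRMSE}_x = \rho^2\!\left(x, s_T\right) $$

The subscore \(s\) is declared to have added value over the total score \(x\) only if \(\mathrm{PRMSE}_s > \mathrm{PRMSE}_x\), i.e., if the observed subscore predicts the true subscore \(s_T\) better than the observed total score does.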
Bukhari, Nurliyana – ProQuest LLC, 2017
In general, newer educational assessments are deemed to pose more demanding challenges than students are currently prepared to face. Two types of factors may contribute to the test scores: (1) factors or dimensions that are of primary interest to the construct or test domain; and (2) factors or dimensions that are irrelevant to the construct, causing…
Descriptors: Item Response Theory, Models, Psychometrics, Computer Simulation
Lee, Woo-yeol; Cho, Sun-Joo – Journal of Educational Measurement, 2017
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Descriptors: Test Items, Item Response Theory, Item Analysis, Simulation
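The cross-level invariance hypothesis described here can be written as an equality constraint on item discriminations across levels; in generic two-level notation (ours, not the authors'):

$$ \operatorname{logit} P\!\left(y_{pij} = 1\right) = a_j^{W}\,\theta_{pi}^{W} + a_j^{B}\,\theta_{i}^{B} + b_j, \qquad H_0\colon a_j^{W} = a_j^{B} \ \text{for all } j $$

where person \(p\) is nested in cluster \(i\), \(\theta^{W}\) and \(\theta^{B}\) are the within- and between-level latent variables, and the test of cross-level invariance evaluates whether the within-level discriminations equal the between-level discriminations.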
Bichi, Ado Abdu; Talib, Rohaya – International Journal of Evaluation and Research in Education, 2018
Testing in educational systems performs a number of functions; the results from a test can be used to make a number of decisions in education. It is therefore well accepted in the education literature that testing is an important element of education. To effectively utilize tests in educational policies and quality assurance, their validity and…
Descriptors: Item Response Theory, Test Items, Test Construction, Decision Making