Publication Date
In 2025: 2
Since 2024: 7
Since 2021 (last 5 years): 15
Since 2016 (last 10 years): 22
Since 2006 (last 20 years): 28
Descriptor
Error of Measurement: 33
Factor Analysis: 33
Item Analysis: 33
Comparative Analysis: 11
Foreign Countries: 10
Item Response Theory: 10
Test Items: 10
Scores: 9
Gender Differences: 8
Correlation: 7
Psychometrics: 7
Publication Type
Reports - Research: 27
Journal Articles: 26
Speeches/Meeting Papers: 3
Dissertations/Theses -…: 2
Reports - Evaluative: 2
Tests/Questionnaires: 2
Reports - Descriptive: 1
Education Level
Elementary Education: 5
Higher Education: 5
Elementary Secondary Education: 4
Secondary Education: 4
Middle Schools: 3
Postsecondary Education: 3
Grade 4: 2
Grade 5: 2
Grade 8: 2
Intermediate Grades: 2
Junior High Schools: 2
Audience
Researchers: 2
Location
Greece: 1
Indonesia: 1
Israel: 1
Mississippi: 1
Portugal: 1
Saudi Arabia: 1
South Korea: 1
Spain: 1
Sudan: 1
United Kingdom (England): 1
Zambia: 1
Stephanie M. Bell; R. Philip Chalmers; David B. Flora – Educational and Psychological Measurement, 2024
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measures a target construct as represented by a factor in a factor-analysis model; as such, the accuracy of omega estimates is likely to depend on correct…
Descriptors: Influences, Models, Measurement Techniques, Reliability
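The coefficient omega described above has a simple closed form for a one-factor model; a minimal Python sketch, with illustrative loading values that are assumptions, not taken from the study:

```python
def coefficient_omega(loadings, residual_variances):
    """Model-based composite reliability for a one-factor model:
    omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta),
    where lambda are factor loadings and theta are residual variances."""
    s = sum(loadings)
    return s * s / (s * s + sum(residual_variances))

# Illustrative standardized loadings for a four-item composite
lam = [0.7, 0.6, 0.8, 0.5]
theta = [1 - l * l for l in lam]  # residual variances implied by standardization
print(round(coefficient_omega(lam, theta), 3))  # -> 0.749
```

Because omega is computed from the fitted factor model, a misspecified model propagates directly into the reliability estimate, which is the accuracy concern the abstract raises.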
Hoang V. Nguyen; Niels G. Waller – Educational and Psychological Measurement, 2024
We conducted an extensive Monte Carlo study of factor-rotation local solutions (LS) in multidimensional, two-parameter logistic (M2PL) item response models. In this study, we simulated more than 19,200 data sets that were drawn from 96 model conditions and performed more than 7.6 million rotations to examine the influence of (a) slope parameter…
Descriptors: Monte Carlo Methods, Item Response Theory, Correlation, Error of Measurement
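Monte Carlo studies of this kind draw many data sets from a known factor model and check what the estimation pipeline recovers; a minimal pure-Python sketch of the data-generating step for a one-factor model (loadings and sample size are illustrative assumptions, not conditions from the study):

```python
import random

def simulate_one_factor(loadings, n, seed=42):
    """Draw n observations x_ij = lambda_j * f_i + e_ij from a standardized
    one-factor model, so each item has unit variance."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        f = rng.gauss(0, 1)  # common factor score
        data.append([l * f + rng.gauss(0, (1 - l * l) ** 0.5) for l in loadings])
    return data

sample = simulate_one_factor([0.7, 0.6, 0.8], n=1000)
# Model-implied correlation between the first two items is 0.7 * 0.6 = 0.42;
# the sample correlation should be close to that for large n.
```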
Klauth, Bo – ProQuest LLC, 2023
In conducting confirmatory factor analysis with ordered response items, the literature suggests that when the number of responses is five and item skewness (IS) is approximately normal, researchers can employ maximum likelihood with robust standard errors (MLR). However, MLR can yield biased factor loadings (FL) and FL standard errors (FLSE) when…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Error of Measurement
R. Noah Padgett – Practical Assessment, Research & Evaluation, 2023
The consistency of psychometric properties across waves of data collection provides valuable evidence that scores can be interpreted consistently. Evidence supporting the consistency of psychometric properties can come from using a longitudinal extension of item factor analysis to account for the lack of independence of observation when evaluating…
Descriptors: Psychometrics, Factor Analysis, Item Analysis, Validity
E. Damiano D'Urso; Jesper Tijmstra; Jeroen K. Vermunt; Kim De Roover – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance (MI) is required for validly comparing latent constructs measured by multiple ordinal self-report items. Non-invariances may occur when disregarding (group differences in) an acquiescence response style (ARS; an agreeing tendency regardless of item content). If non-invariance results solely from neglecting ARS, one should…
Descriptors: Error of Measurement, Structural Equation Models, Construct Validity, Measurement Techniques
Yuanfang Liu; Mark H. C. Lai; Ben Kelcey – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance holds when a latent construct is measured in the same way across different levels of background variables (continuous or categorical) while controlling for the true value of that construct. Using Monte Carlo simulation, this paper compares the multiple indicators, multiple causes (MIMIC) model and MIMIC-interaction to a…
Descriptors: Classification, Accuracy, Error of Measurement, Correlation
Pere J. Ferrando; David Navarro-González; Fabia Morales-Vives – Educational and Psychological Measurement, 2025
The problem of local item dependencies (LIDs) is very common in personality and attitude measures, particularly in those that measure narrow-bandwidth dimensions. At the structural level, these dependencies can be modeled by using extended factor analytic (FA) solutions that include correlated residuals. However, the effects that LIDs have on the…
Descriptors: Scores, Accuracy, Evaluation Methods, Factor Analysis
Sahin Kursad, Merve; Cokluk Bokeoglu, Omay; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
Item parameter drift (IPD) is the systematic differentiation of parameter values of items over time due to various reasons. If it occurs in computer adaptive tests (CAT), it causes errors in the estimation of item and ability parameters. Identification of the underlying conditions of this situation in CAT is important for estimating item and…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Error of Measurement
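The item parameter drift described above can be illustrated with the two-parameter logistic (2PL) response function: when an item's difficulty drifts, the same examinee's success probability changes, so a CAT still scoring with the stale calibration misestimates ability. A minimal sketch (parameter values are illustrative assumptions):

```python
import math

def p_correct(theta, a, b):
    """2PL model: P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An item calibrated at difficulty b = 0.0 later drifts to b = 0.5:
# an examinee at theta = 0 now succeeds less often than the old
# calibration predicts, biasing the ability estimate.
before_drift = p_correct(0.0, a=1.2, b=0.0)  # -> 0.5
after_drift = p_correct(0.0, a=1.2, b=0.5)   # -> ~0.354
print(round(before_drift, 3), round(after_drift, 3))
```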
Mumba, Brian; Alci, Devrim; Uzun, N. Bilge – Journal on Educational Psychology, 2022
Assessment of measurement invariance is an essential component of construct validity in psychological measurement. However, the procedure for assessing measurement invariance with dichotomous items partially differs from that for continuous items, yet many studies have focused on invariance testing with continuous items…
Descriptors: Mathematics Tests, Test Items, Foreign Countries, Error of Measurement
Strong, John Z. – Reading & Writing Quarterly, 2023
Awareness of informational text structures is related to reading comprehension and varies according to characteristics of readers and texts. The purpose of this study was to develop and refine a measure of text structure awareness, the Text Structure Identification Test (TSIT), by investigating its internal consistency reliability and construct…
Descriptors: Text Structure, Reading Instruction, Construct Validity, Grade 4
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
John B. Buncher; Jayson M. Nissen; Ben Van Dusen; Robert M. Talbot – Physical Review Physics Education Research, 2025
Research-based assessments (RBAs) allow researchers and practitioners to compare student performance across different contexts and institutions. In recent years, research attention has focused on the student populations these RBAs were initially developed with because much of that research was done with "samples of convenience" that were…
Descriptors: Science Tests, Physics, Comparative Analysis, Gender Differences
Rujun Xu; James Soland – International Journal of Testing, 2024
International surveys are increasingly being used to understand nonacademic outcomes like math and science motivation, and to inform education policy changes within countries. Such instruments assume that the measure works consistently across countries, ethnicities, and languages--that is, they assume measurement invariance. While studies have…
Descriptors: Surveys, Statistical Bias, Achievement Tests, Foreign Countries
Kritika Thapa – ProQuest LLC, 2023
Measurement invariance is crucial for making valid comparisons across different groups (Kline, 2016; Vandenberg, 2002). To address the challenges associated with invariance testing such as large sample size requirements, the complexity of the model, etc., applied researchers have incorporated parcels. Parcels have been shown to alleviate skewness,…
Descriptors: Elementary Secondary Education, Achievement Tests, Foreign Countries, International Assessment
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods