Publication Date
  In 2025: 0
  Since 2024: 5
  Since 2021 (last 5 years): 13
  Since 2016 (last 10 years): 19
  Since 2006 (last 20 years): 26
Descriptor
  Error of Measurement: 35
  Factor Analysis: 35
  Test Items: 35
  Factor Structure: 13
  Foreign Countries: 11
  Item Analysis: 10
  Item Response Theory: 10
  Correlation: 8
  Goodness of Fit: 8
  Comparative Analysis: 7
  Sample Size: 7
Author
  Finch, Holmes: 2
  Abdolvahab Khademi: 1
  Ahn, Soyeon: 1
  Aimé, Annie: 1
  Alamri, Abeer A.: 1
  Alci, Devrim: 1
  Alhija, Fadia Nasser-Abu: 1
  Andersson, Björn: 1
  Anke M. Scheeren: 1
  Anwyll, Steve: 1
  Benson, Jeri: 1
Publication Type
  Reports - Research: 32
  Journal Articles: 27
  Speeches/Meeting Papers: 6
  Reports - Descriptive: 2
  Reports - Evaluative: 1
Education Level
  Elementary Education: 5
  Secondary Education: 3
  Grade 7: 2
  Higher Education: 2
  Postsecondary Education: 2
  Grade 3: 1
  High Schools: 1
  Junior High Schools: 1
  Middle Schools: 1
Audience
  Researchers: 3
Location
  Turkey: 2
  United Kingdom (England): 2
  Canada: 1
  France: 1
  Georgia: 1
  Germany: 1
  Iran: 1
  Kuwait: 1
  Malaysia: 1
  Netherlands: 1
  Saudi Arabia: 1
Assessments and Surveys
  Computer Anxiety Scale: 1
  Metropolitan Achievement Tests: 1
  Program for International…: 1
  Students Evaluation of…: 1
  Trends in International…: 1
Stephanie M. Bell; R. Philip Chalmers; David B. Flora – Educational and Psychological Measurement, 2024
Coefficient omega indices are model-based composite reliability estimates that have become increasingly popular. A coefficient omega index estimates how reliably an observed composite score measures a target construct as represented by a factor in a factor-analysis model; as such, the accuracy of omega estimates is likely to depend on correct…
Descriptors: Influences, Models, Measurement Techniques, Reliability
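The omega index described in this abstract has a simple closed form for a one-factor model: reliability is the squared sum of factor loadings divided by that same quantity plus the summed residual variances. A minimal sketch with hypothetical loading values (not taken from the article):

```python
# Coefficient omega for a unidimensional composite:
# omega = (sum lambda)^2 / ((sum lambda)^2 + sum theta)
def coefficient_omega(loadings, residual_variances):
    num = sum(loadings) ** 2
    return num / (num + sum(residual_variances))

# Hypothetical standardized loadings for a 4-item scale
loadings = [0.7, 0.6, 0.8, 0.5]
residuals = [1 - l ** 2 for l in loadings]  # standardized items: theta = 1 - lambda^2
omega = coefficient_omega(loadings, residuals)  # about 0.75
```

Because the loadings enter the formula directly, a misspecified factor model distorts omega, which is the dependence on correct model specification the abstract alludes to.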
Abdolvahab Khademi; Craig S. Wells; Maria Elena Oliveri; Ester Villalonga-Olives – SAGE Open, 2023
The most common effect sizes used with a multiple-group confirmatory factor analysis approach to measurement invariance are ΔCFI and ΔTLI with a cutoff value of 0.01. However, this recommended cutoff value may not be ubiquitously appropriate and may be of limited application for some tests (e.g., measures using dichotomous items or…
Descriptors: Factor Analysis, Factor Structure, Error of Measurement, Test Items
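The ΔCFI rule the abstract questions is typically applied by fitting nested multi-group models and flagging noninvariance when fit degrades by more than the cutoff. A minimal sketch of that decision rule (the CFI values are hypothetical; the 0.01 cutoff is the conventional one named above):

```python
def invariance_rejected(cfi_less_constrained, cfi_more_constrained, cutoff=0.01):
    """Flag measurement noninvariance when CFI drops by more than the
    cutoff after adding equality constraints (the conventional 0.01 rule)."""
    return (cfi_less_constrained - cfi_more_constrained) > cutoff

# Hypothetical fit indices for configural vs. metric-invariance models
rejected = invariance_rejected(0.962, 0.948)  # drop of 0.014 exceeds 0.01
```

The article's point is precisely that this fixed cutoff may behave differently for dichotomous items, so the threshold itself, not just the arithmetic, is at issue.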
Hyunjung Lee; Heining Cham – Educational and Psychological Measurement, 2024
Determining the number of factors in exploratory factor analysis (EFA) is crucial because it affects the rest of the analysis and the conclusions of the study. Researchers have developed various methods for deciding how many factors to retain, but this remains one of the most difficult decisions in EFA. The purpose of this study is…
Descriptors: Factor Structure, Factor Analysis, Monte Carlo Methods, Goodness of Fit
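One widely used answer to the number-of-factors question discussed above is Horn's parallel analysis: retain only those factors whose observed eigenvalues exceed the eigenvalues obtained from comparable random (uncorrelated) data. A sketch under that assumption, on simulated data with a known two-factor structure (not the article's method or data):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, quantile=0.95, seed=0):
    """Horn's parallel analysis: count components whose observed correlation-matrix
    eigenvalues exceed the chosen quantile of eigenvalues from random normal data
    of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.quantile(sim_eigs, quantile, axis=0)
    return int(np.sum(obs_eigs > threshold))

# Simulated data: two independent factors, three strong indicators each
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
data = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((500, 3)),
                  f[:, [1]] + 0.5 * rng.standard_normal((500, 3))])
n_factors = parallel_analysis(data)  # recovers the 2-factor structure
```

Parallel analysis is only one of the methods such Monte Carlo comparisons evaluate; it is shown here because its logic is compact enough to sketch.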
Ferrando, Pere J.; Navarro-González, David – Educational and Psychological Measurement, 2021
Item response theory "dual" models (DMs) in which both items and individuals are viewed as sources of differential measurement error so far have been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (dual Thurstonian continuous response model), intended for…
Descriptors: Item Response Theory, Error of Measurement, Models, Factor Analysis
Sahin Kursad, Merve; Cokluk Bokeoglu, Omay; Cikrikci, Rahime Nukhet – International Journal of Assessment Tools in Education, 2022
Item parameter drift (IPD) is the systematic differentiation of parameter values of items over time due to various reasons. If it occurs in computer adaptive tests (CAT), it causes errors in the estimation of item and ability parameters. Identification of the underlying conditions of this situation in CAT is important for estimating item and…
Descriptors: Item Analysis, Computer Assisted Testing, Test Items, Error of Measurement
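As background for the drift conditions the abstract describes: under a two-parameter logistic (2PL) model, a drift in an item's difficulty shifts the probability of a correct response at a fixed ability, which is how IPD biases item and ability estimates in CAT. A toy illustration with hypothetical parameter values:

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response function: probability of a correct response
    for ability theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item with discrimination a = 1.2 whose difficulty
# drifts from b = 0.0 to b = 0.4 at a later administration.
theta = 0.0
p_before = p_correct_2pl(theta, a=1.2, b=0.0)  # 0.50
p_after = p_correct_2pl(theta, a=1.2, b=0.4)   # about 0.38
```

An examinee of average ability answering this item now looks less able than before, so undetected drift propagates into the adaptive item selection and the final ability estimate.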
Mumba, Brian; Alci, Devrim; Uzun, N. Bilge – Journal on Educational Psychology, 2022
Assessment of measurement invariance is an essential component of construct validity in psychological measurement. However, the procedure for assessing measurement invariance with dichotomous items partially differs from that for continuous items. Despite this, many studies have focused on invariance testing with continuous items…
Descriptors: Mathematics Tests, Test Items, Foreign Countries, Error of Measurement
Karina Mostert; Clarisse van Rensburg; Reitumetse Machaba – Journal of Applied Research in Higher Education, 2024
Purpose: This study examined the psychometric properties of intention to drop out and study satisfaction measures for first-year South African students. The factorial validity, item bias, measurement invariance and reliability were tested. Design/methodology/approach: A cross-sectional design was used. For the study on intention to drop out, 1,820…
Descriptors: Intention, Potential Dropouts, Student Satisfaction, Test Items
Zhong Jian Chee; Anke M. Scheeren; Marieke de Vries – Autism: The International Journal of Research and Practice, 2024
Despite several psychometric advantages over the 50-item Autism Spectrum Quotient, an instrument used to measure autistic traits, the abridged AQ-28 and its cross-cultural validity have not been examined as extensively. Therefore, this study aimed to examine the factor structure and measurement invariance of the AQ-28 in 818 Dutch (M[subscript…
Descriptors: Autism Spectrum Disorders, Questionnaires, Factor Structure, Factor Analysis
Chen, Chia-Wen; Andersson, Björn; Zhu, Jinxin – Journal of Educational Measurement, 2023
The certainty of response index (CRI) measures respondents' confidence level when answering an item. In conjunction with the answers to the items, previous studies have used descriptive statistics and arbitrary thresholds to identify student knowledge profiles with the CRIs. However, this approach overlooked the measurement error of the observed…
Descriptors: Item Response Theory, Factor Analysis, Psychometrics, Test Items
Wang, Chun; Zhang, Xue – Grantee Submission, 2019
The relations among alternative parameterizations of the binary factor analysis (FA) model and two-parameter logistic (2PL) item response theory (IRT) model have been thoroughly discussed in literature (e.g., Lord & Novick, 1968; Takane & de Leeuw, 1987; McDonald, 1999; Wirth & Edwards, 2007; Kamata & Bauer, 2008). However, the…
Descriptors: Test Items, Error of Measurement, Item Response Theory, Factor Analysis
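The best known of the relations this abstract cites (Takane & de Leeuw, 1987) maps a standardized binary-FA loading λ and threshold τ to normal-ogive IRT parameters via a = λ/√(1 − λ²) and b = τ/λ, with the scaling constant D ≈ 1.702 converting discrimination to the logistic (2PL) metric. A sketch with hypothetical item values:

```python
import math

def fa_to_irt(loading, threshold, D=1.702):
    """Convert a standardized binary-factor-analysis loading and threshold
    to 2PL discrimination and difficulty (Takane & de Leeuw, 1987).
    D rescales the normal-ogive discrimination to the logistic metric."""
    a = D * loading / math.sqrt(1.0 - loading ** 2)
    b = threshold / loading
    return a, b

# Hypothetical item: standardized loading 0.6, threshold 0.3
a, b = fa_to_irt(0.6, 0.3)  # a about 1.28, b = 0.5
```

The conversion makes explicit that a loading approaching 1 implies unbounded discrimination, one reason the two parameterizations can behave differently in estimation.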
Rujun Xu; James Soland – International Journal of Testing, 2024
International surveys are increasingly being used to understand nonacademic outcomes like math and science motivation, and to inform education policy changes within countries. Such instruments assume that the measure works consistently across countries, ethnicities, and languages--that is, they assume measurement invariance. While studies have…
Descriptors: Surveys, Statistical Bias, Achievement Tests, Foreign Countries
Maïano, Christophe; Thibault, Isabelle; Dreiskämper, Dennis; Henning, Lena; Tietjens, Maike; Aimé, Annie – Measurement in Physical Education and Exercise Science, 2023
The present study sought to examine the psychometric properties of the French and German versions of the Physical Self-Concept Questionnaire for Elementary School Children-Revised (PSCQ-C-R). A sample of 519 children participated in this study. Of those, 197 were French-Canadian and 322 were German. Results support the factor validity and…
Descriptors: Elementary School Students, Self Concept, Human Body, Questionnaires
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
DiStefano, Christine; McDaniel, Heather L.; Zhang, Liyun; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2019
A simulation study was conducted to investigate the model size effect when confirmatory factor analysis (CFA) models include many ordinal items. CFA models including between 15 and 120 ordinal items were analyzed with mean- and variance-adjusted weighted least squares to determine how varying sample size, number of ordered categories, and…
Descriptors: Factor Analysis, Effect Size, Data, Sample Size