Publication Date
In 2025 (2)
Since 2024 (2)
Since 2021, last 5 years (6)
Since 2016, last 10 years (6)
Since 2006, last 20 years (6)
Descriptor
Monte Carlo Methods (6)
Sample Size (6)
Factor Analysis (3)
Goodness of Fit (3)
Item Response Theory (3)
Test Items (3)
Test Length (3)
Accuracy (2)
Computation (2)
Error of Measurement (2)
Statistical Analysis (2)
Source
International Journal of… (6)
Author
Abdullah Faruk Kiliç (1)
Baris Pekmezci, Fulya (1)
Basman, Munevver (1)
Fatih Orcan (1)
Fatih Orçan (1)
Sengul Avsar, Asiye (1)
Simsek, Ahmet Salih (1)
Tugay Kaçak (1)
Publication Type
Journal Articles (6)
Reports - Research (6)
Tugay Kaçak; Abdullah Faruk Kiliç – International Journal of Assessment Tools in Education, 2025
Researchers continue to choose PCA in scale development and adaptation studies because it is the default setting, even though it overestimates measurement quality. When PCA is used, the explained variance and factor loadings it reports can be exaggerated. In contrast to the factor models described in the literature, PCA should be investigated in…
Descriptors: Factor Analysis, Monte Carlo Methods, Mathematical Models, Sample Size
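A minimal sketch of the inflation this abstract describes, assuming a one-factor population model (the loading value, sample size, and simulation design below are illustrative, not the paper's): PCA decomposes total variance, including unique variance, so its component loadings come out larger than the factor loadings that generated the data.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(42)
n, k, lam = 500, 6, 0.6                  # sample size, items, true loading

# One-factor data: item = lam * factor + unique part (unit item variance)
factor = rng.normal(size=(n, 1))
X = lam * factor + np.sqrt(1 - lam**2) * rng.normal(size=(n, k))

fa = FactorAnalysis(n_components=1).fit(X)
pca = PCA(n_components=1).fit(X)
# PCA "loadings": eigenvector scaled by the square root of its eigenvalue
pca_load = pca.components_[0] * np.sqrt(pca.explained_variance_[0])

print("true loading:        ", lam)
print("FA loadings (mean):  ", round(float(np.abs(fa.components_).mean()), 3))
print("PCA loadings (mean): ", round(float(np.abs(pca_load).mean()), 3))  # inflated
```

In the population implied by these settings the first-component loading is sqrt((k*lam^2 + 1 - lam^2)/k) ≈ 0.68 against a true loading of 0.60, which is the kind of overestimation the study targets.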
Fatih Orçan – International Journal of Assessment Tools in Education, 2025
Factor analysis is a statistical method for exploring the relationships among observed variables and identifying latent structures. It is crucial in scale development and validity analysis. Key factors affecting the accuracy of factor analysis results include the type of data, the sample size, and the number of response categories. While some studies…
Descriptors: Factor Analysis, Factor Structure, Item Response Theory, Sample Size
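As a hedged illustration of the response-category effect (a toy setup, not the study's design): discretizing continuous one-factor data into fewer ordinal categories attenuates the loadings that a Pearson-based factor analysis of the raw scores recovers.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n, k, lam = 1000, 6, 0.7
X = lam * rng.normal(size=(n, 1)) + np.sqrt(1 - lam**2) * rng.normal(size=(n, k))

for cats in (7, 5, 3, 2):
    # cut each item at equal-probability thresholds into `cats` categories
    cuts = np.quantile(X, np.linspace(0, 1, cats + 1)[1:-1], axis=0)
    ordinal = (X[None, :, :] > cuts[:, None, :]).sum(axis=0).astype(float)
    Z = (ordinal - ordinal.mean(axis=0)) / ordinal.std(axis=0)   # standardize
    fa = FactorAnalysis(n_components=1).fit(Z)
    print(f"{cats} categories: mean |loading| = "
          f"{np.abs(fa.components_).mean():.3f}  (true 0.7)")
```

With two categories the recovered loadings drop well below 0.7, which is why category count belongs on the abstract's list of accuracy factors.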
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items function similarly across different groups of individuals. However, differential item functioning (DIF) occurs when individuals with equal ability levels from different groups obtain different results on the same test item. Based on Item Response Theory and Classical Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
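One standard way to operationalize that definition is logistic-regression DIF screening (Swaminathan and Rogers' procedure); the abstract does not say this paper uses it, so treat the sketch below as illustrative. An item is flagged for uniform DIF when group membership still predicts a correct response after conditioning on ability, here proxied by the total score.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, items = 2000, 10
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal group
theta = rng.normal(size=n)               # equal ability in both groups

# Rasch-like responses; item 0 is 0.6 logits harder for the focal group
diff = np.linspace(-1.0, 1.0, items)
bias = 0.6 * group[:, None] * (np.arange(items) == 0)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - diff - bias)))
resp = (rng.random((n, items)) < p).astype(int)

total = resp.sum(axis=1)                 # ability proxy
for item in (0, 1):                      # the biased item vs a clean one
    design = sm.add_constant(np.column_stack([total, group]))
    fit = sm.Logit(resp[:, item], design).fit(disp=0)
    print(f"item {item}: group effect = {fit.params[2]:+.3f}, "
          f"p = {fit.pvalues[2]:.4f}")
```

Item 0 should show a significant negative group effect while item 1 does not, matching the definition of DIF at equal ability.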
Fatih Orcan – International Journal of Assessment Tools in Education, 2023
Among the available coefficients, Cronbach's Alpha and McDonald's Omega are the most commonly used for reliability estimation. Alpha uses inter-item correlations, while omega is based on the results of a factor analysis. This study uses simulated ordinal data sets to test whether alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
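For reference, both coefficients follow directly from their standard definitions; the sketch below computes them on simulated continuous one-factor data (the data and the one-factor omega are illustrative assumptions, not the study's ordinal design).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n, k, lam = 500, 5, 0.7
X = lam * rng.normal(size=(n, 1)) + np.sqrt(1 - lam**2) * rng.normal(size=(n, k))

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum() / X.sum(axis=1).var(ddof=1))

# McDonald's omega: (sum of loadings)^2 / ((sum of loadings)^2 + sum of
# unique variances), taken from a one-factor analysis
fa = FactorAnalysis(n_components=1).fit(X)
L = np.abs(fa.components_[0])
omega = L.sum() ** 2 / (L.sum() ** 2 + fa.noise_variance_.sum())

print(f"alpha = {alpha:.3f}   omega = {omega:.3f}")
```

The two values coincide here because every item has the same loading; the study's question is how they diverge when that assumption fails and the data are ordinal.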
Baris Pekmezci, Fulya; Sengul Avsar, Asiye – International Journal of Assessment Tools in Education, 2021
A great deal of item response theory (IRT) research is conducted through simulation, with item and ability parameters estimated under varying numbers of replications and different test conditions. However, it is not clear what the appropriate number of replications should be. The aim of the current study is to develop guidelines for the…
Descriptors: Item Response Theory, Computation, Accuracy, Monte Carlo Methods
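The replication question reduces to Monte Carlo error: the standard error of a simulated quantity shrinks with the square root of the number of replications R, so R can be chosen to hit a target precision. The sketch below uses a plain sample-mean estimator as a stand-in for the IRT estimation the paper studies.

```python
import numpy as np

rng = np.random.default_rng(11)
true_mu, n = 0.5, 200                    # toy estimand and per-replication n

for R in (10, 100, 1000, 10000):
    # each replication: draw a sample of size n, estimate the mean
    estimates = rng.normal(true_mu, 1.0, size=(R, n)).mean(axis=1)
    bias = estimates.mean() - true_mu
    mcse = estimates.std(ddof=1) / np.sqrt(R)   # Monte Carlo SE of the bias
    print(f"R = {R:6d}   bias = {bias:+.5f}   MC std. error = {mcse:.5f}")
```

Each tenfold increase in R cuts the Monte Carlo standard error by roughly a factor of three, which is the kind of trade-off replication guidelines formalize.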
Simsek, Ahmet Salih – International Journal of Assessment Tools in Education, 2023
The Likert-type item is the most popular response format for collecting data in social, educational, and psychological studies through scales or questionnaires. However, there is no consensus on whether parametric or non-parametric tests should be preferred when analyzing Likert-type data. This study examined the statistical power of parametric and…
Descriptors: Error of Measurement, Likert Scales, Nonparametric Statistics, Statistical Analysis
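A small power simulation in the same spirit (the shift size, thresholds, and sample size are assumptions for illustration, not the study's conditions): simulate 5-point Likert responses with a true group difference and compare how often the t-test and the Mann-Whitney U test reject.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
reps, n, alpha = 2000, 50, 0.05
cuts = np.array([-1.5, -0.5, 0.5, 1.5])      # thresholds -> categories 1..5

hits_t = hits_u = 0
for _ in range(reps):
    a = np.searchsorted(cuts, rng.normal(0.0, 1.0, n)) + 1   # group A
    b = np.searchsorted(cuts, rng.normal(0.4, 1.0, n)) + 1   # shifted group B
    hits_t += stats.ttest_ind(a, b).pvalue < alpha
    hits_u += stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha

print(f"t-test power:       {hits_t / reps:.3f}")
print(f"Mann-Whitney power: {hits_u / reps:.3f}")
```

Rejection rates under a true shift estimate power; rerunning with identical group means would instead estimate each test's Type I error on ordinal data.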