Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 4
  Since 2016 (last 10 years): 6
  Since 2006 (last 20 years): 6
Descriptor
  Correlation: 9
  Item Analysis: 9
  Test Length: 9
  Test Items: 6
  Item Response Theory: 5
  Monte Carlo Methods: 4
  Accuracy: 3
  Factor Analysis: 3
  Sample Size: 3
  Simulation: 3
  Comparative Analysis: 2
Author
  Allan S. Cohen: 1
  Arikan, Serkan: 1
  Aybek, Eren Can: 1
  Baris Pekmezci, Fulya: 1
  Berk, Ronald A.: 1
  Brown, Joel M.: 1
  Choi, Youn-Jeng: 1
  Gulleroglu, H. Deniz: 1
  Guo, Wenjing: 1
  Kaiser, Henry F.: 1
  Kose, Ibrahim Alper: 1
Publication Type
  Reports - Research: 7
  Journal Articles: 6
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
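The entry above mentions traditional and revised parallel analysis only in passing. As a point of reference, here is a minimal sketch of traditional (Horn-style) parallel analysis on Pearson correlations, not the revised or IRT-specific variants the article evaluates; the function name, simulation settings, and example data are illustrative, not taken from the article.

```python
# Sketch of traditional parallel analysis: retain a dimension while the
# observed eigenvalue exceeds the chosen percentile of eigenvalues obtained
# from random, uncorrelated data of the same size.
import numpy as np

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    n, p = data.shape
    rng = np.random.default_rng(seed)
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        rand[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    k = 0
    for observed, cutoff in zip(obs, threshold):
        if observed > cutoff:
            k += 1
        else:
            break
    return k

# Illustrative check: two correlated item clusters should suggest two dimensions.
rng = np.random.default_rng(1)
factors = rng.standard_normal((500, 2))
loadings = np.array([[0.8, 0.0]] * 5 + [[0.0, 0.8]] * 5)
items = factors @ loadings.T + 0.6 * rng.standard_normal((500, 10))
print(parallel_analysis(items))
```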
Novak, Josip; Rebernjak, Blaž – Measurement: Interdisciplinary Research and Perspectives, 2023
A Monte Carlo simulation study was conducted to examine the performance of the α, λ_2, λ_4, λ_2, ω_T, GLB_MRFA, and GLB_Algebraic coefficients. Population reliability, distribution shape, sample size, test length, and number of response categories were varied…
Descriptors: Monte Carlo Methods, Evaluation Methods, Reliability, Simulation
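For orientation, the sketch below computes two of the coefficients named in the abstract, coefficient α and Guttman's λ_2, from a respondents-by-items score matrix. The study's simulation design (varying population reliability, distribution shape, sample size, and so on) is not reproduced, and all names and data are illustrative.

```python
import numpy as np

def coefficient_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

def guttman_lambda2(x):
    # lambda2 = lambda1 + sqrt(k/(k-1) * sum of squared off-diagonal covariances) / total variance
    k = x.shape[1]
    cov = np.cov(x, rowvar=False)
    total_var = cov.sum()
    off_diag_sq = (cov ** 2).sum() - (np.diag(cov) ** 2).sum()
    lambda1 = 1 - np.trace(cov) / total_var
    return lambda1 + np.sqrt(k / (k - 1) * off_diag_sq) / total_var

rng = np.random.default_rng(0)
true_score = rng.standard_normal((1000, 1))
scores = true_score + rng.standard_normal((1000, 8))  # eight roughly parallel items
print(coefficient_alpha(scores), guttman_lambda2(scores))
```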
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
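As background for the comparison, the first four indices listed in the abstract have simple closed forms given the maximized log-likelihood, the number of free parameters p, and the sample size n. The sketch below shows those four only; the remaining indices and the mixture IRT model fitting itself are not shown, and the numbers in the example call are made up.

```python
import math

def fit_indices(loglik, p, n):
    aic = -2 * loglik + 2 * p
    aicc = aic + (2 * p * (p + 1)) / (n - p - 1)   # small-sample correction of AIC
    bic = -2 * loglik + p * math.log(n)
    caic = -2 * loglik + p * (math.log(n) + 1)
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic}

# The candidate model with the smallest value of a given index is preferred.
print(fit_indices(loglik=-5234.7, p=42, n=1000))
```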
Arikan, Serkan; Aybek, Eren Can – Educational Measurement: Issues and Practice, 2022
Many scholars compared various item discrimination indices in real or simulated data. Item discrimination indices, such as item-total correlation, item-rest correlation, and IRT item discrimination parameter, provide information about individual differences among all participants. However, there are tests that aim to select a very limited number…
Descriptors: Monte Carlo Methods, Item Analysis, Correlation, Individual Differences
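Two of the classical indices mentioned in this entry have direct sample formulas. Below is a minimal sketch computing the item-total and item-rest (corrected item-total) correlations from a 0/1 response matrix; the IRT discrimination parameter would instead come from fitting a 2PL model, which is not shown, and the data-generating code is purely illustrative.

```python
import numpy as np

def discrimination_indices(x):
    total = x.sum(axis=1)
    item_total, item_rest = [], []
    for j in range(x.shape[1]):
        item = x[:, j]
        item_total.append(np.corrcoef(item, total)[0, 1])          # item vs. total score
        item_rest.append(np.corrcoef(item, total - item)[0, 1])    # item vs. rest of the test
    return np.array(item_total), np.array(item_rest)

# Illustrative 0/1 data from a simple one-parameter logistic model.
rng = np.random.default_rng(0)
theta = rng.standard_normal(2000)
probs = 1 / (1 + np.exp(-(theta[:, None] - np.linspace(-1, 1, 10))))
responses = (rng.uniform(size=probs.shape) < probs).astype(int)
print(discrimination_indices(responses)[1])
```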
Tulek, Onder Kamil; Kose, Ibrahim Alper – Eurasian Journal of Educational Research, 2019
Purpose: This research investigates tests that contain DIF items and tests that have been purified of DIF items. Ability estimates from the two versions are compared to determine whether the estimates are correlated. Method: The researcher used R 3.4.1 to compare the items, and…
Descriptors: Test Items, Item Analysis, Item Response Theory, Test Length
Baris Pekmezci, Fulya; Gulleroglu, H. Deniz – Eurasian Journal of Educational Research, 2019
Purpose: This study aims to investigate the orthogonality assumption, which restricts the use of bifactor item response theory, under different conditions. Method: The study data were generated according to the bifactor model and simulated under two different models (Model 1 and Model 2).…
Descriptors: Item Response Theory, Accuracy, Item Analysis, Correlation
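For context on the assumption under study, the sketch below generates dichotomous bifactor data in 2PL form under an orthogonal structure: the general and specific abilities are drawn independently, so their population correlation is zero, and each item loads on the general factor plus exactly one specific factor. The study's specific Model 1 and Model 2 conditions are not reproduced; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items, n_specific = 1000, 12, 2

theta_g = rng.standard_normal(n_persons)                 # general ability
theta_s = rng.standard_normal((n_persons, n_specific))   # specific abilities, independent of theta_g
group = np.repeat(np.arange(n_specific), n_items // n_specific)  # which specific factor each item belongs to

a_g = rng.uniform(0.8, 2.0, n_items)   # general discriminations
a_s = rng.uniform(0.5, 1.5, n_items)   # specific discriminations
b = rng.uniform(-1.5, 1.5, n_items)    # difficulties

logit = a_g * theta_g[:, None] + a_s * theta_s[:, group] - b
responses = (rng.uniform(size=logit.shape) < 1 / (1 + np.exp(-logit))).astype(int)
```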

Berk, Ronald A. – Educational and Psychological Measurement, 1978
Three formulae developed to correct item-total correlations for spuriousness were evaluated. Relationships among corrected, uncorrected, and item-remainder correlations were determined by computing sets of mean, minimum, and maximum deviation coefficients and Spearman rank correlations for nine test lengths. (Author/JKS)
Descriptors: Correlation, Intermediate Grades, Item Analysis, Test Construction
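The abstract does not spell out which three correction formulas were compared. For reference, the sketch below shows one widely used textbook correction for spuriousness, which removes the item's own contribution from the item-total correlation and algebraically reduces to the item-remainder correlation; the example prints both to show the agreement.

```python
import numpy as np

def corrected_item_total(item, total):
    # r_corrected = (r_it * s_t - s_i) / sqrt(s_i^2 + s_t^2 - 2 * r_it * s_i * s_t)
    r_it = np.corrcoef(item, total)[0, 1]
    s_i, s_t = item.std(ddof=1), total.std(ddof=1)
    return (r_it * s_t - s_i) / np.sqrt(s_i**2 + s_t**2 - 2 * r_it * s_i * s_t)

# Illustrative correlated 0/1 items driven by a common latent trait.
rng = np.random.default_rng(0)
theta = rng.standard_normal(500)
probs = 1 / (1 + np.exp(-(theta[:, None] - 0.2)))
items = (rng.uniform(size=(500, 20)) < probs).astype(float)
total = items.sum(axis=1)
print(corrected_item_total(items[:, 0], total),
      np.corrcoef(items[:, 0], total - items[:, 0])[0, 1])
```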

Serlin, Ronald C.; Kaiser, Henry F. – Educational and Psychological Measurement, 1978
When multiple-choice tests are scored in the usual manner, giving each correct answer one point, information concerning response patterns is lost. A method for utilizing this information is suggested. An example is presented and compared with two conventional methods of scoring. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Item Analysis, Multiple Choice Tests
Brown, Joel M.; Weiss, David J. – 1977
An adaptive testing strategy is described for achievement tests covering multiple content areas. The strategy combines adaptive item selection both within and between the subtests in the multiple-subtest battery. A real-data simulation was conducted to compare the results from adaptive testing and from conventional testing, in terms of test…
Descriptors: Achievement Tests, Adaptive Testing, Branching, Comparative Analysis
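The report's within- and between-subtest branching strategy is only summarized here. As a generic point of reference, the sketch below shows one common adaptive item-selection rule: choose the unadministered item with maximum Fisher information at the current ability estimate under a 2PL model. All item parameters and names are made up.

```python
import numpy as np

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def next_item(theta_hat, a, b, administered):
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf   # never reuse an administered item
    return int(np.argmax(info))

a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])
print(next_item(theta_hat=0.3, a=a, b=b, administered={2}))
```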