Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 3
Descriptor
Comparative Analysis: 24
Computer Simulation: 24
Item Response Theory: 24
Estimation (Mathematics): 12
Mathematical Models: 10
Equations (Mathematics): 8
Sample Size: 7
Item Bias: 6
Equated Scores: 5
Computer Assisted Testing: 4
Error of Measurement: 4
Source
Applied Psychological…: 7
Journal of Educational…: 5
Applied Measurement in…: 2
Australian Journal of…: 1
Journal of Educational…: 1
Multivariate Behavioral…: 1
Psychological Review: 1
Publication Type
Journal Articles: 18
Reports - Evaluative: 14
Reports - Research: 9
Speeches/Meeting Papers: 5
Opinion Papers: 1
Education Level
Secondary Education: 1
Assessments and Surveys
Program for International…: 1
Test of English as a Foreign…: 1
Scoular, Claire; Eleftheriadou, Sofia; Ramalingam, Dara; Cloney, Dan – Australian Journal of Education, 2020
Collaboration is a complex skill, comprising multiple subskills, that is of growing interest to policy makers, educators and researchers. Several definitions and frameworks have been described in the literature to support assessment of collaboration; however, the inherent structure of the construct still needs better definition. In 2015, the…
Descriptors: Cooperative Learning, Problem Solving, Computer Assisted Testing, Comparative Analysis
MacCoun, Robert J. – Psychological Review, 2012
[Correction Notice: An erratum for this article was reported in Vol 119(2) of Psychological Review (see record 2012-06153-001). In the article, incorrect versions of figures 3 and 6 were included. Also, Table 8 should have included the following information in the table footnote: "P(A|V) = probability of acquittal given unanimous verdict." All…
Descriptors: Social Influences, Probability, Item Response Theory, Psychological Studies
Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko – Applied Psychological Measurement, 2008
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)-based equating results. To find a better way to deal with outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Descriptors: Item Response Theory, Item Analysis, Computer Simulation, Equated Scores
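
A minimal sketch of the kind of linking step such equating studies start from (my construction, not the study's code; the mean-sigma method and the 2.0 z-cutoff are illustrative choices):

```python
import numpy as np

def mean_sigma_link(b_old, b_new):
    """Slope A and intercept B placing new-form b-estimates on the
    old-form scale via the common items (mean-sigma method)."""
    A = np.std(b_old, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_old) - A * np.mean(b_new)
    return A, B

def flag_inconsistent(b_old, b_new, z_crit=2.0):
    """Flag common items whose rescaled b-estimates drift most."""
    A, B = mean_sigma_link(b_old, b_new)
    d = b_old - (A * b_new + B)          # per-item residual drift
    z = (d - d.mean()) / d.std(ddof=1)
    return np.abs(z) > z_crit

# Hypothetical common-item difficulty estimates from two calibrations
b_old = np.array([-1.2, -0.5, 0.0, 0.4, 1.1, 2.4])
b_new = np.array([-1.0, -0.4, 0.1, 0.5, 1.2, 0.3])  # last item drifts
print(flag_inconsistent(b_old, b_new))
```

Once flagged, an outlier item can be dropped from the anchor set or the linking recomputed; how best to handle such items is broadly the design choice at issue in the abstract.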

de Gruijter, Dato N. M. – Applied Psychological Measurement, 1994
The nonparametric Mokken model was compared with parametric item response models on simulated data by means of latent class analysis. It is demonstrated that latent class analysis provides a consistent comparison of item response models. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Item Response Theory, Nonparametric Statistics

Tate, Richard L. – Journal of Educational Measurement, 1995
Robustness of the school-level item response theory (IRT) model to violations of distributional assumptions was studied in a computer simulation. In situations where school-level precision might be acceptable for real school comparisons, expected a posteriori estimates of school ability were robust over a range of violations and conditions.…
Descriptors: Comparative Analysis, Computer Simulation, Estimation (Mathematics), Item Response Theory

Zwinderman, Aeilko; van den Wollenberg, Arnold L. – Applied Psychological Measurement, 1990
Simulation studies (N=4,000 simulees) examined the effect of misspecification of the latent ability distribution (theta) on the accuracy and efficiency of marginal maximum likelihood (MML) item parameter estimates and on MML statistics to test sufficiency and conditional independence. Results were compared to those of the conditional maximum…
Descriptors: Comparative Analysis, Computer Simulation, Estimation (Mathematics), Item Response Theory
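
For context (notation mine, not the authors'): MML integrates ability out of the likelihood against an assumed latent distribution g(θ), which is precisely where misspecification enters, whereas conditional maximum likelihood conditions on each examinee's raw score and so avoids any assumption about g:

$$ L_{\mathrm{MML}}(\boldsymbol{\beta}) \;=\; \prod_{v=1}^{N} \int P(\mathbf{x}_v \mid \theta, \boldsymbol{\beta})\, g(\theta)\, d\theta $$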

Stark, Stephen; Drasgow, Fritz – Applied Psychological Measurement, 2002
Describes item response and information functions for the Zinnes and Griggs (1974) paired comparison item response theory (IRT) model and presents procedures for estimating stimulus and person parameters. Monte Carlo simulations show that at least 400 ratings are required to obtain reasonably accurate estimates of the stimulus parameters and their…
Descriptors: Comparative Analysis, Computer Simulation, Error of Measurement, Item Response Theory

Knol, Dirk L.; Berger, Martijn P. F. – Multivariate Behavioral Research, 1991
In a simulation study, factor analysis and multidimensional item response theory (IRT) models are compared with respect to estimates of item parameters. For multidimensional data, a common factor analysis on the matrix of tetrachoric correlations performs at least as well as the multidimensional IRT model. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
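
The comparison trades on a known equivalence between the common factor model for tetrachoric correlations and the multidimensional normal-ogive IRT model; in one standard parameterization with uncorrelated factors (notation mine), loadings map to discriminations by

$$ a_{jk} \;=\; \frac{\lambda_{jk}}{\sqrt{1-\sum_{k'}\lambda_{jk'}^{2}}} $$

so similar item-parameter recovery from the two approaches is what that equivalence predicts.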

Liou, Michelle – Applied Psychological Measurement, 1993
Accuracy of three exact person tests for assessing model-data fit in the Rasch model was investigated in a simulation study. Empirical Type I error rates and statistical power of the person tests were computed. The exact person test conditioned on total score is a promising tool for assessing consistency of response patterns with the Rasch model.…
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Goodness of Fit
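
The phrase "exact person test conditioned on total score" has a concrete computational meaning: under the Rasch model the distribution of a response pattern given its total score is free of ability, so exact tail probabilities can be obtained by enumeration. A minimal sketch (my construction, not the paper's code; the statistic T is one simple choice of person-fit measure):

```python
import numpy as np
from itertools import combinations

def exact_person_pvalue(x, b):
    """Exact upper p-value for T = sum of difficulties of the items
    answered correctly, conditional on the total score r. Under the
    Rasch model, P(pattern | r) is proportional to exp(-sum_i x_i*b_i)."""
    n, r = len(x), int(sum(x))
    t_obs = sum(b[i] for i in range(n) if x[i])
    num = den = 0.0
    for idx in combinations(range(n), r):   # every pattern with score r
        t = sum(b[i] for i in idx)
        w = np.exp(-t)                      # conditional weight
        den += w
        if t >= t_obs:                      # as aberrant or more so
            num += w
    return num / den

b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])   # item difficulties
x = [0, 0, 0, 1, 1]   # misses easy items, solves hard ones: aberrant
print(exact_person_pvalue(x, b))
```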
Liu, Xiufeng – 1992
The difference between compensatory and noncompensatory item response theory (IRT) models in the dimensionality of the test data they generate, and the effect of that difference on model-data fit, were examined. The STRESS (proportion of variance not accounted for by the multidimensional scaling model) and RSQ (proportion of variance accounted for by…
Descriptors: Chi Square, Comparative Analysis, Computer Simulation, Foreign Countries
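
The two families the study contrasts differ in how ability dimensions combine (generic logistic forms, notation mine): in a compensatory model a surplus on one dimension can offset a deficit on another, while a noncompensatory model multiplies per-dimension success probabilities, so every dimension must clear its own hurdle:

$$ P^{\mathrm{comp}}(\boldsymbol{\theta}) \;=\; \frac{1}{1+\exp\!\big[-(\sum_k a_k\theta_k + d)\big]}, \qquad P^{\mathrm{noncomp}}(\boldsymbol{\theta}) \;=\; \prod_k \frac{1}{1+\exp\!\big[-a_k(\theta_k-b_k)\big]} $$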

Barnes, Laura L. B.; Wise, Steven L. – Applied Measurement in Education, 1991
One-parameter and three-parameter item response theory (IRT) model estimates were compared with estimates obtained from two modified one-parameter models that incorporated a constant nonzero guessing parameter. Using small-sample simulation data (50, 100, and 200 simulated examinees), modified one-parameter models were most effective in estimating…
Descriptors: Ability, Achievement Tests, Comparative Analysis, Computer Simulation
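
The modified model the abstract refers to can be written generically (notation mine) as a one-parameter logistic curve with a fixed, nonzero lower asymptote c, e.g. c = 1/m for m-option multiple choice, in place of an item-specific estimated guessing parameter:

$$ P_i(\theta) \;=\; c + (1-c)\,\frac{\exp(\theta-b_i)}{1+\exp(\theta-b_i)} $$

Fixing c costs nothing in estimated parameters, which is plausibly why such a model can win at the small sample sizes (50 to 200 examinees) studied here.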
Morrison, Carol A.; Fitzpatrick, Steven J. – 1992
An attempt was made to determine which item response theory (IRT) equating method results in the least amount of equating error or "scale drift" when equating scores across one or more test forms. An internal anchor test design was employed with five different test forms, each consisting of 30 items, 10 in common with the base test and 5…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Error of Measurement
Tang, K. Linda; And Others – 1993
This study compared the performance of the LOGIST and BILOG computer programs on item response theory (IRT) based scaling and equating for the Test of English as a Foreign Language (TOEFL) using real and simulated data and two calibration structures. Applications of IRT for the TOEFL program are based on the three-parameter logistic (3PL) model.…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Estimation (Mathematics)
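
The 3PL model underlying both programs is standard:

$$ P_i(\theta) \;=\; c_i + \frac{1-c_i}{1+\exp\!\big[-Da_i(\theta-b_i)\big]} $$

with discrimination a_i, difficulty b_i, pseudo-guessing c_i, and scaling constant D = 1.7; the programs differ chiefly in estimation approach (LOGIST uses joint maximum likelihood, BILOG marginal maximum likelihood), which is what makes their scaling and equating behavior worth comparing.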

Swaminathan, Hariharan; Rogers, H. Jane – Journal of Educational Measurement, 1990
A logistic regression model for characterizing differential item functioning (DIF) between two groups is presented. A distinction is drawn between uniform and nonuniform DIF in terms of model parameters. A statistic for testing the hypothesis of no DIF is developed, and simulation studies compare it with the Mantel-Haenszel procedure. (Author/TJH)
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
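
The uniform/nonuniform distinction is easy to state in model terms, and the whole procedure fits in a few lines. A sketch with simulated data (my construction; variable names and the simulated effect size are illustrative), where b2 ≠ 0 with b3 = 0 indicates uniform DIF and b3 ≠ 0 indicates nonuniform DIF:

```python
import numpy as np
import statsmodels.api as sm

# Model: logit P(u=1) = b0 + b1*theta + b2*group + b3*(theta*group)
rng = np.random.default_rng(1)
n = 2000
theta = rng.standard_normal(n)               # ability proxy (e.g., total score)
group = rng.integers(0, 2, n).astype(float)  # 0 = reference, 1 = focal
eta = -0.2 + 1.0 * theta - 0.6 * group       # uniform DIF built into the data
u = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

X = sm.add_constant(np.column_stack([theta, group, theta * group]))
fit = sm.Logit(u, X).fit(disp=0)
print(fit.params)        # b2 should recover roughly -0.6, b3 roughly 0
print(fit.pvalues[2:])   # Wald tests: uniform DIF, then nonuniform DIF
```

The paper develops its own statistic for the no-DIF hypothesis; the Wald p-values above are just a convenient stand-in.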

De Ayala, R. J.; And Others – Journal of Educational Measurement, 1990
F. M. Lord's flexilevel computerized adaptive testing (CAT) procedure was compared to an item response theory (IRT)-based CAT procedure that uses Bayesian ability estimation with various standard errors of estimates used for terminating the test. Ability estimates from flexilevel CATs were as accurate as those from Bayesian CATs. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Comparative Analysis
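
A minimal sketch of the Bayesian piece being varied here (assumptions mine: a 2PL item model, a standard normal prior, numerical quadrature, and a hypothetical 0.30 stopping threshold), estimating ability by the posterior mean (EAP) and stopping once the posterior SD is small enough:

```python
import numpy as np

def eap(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate and posterior SD after the items so far."""
    post = np.exp(-0.5 * grid**2)                # N(0,1) prior, unnormalized
    for u, ai, bi in zip(responses, a, b):
        p = 1 / (1 + np.exp(-ai * (grid - bi)))  # 2PL success probability
        post *= p if u else (1 - p)
    post /= post.sum()
    est = (grid * post).sum()
    sd = np.sqrt(((grid - est) ** 2 * post).sum())
    return est, sd

est, sd = eap([1, 1, 0], a=[1.2, 0.9, 1.5], b=[-0.5, 0.0, 0.8])
print("stop" if sd < 0.30 else "continue", round(est, 3), round(sd, 3))
```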
Page 1 of 2