Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko – Applied Psychological Measurement, 2008
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)-based equating results. To find a better way to deal with outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Descriptors: Item Response Theory, Item Analysis, Computer Simulation, Equated Scores
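The screening problem in the Hu, Rogers, and Vukmirovic study (flagging common items whose b-parameter estimates disagree across two calibrations) can be sketched as follows. This is a minimal illustration, not the study's actual procedure: the function name, the mean/sigma alignment step, and the z-score cutoff are all assumptions.

```python
import numpy as np

def flag_inconsistent_b(b_old, b_new, z_crit=1.96):
    """Flag common items whose b-parameter estimates disagree across two
    calibrations, after removing the overall difference in metric.
    Illustrative criterion: standardized difference exceeding z_crit."""
    b_old = np.asarray(b_old, float)
    b_new = np.asarray(b_new, float)
    # Put the new calibration on the old metric (mean/sigma alignment)
    slope = b_old.std(ddof=1) / b_new.std(ddof=1)
    intercept = b_old.mean() - slope * b_new.mean()
    b_new_aligned = slope * b_new + intercept
    # Standardize the residual differences and flag the unusually large ones
    diff = b_old - b_new_aligned
    z = (diff - diff.mean()) / diff.std(ddof=1)
    return np.abs(z) > z_crit
```

Items flagged this way would then be dropped from (or down-weighted in) the common-item set before the equating transformation is estimated.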

Zeng, Lingjia – Applied Psychological Measurement, 1993
A numerical approach for computing standard errors (SEs) of linear equating is described, in which the first partial derivatives of the equating functions needed to compute SEs are derived numerically. The numerical and analytical approaches are compared using the Tucker equating method. SEs derived numerically are found to be indistinguishable from SEs derived…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Equations (Mathematics)

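The numerical approach Zeng describes can be illustrated with a small delta-method sketch: the SE of an equated score is sqrt(g' Σ g), where g holds the partial derivatives of the equating function with respect to the moment estimates and Σ is their sampling covariance; here g is approximated by central differences rather than derived analytically. The function names and the toy parameterization are assumptions for illustration.

```python
import numpy as np

def linear_equate(mu_x, sd_x, mu_y, sd_y, score):
    """Linear equating: map a score on form X onto the form Y metric."""
    return mu_y + (sd_y / sd_x) * (score - mu_x)

def se_numerical(params, cov, score, h=1e-5):
    """Delta-method SE using central-difference partial derivatives.
    params = (mu_x, sd_x, mu_y, sd_y); cov = their sampling covariance."""
    params = np.asarray(params, float)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        p_hi, p_lo = params.copy(), params.copy()
        p_hi[i] += h
        p_lo[i] -= h
        # Central difference: (f(p + h) - f(p - h)) / (2h)
        grad[i] = (linear_equate(*p_hi, score) - linear_equate(*p_lo, score)) / (2 * h)
    return float(np.sqrt(grad @ cov @ grad))
```

Because the equating function is smooth in the moments, the central-difference gradient matches the exact partial derivatives to within discretization error of order h², which is why the numerically and analytically derived SEs are indistinguishable in practice.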
Morrison, Carol A.; Fitzpatrick, Steven J. – 1992
An attempt was made to determine which item response theory (IRT) equating method results in the least amount of equating error or "scale drift" when equating scores across one or more test forms. An internal anchor test design was employed with five different test forms, each consisting of 30 items, 10 in common with the base test and 5…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Error of Measurement

Tang, K. Linda; And Others – 1993
This study compared the performance of the LOGIST and BILOG computer programs on item response theory (IRT) based scaling and equating for the Test of English as a Foreign Language (TOEFL) using real and simulated data and two calibration structures. Applications of IRT for the TOEFL program are based on the three-parameter logistic (3PL) model.…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Estimation (Mathematics)

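The three-parameter logistic (3PL) model on which the TOEFL calibrations are based has the standard form P(θ) = c + (1 − c) / (1 + exp(−Da(θ − b))); a minimal sketch:

```python
import numpy as np

def p_3pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic model: probability of a correct response,
    with discrimination a, difficulty b, pseudo-guessing c, and the
    conventional scaling constant D = 1.7."""
    return c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))
```

At θ = b the probability is c + (1 − c)/2, and as θ decreases it approaches the guessing floor c rather than zero, which is what distinguishes the 3PL from the one- and two-parameter models.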
Zeng, Lingjia – 1991
Large-sample standard errors of linear equating for the single-group design are derived without making the normality assumption. Two general methods based on the delta method of M. Kendall and A. Stuart (1977) are described: one uses exact partial derivatives, and the other uses numerical derivatives. Simulation using the beta-binomial…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Equations (Mathematics)

Cohen, Allan S.; Kim, Seock-Ho – 1993
Equating tests from different calibrations under item response theory (IRT) requires calculation of the slope and intercept of the appropriate linear transformation. Two methods have been proposed recently for equating graded response items under IRT, a test characteristic curve method and a minimum chi-square method. These two methods are…
Descriptors: Chi Square, Comparative Analysis, Computer Simulation, Equated Scores
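Both methods Cohen and Kim compare estimate the slope A and intercept B of the linear metric transformation θ* = Aθ + B. As a point of reference, the simpler mean/sigma method computes A and B directly from the moments of the common items' b-parameters; the sketch below shows that baseline (it is not the test characteristic curve or minimum chi-square procedure, which instead optimize a fit criterion).

```python
import numpy as np

def mean_sigma_transform(b_from, b_to):
    """Mean/sigma estimate of the slope A and intercept B linking two IRT
    calibrations: theta_to = A * theta_from + B. Uses only the first two
    moments of the common items' difficulty estimates."""
    b_from = np.asarray(b_from, float)
    b_to = np.asarray(b_to, float)
    A = b_to.std(ddof=1) / b_from.std(ddof=1)
    B = b_to.mean() - A * b_from.mean()
    return A, B
```

With A and B in hand, item parameters are rescaled as b* = Ab + B and a* = a/A, placing both calibrations on a common metric before true-score or observed-score equating.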

Hirsch, Thomas M. – Journal of Educational Measurement, 1989
Equatings were performed on both simulated and real data sets using common-examinee design and two abilities for each examinee. Results indicate that effective equating, as measured by comparability of true scores, is possible with the techniques used in this study. However, the stability of the ability estimates proved unsatisfactory. (TJH)
Descriptors: Academic Ability, College Students, Comparative Analysis, Computer Assisted Testing