Hu, Huiqin; Rogers, W. Todd; Vukmirovic, Zarko – Applied Psychological Measurement, 2008
Common items with inconsistent b-parameter estimates may have a serious impact on item response theory (IRT)-based equating results. To find a better way to deal with outlier common items with inconsistent b-parameters, the current study investigated the comparability of 10 variations of four IRT-based equating methods (i.e., concurrent…
Descriptors: Item Response Theory, Item Analysis, Computer Simulation, Equated Scores
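For the Hu, Rogers, and Vukmirovic entry above, a minimal sketch of the kind of common-item linking at issue, assuming a simple mean/sigma transformation and a z-score screen for inconsistent b-parameters; the function name, threshold, and example values are illustrative and not the article's procedure.

```python
import numpy as np

def mean_sigma_link(b_old, b_new, outlier_z=2.0):
    # Place new-form b estimates on the old-form scale via the mean/sigma
    # method, after dropping common items whose b differences look aberrant.
    b_old, b_new = np.asarray(b_old, float), np.asarray(b_new, float)
    diff = b_old - b_new
    z = (diff - diff.mean()) / diff.std(ddof=1)
    keep = np.abs(z) <= outlier_z                            # screen outlier common items
    A = b_old[keep].std(ddof=1) / b_new[keep].std(ddof=1)    # slope of theta_new -> theta_old
    B = b_old[keep].mean() - A * b_new[keep].mean()          # intercept
    return A, B, keep

# One deliberately inconsistent common item (the last entry).
A, B, keep = mean_sigma_link([-1.2, -0.4, 0.1, 0.8, 1.5],
                             [-1.0, -0.3, 0.2, 0.9, 3.0],
                             outlier_z=1.5)
print(A, B, keep)
```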
Hedges, Larry V.; Vevea, Jack L. – 2003
A computer simulation study was conducted to investigate the amount of uncertainty added to National Assessment of Educational Progress estimates by equating error under three different equating methods, while varying a number of factors that might affect the accuracy of equating. Data from past NAEP administrations were used to guide the…
Descriptors: Computer Simulation, Equated Scores, Error of Measurement, Item Response Theory
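As a purely illustrative sketch of the quantity the Hedges and Vevea study targets, one can compare the variance of a summary statistic across replications with and without a form-level equating error term; the distributions and numbers below are assumptions, not the NAEP design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps, n_examinees = 2000, 500

# Toy setup (all values assumed): each replication yields a mean scale score;
# equating adds an independent form-level shift to every score in a replication.
sampling_only, with_equating = [], []
for _ in range(n_reps):
    scores = rng.normal(250.0, 35.0, n_examinees)   # examinee scale scores
    eq_error = rng.normal(0.0, 2.0)                 # form-level equating error
    sampling_only.append(scores.mean())
    with_equating.append((scores + eq_error).mean())

var_sampling = np.var(sampling_only, ddof=1)
var_total = np.var(with_equating, ddof=1)
print("approx. variance added by equating error:", var_total - var_sampling)
```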
Morrison, Carol A.; Fitzpatrick, Steven J. – 1992
An attempt was made to determine which item response theory (IRT) equating method results in the least amount of equating error or "scale drift" when equating scores across one or more test forms. An internal anchor test design was employed with five different test forms, each consisting of 30 items, 10 in common with the base test and 5…
Descriptors: Comparative Analysis, Computer Simulation, Equated Scores, Error of Measurement
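The "scale drift" examined by Morrison and Fitzpatrick can be illustrated by composing a chain of linear linkings around a loop of forms and seeing how far the round trip departs from the identity; the helper function and the linking coefficients below are hypothetical.

```python
import numpy as np

def compose(link1, link2):
    # Compose two linear linkings theta -> A*theta + B (hypothetical helper).
    A1, B1 = link1
    A2, B2 = link2
    return A2 * A1, A2 * B1 + B2

# Assumed mean/sigma linkings around a loop of forms, ending back at the base form.
chain = [(1.02, -0.05), (0.97, 0.08), (1.04, -0.02), (0.98, 0.01)]
A, B = 1.0, 0.0                      # start from the identity transformation
for link in chain:
    A, B = compose((A, B), link)

# Scale drift: how far the composed transformation departs from the identity.
print("slope drift:", A - 1.0, "intercept drift:", B)
```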
Hwang, Chi-en; Cleary, T. Anne – 1986
Results from two basic types of test pre-equating were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…
Descriptors: Computer Simulation, Equated Scores, Latent Trait Theory, Mathematical Models
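A hedged sketch of generating response data from a three-parameter logistic model with a constant guessing parameter, roughly matching the sample size and test length mentioned in the Hwang and Cleary abstract; the parameter distributions and constants are assumptions, not the study's modified model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_3pl(theta, a, b, c=0.2, D=1.7):
    # Dichotomous responses from a 3PL model with a constant guessing parameter c.
    theta = np.asarray(theta, float).reshape(-1, 1)              # examinees x 1
    p = c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))     # examinees x items
    return (rng.uniform(size=p.shape) < p).astype(int)

n_items, n_examinees = 72, 3000                # test length and sample size from the abstract
a = rng.uniform(0.5, 2.0, n_items)             # assumed discrimination range
b = rng.normal(0.0, 1.0, n_items)              # assumed difficulty distribution
theta = rng.normal(0.0, 1.0, n_examinees)      # assumed ability distribution
responses = simulate_3pl(theta, a, b)
print(responses.shape, responses.mean())
```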
Skaggs, Gary; Lissitz, Robert W. – 1985
This study examined how four commonly used test equating procedures (linear, equipercentile, Rasch Model, and three-parameter) would respond to situations in which the properties of the two tests being equated were different. Data for two tests plus an external anchor test were generated from a three-parameter model in which mean test differences…
Descriptors: Computer Simulation, Equated Scores, Error of Measurement, Goodness of Fit
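Two of the four procedures Skaggs and Lissitz compare, linear and equipercentile equating, reduce to simple observed-score transformations. The sketch below is a bare-bones version of each; the anchor-test machinery and smoothing used in the study are omitted, and the score distributions are made up.

```python
import numpy as np

def linear_equate(x, scores_x, scores_y):
    # Linear equating: match the mean and SD of form X to those of form Y.
    mx, sx = np.mean(scores_x), np.std(scores_x, ddof=1)
    my, sy = np.mean(scores_y), np.std(scores_y, ddof=1)
    return my + (sy / sx) * (np.asarray(x, float) - mx)

def equipercentile_equate(x, scores_x, scores_y):
    # Equipercentile equating: map a form-X score to the form-Y score with the
    # same percentile rank (crude empirical version, no smoothing).
    p = np.searchsorted(np.sort(scores_x), x, side="right") / len(scores_x)
    return np.quantile(scores_y, np.clip(p, 0.0, 1.0))

rng = np.random.default_rng(1)
scores_x = rng.binomial(40, 0.60, 2000)        # made-up number-correct scores, form X
scores_y = rng.binomial(40, 0.65, 2000)        # made-up number-correct scores, form Y
print(linear_equate(25, scores_x, scores_y),
      equipercentile_equate(25, scores_x, scores_y))
```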
Stocking, Martha L.; Eignor, Daniel R. – 1986
In item response theory (IRT), preequating depends upon item parameter estimate invariance. Three separate simulations, all using the unidimensional three-parameter logistic item response model, were conducted to study the impact of the following variables on preequating: (1) mean differences in ability; (2) multidimensionality in the data; and…
Descriptors: College Entrance Examinations, Computer Simulation, Equated Scores, Error of Measurement
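Since preequating hinges on item parameter estimate invariance, a toy check in the spirit of the Stocking and Eignor study is to standardize b estimates from two calibration samples (removing the arbitrary scale) and look at their agreement; the simulated estimates below are stand-ins, not the study's data.

```python
import numpy as np

def invariance_check(b_cal1, b_cal2):
    # Standardize each set of b estimates (removing the arbitrary scale), then
    # report their correlation and the RMS difference of standardized values.
    z1 = (b_cal1 - b_cal1.mean()) / b_cal1.std(ddof=1)
    z2 = (b_cal2 - b_cal2.mean()) / b_cal2.std(ddof=1)
    return np.corrcoef(z1, z2)[0, 1], np.sqrt(np.mean((z1 - z2) ** 2))

rng = np.random.default_rng(3)
b_true = rng.normal(0.0, 1.0, 60)                     # generating difficulties
b_cal1 = b_true + rng.normal(0.0, 0.15, 60)           # estimates from sample 1
b_cal2 = b_true + rng.normal(0.0, 0.15, 60) + 0.5     # sample 2, higher mean ability (shifted scale)
print(invariance_check(b_cal1, b_cal2))
```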
Gilmer, Jerry S. – 1987
The proponents of test disclosure argue that disclosure is a matter of fairness; the opponents argue that fairness is enhanced by score equating which is dependent on test security. This research simulated disclosure on a professional licensing examination by placing response keys to selected items in some examinees' records, and comparing their…
Descriptors: Adults, Answer Keys, Computer Simulation, Cutting Scores
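A toy version of the disclosure manipulation Gilmer describes: give a subset of examinees the keys to a subset of items and compare pass rates at a fixed cut score. Every number and distribution here is invented for illustration and is not the licensing examination's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_examinees, n_items, cut = 1000, 100, 70

# Toy baseline responses (probability correct varies by examinee).
p = rng.uniform(0.5, 0.9, size=(n_examinees, 1))
responses = (rng.uniform(size=(n_examinees, n_items)) < p).astype(int)

# Simulated disclosure: a subset of examinees "knows" the keys to 20 items.
disclosed_items = rng.choice(n_items, 20, replace=False)
disclosed_people = rng.choice(n_examinees, 200, replace=False)
leaked = responses.copy()
leaked[np.ix_(disclosed_people, disclosed_items)] = 1

print("pass rate, no disclosure:  ", (responses.sum(1) >= cut).mean())
print("pass rate, with disclosure:", (leaked.sum(1) >= cut).mean())
```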

Hirsch, Thomas M. – Journal of Educational Measurement, 1989
Equatings were performed on both simulated and real data sets using a common-examinee design and two abilities for each examinee. Results indicate that effective equating, as measured by comparability of true scores, is possible with the techniques used in this study. However, the stability of the ability estimates proved unsatisfactory. (TJH)
Descriptors: Academic Ability, College Students, Comparative Analysis, Computer Assisted Testing
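A sketch of what "comparability of true scores" under a common-examinee design with two abilities per examinee might look like, using a compensatory two-dimensional model; the model form and all parameter values are assumptions rather than the article's specification.

```python
import numpy as np

rng = np.random.default_rng(11)
n_examinees, n_items = 500, 40

def true_scores(theta, a, d):
    # Expected number-correct ("true") scores under a compensatory two-dimensional 2PL.
    logits = theta @ a.T + d                     # examinees x items
    return (1.0 / (1.0 + np.exp(-logits))).sum(axis=1)

# Two correlated abilities for each common examinee (assumed correlation 0.5).
theta = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n_examinees)
a_x, d_x = rng.uniform(0.5, 1.5, (n_items, 2)), rng.normal(0, 1, n_items)   # form X items
a_y, d_y = rng.uniform(0.5, 1.5, (n_items, 2)), rng.normal(0, 1, n_items)   # form Y items

tau_x, tau_y = true_scores(theta, a_x, d_x), true_scores(theta, a_y, d_y)
# Comparability of true scores for the common examinees:
print(np.corrcoef(tau_x, tau_y)[0, 1])
```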
Parshall, Cynthia G.; And Others – 1991
A Monte Carlo study was conducted to compare the statistical bias and standard errors of non-equivalent-groups linear test equating in small samples of examinees. One thousand samples of each size (15, 25, 50, and 100) were drawn with replacement from each of five archival data files from elementary school and secondary school teacher subject area…
Descriptors: Computer Simulation, Elementary School Teachers, Elementary Secondary Education, Equated Scores
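A minimal Monte Carlo sketch of estimating the bias and standard error of a linear equating function in small samples by resampling with replacement from "population" score files, in the spirit of the Parshall et al. design; the single-population setup and score distributions below are simplifications, not the study's non-equivalent-groups procedure or archival data.

```python
import numpy as np

rng = np.random.default_rng(5)

def linear_equate(x, scores_x, scores_y):
    # Linear equating: match the mean and SD of form X to those of form Y.
    return scores_y.mean() + (scores_y.std(ddof=1) / scores_x.std(ddof=1)) * (x - scores_x.mean())

# "Population" score files and the criterion equating computed from them.
pop_x = rng.binomial(50, 0.60, 20000).astype(float)
pop_y = rng.binomial(50, 0.65, 20000).astype(float)
x0 = 30.0
criterion = linear_equate(x0, pop_x, pop_y)

# Draw 1000 samples of each size with replacement and equate at score x0.
for n in (15, 25, 50, 100):
    estimates = np.array([
        linear_equate(x0, rng.choice(pop_x, n, replace=True),
                          rng.choice(pop_y, n, replace=True))
        for _ in range(1000)
    ])
    print(n, "bias:", estimates.mean() - criterion, "SE:", estimates.std(ddof=1))
```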