Showing 1 to 15 of 17 results
Hanson, Bradley A.; Beguin, Anton A. – 1999
Item response theory (IRT) item parameters can be estimated using data from a common item equating design either separately for each form or concurrently across forms. This paper reports the results of a simulation study of separate versus concurrent item parameter estimation. Using simulated data from a test with 60 dichotomous items, 4 factors…
Descriptors: Equated Scores, Estimation (Mathematics), Item Response Theory
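When the two forms are calibrated separately, their item parameter estimates sit on different scales and must be placed on a common scale through the anchor items; concurrent calibration does this in a single estimation run. Purely as an illustration of the linking step that separate estimation requires, here is a minimal Python sketch of the mean/sigma method applied to hypothetical common-item difficulty estimates; the function and data names are invented, and this is not the procedure examined in the study above.

```python
import numpy as np

def mean_sigma_link(b_old, b_new):
    """Mean/sigma linking constants from common-item difficulties.

    b_old: anchor-item difficulties on the base (old-form) scale
    b_new: the same items' difficulties from the new form's separate calibration
    Returns (A, B) such that theta_old = A * theta_new + B.
    """
    A = np.std(b_old, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_old) - A * np.mean(b_new)
    return A, B

def rescale_items(a_new, b_new, A, B):
    """Place new-form discrimination/difficulty estimates on the base scale."""
    return a_new / A, A * b_new + B

# Hypothetical common-item difficulty estimates from two separate calibrations.
b_base = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_sep  = np.array([-1.0, -0.3, 0.2, 1.0, 1.7])
A, B = mean_sigma_link(b_base, b_sep)
print(A, B)
```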
Hanson, Bradley A.; Feinstein, Zachary S. – 1995
This paper discusses loglinear models for assessing differential item functioning (DIF). Loglinear and logit models that have been suggested for studying DIF are reviewed, and loglinear formulations of the logit models are given. A polynomial loglinear model for assessing DIF is introduced. Two examples using the polynomial loglinear model for…
Descriptors: Equated Scores, Item Bias, Test Format, Test Items
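As a rough illustration of the logit side of this model family (not the polynomial loglinear parameterization the paper introduces), the sketch below fits a logistic regression of a simulated item response on a polynomial in the matching score, with and without a group term, and forms a likelihood-ratio test for uniform DIF. All data and names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 2000

# Hypothetical data: matching score, group membership, and one dichotomous item.
score = rng.integers(0, 41, size=n).astype(float)   # total-test matching variable
group = rng.integers(0, 2, size=n)                   # 0 = reference, 1 = focal
eta = -2.0 + 0.1 * score - 0.4 * group               # built-in uniform DIF
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

s = (score - score.mean()) / score.std()             # standardize for numerical stability
X0 = sm.add_constant(np.column_stack([s, s**2]))          # matching variable only
X1 = sm.add_constant(np.column_stack([s, s**2, group]))   # matching variable + group

fit0 = sm.Logit(y, X0).fit(disp=0)
fit1 = sm.Logit(y, X1).fit(disp=0)

# Likelihood-ratio test for uniform DIF (1 df): compare the two nested models.
lr = 2.0 * (fit1.llf - fit0.llf)
print("LR =", round(lr, 2), " p =", stats.chi2.sf(lr, df=1))
```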
Butler, Olivia D.; Hanson, Bradley A. – 1997
The effectiveness of smoothing in reducing random error in the equipercentile equating of a short writing assessment scored by two raters on two prompts, with scores ranging from zero to five, was examined. Thirteen methods were compared: no equating, three presmoothing methods, three postsmoothing methods, three combinations of presmoothing and postsmoothing, mean equating,…
Descriptors: Equated Scores, Sample Size, Test Results, Writing Tests
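For orientation, the sketch below shows a bare-bones unsmoothed equipercentile equating on hypothetical frequency distributions: percentile ranks are computed on each form and matched by interpolation. It is a simplified discrete version (no continuization and none of the smoothing methods compared in the study), so treat it only as a sketch of the underlying idea.

```python
import numpy as np

def percentile_rank(freq):
    """Percentile ranks at each integer score, using the usual mid-point definition."""
    p = freq / freq.sum()
    cum = np.cumsum(p)
    return 100.0 * (cum - 0.5 * p)

def equipercentile(freq_x, freq_y):
    """Unsmoothed equipercentile equivalents of each X score on the Y scale."""
    pr_x = percentile_rank(freq_x)
    pr_y = percentile_rank(freq_y)
    scores_y = np.arange(len(freq_y))
    # Find the Y score whose percentile rank matches each X score's rank.
    return np.interp(pr_x, pr_y, scores_y)

# Hypothetical frequency distributions on a 0-10 composite writing score.
freq_x = np.array([2, 5, 12, 25, 40, 55, 48, 30, 18, 8, 3])
freq_y = np.array([1, 4, 10, 22, 38, 52, 50, 34, 20, 10, 4])
print(equipercentile(freq_x, freq_y))
```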
Hanson, Bradley A.; Feinstein, Zachary S. – 1997
Loglinear and logit models that have been suggested for studying differential item functioning (DIF) are reviewed, and loglinear formulations of the logit models are given. A polynomial loglinear model for assessing DIF is introduced that incorporates scores on the matching variable and item responses. The polynomial loglinear model contains far…
Descriptors: Equated Scores, Item Bias, Scores, Test Construction
Peer reviewed
Hanson, Bradley A. – Applied Measurement in Education, 1996
Determining whether score distributions differ on two or more test forms administered to samples of examinees from a single population is explored using three statistical tests based on loglinear models. Examples are presented of applying tests of distribution differences to decide if equating is needed for alternative forms of a test. (SLD)
Descriptors: Equated Scores, Scoring, Statistical Distributions, Test Format
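One of the simplest tests in this spirit, shown below purely as an illustration, is a likelihood-ratio (G) test that two forms administered to random groups share a single score distribution; the article's loglinear-model tests are more refined, and the frequencies here are invented.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical score frequencies (scores 0-5) for two forms given to random groups.
form_a = np.array([10, 30, 80, 120, 60, 20])
form_b = np.array([14, 36, 70, 115, 68, 17])
table = np.vstack([form_a, form_b])

# Likelihood-ratio (G) test that the two forms share one score distribution;
# a small p-value suggests the distributions differ and equating may be needed.
g_stat, p_value, dof, expected = chi2_contingency(table, lambda_="log-likelihood")
print(g_stat, p_value, dof)
```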
Peer reviewed
Hanson, Bradley A.; Beguin, Anton A. – Applied Psychological Measurement, 2002
Conducted a simulation study of separate versus concurrent item parameter estimation in common-item equating, using simulated data from a test with 60 dichotomous items and considering four factors: (1) estimation program; (2) sample size per form; (3) number of common items; and (4) equivalent versus nonequivalent groups. Results are not decisive enough to…
Descriptors: Equated Scores, Estimation (Mathematics), Item Response Theory, Scaling
Peer reviewed
Wang, Tianyou; Hanson, Bradley A.; Harris, Deborah J. – Applied Psychological Measurement, 2000
Studied whether circular equating could provide an adequate measure of various types of equating error when applied to different equating methods under different equating designs. Analyses and simulations show that circular equating is generally invalid as a criterion to evaluate the adequacy of equating. (SLD)
Descriptors: Criteria, Equated Scores, Error of Measurement, Evaluation Methods
Wang, Tianyou; Hanson, Bradley A.; Harris, Deborah J. – 1998
Equating a test form to itself through a chain of equatings, commonly referred to as circular equating, has been widely used as a criterion to evaluate the adequacy of equating. This paper uses both analytical methods and simulation methods to show that this criterion is in general invalid in serving this purpose. For the random groups design done…
Descriptors: Equated Scores, Evaluation Methods, Heuristics, Sampling
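To make the "chain" concrete, the sketch below sends a set of Form A scores around a hypothetical three-form circle of mean-sigma linear equatings (A to B to C and back to A) and reports the deviation from the identity. The papers above argue that the size of this round-trip deviation is not, in general, a valid measure of equating adequacy; the example only shows how the circular criterion is computed, with invented data.

```python
import numpy as np

def linear_equate(x_mean, x_sd, y_mean, y_sd):
    """Return a linear (mean-sigma) equating function from scale X to scale Y."""
    slope = y_sd / x_sd
    intercept = y_mean - slope * x_mean
    return lambda x: slope * x + intercept

rng = np.random.default_rng(1)

# Hypothetical random-groups samples for three forms of the same test.
a = rng.normal(50.0, 10.0, size=500)
b = rng.normal(52.0, 9.5, size=500)
c = rng.normal(49.0, 10.5, size=500)

ab = linear_equate(a.mean(), a.std(ddof=1), b.mean(), b.std(ddof=1))
bc = linear_equate(b.mean(), b.std(ddof=1), c.mean(), c.std(ddof=1))
ca = linear_equate(c.mean(), c.std(ddof=1), a.mean(), a.std(ddof=1))

# Circular equating: send Form A scores around the chain A -> B -> C -> A.
scores = np.arange(30, 71, 10)
round_trip = ca(bc(ab(scores)))
print(np.column_stack([scores, round_trip]))  # deviation from identity = "circular error"
```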
Peer reviewed
Tsai, Tsung-Hsun; Hanson, Bradley A.; Kolen, Michael J.; Forsyth, Robert A. – Applied Measurement in Education, 2001
Compared bootstrap standard errors of five item response theory (IRT) equating methods for the common-item nonequivalent groups design using test results for 1,493 and 1,793 examinees taking a professional certification test. Results suggest that standard errors of equating less than 0.1 standard deviation units could be obtained with any of the…
Descriptors: Equated Scores, Error of Measurement, Item Response Theory, Licensing Examinations (Professions)
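The bootstrap idea itself is simple: resample examinees, recompute the equating function, and take the standard deviation of the equated scores across replications. The sketch below applies it to a plain mean-sigma linear equating on hypothetical random-groups data rather than to the IRT methods compared in the article.

```python
import numpy as np

def linear_equate(x, y, points):
    """Mean-sigma linear equating of the given score points from the X to the Y scale."""
    slope = y.std(ddof=1) / x.std(ddof=1)
    return slope * (points - x.mean()) + y.mean()

def bootstrap_se(x, y, points, n_boot=1000, seed=0):
    """Bootstrap standard errors of the equated scores at each score point."""
    rng = np.random.default_rng(seed)
    reps = np.empty((n_boot, len(points)))
    for r in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        reps[r] = linear_equate(xb, yb, points)
    return reps.std(axis=0, ddof=1)

rng = np.random.default_rng(42)
x = rng.normal(60, 12, size=1500)   # hypothetical new-form scores
y = rng.normal(62, 11, size=1800)   # hypothetical old-form scores
points = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
print(bootstrap_se(x, y, points))
```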
Peer reviewed
Hanson, Bradley A. – Applied Psychological Measurement, 1991
Log-linear model bivariate smoothing and a bivariate smoothing model based on the four-parameter beta binomial model were compared for usefulness in frequency estimation common-item equipercentile equating using two datasets. The performance of smoothed equipercentile methods was also compared to that of linear methods of common-item equating.…
Descriptors: Comparative Analysis, Equated Scores, Equations (Mathematics), Estimation (Mathematics)
Peer reviewed
Hanson, Bradley A. – Journal of Educational Statistics, 1991
The formula developed by R. Levine (1955) for equating unequally reliable tests is described. The formula can be interpreted as a method of moments estimate of an equating function that results in first order equity of the equated test score under a classical congeneric model. (TJH)
Descriptors: Equated Scores, Equations (Mathematics), Estimation (Mathematics), Mathematical Models
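As a reminder of where the linear form comes from, the display below sketches the usual chaining argument behind a Levine-type true-score function for a common-item design, in standard equating notation (populations 1 and 2, anchor score V, effective-test-length parameters gamma_1 and gamma_2). This is a textbook-style derivation under the stated true-score assumptions, not the article's own development.

```latex
% Chained true-score argument behind a Levine-type linear function (standard notation,
% not taken from the article). Subscripts 1 and 2 index the two populations; V is the
% common-item (anchor) score; \gamma_1, \gamma_2 are effective test lengths.
\begin{align*}
  \tau_X &= \mu_1(X) + \gamma_1\,\bigl(\tau_V - \mu_1(V)\bigr), &
  \tau_Y &= \mu_2(Y) + \gamma_2\,\bigl(\tau_V - \mu_2(V)\bigr).
\end{align*}
% Solving the first relation for \tau_V and substituting into the second gives the
% linear equating function
\begin{equation*}
  l_Y(x) \;=\; \mu_2(Y) \;+\; \gamma_2\bigl[\mu_1(V) - \mu_2(V)\bigr]
          \;+\; \frac{\gamma_2}{\gamma_1}\,\bigl[x - \mu_1(X)\bigr].
\end{equation*}
```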
Peer reviewed
Kim, Jee-Seon; Hanson, Bradley A. – Applied Psychological Measurement, 2002
Presents a characteristic curve procedure for comparing transformations of the item response theory ability scale assuming the multiple-choice model. Illustrates the use of the method with an example equating American College Testing mathematics tests. (SLD)
Descriptors: Ability, Equated Scores, Item Response Theory, Mathematics Tests
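For readers unfamiliar with characteristic-curve linking, the sketch below shows the basic criterion for the much simpler 2PL model: choose the slope and intercept of the scale transformation so that the transformed common items' test characteristic curve matches the target one. This is a Stocking-Lord-style illustration with invented parameter estimates, not the multiple-choice-model procedure the article presents.

```python
import numpy as np
from scipy.optimize import minimize

def tcc(theta, a, b):
    """Test characteristic curve of a 2PL test: expected number-correct at each theta."""
    z = a[None, :] * (theta[:, None] - b[None, :])
    return (1.0 / (1.0 + np.exp(-z))).sum(axis=1)

def characteristic_curve_link(a_new, b_new, a_old, b_old, theta=np.linspace(-4, 4, 41)):
    """Find (A, B) minimizing the squared TCC difference after rescaling the new form's
    common-item parameters onto the old scale: a -> a/A, b -> A*b + B."""
    target = tcc(theta, a_old, b_old)

    def loss(params):
        A, B = params
        return np.sum((tcc(theta, a_new / A, A * b_new + B) - target) ** 2)

    res = minimize(loss, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
    return res.x

# Hypothetical common-item 2PL estimates from two separate calibrations.
a_old = np.array([1.0, 0.8, 1.3, 1.1])
b_old = np.array([-0.5, 0.2, 0.9, -1.1])
a_new = np.array([0.9, 0.75, 1.2, 1.05])
b_new = np.array([-0.4, 0.35, 1.1, -0.95])
print(characteristic_curve_link(a_new, b_new, a_old, b_old))
```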
Peer reviewed
Pommerich, Mary; Hanson, Bradley A.; Harris, Deborah J.; Sconing, James A. – Applied Psychological Measurement, 2004
Educational measurement practitioners are often asked to link scores on tests that are built to different content specifications. The goal in linking distinct tests is often similar to that for equating scores across different forms of the same test: to provide a set of comparable scores across the two measures. Traditional equating methods can be…
Descriptors: Measurement Techniques, Equated Scores, Prediction, College Entrance Examinations
Peer reviewed
Hanson, Bradley A.; And Others – Applied Psychological Measurement, 1993
The delta method was used to derive standard errors (SEs) of the Levine observed score and Levine true score linear test equating methods using data from two test forms. SEs derived without the normality assumption and bootstrap SEs were very close. The situation with skewed score distributions is also discussed. (SLD)
Descriptors: Equated Scores, Equations (Mathematics), Error of Measurement, Sampling
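As a generic illustration of the delta-method-versus-bootstrap comparison (on a much simpler statistic than the Levine equating functions treated in the article), the sketch below computes a delta-method standard error for a ratio of two independent sample means from deliberately skewed data and checks it against the bootstrap. Everything in it is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.gamma(shape=4.0, scale=2.0, size=800)   # deliberately skewed samples
y = rng.gamma(shape=5.0, scale=2.0, size=900)

# Statistic: ratio of the two sample means, g(mx, my) = my / mx.
mx, my = x.mean(), y.mean()
vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)   # variances of the sample means

# Delta method for independent samples:
# Var(g) ~ (dg/dmy)^2 Var(my) + (dg/dmx)^2 Var(mx) = Var(my)/mx^2 + (my^2/mx^4) Var(mx).
se_delta = np.sqrt(vy / mx**2 + (my**2 / mx**4) * vx)

# Bootstrap standard error of the same statistic.
reps = np.array([
    rng.choice(y, len(y)).mean() / rng.choice(x, len(x)).mean()
    for _ in range(2000)
])
print(se_delta, reps.std(ddof=1))
```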
Hanson, Bradley A.; And Others – 1994
This paper compares various methods of smoothed equipercentile equating and linear equating in the random groups equating design. Three presmoothing methods (based on the beta binomial model, the four-parameter beta binomial model, and a log-linear model) are compared to postsmoothing using cubic splines, linear equating, and unsmoothed equipercentile…
Descriptors: Comparative Analysis, Equated Scores, Error of Measurement, Estimation (Mathematics)
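The presmoothing step itself can be sketched compactly: fit a polynomial loglinear model to the raw score frequencies and equate using the fitted frequencies. The code below, with invented data, fits such a model by Poisson regression; it illustrates only the loglinear presmoothing family, not the cubic-spline postsmoothing or the specific comparison design of the paper.

```python
import numpy as np
import statsmodels.api as sm

def loglinear_presmooth(freq, degree=4):
    """Fit a polynomial loglinear model to score-point frequencies by Poisson regression.

    A degree-C polynomial in the score preserves the first C moments of the observed
    distribution; the fitted (smoothed) frequencies replace the raw ones before equating.
    """
    scores = np.arange(len(freq), dtype=float)
    z = (scores - scores.mean()) / scores.std()   # standardize for numerical stability
    X = sm.add_constant(np.column_stack([z**d for d in range(1, degree + 1)]))
    fit = sm.GLM(freq, X, family=sm.families.Poisson()).fit()
    return fit.fittedvalues

# Invented raw frequencies on a 0-20 score scale.
rng = np.random.default_rng(3)
true_lam = 200.0 * np.exp(-0.5 * ((np.arange(21) - 11.0) / 4.0) ** 2)
freq = rng.poisson(true_lam)
print(np.round(loglinear_presmooth(freq), 1))
```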