Showing 1 to 15 of 21 results
Peer reviewed
Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian – Applied Psychological Measurement, 2010
A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…
Descriptors: Item Response Theory, Measurement, Models, Statistical Analysis
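As context for the record above, a minimal sketch of a mixture Rasch measurement model (generic notation, assumed rather than quoted from the article): for examinee j in latent class g and item i,
\[ P(Y_{ij} = 1 \mid \theta_{jg}, g) = \frac{\exp(\theta_{jg} - \beta_{ig})}{1 + \exp(\theta_{jg} - \beta_{ig})}, \]
so the item difficulties \(\beta_{ig}\) are class specific while the ability \(\theta_{jg}\) varies continuously within each class; this within-class variability on the latent variable is what distinguishes the LTA-MRM from an LTA built on a purely latent class measurement model.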
Peer reviewed
Kim, Sangwon; Kim, Seock-Ho; Kamphaus, Randy W. – School Psychology Quarterly, 2010
Gender differences in aggression have typically been based on studies utilizing a mean difference method. From a measurement perspective, this method is inherently problematic unless an aggression measure possesses comparable validity across gender. Stated differently, establishing measurement invariance on the measure of aggression is…
Descriptors: Test Items, Females, Factor Analysis, Inferences
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S.; Alagoz, Cigdem; Kim, Sukwoo – Journal of Educational Measurement, 2007
Data from a large-scale performance assessment (N = 105,731) were analyzed with five differential item functioning (DIF) detection methods for polytomous items to examine the congruence among the DIF detection methods. Two different versions of the item response theory (IRT) model-based likelihood ratio test, the logistic regression likelihood…
Descriptors: Performance Based Assessment, Performance Tests, Item Response Theory, Test Bias
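For readers unfamiliar with the IRT likelihood ratio approach named above, a hedged sketch of its standard form (not quoted from the article): a compact model constrains the studied item's parameters to be equal across groups, an augmented model frees them, and
\[ G^2 = -2\left[\ln L_{\text{compact}} - \ln L_{\text{augmented}}\right] \]
is referred to a chi-square distribution with degrees of freedom equal to the number of parameters freed in the augmented model.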
Kim, Seock-Ho; Cohen, Allan S.; DiStefano, Christine A.; Kim, Sooyeon – 1998
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) in the partial credit model were investigated using simulated data. The partial credit model with four ordered performance levels was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. Three different…
Descriptors: Item Bias, Simulation, Test Items
De Ayala, R. J.; Kim, Seock-Ho; Stapleton, Laura M.; Dayton, C. Mitchell – 1999
Differential item functioning (DIF) occurs when an item displays different statistical properties for different groups after the groups are matched on an ability measure. For instance, with binary data, DIF exists when there is a difference in the conditional probabilities of a correct response for two manifest groups. This paper…
Descriptors: Item Bias, Monte Carlo Methods, Test Items
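The binary-data definition in the abstract above can be written compactly (a sketch in generic notation): item i shows DIF when
\[ P(Y_i = 1 \mid \theta, G = \text{reference}) \neq P(Y_i = 1 \mid \theta, G = \text{focal}) \]
for at least some values of the matching ability \(\theta\); when the two conditional probability curves coincide for all \(\theta\), the item is DIF free.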
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S. – Applied Psychological Measurement, 1998
Compared three methods for developing a common metric under item response theory through simulation. For smaller numbers of common items, linking using the characteristic curve method yielded smaller root mean square differences for both item discrimination and difficulty parameters. For larger numbers of common items, the three methods were…
Descriptors: Comparative Analysis, Difficulty Level, Item Response Theory, Simulation
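As a brief, assumed sketch of the linking problem compared in this study (generic notation): under a linear transformation of the ability metric, \(\theta^* = A\theta + B\), item parameters transform as \(a^* = a/A\) and \(b^* = Ab + B\). A characteristic curve method (e.g., of the Stocking-Lord type) chooses A and B to minimize a criterion such as
\[ F(A, B) = \sum_{q} \left[ T(\theta_q) - T^*(\theta_q; A, B) \right]^2, \]
where T and T* are the test characteristic curves of the common items on the two metrics, whereas moment-based methods match only the means and standard deviations of the common-item parameter estimates.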
Cohen, Allan S.; Kim, Seock-Ho; Wollack, James A. – 1998
This paper provides a review of procedures for detection of differential item functioning (DIF) for item response theory (IRT) and observed score methods for the graded response model. In addition, data from a test anxiety scale were analyzed to examine the congruence among these procedures. Data from Nasser, Takahashi, and Benson (1997) were…
Descriptors: Identification, Item Bias, Item Response Theory, Scores
Kim, Seock-Ho – 2002
Continuation ratio logits are used to model the probabilities of obtaining the ordered categories of a polytomously scored item. This model is an alternative to other models for ordered-category items, such as the graded response model and the generalized partial credit model. The discussion includes a theoretical development of the model, a…
Descriptors: Ability, Classification, Item Response Theory, Mathematical Models
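One common parameterization of the continuation ratio logit (an illustrative assumption; the paper may condition in the other direction) for an item with ordered categories k = 0, 1, ..., m is
\[ \log \frac{P(X = k)}{P(X > k)} = \operatorname{logit} P(X = k \mid X \geq k), \qquad k = 0, \ldots, m - 1, \]
and each of these logits can be given a logistic IRT-style predictor, in contrast to the cumulative logits of the graded response model and the adjacent-category logits of the generalized partial credit model.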
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S. – Applied Psychological Measurement, 1998
Investigated Type I error rates of the likelihood-ratio test for the detection of differential item functioning (DIF) using Monte Carlo simulations under the graded-response model. Type I error rates were within theoretically expected values for all six combinations of sample sizes and ability-matching conditions at each of the nominal alpha…
Descriptors: Ability, Item Bias, Item Response Theory, Monte Carlo Methods
Kim, Seock-Ho – 1997
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item parameters. Simulated data sets were analyzed using two different Bayes estimation procedures, two-stage hierarchical Bayes estimation (HB2) and marginal Bayesian estimation with known hyperparameters (MB), and marginal maximum…
Descriptors: Bayesian Statistics, Difficulty Level, Estimation (Mathematics), Item Bias
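For reference, the two-parameter logistic model being estimated has the standard form (a sketch, with D the usual scaling constant):
\[ P(Y_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\left[-D a_i (\theta_j - b_i)\right]}, \]
and the Bayesian procedures compared above differ mainly in whether the priors on the discriminations \(a_i\) and difficulties \(b_i\) have known hyperparameters (MB) or are themselves assigned second-stage distributions, as the HB2 label suggests.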
Kim, Seock-Ho – 2000
This paper is concerned with statistical issues in differential item functioning (DIF). Four subsets of large-scale performance assessment data from the Georgia Kindergarten Assessment Program-Revised (N=105,731; N=10,000; N=1,000; and N=100) were analyzed using three DIF detection methods for polytomous items to examine the congruence among the…
Descriptors: Item Bias, Item Response Theory, Kindergarten, Performance Based Assessment
Peer reviewed
Kim, Seock-Ho; And Others – Psychometrika, 1994
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item and ability parameters through two joint and two marginal Bayesian procedures. Marginal procedures yielded smaller root mean square differences for item and ability parameters, but results for the larger sample size and test length were similar.
Descriptors: Ability, Bayesian Statistics, Computer Simulation, Estimation (Mathematics)
Peer reviewed
Kim, Seock-Ho; And Others – Journal of Educational Measurement, 1995
A method is presented for detection of differential item functioning in multiple groups. This method is closely related to F. M. Lord's chi square for comparing vectors of item parameters estimated in two groups. An example is provided using data from 600 college students taking a mathematics test with and without calculators. (SLD)
Descriptors: Chi Square, College Students, Comparative Analysis, Estimation (Mathematics)
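A hedged sketch of the Lord-type statistic referred to above (generic notation): with \(\hat{v}_1\) and \(\hat{v}_2\) the vectors of item parameter estimates for an item in two groups, and \(\hat{\Sigma}_1\) and \(\hat{\Sigma}_2\) their estimated covariance matrices,
\[ \chi^2 = (\hat{v}_1 - \hat{v}_2)^{\prime} \left( \hat{\Sigma}_1 + \hat{\Sigma}_2 \right)^{-1} (\hat{v}_1 - \hat{v}_2), \]
and the multiple-group method described in this article extends this two-group quadratic form to a simultaneous comparison across all groups.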
Kim, Seock-Ho; Cohen, Allan S. – 1997
Type I error rates of the likelihood ratio test for the detection of differential item functioning (DIF) were investigated using Monte Carlo simulations. The graded response model with five ordered categories was used to generate data sets of a 30-item test for samples of 300 and 1,000 simulated examinees. All DIF comparisons were simulated by…
Descriptors: Ability, Classification, Computer Simulation, Estimation (Mathematics)
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S. – Applied Measurement in Education, 1995
Three procedures for the detection of differential item functioning under item response theory were compared. Data for 2 forms of a mathematics test taken by 1,490 college students were analyzed through F. M. Lord's chi-square, N. S. Raju's area measures, and the likelihood ratio test. (SLD)
Descriptors: Chi Square, College Students, Comparative Analysis, Higher Education