Showing all 15 results
Peer reviewed
Direct link
Singh, Housila P.; Tarray, Tanveer A. – Sociological Methods & Research, 2015
In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…
Descriptors: Item Response Theory, Models, Efficiency, Comparative Analysis
Peer reviewed
Direct link
Shu, Lianghua; Schwarz, Richard D. – Journal of Educational Measurement, 2014
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Descriptors: Item Response Theory, Reliability, Models, Computation
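The first of the four coefficients above, Cronbach's α, has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of total scores). As an illustration only (not code from the article), a minimal pure-Python sketch:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons-by-items score matrix.

    Illustrative sketch: uses population variances throughout.
    """
    k = len(scores[0])  # number of items
    item_vars = [pvariance([row[j] for row in scores]) for j in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

For perfectly parallel items (every examinee scores identically on all items) the coefficient equals 1; any item-level inconsistency pulls it below 1.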
Peer reviewed
PDF on ERIC Download full text
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
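One common IRT proficiency estimator of the kind the report evaluates is the expected a posteriori (EAP) estimate: the posterior mean of ability over a quadrature grid. A minimal sketch under a 2PL model with a standard-normal prior (illustrative assumptions, not the report's specific MST setup):

```python
import math

def eap_theta(responses, items, n_quad=41):
    """EAP ability estimate under a 2PL IRT model (illustrative sketch).

    responses: list of 0/1 item scores.
    items: list of (a, b) discrimination/difficulty pairs.
    Uses a standard-normal prior over an evenly spaced grid on [-4, 4].
    """
    grid = [-4 + 8 * k / (n_quad - 1) for k in range(n_quad)]
    num = den = 0.0
    for theta in grid:
        prior = math.exp(-0.5 * theta * theta)
        like = 1.0
        for u, (a, b) in zip(responses, items):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            like *= p if u else 1.0 - p
        num += theta * prior * like
        den += prior * like
    return num / den
```

Estimation bias of the kind studied in the report is then the systematic difference between such estimates and the generating abilities, aggregated over simulated examinees.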
Peer reviewed
Direct link
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically a longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores
Kang, Taehoon; Petersen, Nancy S. – ACT, Inc., 2009
This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord (1983) characteristic curve method…
Descriptors: Standards, Testing Programs, Test Items, Statistical Distributions
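The "separate calibration with linking" step above can be illustrated with the simpler mean/sigma method (the paper itself uses the Stocking-Lord characteristic curve method): linking constants A and B are chosen so that A·b + B maps new-form item difficulties onto the base scale, matched on the common (anchor) items. A minimal sketch:

```python
from statistics import mean, pstdev

def mean_sigma_linking(b_new, b_base):
    """Mean/sigma linking constants (illustrative sketch).

    b_new, b_base: difficulty estimates of the common items from the
    new-form and base-scale calibrations, respectively.
    Returns (A, B) such that A * b + B places a new-form difficulty b
    on the base scale.
    """
    A = pstdev(b_base) / pstdev(b_new)
    B = mean(b_base) - A * mean(b_new)
    return A, B
```

Concurrent calibration avoids this step by estimating all forms on one scale in a single run; fixed item parameter calibration avoids it by holding the anchor items at their base-scale values.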
Peer reviewed
Direct link
Wang, Tianyou; Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J. – Applied Psychological Measurement, 2008
This article uses simulation to compare two test equating methods under the common-item nonequivalent groups design: the frequency estimation method and the chained equipercentile method. An item response theory model is used to define the true equating criterion, simulate group differences, and generate response data. Three linear equating…
Descriptors: Equated Scores, Item Response Theory, Simulation, Comparative Analysis
von Davier, Matthias; Xu, Xueli; Carstensen, Claus H. – Educational Testing Service, 2009
A general diagnostic model was used to specify and compare two multidimensional item-response-theory (MIRT) models for longitudinal data: (a) a model that handles repeated measurements as multiple, correlated variables over time (Andersen, 1985) and (b) a model that assumes one common variable over time and additional orthogonal variables that…
Descriptors: Models, Item Response Theory, Longitudinal Studies, Measurement
Garner, Mary; Engelhard, George, Jr. – 1997
This paper considers the following questions: (1) what is the relationship between the method of paired comparisons and Rasch measurement theory? (2) what is the relationship between the method of paired comparisons and graph theory? and (3) what can graph theory contribute to the understanding of Rasch measurement theory? It is specifically shown…
Descriptors: Comparative Analysis, Estimation (Mathematics), Graphs, Item Response Theory
Peer reviewed
Tate, Richard L. – Journal of Educational Measurement, 1995
Robustness of the school-level item response theory (IRT) model to violations of distributional assumptions was studied in a computer simulation. In situations where school-level precision might be acceptable for real school comparisons, expected a posteriori estimates of school ability were robust over a range of violations and conditions.…
Descriptors: Comparative Analysis, Computer Simulation, Estimation (Mathematics), Item Response Theory
Peer reviewed
Noonan, Brian W.; And Others – Applied Psychological Measurement, 1992
Studied the extent to which three appropriateness indexes, Z₃, ECIZ4, and W, are well standardized in a Monte Carlo study. The ECIZ4 most closely approximated a normal distribution, and its skewness and kurtosis were more stable and less affected by test length and item response theory model than the others. (SLD)
Descriptors: Comparative Analysis, Item Response Theory, Mathematical Models, Maximum Likelihood Statistics
Peer reviewed
Swaminathan, Hariharan; Rogers, H. Jane – Journal of Educational Measurement, 1990
A logistic regression model for characterizing differential item functioning (DIF) between two groups is presented. A distinction is drawn between uniform and nonuniform DIF in terms of model parameters. A statistic for testing the hypotheses of no DIF is developed, and simulation studies compare it with the Mantel-Haenszel procedure. (Author/TJH)
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
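The Mantel-Haenszel procedure that serves as the comparison baseline in the abstract above reduces to a common odds ratio pooled over matched score strata; a value near 1 indicates no uniform DIF on the studied item. A minimal sketch (illustrative, with hypothetical 2×2 cell counts per stratum):

```python
def mantel_haenszel_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across score strata.

    Illustrative sketch. Each stratum is a tuple
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
    counting examinees at one matched total-score level.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n  # reference-favoring cell product
        den += b * c / n  # focal-favoring cell product
    return num / den
```

The logistic regression approach the article develops generalizes this: a group main effect captures uniform DIF, and a group-by-score interaction captures nonuniform DIF, which Mantel-Haenszel cannot detect.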
Peer reviewed
Hankins, Janette A. – Educational and Psychological Measurement, 1990
The effects of a fixed and variable entry procedure on bias and information of a Bayesian adaptive test were compared. Neither procedure produced biased ability estimates on the average. Bias at the distribution extremes, efficiency curves, item subsets generated for administration, and items required to reach termination are discussed. (TJH)
Descriptors: Adaptive Testing, Aptitude Tests, Bayesian Statistics, Comparative Analysis
Peer reviewed
Tan, E. S.; And Others – Journal of Educational Measurement, 1994
The relationship between first-year results for 115 Dutch medical students and achievement during medical school was studied using an item response theory model for the longitudinal measurement of change with stochastic parameters (developed by Albers et al., 1989). Results indicate that a low rate of growth in the first year persists. (SLD)
Descriptors: Academic Achievement, Change, Comparative Analysis, Foreign Countries
Kim, Seock-Ho; And Others – 1992
Hierarchical Bayes procedures were compared for estimating item and ability parameters in item response theory. Simulated data sets from the two-parameter logistic model were analyzed using three different hierarchical Bayes procedures: (1) the joint Bayesian with known hyperparameters (JB1); (2) the joint Bayesian with information hyperpriors…
Descriptors: Ability, Bayesian Statistics, Comparative Analysis, Equations (Mathematics)
Peer reviewed
Camilli, Gregory – Applied Psychological Measurement, 1992
A mathematical model is proposed to describe how group differences in distributions of abilities, which are distinct from the target ability, influence the probability of a correct item response. In the multidimensional approach, differential item functioning is considered a function of the educational histories of the examinees. (SLD)
Descriptors: Ability, Comparative Analysis, Equations (Mathematics), Factor Analysis