Donoghue, John R.; Allen, Nancy L. – 1991
This Monte Carlo study examined strategies for forming the matching variable for the Mantel-Haenszel (MH) differential item functioning (DIF) procedure. Data were generated using a three-parameter logistic item response theory model, with common guessing parameters. The number of subjects and test length were manipulated, as were the difficulty,…
Descriptors: Comparative Analysis, Difficulty Level, Equations (Mathematics), Item Bias
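A minimal sketch of the kind of simulation described above: dichotomous responses generated under a 3PL model with a common guessing parameter, and the Mantel-Haenszel common odds ratio computed for one studied item with the raw total score as the matching variable. Item parameters, sample sizes, and the choice of matching score are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_3pl(theta, a, b, c):
    """Generate dichotomous responses under a 3PL model (common guessing c)."""
    p = c + (1 - c) / (1 + np.exp(-1.7 * a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(int)

def mantel_haenszel_dif(ref, foc, item, score_ref, score_foc):
    """MH common odds-ratio estimate for one studied item, matched on raw score."""
    num = den = 0.0
    for k in np.union1d(score_ref, score_foc):
        r = ref[score_ref == k, item]
        f = foc[score_foc == k, item]
        n = len(r) + len(f)
        if len(r) == 0 or len(f) == 0:
            continue
        A, B = r.sum(), len(r) - r.sum()        # reference group right / wrong
        C, D = f.sum(), len(f) - f.sum()        # focal group right / wrong
        num += A * D / n
        den += B * C / n
    return num / den if den > 0 else np.nan

# Example: 40-item test, one item harder for the focal group (uniform DIF).
n_items = 40
a = rng.uniform(0.8, 1.6, n_items)
b = rng.normal(0, 1, n_items)
c = np.full(n_items, 0.2)                       # common guessing parameter
theta_ref = rng.normal(0, 1, 1000)
theta_foc = rng.normal(0, 1, 1000)

b_foc = b.copy()
b_foc[0] += 0.5                                  # DIF on item 0
ref = simulate_3pl(theta_ref, a, b, c)
foc = simulate_3pl(theta_foc, a, b_foc, c)

alpha_mh = mantel_haenszel_dif(ref, foc, 0, ref.sum(axis=1), foc.sum(axis=1))
print("MH common odds ratio for studied item:", round(alpha_mh, 2))
```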
Peer reviewed Peer reviewed
Jansen, Margo G. H. – Journal of Educational Statistics, 1986
In this paper a Bayesian procedure is developed for the simultaneous estimation of the reading ability and difficulty parameters that the multiplicative Poisson model assumes to underlie reading errors. According to several criteria, the Bayesian estimates are better than comparable maximum likelihood estimates. (Author/JAZ)
Descriptors: Achievement Tests, Bayesian Statistics, Comparative Analysis, Difficulty Level
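Under the multiplicative Poisson model, the error count of person i on text j is Poisson with mean xi_i * delta_j (a person parameter times a text difficulty parameter). A minimal sketch of conjugate Bayesian estimation of one person parameter with the difficulties treated as known follows; the gamma prior and its hyperparameters are illustrative assumptions, not the article's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

delta = np.array([0.8, 1.0, 1.3, 1.6])          # known text difficulties
xi_true = 2.0                                    # person error-rate parameter
counts = rng.poisson(xi_true * delta)            # observed error counts per text

# Maximum likelihood estimate: total errors / total difficulty.
xi_ml = counts.sum() / delta.sum()

# Gamma(a0, b0) prior on xi => posterior Gamma(a0 + sum(counts), b0 + sum(delta)).
a0, b0 = 2.0, 1.0
xi_bayes = (a0 + counts.sum()) / (b0 + delta.sum())   # posterior mean

print("ML estimate:            ", round(xi_ml, 3))
print("Bayes (posterior mean): ", round(xi_bayes, 3))
```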
Huck, Schuyler W.; And Others – Educational and Psychological Measurement, 1981
Believing that examinee-by-item interaction should be conceptualized as true score variability rather than as a result of errors of measurement, Lu proposed a modification of Hoyt's analysis of variance reliability procedure. Via a computer simulation study, it is shown that Lu's approach does not separate interaction from error. (Author/RL)
Descriptors: Analysis of Variance, Comparative Analysis, Computer Programs, Difficulty Level
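Hoyt's procedure treats the persons-by-items score matrix as a two-way ANOVA and estimates reliability as (MS_persons - MS_residual) / MS_persons. A minimal sketch of that baseline computation (not Lu's modification) is below; with one observation per cell the interaction and error sources are pooled in the residual mean square, which is the point at issue in the simulation.

```python
import numpy as np

def hoyt_reliability(X):
    """Hoyt ANOVA reliability for a persons x items score matrix."""
    n, k = X.shape
    grand = X.mean()
    ss_persons = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_items = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((X - grand) ** 2).sum()
    ss_resid = ss_total - ss_persons - ss_items       # interaction + error
    ms_persons = ss_persons / (n - 1)
    ms_resid = ss_resid / ((n - 1) * (k - 1))
    return (ms_persons - ms_resid) / ms_persons

rng = np.random.default_rng(2)
ability = rng.normal(0, 1, 200)
difficulty = rng.normal(0, 1, 20)
scores = (rng.random((200, 20)) <
          1 / (1 + np.exp(-(ability[:, None] - difficulty)))).astype(float)
print("Hoyt reliability estimate:", round(hoyt_reliability(scores), 3))
```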
Kromrey, Jeffrey D.; Bacon, Tina P. – 1992
A Monte Carlo study was conducted to estimate the small sample standard errors and statistical bias of psychometric statistics commonly used in the analysis of achievement tests. The statistics examined in this research were: (1) the index of item difficulty; (2) the index of item discrimination; (3) the corrected item-total point-biserial…
Descriptors: Achievement Tests, Comparative Analysis, Difficulty Level, Estimation (Mathematics)
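The statistics named in the abstract are standard classical test theory quantities; a short sketch computing them on simulated data follows. The upper-lower 27% rule for the discrimination index is one common convention, assumed here rather than taken from the paper.

```python
import numpy as np

def item_statistics(X):
    """Difficulty (p), upper-lower discrimination, and corrected point-biserial per item."""
    n, k = X.shape
    p = X.mean(axis=0)                           # item difficulty (proportion correct)
    total = X.sum(axis=1)
    order = np.argsort(total)
    cut = int(round(0.27 * n))                   # upper and lower 27% groups
    stats = []
    for j in range(k):
        rest = total - X[:, j]                   # total score excluding item j
        r_pb = np.corrcoef(X[:, j], rest)[0, 1]  # corrected item-total point-biserial
        d = X[order[-cut:], j].mean() - X[order[:cut], j].mean()
        stats.append((p[j], d, r_pb))
    return stats

rng = np.random.default_rng(3)
theta = rng.normal(0, 1, 500)
b = rng.normal(0, 1, 10)
X = (rng.random((500, 10)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
for j, (p, d, r) in enumerate(item_statistics(X)):
    print(f"item {j}: difficulty={p:.2f}  discrimination={d:.2f}  corrected r_pb={r:.2f}")
```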
Jones, Patricia B.; And Others – 1987
In order to determine the effectiveness of multidimensional scaling (MDS) in recovering the dimensionality of a set of dichotomously-scored items, data were simulated in one, two, and three dimensions for a variety of correlations with the underlying latent trait. Similarity matrices were constructed from these data using three margin-sensitive…
Descriptors: Cluster Analysis, Correlation, Difficulty Level, Error of Measurement
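A sketch of the general idea, assuming the phi correlation as a margin-sensitive similarity measure and classical (Torgerson) MDS; the specific similarity measures and MDS program used in the study are not reproduced. Data are generated with two independent dimensions, so two sizeable eigenvalues are expected.

```python
import numpy as np

def classical_mds_eigenvalues(D):
    """Eigenvalues of the double-centered squared-dissimilarity matrix (Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J
    return np.sort(np.linalg.eigvalsh(B))[::-1]

rng = np.random.default_rng(4)
theta = rng.normal(0, 1, (1000, 2))              # two latent dimensions
loadings = np.zeros((20, 2))
loadings[:10, 0] = rng.uniform(0.8, 1.5, 10)     # items 0-9 load on dimension 1
loadings[10:, 1] = rng.uniform(0.8, 1.5, 10)     # items 10-19 load on dimension 2
b = rng.normal(0, 1, 20)
p = 1 / (1 + np.exp(-(theta @ loadings.T - b)))
X = (rng.random(p.shape) < p).astype(int)

phi = np.corrcoef(X.T)                           # margin-sensitive inter-item similarity
D = np.sqrt(2 * (1 - phi))                       # similarity -> dissimilarity
eig = classical_mds_eigenvalues(D)
print("Leading MDS eigenvalues:", np.round(eig[:5], 2))
```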
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
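A sketch of the data-generation side only: normal ogive response probabilities (two-parameter here, i.e., guessing fixed at zero), alongside the logistic curve with the usual D = 1.7 scaling constant that the 3PL reduces to when c = 0. Parameter values and sample sizes are arbitrary, and no Logist-style estimation is attempted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def normal_ogive_prob(theta, a, b):
    """P(correct) under the two-parameter normal ogive model."""
    return norm.cdf(a * (theta[:, None] - b))

def logistic_prob(theta, a, b, c=0.0, D=1.7):
    """P(correct) under the 3PL (logistic) model with scaling constant D."""
    return c + (1 - c) / (1 + np.exp(-D * a * (theta[:, None] - b)))

theta = rng.normal(0, 1, 2000)
a = rng.uniform(0.7, 1.5, 10)
b = rng.normal(0, 1, 10)

responses = (rng.random((2000, 10)) < normal_ogive_prob(theta, a, b)).astype(int)
gap = np.abs(normal_ogive_prob(theta, a, b) - logistic_prob(theta, a, b)).max()
print("Simulated response matrix shape:", responses.shape)
print("Max |normal ogive - logistic| probability difference:", round(gap, 3))
```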
Owston, Ronald D. – 1979
The development of a probabilistic model for validating Gagne's learning hierarchies is described. Learning hierarchies are defined as paired networks of intellectual tasks arranged so that a substantial amount of positive transfer occurs from tasks in a lower position to connected ones in a higher position. This probabilistic validation technique…
Descriptors: Associative Learning, Classification, Difficulty Level, Mathematical Models
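To make the validation logic concrete, a hypothetical check of a single prerequisite link is sketched below: compare the conditional pass rates on the higher task given mastery or non-mastery of the lower task, and count disconfirming response patterns (higher task passed while the lower task is failed). This illustrates the general idea of probabilistic hierarchy validation, not Owston's model.

```python
import numpy as np

def check_link(lower, higher):
    """Summarize evidence for a hypothesized prerequisite link between two tasks."""
    lower, higher = np.asarray(lower), np.asarray(higher)
    p_given_pass = higher[lower == 1].mean()
    p_given_fail = higher[lower == 0].mean()
    disconfirming = np.mean((higher == 1) & (lower == 0))
    return p_given_pass, p_given_fail, disconfirming

rng = np.random.default_rng(6)
lower = (rng.random(300) < 0.7).astype(int)
# The higher task is much easier to pass when the lower task is mastered.
higher = np.where(lower == 1, rng.random(300) < 0.6, rng.random(300) < 0.1).astype(int)

pp, pf, dis = check_link(lower, higher)
print(f"P(pass higher | pass lower) = {pp:.2f}")
print(f"P(pass higher | fail lower) = {pf:.2f}")
print(f"Proportion of disconfirming patterns = {dis:.2f}")
```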
Tucker, Ledyard R.; And Others – 1986
A Monte Carlo study of five indices of dimensionality of binary items used a computer model that allowed sampling of both items and people. Five parameters were systematically varied in a factorial design: (1) number of common factors from one to five; (2) number of items, including 20, 30, 40, and 60; (3) sample sizes of 125 and 500; (4) nearly…
Descriptors: Correlation, Difficulty Level, Educational Research, Expectancy Tables
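As a point of reference, one generic eigenvalue-based index of dimensionality is sketched below on binary data simulated with two independent factors and simple structure; the five indices actually examined in the study are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_people, n_items = 500, 30
theta = rng.normal(0, 1, (n_people, 2))
load = np.zeros((n_items, 2))
load[:15, 0] = rng.uniform(0.8, 1.5, 15)         # first half loads on factor 1
load[15:, 1] = rng.uniform(0.8, 1.5, 15)         # second half loads on factor 2
b = rng.normal(0, 1, n_items)
p = 1 / (1 + np.exp(-(theta @ load.T - b)))
X = (rng.random(p.shape) < p).astype(int)

# Eigenvalues of the inter-item correlation matrix as a crude dimensionality index.
eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X.T)))[::-1]
print("First five eigenvalues:", np.round(eig[:5], 2))
print("Eigenvalues greater than 1 (Kaiser rule):", int((eig > 1).sum()))
```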
Kirisci, Levent; Hsu, Tse-Chi – 1992
A predictive adaptive testing (PAT) strategy was developed based on statistical predictive analysis, and its feasibility was studied by comparing its performance to that of the Flexilevel, Bayesian modal, and expected a posteriori (EAP) strategies in a simulated environment. The proposed adaptive test is based on the idea of using item difficulty…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Computer Assisted Testing
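A minimal sketch of the expected a posteriori (EAP) ability estimate that serves as one of the comparison strategies: the posterior mean over a quadrature grid under a standard normal prior and a 2PL likelihood. Item parameters and responses are illustrative.

```python
import numpy as np

def eap_estimate(responses, a, b, n_quad=61):
    """EAP ability estimate by numerical integration over a theta grid."""
    theta = np.linspace(-4, 4, n_quad)                     # quadrature points
    prior = np.exp(-0.5 * theta ** 2)                      # ~ N(0, 1), unnormalized
    p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))        # P(correct | theta) per item
    like = np.prod(np.where(responses == 1, p, 1 - p), axis=1)
    post = prior * like
    return (theta * post).sum() / post.sum()

a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
b = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])
responses = np.array([1, 1, 1, 0, 0])
print("EAP ability estimate:", round(eap_estimate(responses, a, b), 3))
```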
Patience, Wayne M.; Reckase, Mark D. – 1979
An experiment was performed with computer-generated data to investigate some of the operational characteristics of tailored testing as they are related to various provisions of the computer program and item pool. With respect to the computer program, two characteristics were varied: the size of the step of increase or decrease in item difficulty…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Error of Measurement
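A sketch of the fixed-step branching rule whose step size is one of the manipulated program characteristics: after a correct answer the target difficulty rises by the step, after an incorrect answer it falls, and the closest available item in the pool is administered next. The Rasch response model, pool size, and step size here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def tailored_test(theta, pool_b, step=0.5, n_items=20):
    """Administer a fixed-step tailored test from a difficulty-calibrated item pool."""
    available = np.ones(len(pool_b), dtype=bool)
    target, administered, responses = 0.0, [], []
    for _ in range(n_items):
        idx = np.where(available)[0]
        j = idx[np.argmin(np.abs(pool_b[idx] - target))]   # closest available item
        available[j] = False
        p = 1 / (1 + np.exp(-(theta - pool_b[j])))         # Rasch response probability
        u = int(rng.random() < p)
        administered.append(j)
        responses.append(u)
        target += step if u == 1 else -step                # up-and-down step rule
    return administered, responses, target

pool_b = rng.uniform(-3, 3, 200)                           # 200-item pool
items, resp, final_target = tailored_test(theta=0.8, pool_b=pool_b)
print("Proportion correct:", np.mean(resp), " final difficulty target:", round(final_target, 2))
```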
Carlson, James E.; Spray, Judith A. – 1986
This paper discusses methods currently under study for use with multiple-response data. In addition to Bonferroni inequality methods for controlling the Type I error rate over a set of inferences involving multiple-response data, a recently proposed methodology of plotting the p-values resulting from multiple significance tests is explored. Proficiency…
Descriptors: Cutting Scores, Data Analysis, Difficulty Level, Error of Measurement
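A short sketch of the two ideas named in the abstract, using simulated p-values: the Bonferroni adjustment, which controls the familywise Type I error rate by testing each of m hypotheses at alpha / m, and the ordering of p-values that underlies a p-value plot.

```python
import numpy as np

rng = np.random.default_rng(9)
m, alpha = 20, 0.05
p_values = np.concatenate([rng.uniform(0, 1, 17),          # true null hypotheses
                           rng.uniform(0, 0.002, 3)])      # three real effects

# Bonferroni: reject only if p < alpha / m, keeping the familywise error rate <= alpha.
bonferroni_reject = p_values < alpha / m
print("Rejected with Bonferroni adjustment:", np.where(bonferroni_reject)[0])

# Ordered p-values versus their ranks: points well below the uniform reference
# p = rank / m suggest departures from the "all nulls true" pattern.
order = np.sort(p_values)
for rank, p in enumerate(order[:5], start=1):
    print(f"rank {rank:2d}: p = {p:.4f}  (uniform reference {rank / m:.3f})")
```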