de La Torre, Jimmy; Karelitz, Tzur M. – Journal of Educational Measurement, 2009
Compared to unidimensional item response models (IRMs), cognitive diagnostic models (CDMs) based on latent classes represent examinees' knowledge and item requirements using discrete structures. This study systematically examines the viability of retrofitting CDMs to IRM-based data with a linear attribute structure. The study utilizes a procedure…
Descriptors: Simulation, Item Response Theory, Psychometrics, Evaluation Methods
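For orientation on what a CDM with a linear attribute structure looks like, here is a minimal simulation sketch using the DINA model, one common cognitive diagnostic model. The Q-matrix, slip and guessing values, and the "staircase" attribute patterns are illustrative assumptions only; the de la Torre and Karelitz study may use different models and settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear attribute hierarchy: an examinee who has mastered m attributes
# has mastered exactly attributes 1..m (a "staircase" pattern).
K, J, N = 3, 9, 500                      # attributes, items, examinees
levels = rng.integers(0, K + 1, size=N)  # number of attributes mastered
alpha = (np.arange(1, K + 1)[None, :] <= levels[:, None]).astype(int)

# Illustrative Q-matrix: item j requires the first q_j attributes.
Q = (np.arange(1, K + 1)[None, :] <= rng.integers(1, K + 1, size=J)[:, None]).astype(int)

slip, guess = 0.1, 0.2                                # assumed DINA parameters
eta = (alpha @ Q.T == Q.sum(axis=1)).astype(int)      # mastered all required attributes?
p = np.where(eta == 1, 1 - slip, guess)               # DINA response probability
X = rng.binomial(1, p)                                # simulated item responses
print(X.shape, X.mean())
```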
Cui, Ying; Leighton, Jacqueline P. – Journal of Educational Measurement, 2009
In this article, we introduce a person-fit statistic called the hierarchy consistency index (HCI) to help detect misfitting item response vectors for tests developed and analyzed based on a cognitive model. The HCI ranges from -1.0 to 1.0, with values close to -1.0 indicating that students respond unexpectedly or differently from the responses…
Descriptors: Test Length, Simulation, Correlation, Research Methodology
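The exact HCI formula is defined in the article; as a rough illustration of the general idea only, the sketch below computes a hierarchy-consistency-style index: for each correctly answered item, it checks whether the items requiring a subset of that item's attributes were also answered correctly, and maps the misfit proportion onto [-1, 1]. The Q-matrix and the scoring rule here are assumptions for illustration, not the published statistic.

```python
import numpy as np

def hierarchy_consistency(x, Q):
    """Map the proportion of hierarchy violations for one response vector
    x (0/1 per item) onto [-1, 1]; 1 = fully consistent with the hierarchy.
    Illustrative only -- not necessarily the published HCI formula."""
    J = len(x)
    misfits, comparisons = 0, 0
    for j in range(J):
        if x[j] != 1:
            continue
        for g in range(J):
            # item g is a "prerequisite" if its attributes are a proper subset of item j's
            if g != j and np.all(Q[g] <= Q[j]) and np.any(Q[g] < Q[j]):
                comparisons += 1
                if x[g] == 0:          # got the harder item but missed the easier one
                    misfits += 1
    return 1.0 if comparisons == 0 else 1 - 2 * misfits / comparisons

Q = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]])        # three items, linear hierarchy
print(hierarchy_consistency(np.array([1, 0, 1]), Q))   # inconsistent pattern -> below 1
```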

Oshima, T. C. – Journal of Educational Measurement, 1994
The effect of violating the assumption of nonspeededness on ability and item parameter estimates in item response theory was studied through simulation under three speededness conditions. Results indicate that ability estimation was least affected by speededness but that substantial effects on item parameter estimates were found. (SLD)
Descriptors: Ability, Computer Simulation, Estimation (Mathematics), Item Response Theory
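A minimal sketch of how speededness might be injected into simulated 2PL data follows: responses are generated from the model, then a fraction of examinees have their final items treated as not reached and scored incorrect. The estimation step the study actually evaluates is not shown, and the 2PL form, the proportion of speeded examinees, and the number of unreached items are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N, J = 1000, 40
theta = rng.normal(0, 1, N)              # true abilities
a = rng.uniform(0.8, 2.0, J)             # discrimination
b = rng.normal(0, 1, J)                  # difficulty

p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))   # 2PL probabilities
X = rng.binomial(1, p)                            # unspeeded responses

# Speededness condition: 30% of examinees do not reach the last 8 items,
# and unreached items are scored 0 (one common operationalization).
speeded = rng.random(N) < 0.30
X_speeded = X.copy()
X_speeded[speeded, -8:] = 0

# End-of-test items now look harder than they really are in the speeded data.
print("last-item p-value, unspeeded vs speeded:",
      X[:, -1].mean().round(3), X_speeded[:, -1].mean().round(3))
```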
Shealy, Robin; Stout, William – 1991
A statistical procedure is presented that is designed to test for unidirectional test bias existing simultaneously in several items of an ability test, based on the assumption that test bias is incipient within the two groups' ability differences. The proposed procedure--Simultaneous Item Bias (SIB)--is based on a multidimensional item response…
Descriptors: Ability, Computer Simulation, Equations (Mathematics), Item Bias
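The SIB procedure itself involves a regression correction and a formal test statistic that are not reproduced here. As a rough sketch of the core quantity such procedures are built around, the code below computes an uncorrected weighted difference in studied-subtest scores between reference and focal groups, matched on the remaining (valid) subtest score; the data generation and the weighting scheme are assumptions.

```python
import numpy as np

def weighted_group_difference(studied, matching, group):
    """Uncorrected beta-hat-style index: weighted mean group difference on the
    studied subtest, within matching-subtest score strata.
    group: 0 = reference, 1 = focal. Illustrative sketch only."""
    beta, total = 0.0, 0
    for k in np.unique(matching):
        in_k = matching == k
        ref = studied[in_k & (group == 0)]
        foc = studied[in_k & (group == 1)]
        if len(ref) and len(foc):
            n_k = in_k.sum()
            beta += n_k * (ref.mean() - foc.mean())
            total += n_k
    return beta / total

rng = np.random.default_rng(3)
n = 2000
group = rng.integers(0, 2, n)
matching = rng.integers(0, 21, n)                 # valid-subtest score, 0-20
studied = rng.binomial(5, 0.5 + 0.03 * group)     # studied subtest slightly favors focal group
print(weighted_group_difference(studied, matching, group))
```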
Parshall, Cynthia G.; Davey, Tim; Nering, Mike L. – 1998
When items are selected during a computerized adaptive test (CAT) solely with regard to their measurement properties, it is commonly found that certain items are administered to nearly every examinee, and that a small number of the available items will account for a large proportion of the item administrations. This presents a clear security risk…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Efficiency
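To make the exposure problem concrete, the sketch below runs a crude item selection loop twice over an assumed 2PL pool: once picking the single most informative remaining item, and once picking at random from the five most informative (a simple "randomesque" control). It then reports how concentrated item usage is. The pool, the stand-in ability estimate, and the top-5 rule are assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(11)
J, N, test_len = 200, 500, 20
a = rng.uniform(0.5, 2.5, J)
b = rng.normal(0, 1, J)

def fisher_info_2pl(theta, a, b):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def simulate_exposure(top_k):
    counts = np.zeros(J)
    for _ in range(N):
        theta_hat = rng.normal()           # stands in for the running ability estimate
        available = np.ones(J, bool)
        for _ in range(test_len):
            info = np.where(available, fisher_info_2pl(theta_hat, a, b), -np.inf)
            candidates = np.argsort(info)[-top_k:]   # top-k most informative available items
            j = rng.choice(candidates)
            counts[j] += 1
            available[j] = False
        # (a real CAT would update theta_hat after each response)
    return counts / N                                # exposure rate per item

for k in (1, 5):
    rates = simulate_exposure(k)
    print(f"top-{k}: max exposure {rates.max():.2f}, never administered {(rates == 0).sum()}")
```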
Boekkooi-Timminga, Ellen – 1989
The construction of parallel tests from item response theory (IRT) based item banks is discussed. Tests are considered parallel whenever their information functions are identical. After the methods for constructing parallel tests are considered, the computational complexity of 0-1 linear programming and the heuristic procedure applied are…
Descriptors: Heuristics, Item Banks, Latent Trait Theory, Mathematical Models
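As a reference point for the kind of model discussed here, one standard 0-1 linear programming formulation for assembling a test whose information function tracks a target is sketched below in generic notation; it is not necessarily the exact model used in the report. Here x_i indicates whether item i is selected, I_i(theta_k) is its information at evaluation point theta_k, T(theta_k) is the target information, and n is the test length.

```latex
\begin{aligned}
\min_{x,\,y}\ & y \\
\text{s.t. } & \sum_{i=1}^{N} I_i(\theta_k)\,x_i - T(\theta_k) \;\le\; y, && k = 1,\dots,K,\\
             & T(\theta_k) - \sum_{i=1}^{N} I_i(\theta_k)\,x_i \;\le\; y, && k = 1,\dots,K,\\
             & \sum_{i=1}^{N} x_i = n, \qquad x_i \in \{0,1\},\ \ y \ge 0 .
\end{aligned}
```

For parallel forms, one would add constraints assigning each item to at most one form and use the same target T for every form, so that the assembled information functions coincide.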
Davey, Tim; Parshall, Cynthia G. – 1995
Although computerized adaptive tests acquire their efficiency by successively selecting items that provide optimal measurement at each examinee's estimated level of ability, operational testing programs will typically consider additional factors in item selection. In practice, items are generally selected with regard to at least three, often…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
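To make the "additional factors" concrete, the sketch below scores candidate items by Fisher information minus simple penalties for content imbalance and prior exposure, then picks the best-scoring item. The additive form, the penalty weights, and the toy pool are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def select_item(theta_hat, a, b, content, exposure, need, available,
                w_content=0.5, w_exposure=1.0):
    """Pick the next CAT item by information minus ad hoc penalties.
    need[c] > 0 means content area c is still under-represented;
    exposure[j] is item j's exposure rate so far. Illustrative only."""
    p = 1 / (1 + np.exp(-a * (theta_hat - b)))
    info = a**2 * p * (1 - p)
    bonus = w_content * np.array([need[c] > 0 for c in content])   # favor needed content
    score = info + bonus - w_exposure * exposure                   # penalize overexposed items
    score[~available] = -np.inf
    return int(np.argmax(score))

a = np.array([1.2, 0.9, 1.8, 1.5])
b = np.array([0.0, -0.5, 0.1, 0.8])
content = ["algebra", "geometry", "algebra", "geometry"]
need = {"algebra": 0, "geometry": 1}
exposure = np.array([0.9, 0.1, 0.8, 0.2])
print(select_item(0.0, a, b, content, exposure, need, np.array([True] * 4)))
```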
Hambleton, Ronald K.; And Others – 1990
Item response theory (IRT) model parameter estimates have considerable merit and open up new directions for test development, but misleading results are often obtained because of errors in the item parameter estimates. The problem of the effects of item parameter estimation errors on the test development process is discussed, and the seriousness…
Descriptors: Error of Measurement, Estimation (Mathematics), Item Response Theory, Sampling
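A small sketch of the kind of effect described here: true 2PL item parameters are perturbed with random estimation error, and the test information function computed from the error-laden estimates is compared with the true one at several ability levels. The error standard deviations are arbitrary assumptions standing in for small-sample calibration.

```python
import numpy as np

rng = np.random.default_rng(5)
J = 30
a_true = rng.uniform(0.8, 2.0, J)
b_true = rng.normal(0, 1, J)
# Pretend these came from a small calibration sample (assumed error SDs).
a_est = a_true + rng.normal(0, 0.20, J)
b_est = b_true + rng.normal(0, 0.25, J)

def test_information(theta, a, b):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(a**2 * p * (1 - p))

for theta in (-1.0, 0.0, 1.0):
    print(theta,
          round(test_information(theta, a_true, b_true), 2),   # true information
          round(test_information(theta, a_est, b_est), 2))     # information from estimates
```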
Nandakumar, Ratna – 1989
The theoretical differences between the traditional definition of dimensionality and the more recently defined notion of essential dimensionality are presented. Monte Carlo simulations are used to demonstrate the utility of W. F. Stout's procedure to assess the essential unidimensionality of the latent space underlying a set of items. The…
Descriptors: Definitions, Educational Assessment, Latent Trait Theory, Mathematical Models
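For orientation, one common way Stout's notion of essential unidimensionality is stated is sketched below: conditional on a single latent trait, the item-pair covariances need only vanish on average as test length grows (essential independence), rather than being exactly zero as strict local independence requires. The notation is generic and may differ from the report's.

```latex
\frac{2}{n(n-1)} \sum_{1 \le i < j \le n}
  \bigl|\operatorname{Cov}\!\left(X_i, X_j \mid \Theta = \theta\right)\bigr|
  \;\longrightarrow\; 0
  \quad \text{as } n \to \infty, \ \text{for (almost) all } \theta .
```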
Yen, Wendy M. – 1982
Test scores that are not perfectly reliable cannot be strictly equated unless they are strictly parallel. This fact implies that tau equivalence can be lost if an equipercentile equating is applied to observed scores that are not strictly parallel. Thirty-six simulated data sets are produced to simulate equating tests with different difficulties…
Descriptors: Difficulty Level, Equated Scores, Latent Trait Theory, Methods
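As a companion to this entry, a bare-bones equipercentile equating sketch: an observed score on form X is mapped to the form Y score with the same percentile rank in the two empirical distributions. Smoothing and continuization, which matter in practice, are omitted, and the simulated score distributions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x_scores = rng.binomial(40, 0.55, 5000)   # simulated form X scores (easier form)
y_scores = rng.binomial(40, 0.45, 5000)   # simulated form Y scores (harder form)

def equipercentile(x, x_dist, y_dist):
    """Map a form X score to the form Y score with the same percentile rank.
    Smoothing/continuization, important in practice, is omitted."""
    rank = 100.0 * np.mean(x_dist <= x)    # empirical percentile rank on form X
    return np.percentile(y_dist, rank)     # form Y score at that rank

for x in (15, 22, 30):
    print(x, "->", round(float(equipercentile(x, x_scores, y_scores)), 1))
```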
Carifio, James; And Others – 1990
Possible bias due to sampling problems or low response rates has been a troubling "nuisance" variable in empirical research since seminal and classical studies were done on these problems at the beginning of this century. Recent research suggests that: (1) earlier views of the alleged bias problem were misleading; (2) under a variety of fairly…
Descriptors: Data Collection, Evaluation Methods, Research Problems, Response Rates (Questionnaires)
Potenza, Maria T.; Stocking, Martha L. – 1994
A multiple choice test item is identified as flawed if it has no single best answer. In spite of extensive quality control procedures, the administration of flawed items to test-takers is inevitable. Common strategies for dealing with flawed items in conventional testing, grounded in the principle of fairness to test-takers, are reexamined in the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Multiple Choice Tests, Scoring
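One strategy discussed in this literature is to rescore examinees with the flawed item removed. The sketch below re-estimates ability by EAP (posterior mean over a theta grid) under a 2PL, once with all items and once excluding an assumed flawed item. The item parameters, the flawed-item index, and the normal prior are illustrative assumptions, and in an adaptive test the flawed item would also have influenced routing, which this sketch ignores.

```python
import numpy as np

def eap_theta(responses, a, b, keep=None):
    """EAP ability estimate under a 2PL with a standard normal prior.
    keep is a boolean mask of items to include in the likelihood."""
    if keep is None:
        keep = np.ones(len(responses), bool)
    grid = np.linspace(-4, 4, 161)
    p = 1 / (1 + np.exp(-a[keep][:, None] * (grid - b[keep][:, None])))   # items x grid
    like = np.prod(np.where(responses[keep][:, None] == 1, p, 1 - p), axis=0)
    post = like * np.exp(-0.5 * grid**2)               # unnormalized posterior
    return np.sum(grid * post) / np.sum(post)

a = np.array([1.0, 1.4, 0.8, 1.6, 1.2])
b = np.array([-0.5, 0.0, 0.3, 0.8, 1.2])
x = np.array([1, 1, 0, 0, 1])
flawed = 4                                             # assume the last item is flawed
keep = np.arange(len(x)) != flawed
print(round(eap_theta(x, a, b), 3), round(eap_theta(x, a, b, keep), 3))
```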
Stocking, Martha L. – 1996
The interest in the application of large-scale computerized adaptive testing has served to focus attention on issues that arise when theoretical advances are made operational. Some of these issues stem less from changes in testing conditions and more from changes in testing paradigms. One such issue is that of the order in which questions are…
Descriptors: Adaptive Testing, Cognitive Processes, Comparative Analysis, Computer Assisted Testing
Hambleton, Ronald K.; Jones, Russell W. – 1993
Errors in item parameter estimates have a negative impact on the accuracy of item and test information functions. The estimation errors may be random, but because items with higher levels of discriminating power are more likely to be selected for a test, and these items are most apt to contain positive errors, the result is that item information…
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
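A compact demonstration of the mechanism described here, under assumed numbers: discrimination estimates are the true values plus noise, the apparently best items are chosen by their estimated a-values, and the test information computed from the estimates overstates the information those items actually deliver.

```python
import numpy as np

rng = np.random.default_rng(9)
pool = 400
a_true = rng.uniform(0.5, 1.5, pool)
b_true = rng.normal(0, 1, pool)
a_est = a_true + rng.normal(0, 0.25, pool)   # estimation error (assumed SD)

chosen = np.argsort(a_est)[-30:]             # pick the 30 "most discriminating" items

def info_at(theta, a, b):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.sum(a**2 * p * (1 - p))

theta = 0.0
print("estimated info:", round(info_at(theta, a_est[chosen], b_true[chosen]), 1))
print("true info:     ", round(info_at(theta, a_true[chosen], b_true[chosen]), 1))
```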
Kogut, Jan – 1987
In this paper, the detection of response patterns aberrant from the Rasch model is considered. For this purpose, a new person-fit index, recently developed by I. W. Molenaar (1987), and an iterative estimation procedure are used in a simulation study of Rasch model data mixed with aberrant data. Three kinds of aberrant response behavior are…
Descriptors: Computer Assisted Testing, Computer Simulation, Difficulty Level, Estimation (Mathematics)
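Molenaar's index itself is not reproduced here. As a generic illustration of Rasch-based person-fit checking, the sketch below computes the standardized log-likelihood statistic l_z for a response vector given item difficulties and an ability value; large negative values flag aberrant patterns. The item difficulties and ability value are assumptions.

```python
import numpy as np

def lz_person_fit(x, theta, b):
    """Standardized log-likelihood person-fit statistic under the Rasch model.
    Large negative values suggest an aberrant response pattern."""
    p = 1 / (1 + np.exp(-(theta - b)))                     # Rasch success probabilities
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))   # observed log-likelihood
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p)) # its expectation
    var = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)   # and variance
    return (l0 - mean) / np.sqrt(var)

b = np.linspace(-2, 2, 9)                                  # assumed item difficulties
consistent = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0])         # easy items right, hard items wrong
aberrant   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1])         # reversed (e.g., miskeyed sheet)
print(round(lz_person_fit(consistent, 0.0, b), 2),
      round(lz_person_fit(aberrant, 0.0, b), 2))
```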