Gershon, Richard C. – 1991
The Johnson O'Connor Research Foundation, which produces vocabulary instructional materials for test takers, is in the process of determining the difficulty values of nontechnical words in the English language. To this end, the Foundation writes test items for vocabulary words and tests them in schools. The items are then calibrated using the…
Descriptors: Ability, Difficulty Level, Goodness of Fit, Item Response Theory
Lawrence, Ida M. – 1995
This study examined to what extent, if any, estimates of reliability for a multiple choice test are affected by the presence of large item sets where each set shares common reading material. The purpose of this research was to assess the effect of local item dependence on estimates of reliability for verbal portions of seven forms of the old and…
Descriptors: Estimation (Mathematics), High Schools, Multiple Choice Tests, Reading Tests
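For orientation, a minimal sketch of how local item dependence can show up in reliability estimates: coefficient alpha computed over individual items versus over testlet (passage-level) part scores. The function name and the persons-by-parts layout are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np

def cronbach_alpha(parts):
    """Coefficient alpha from a persons x parts score matrix."""
    k = parts.shape[1]
    return k / (k - 1) * (1 - parts.var(axis=0, ddof=1).sum()
                          / parts.sum(axis=1).var(ddof=1))

# When items within a reading set are locally dependent, alpha computed
# over single items tends to overstate reliability relative to alpha
# computed over testlet (passage-level) sum scores.
```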
Enright, Mary K.; Bejar, Isaac I. – 1989
In this study, the ability of test development staff to predict the difficulty of analogy items was explored. The nature of the item attributes that contributed to test writers' predictions of difficulty as well as actual item difficulty was also investigated. The two expert test writers studied were quite good at predicting item difficulty. Item…
Descriptors: Analogy, Construct Validity, Difficulty Level, Models
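As a rough illustration of relating coded item attributes to observed difficulty (not the authors' actual model), a least-squares fit in the spirit of this entry; the attribute matrix and variable names are hypothetical.

```python
import numpy as np

def fit_difficulty_model(attributes, difficulty):
    """Ordinary least squares predicting item difficulty from coded
    item attributes; returns coefficients (with intercept) and R^2."""
    X = np.column_stack([np.ones(len(difficulty)), attributes])
    beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
    resid = difficulty - X @ beta
    r2 = 1 - (resid @ resid) / np.sum((difficulty - difficulty.mean()) ** 2)
    return beta, r2
```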
Wightman, Lawrence E.; De Champlain, Andre F. – 1994
Two different methods of obtaining three-parameter logistic item response theory (IRT) pretest item parameter estimates were compared for the Graduate Management Admissions Testing Program. The first method consisted of calibrating pretest and operational items simultaneously in a single LOGIST run, that is, a concurrent calibration design. The second approach entailed…
Descriptors: Ability, Comparative Analysis, Estimation (Mathematics), Item Banks
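For readers unfamiliar with the model being calibrated, a minimal sketch of the three-parameter logistic item response function (assuming the conventional D = 1.7 scaling constant); this is an illustration, not LOGIST code. In a concurrent design, pretest and operational items are calibrated in one run so their parameters land on a common scale; separate calibration requires a subsequent linking step.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response: theta is the ability,
    a the discrimination, b the difficulty, c the lower asymptote."""
    return c + (1.0 - c) / (1.0 + np.exp(-1.7 * a * (theta - b)))
```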
Herman, William E. – 1996
Marks made by students on test item booklets were analyzed as a clue to better understanding of the metacognitive strategies employed during completion of a 100-question multiple-choice final examination. Test item booklets of 56 undergraduates were scrutinized for the frequency of the following item markings: (1) no markings at all; (2)…
Descriptors: Higher Education, Metacognition, Multiple Choice Tests, Responses
Holweger, Nancy; Weston, Timothy – 1998
This study compares logistic discriminant function analysis for differential item functioning (DIF) with a technique for the detection of DIF that is based on item response theory rather than the Mantel-Haenszel procedure. In this study, the area between the two item characteristic curves, also called the item characteristic curve method, is…
Descriptors: Item Bias, Item Response Theory, Performance Based Assessment, State Programs
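A minimal sketch of the item-characteristic-curve area idea mentioned here: the unsigned area between the reference- and focal-group ICCs, approximated on a theta grid. This is an illustrative approximation, not the exact closed-form area measure used in the DIF literature.

```python
import numpy as np

def icc_3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

def icc_area(params_ref, params_focal, lo=-4.0, hi=4.0, n=2001):
    """Unsigned area between the two groups' item characteristic curves,
    via the trapezoid rule on an ability grid."""
    theta = np.linspace(lo, hi, n)
    gap = np.abs(icc_3pl(theta, *params_ref) - icc_3pl(theta, *params_focal))
    dtheta = (hi - lo) / (n - 1)
    return float(np.sum((gap[:-1] + gap[1:]) / 2) * dtheta)
```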
Kromrey, Jeffrey D.; Parshall, Cynthia G.; Yi, Qing – 1998
The effects of anchor test characteristics on the accuracy and precision of test equating in the "common items, nonequivalent groups design" were studied. The study also considered the effects of nonparallel base and new forms on the equating solution, and it investigated the effects of differential weighting on the success of equating…
Descriptors: Equated Scores, High Schools, Item Response Theory, Monte Carlo Methods
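As context for the common-items, nonequivalent-groups design, a small sketch of mean-sigma scale linking through the anchor items, one standard way to place new-form IRT difficulties on the old form's scale; the array names are illustrative.

```python
import numpy as np

def mean_sigma_link(b_new_anchor, b_old_anchor):
    """Linking constants A, B so that b_old ~= A * b_new + B, computed
    from the anchor items' difficulty estimates on the two forms."""
    b_new = np.asarray(b_new_anchor, dtype=float)
    b_old = np.asarray(b_old_anchor, dtype=float)
    A = b_old.std(ddof=1) / b_new.std(ddof=1)
    B = b_old.mean() - A * b_new.mean()
    return A, B
```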
van der Linden, Wim J.; Veldkamp, Bernard P.; Reese, Lynda M. – 1998
An integer programming approach to item pool design is presented that can be used to calculate an optimal blueprint for an item pool to support an existing testing program. The results are optimal in the sense that they minimize the efforts involved in actually producing the items as revealed by current item writing patterns. Also, an adaptation…
Descriptors: Higher Education, Item Banks, Item Response Theory, Models
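A toy sketch of the integer-programming idea, written with the PuLP modeling library; the item types, costs, and targets are hypothetical and far simpler than the blueprint model described in this entry. The decision variables are how many items of each design type to commission so that every content/statistical category is covered at minimum writing effort.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpInteger, lpSum

types = ["A", "B", "C"]                        # hypothetical item design types
cost = {"A": 1.0, "B": 1.8, "C": 2.5}          # relative writing effort
covers = {"A": {"algebra"},
          "B": {"algebra", "high_difficulty"},
          "C": {"reading", "high_difficulty"}}
target = {"algebra": 60, "reading": 30, "high_difficulty": 40}

prob = LpProblem("pool_blueprint", LpMinimize)
x = {t: LpVariable(f"x_{t}", lowBound=0, cat=LpInteger) for t in types}
prob += lpSum(cost[t] * x[t] for t in types)                  # total writing effort
for k, need in target.items():                                # category coverage
    prob += lpSum(x[t] for t in types if k in covers[t]) >= need
prob.solve()
print({t: x[t].value() for t in types})
```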
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests
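A compact sketch of the combination described here, under simplified assumptions (Rasch items, a standard-normal prior, a fixed ability cutoff): compute the posterior probability of mastery after each response and stop when it crosses an upper or lower threshold. The names and thresholds are illustrative, not the authors' decision-theoretic rule.

```python
import numpy as np

def mastery_posterior(responses, b, theta_cut=0.0, grid=np.linspace(-4, 4, 161)):
    """Grid approximation to P(theta >= theta_cut | responses) under a
    Rasch model with a standard-normal prior."""
    post = np.exp(-0.5 * grid ** 2)                   # unnormalized prior
    for u, bi in zip(responses, b):
        p = 1.0 / (1.0 + np.exp(-(grid - bi)))
        post = post * p ** u * (1 - p) ** (1 - u)
    post /= post.sum()
    return post[grid >= theta_cut].sum()

# Sequential rule sketch: declare mastery if the posterior probability
# exceeds, say, 0.95; declare non-mastery below 0.05; otherwise continue
# testing with another item.
```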
Nandakumar, Ratna – 1992
The capability of the DIMTEST statistical test to assess essential dimensionality of the model underlying item responses of real tests as opposed to simulated tests was investigated. A variety of real test data from different sources was used to assess essential dimensionality. Based on DIMTEST results, some test data are assessed as fitting an…
Descriptors: Ability, Computer Simulation, Evaluation Methods, Item Response Theory
Clauser, Brian E.; And Others – 1991
Item bias has been a major concern for test developers during recent years. The Mantel-Haenszel statistic has been among the preferred methods for identifying biased items. The statistic's performance in identifying uniform bias in simulated data modeled by producing various levels of difference in the (item difficulty) b-parameter for reference…
Descriptors: Comparative Testing, Difficulty Level, Item Bias, Item Response Theory
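For reference, a minimal sketch of the Mantel-Haenszel common odds ratio and the ETS delta-scale transform usually reported with it; the input layout is an illustrative assumption (one 2x2 table of reference/focal by correct/incorrect per matching-score level).

```python
import numpy as np

def mh_common_odds_ratio(tables):
    """tables: iterable of (A, B, C, D) counts at each score level, where
    A, B = reference correct/incorrect and C, D = focal correct/incorrect."""
    num = sum(A * D / (A + B + C + D) for A, B, C, D in tables)
    den = sum(B * C / (A + B + C + D) for A, B, C, D in tables)
    return num / den

def mh_d_dif(alpha_mh):
    """ETS MH D-DIF: negative values flag items relatively harder for the
    focal group, i.e. uniform DIF in the b-parameter as simulated here."""
    return -2.35 * np.log(alpha_mh)
```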
Boldt, Robert F. – 1983
The project reported here consisted of a sensitivity review of the items of Forms 11, 12, and 13 of the Armed Services Vocational Aptitude Battery (ASVAB). Because administration of this battery is a required step in the accession process, it should be free from perceived bias or offensiveness that could detract from the measurement process. In…
Descriptors: Aptitude Tests, Attitudes, Military Personnel, Opinions
Bart, William M.; Palvia, Rajkumari – 1983
In previous research, no relationship was found between test factor structure and test hierarchical structure. This study found some correspondence between test factor structure and test inter-item dependency structure, as measured by a log-linear model. There was an inconsistency, however, which warrants further study: more significant two-item…
Descriptors: Factor Structure, Interaction, Latent Trait Theory, Mathematical Models
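As a simple illustration of testing a two-item dependency, a chi-square test of independence on the 2x2 table of joint responses, which for a 2x2 table is closely related to the two-way interaction term in a log-linear model; the counts below are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical joint counts for items i and j:
# rows = item i (wrong, right), columns = item j (wrong, right)
table = np.array([[130, 45],
                  [60, 165]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(chi2, p_value)  # a small p suggests a significant two-item dependency
```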
Yen, Wendy M.; Candell, Gregory L. – 1990
Reliabilities are compared for two types of test score data: number correct, and item response patterns. Item-pattern scoring using three-parameter item response theory takes into account how many and which items a student answers correctly. This procedure theoretically results in greater reliability than does number-correct scoring. Empirical…
Descriptors: Elementary Education, Elementary School Students, Item Response Theory, Scores
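A minimal sketch of the contrast drawn here, under a 3PL model: number-correct scoring uses only the count of right answers, while item-pattern scoring weights which items were answered correctly via the likelihood. Grid-search maximum likelihood is shown for clarity, and the names are illustrative.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

def pattern_score(responses, a, b, c, grid=np.linspace(-4, 4, 401)):
    """Maximum-likelihood ability estimate from the full response pattern."""
    p = p_3pl(grid[:, None], a, b, c)                 # grid points x items
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

def number_correct_score(responses):
    """Ignores which items were answered correctly, only how many."""
    return int(np.sum(responses))
```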
Ackerman, Terry A. – 1987
Concern has been expressed over the item response theory (IRT) assumption that a person's ability can be estimated in a unidimensional latent space. To examine whether or not the response to an item requires only a single latent ability, unidimensional ability estimates were compared for data generated from the multidimensional item response…
Descriptors: Ability, Computer Simulation, Difficulty Level, Item Analysis
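A short sketch of generating responses from a compensatory two-dimensional 2PL model of the kind used to produce such data; the parameter shapes and seed are illustrative. Unidimensional ability estimates would then be obtained by fitting a one-dimensional model to these responses.

```python
import numpy as np

def simulate_m2pl(n_persons, a, d, seed=0):
    """Compensatory multidimensional 2PL: P(correct) = logistic(a . theta + d).
    a: (items x 2) discriminations, d: (items,) intercepts."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((n_persons, 2))        # two latent abilities
    p = 1.0 / (1.0 + np.exp(-(theta @ a.T + d)))       # persons x items
    return (rng.random(p.shape) < p).astype(int)
```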