Melzer, Charles W.; And Others – Educational and Psychological Measurement, 1981
The magnitude of statistical bias in the phi coefficient was investigated using computer-simulated examinations in which all students had equal knowledge. Several modifications of phi were tested, but when applied to real examinations, none succeeded in improving its reproducibility when items were re-used with equivalent student groups.…
Descriptors: Correlation, Item Analysis, Mathematical Models, Multiple Choice Tests
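The phi coefficient discussed in the entry above is the standard point correlation for a 2x2 contingency table; a minimal sketch of its computation (the cell labels are illustrative, not taken from the paper):

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 table of paired binary outcomes.

    Cell counts: a = both correct, b = first correct only,
    c = second correct only, d = both incorrect.
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    if denom == 0:
        # Undefined when a marginal total is zero; 0.0 is a common convention.
        return 0.0
    return (a * d - b * c) / denom
```

With perfectly concordant responses (`a = d`, `b = c = 0`) phi is 1; with counts spread evenly across all four cells it is 0.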

Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model of examinee behavior, this paper compares the effect of guessing on item reliability under the AUC procedure and under the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
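The abstract above does not give the scoring formulas, so the sketch below uses one common illustrative convention for AUC credit (full credit for a first-try success, declining linearly to zero when every option was tried) alongside conventional zero-one scoring:

```python
def auc_score(attempts, n_options):
    """Answer-until-correct partial credit (illustrative rule, not the
    paper's formula): 1.0 for a first-try success, 0.0 when the correct
    option was the last one tried. `attempts` counts all responses made,
    including the final correct one (1 .. n_options)."""
    if not 1 <= attempts <= n_options:
        raise ValueError("attempts must be between 1 and n_options")
    return (n_options - attempts) / (n_options - 1)

def zero_one_score(attempts):
    """Conventional zero-one scoring: credit only for a first-try success."""
    return 1 if attempts == 1 else 0
```

The point of the comparison is that AUC keeps information that zero-one scoring discards: a second-try success scores 2/3 on a four-option item rather than 0.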

Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests

Bradshaw, Charles W., Jr. – 1968
A method for determining invariant item parameters is presented, along with a scheme for obtaining test scores which are interpretable in terms of a common metric. The method assumes a unidimensional latent trait and uses a three parameter normal ogive model. The assumptions of the model are explored, and the methods for calculating the proposed…
Descriptors: Equated Scores, Item Analysis, Latent Trait Theory, Mathematical Models

Samejima, Fumiko – 1984
To evaluate our methods and approaches for estimating the operating characteristics of discrete item responses, it is necessary to try other comparable methods on similar sets of data. LOGIST 5 was chosen for this reason and was applied to the hypothetical test items, which follow the normal ogive model and were used frequently in…
Descriptors: Computer Simulation, Computer Software, Estimation (Mathematics), Item Analysis

Choppin, Bruce – 1982
On well-constructed multiple-choice tests, the most serious threat to measurement is not variation in item discrimination, but the guessing behavior that may be adopted by some students. Ways of ameliorating the effects of guessing are discussed, especially for problems in latent trait models. A new item response model, including an item parameter…
Descriptors: Ability, Algorithms, Guessing (Tests), Item Analysis

Bock, R. Darrell – Psychometrika, 1972
Descriptors: Ability Identification, Comparative Analysis, Item Analysis, Mathematical Models

Ryan, Joseph P.; Hamm, Debra W. – 1976
A procedure is described for increasing the reliability of tests after they have been given and for developing shorter but more reliable tests. Eight tests administered to 200 graduate students studying educational research are analyzed. The analysis considers the original tests, the items loading on the first factor of the test, and the items…
Descriptors: Career Development, Factor Analysis, Factor Structure, Item Analysis

Samejima, Fumiko – 1984
The simple sum procedure of the conditional PDF approach (plausibility of distractor function), combined with the normal approach method, was applied to estimate the plausibility functions of the distractors of the Level II vocabulary subtest items of the Iowa Tests of Basic Skills. In doing so, the normal ogive model was adopted for the correct…
Descriptors: Adaptive Testing, Elementary Secondary Education, Estimation (Mathematics), Item Analysis

Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests

Kingsbury, G. Gage – 1985
A procedure for assessing content-area and total-test dimensionality which uses response function discrepancies (RFD) was studied. Three different versions of the RFD procedure were compared to Bejar's principal axis content-area procedure and Indow and Samejima's exploratory factor analytic technique. The procedures were compared in terms of the…
Descriptors: Achievement Tests, Comparative Analysis, Elementary Education, Estimation (Mathematics)

Drasgow, Fritz; And Others – 1987
This paper addresses the information revealed in incorrect option selection on multiple choice items. Multilinear Formula Scoring (MFS), a theory providing methods for solving psychological measurement problems of long standing, is first used to estimate option characteristic curves for the Armed Services Vocational Aptitude Battery Arithmetic…
Descriptors: Aptitude Tests, Item Analysis, Latent Trait Theory, Mathematical Models

Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis

Tinsley, Howard E. A.; Dawis, Rene V. – 1972
Selection of items for analogy tests according to the Rasch item probability of "goodness of fit" to the model is compared with three commonly used item selection criteria: item discrimination, item difficulty, and item-ability correlation. Word, picture, symbol and number analogies in multiple choice format were administered to several…
Descriptors: College Students, Correlation, Evaluation Criteria, Goodness of Fit

Abrahamowicz, Michal; Ramsay, James O. – Psychometrika, 1992
A nonparametric multicategorical model for multiple-choice data is proposed as an extension of the binary spline model of J. O. Ramsay and M. Abrahamowicz (1989). Results of two Monte Carlo studies illustrate the model, which approximates probability functions by rational splines. (SLD)
Descriptors: Equations (Mathematics), Estimation (Mathematics), Graphs, Item Analysis