Jannarone, Robert J. – Psychometrika, 1986
Conjunctive item response models are introduced such that: (1) sufficient statistics for latent traits are not necessarily additive in item scores; (2) items are not necessarily locally independent; and (3) existing compensatory (additive) item response models including the binomial, Rasch, logistic, and general locally independent model are…
Descriptors: Cognitive Processes, Hypothesis Testing, Latent Trait Theory, Mathematical Models
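The compensatory models this abstract contrasts with conjunctive ones (binomial, Rasch, logistic) all make the response probability a logistic function of an additive term in the latent trait. A minimal sketch of the Rasch and two-parameter logistic response functions, with purely illustrative parameter values not taken from the paper:

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: P(correct) is a logistic function of theta - b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def two_pl_prob(theta, a, b):
    """Two-parameter logistic model: adds a discrimination parameter a."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative values: ability 0.5, difficulty -0.2, discrimination 1.3.
print(rasch_prob(0.5, -0.2))        # about 0.67
print(two_pl_prob(0.5, 1.3, -0.2))  # about 0.71
```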
Ackerman, Terry A. – 1987
One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the responses to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…
Descriptors: Computer Software, Correlation, Estimation (Mathematics), Latent Trait Theory
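Local independence, the assumption discussed above, means that once the latent trait is held fixed, item responses carry no remaining association. A small simulation sketch (the item difficulties and sample size are assumptions, not values from the study) that checks the conditional covariance of two Rasch items at a fixed ability:

```python
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Two items answered independently by examinees who all share theta = 0
# (difficulties -0.5 and 0.5 are assumed values).
n = 100_000
item1 = rng.random(n) < rasch_prob(0.0, -0.5)
item2 = rng.random(n) < rasch_prob(0.0, 0.5)

# Under local independence the covariance at fixed theta should be near zero.
print(np.cov(item1, item2)[0, 1])
```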
McKinley, Robert L.; Reckase, Mark D. – 1984
To assess the effects of correlated abilities on test characteristics, and to explore the effects of correlated abilities on the use of a multidimensional item response theory model which does not explicitly account for such a correlation, two tests were constructed. One had two relatively unidimensional subsets of items, the other had all…
Descriptors: Ability, Correlation, Factor Structure, Item Analysis
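One way such a study can generate correlated abilities is to draw them from a bivariate normal distribution and pass them through a compensatory multidimensional logistic item. The correlation, discrimination weights, and difficulty below are placeholders, not the values McKinley and Reckase used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw two correlated abilities (correlation 0.6 is a placeholder).
rho = 0.6
thetas = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)

# Compensatory multidimensional logistic item: the response probability
# depends on a weighted sum of the two abilities (weights a, intercept d assumed).
a = np.array([1.2, 0.4])
d = 0.0
p = 1.0 / (1.0 + np.exp(-(thetas @ a + d)))
responses = rng.random(5000) < p

print(np.corrcoef(thetas.T)[0, 1])  # recovered ability correlation, about 0.6
print(responses.mean())             # overall proportion correct
```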

Cattell, Raymond B.; Krug, Samuel E. – Educational and Psychological Measurement, 1986
Critics have occasionally asserted that the number of factors in the 16PF tests is too large. This study discusses factor-analytic methodology and reviews more than 50 studies in the field. It concludes that the number of important primaries encapsulated in the series is no fewer than the stated number. (Author/JAZ)
Descriptors: Correlation, Cross Cultural Studies, Factor Analysis, Maximum Likelihood Statistics

Harrison, David A. – Journal of Educational Statistics, 1986
Multidimensional item response data were created. The strength of a general factor, the number of common factors, the distribution of items loading on common factors, and the number of items in simulated tests were manipulated. LOGIST effectively recovered both item and trait parameters in nearly all of the experimental conditions. (Author/JAZ)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Correlation
Livingston, Samuel A. – 1986
This paper deals with the fairness of a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and…
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
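A standard way to place scores from different variable sections on a common scale is to link them through the shared common section. The sketch below shows generic chained linear equating through an anchor; it is offered only as background and is not necessarily the procedure Livingston analyzes:

```python
import numpy as np

def chained_linear_equate(var_x, anchor_x, var_y, anchor_y):
    """Link form-X variable-section scores to the form-Y scale through the
    common (anchor) section: X -> anchor in the group taking X, then
    anchor -> Y in the group taking Y. A generic chained linear sketch."""
    slope_xa = np.std(anchor_x) / np.std(var_x)      # X to anchor scale
    slope_ay = np.std(var_y) / np.std(anchor_y)      # anchor to Y scale
    slope = slope_xa * slope_ay
    intercept = (np.mean(var_y)
                 + slope_ay * (np.mean(anchor_x) - np.mean(anchor_y))
                 - slope * np.mean(var_x))
    return slope * np.asarray(var_x) + intercept
```

The common section plays the role of the anchor in both links, which is what makes scores earned on different variable sections comparable.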
Zwick, Rebecca – 1986
Although perfectly scalable items rarely occur in practice, Guttman's concept of a scale has proved to be valuable to the development of measurement theory. If the score distribution is uniform and there is an equal number of items at each difficulty level, both the elements and the eigenvalues of the Pearson correlation matrix of dichotomous…
Descriptors: Correlation, Difficulty Level, Item Analysis, Latent Trait Theory
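A perfect Guttman scale can be written down directly, which makes the structure of its Pearson (phi) correlation matrix easy to inspect. A small sketch with one item per difficulty level and a uniform score distribution (the five-item example is illustrative, not from the paper):

```python
import numpy as np

# A perfect Guttman scale: a person at score level k answers exactly the
# k easiest items correctly (five illustrative items, scores 0 through 5).
n_items = 5
patterns = np.array([[1] * k + [0] * (n_items - k) for k in range(n_items + 1)])

# Uniform score distribution: each response pattern occurs once.
r = np.corrcoef(patterns, rowvar=False)   # Pearson (phi) correlations of the items
eigvals = np.linalg.eigvalsh(r)[::-1]     # eigenvalues, largest first

print(np.round(r, 2))
print(np.round(eigvals, 3))
```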
Eignor, Daniel R.; Stocking, Martha L. – 1986
A previous study of pre-equating the Scholastic Aptitude Test (SAT) using item response theory provided unacceptable equating results for SAT-mathematical data. The purpose of this study was to investigate two possible explanations for these unacceptable pre-equating results. Specifically, the calibration process, which made use of the…
Descriptors: College Entrance Examinations, Equated Scores, Higher Education, Latent Trait Theory
Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting which assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation
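In a unidimensional IRT-based CAT, the next item is typically the one with maximum Fisher information at the provisional ability estimate. A minimal sketch of that selection step under the two-parameter logistic model, with an invented item pool:

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of a two-parameter logistic item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Invented item pool: rows are (discrimination a, difficulty b).
pool = np.array([[1.0, -1.0], [1.5, 0.0], [0.8, 0.5], [2.0, 1.2]])
theta_hat = 0.3   # provisional ability estimate

info = info_2pl(theta_hat, pool[:, 0], pool[:, 1])
print(int(np.argmax(info)), np.round(info, 3))   # index of the next item to administer
```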
Sarvela, Paul D. – 1986
Four discrimination indices were compared, using score distributions which were normal, bimodal, and negatively skewed. The score distributions were systematically varied to represent the common circumstances of a military training situation using criterion-referenced mastery tests. Three 20-item tests were administered to 110 simulated subjects.…
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Analysis, Mastery Tests
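Two common discrimination indices for dichotomous items are the point-biserial correlation and the upper-lower group difference; the sketch below shows generic versions of both, which are not necessarily among the four indices Sarvela compares. The simulated data are random placeholders:

```python
import numpy as np

def point_biserial(item, total):
    """Correlation between a 0/1 item score and the total test score."""
    return np.corrcoef(item, total)[0, 1]

def upper_lower_index(item, total, frac=0.27):
    """Proportion correct in the top-scoring group minus the bottom group."""
    k = max(1, int(frac * len(total)))
    order = np.argsort(total)
    return item[order[-k:]].mean() - item[order[:k]].mean()

# Random placeholder data: 110 simulated examinees, 20 dichotomous items.
rng = np.random.default_rng(2)
responses = (rng.random((110, 20)) < 0.7).astype(float)
total = responses.sum(axis=1)

print(point_biserial(responses[:, 0], total))
print(upper_lower_index(responses[:, 0], total))
```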
Weiss, David J., Ed. – 1985
This report contains the Proceedings of the 1982 Item Response Theory and Computerized Adaptive Testing Conference. The papers and their discussions are organized into eight sessions: (1) "Developments in Latent Trait Theory," with papers by Fumiko Samejima and Michael V. Levine; (2) "Parameter Estimation," with papers by…
Descriptors: Achievement Tests, Adaptive Testing, Branching, Computer Assisted Testing
Hambleton, Ronald K.; Rogers, H. Jane – 1986
This report was designed to respond to two major methodological shortcomings in the item bias literature: (1) misfitting test models; and (2) the use of significance tests. Specifically, the goals of the research were to describe a newly developed method known as the "plot method" for identifying potentially biased test items and to…
Descriptors: Criterion Referenced Tests, Culture Fair Tests, Difficulty Level, Estimation (Mathematics)
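One simple graphical screen for potentially biased items is to compare item difficulties (proportions correct) across two groups and flag items that fall far from the overall trend. The sketch below is a generic difficulty-comparison check, not the authors' "plot method"; all values are invented:

```python
import numpy as np

def flag_items(p_group1, p_group2, z=2.0):
    """Flag items whose difficulty difference between two groups departs
    markedly from the average difference across all items."""
    d = np.asarray(p_group1) - np.asarray(p_group2)
    return np.where(np.abs(d - d.mean()) > z * d.std())[0]

# Illustrative proportions correct for six items in two examinee groups.
p1 = np.array([0.80, 0.65, 0.70, 0.55, 0.90, 0.40])
p2 = np.array([0.78, 0.60, 0.45, 0.52, 0.88, 0.38])
print(flag_items(p1, p2))   # the item at index 2 stands out in this toy example
```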