Reckase, Mark D.; McKinley, Robert L. – 1984
The purpose of this paper is to present a generalization of the concept of item difficulty to test items that measure more than one dimension. Three common definitions of item difficulty are considered: the proportion of correct responses for a group of individuals; the probability of a correct response to an item for a specific person; and the… (standard forms of these definitions are sketched after this entry)
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
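In standard notation (a sketch only; the truncated abstract does not give the paper's own symbols, and the multidimensional form and MDIFF index are assumed here from Reckase's published work on multidimensional difficulty):

    % Classical difficulty: proportion correct among N examinees, u_ij in {0,1}
    p_i = \frac{1}{N} \sum_{j=1}^{N} u_{ij}

    % Unidimensional latent trait model: probability of a correct response
    % for a person with ability \theta
    P_i(\theta) = \frac{\exp\{a_i(\theta - b_i)\}}{1 + \exp\{a_i(\theta - b_i)\}}

    % Compensatory multidimensional generalization and its difficulty index
    P_i(\boldsymbol\theta) = \frac{\exp(\mathbf{a}_i^{\top}\boldsymbol\theta + d_i)}
                                  {1 + \exp(\mathbf{a}_i^{\top}\boldsymbol\theta + d_i)},
    \qquad \mathrm{MDIFF}_i = \frac{-d_i}{\lVert \mathbf{a}_i \rVert}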
Livingston, Samuel A. – 1986
This paper deals with the fairness of a test consisting of two parts: (1) a "common" section, taken by all students; and (2) a "variable" section, in which some students may answer a different set of questions from other students. For example, a test taken by several thousand students each year contains a common multiple-choice portion and… (one simple conditioning check is sketched after this entry)
Descriptors: Difficulty Level, Error of Measurement, Essay Tests, Mathematical Models
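The abstract truncates before the method, so the following shows only one simple way such a fairness question can be examined: compare alternate variable-section questions after conditioning on the common section, so that a question drawn by an abler group is not mistaken for an easier one. The function name and banding scheme are mine, not the paper's (Python):

    import numpy as np

    def adjusted_question_means(common, essay, question, n_bands=5):
        """Mean variable-section (essay) score per question, averaged over
        bands of common-section score, weighting bands equally."""
        edges = np.quantile(common, np.linspace(0, 1, n_bands + 1))
        bands = np.clip(np.searchsorted(edges, common, side="right") - 1,
                        0, n_bands - 1)
        out = {}
        for q in np.unique(question):
            mask = question == q
            band_means = [essay[mask & (bands == b)].mean()
                          for b in range(n_bands)
                          if np.any(mask & (bands == b))]
            out[q] = float(np.mean(band_means))
        return out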
Forster, Fred – 1987
Studies carried out over a 12-year period addressed fundamental questions about the use of Rasch-based item banks. Large field tests of reading, mathematics, and science items administered in grades 3-8, as well as standardized test results, were used to explore the possible effects of many factors on item calibrations. In general, the results… (a minimal calibration-stability sketch follows this entry)
Descriptors: Achievement Tests, Difficulty Level, Elementary Education, Item Analysis
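For context, the Rasch model places each item's difficulty on a logit scale, and calibration stability can be checked by calibrating the same items on two samples and comparing the results. A minimal sketch; the crude log-odds calibration and the function names are mine, not the studies' actual procedure (Python):

    import numpy as np

    def rasch_logit_difficulties(responses):
        """Crude Rasch-style difficulties: centered log-odds of item p-values.
        responses is an examinees-by-items 0/1 array with complete data; a real
        calibration would use joint or conditional maximum likelihood."""
        p = responses.mean(axis=0)
        b = np.log((1 - p) / p)   # harder items -> larger b
        return b - b.mean()       # center the logit scale at 0

    def calibration_drift(resp_a, resp_b):
        """Stability of the same items calibrated on two different samples."""
        ba = rasch_logit_difficulties(resp_a)
        bb = rasch_logit_difficulties(resp_b)
        return np.corrcoef(ba, bb)[0, 1], float(np.abs(ba - bb).max())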
Dovell, Patricia; Buhr, Dianne C. – 1986
This study examined the difficulty level of essay topics used in the large-scale assessment of writing in relation to five different scoring models, and sought to determine what effects the scoring models would have on passing rates. In model one, the examinee's score is the score assigned directly by the reader or the sum of scores assigned… (a passing-rate sketch follows this entry)
Descriptors: College Students, Difficulty Level, Essay Tests, Essays
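Only model one survives the truncation, so this sketch merely illustrates the mechanism under study: the same ratings can yield different passing rates under different scoring rules. The ratings and cutoffs below are made up for illustration and are not the study's data (Python):

    import numpy as np

    def passing_rate(scores, cutoff):
        """Fraction of examinees at or above the cut score."""
        return float(np.mean(np.asarray(scores) >= cutoff))

    # Illustrative ratings from two readers on a 1-6 scale
    reader1 = np.array([3, 4, 2, 5, 3, 4])
    reader2 = np.array([4, 4, 3, 5, 2, 4])

    # Model one, as described: a single reader's score, or the sum of two
    # readers' scores (the cutoffs need not correspond exactly across rules)
    print(passing_rate(reader1, cutoff=4))            # single-reader rule
    print(passing_rate(reader1 + reader2, cutoff=7))  # sum-of-readers rule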
Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting that assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant… (a sketch of the composite's direction follows this entry)
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation
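A compensatory two-dimensional item best measures a particular composite of the two abilities, with direction set by its discrimination vector; when a unidimensional CAT selects items whose directions differ, the composite being estimated shifts from examinee to examinee. A minimal sketch of that direction; the function name is mine (Python):

    import numpy as np

    def composite_angle(a1, a2):
        """Angle in degrees from the theta_1 axis of the ability composite
        best measured by a compensatory item with discriminations (a1, a2)."""
        return float(np.degrees(np.arctan2(a2, a1)))

    print(composite_angle(1.2, 0.3))  # ~14 degrees: mostly dimension 1
    print(composite_angle(0.4, 1.0))  # ~68 degrees: mostly dimension 2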
Thomas, Gregory P. – 1986
This paper argues that no single measurement strategy serves all purposes and that applying methods and techniques that allow a variety of data elements to be retrieved and juxtaposed may be an investment in the future. Item response theory, the Rasch model, and latent trait theory are all approaches to a single conceptual topic. An abbreviated look…
Descriptors: Achievement Tests, Adaptive Testing, Criterion Referenced Tests, Data Collection
Garrido, Mariquita; Payne, David A. – 1987
Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data available to judges (n=20) and without (n=19), and the Modified Angoff method with (n=19) and without (n=19) item data available. The Angoff method required free-response percentage estimates (0 to 100 percent),… (an Angoff computation sketch follows this entry)
Descriptors: Academic Standards, Comparative Analysis, Criterion Referenced Tests, Cutting Scores
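The Angoff mechanics themselves are standard: each judge estimates, for every item, the percentage of minimally competent examinees who would answer it correctly, and the judges' expected number-correct scores are averaged into the cut score. A minimal sketch; the array values below are made up for illustration, not the study's data (Python):

    import numpy as np

    def angoff_cutoff(estimates):
        """estimates[j, i] is judge j's estimate (0-100) of the percentage of
        minimally competent examinees who would answer item i correctly.
        Returns the cut score on the number-correct scale."""
        per_judge = (np.asarray(estimates) / 100.0).sum(axis=1)
        return float(per_judge.mean())

    # Illustrative: 3 judges, 4 items
    est = [[80, 60, 55, 90],
           [75, 65, 50, 85],
           [85, 55, 60, 95]]
    print(angoff_cutoff(est))  # 2.85 items correct out of 4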