
Bart, William M.; Williams-Morris, Ruth – Applied Measurement in Education, 1990
Refined item digraph analysis (RIDA) is a way of studying diagnostic and prescriptive testing. It permits assessment of a test item's diagnostic value by examining the extent to which the item has properties of ideal items. RIDA is illustrated with the Orange Juice Test, which assesses the proportionality concept. (TJH)
Descriptors: Diagnostic Tests, Evaluation Methods, Item Analysis, Mathematical Models
Emrick, John A. – 1971
The validity of an evaluation model for mastery testing applications was investigated. Three variables were tested in an experiment using 96 third grade subjects--amount of training, number of alternates in an item, and number of items. The concept hierarchy involved an orderly progression from a concept involving one relevant of three varying…
Descriptors: Achievement Tests, Cognitive Measurement, Item Analysis, Mathematical Models
Holmes, Susan E. – 1982
The purpose of the present study was to examine the accuracy of indirect trait estimates, i.e., estimates of a primary trait obtained from a second measure that has been equated to the first. The California Achievement Test in Reading was the primary measure and the Prescriptive Reading Inventory was the indirect measure. Four kinds of…
Descriptors: Content Analysis, Elementary Education, Equated Scores, Item Analysis

Whitely, Susan E. – Journal of Educational Measurement, 1977
A debate concerning specific issues and the general usefulness of the Rasch latent trait test model is continued. Methods of estimation, necessary sample size, and the applicability of the model are discussed. (JKS)
Descriptors: Error of Measurement, Item Analysis, Mathematical Models, Measurement

Wright, Benjamin D. – Journal of Educational Measurement, 1977
Statements made in a previous article of this journal concerning the Rasch latent trait test model are questioned. Methods of estimation, necessary sample sizes, several formulae, and the general usefulness of the Rasch model are discussed. (JKS)
Descriptors: Computers, Error of Measurement, Item Analysis, Mathematical Models

Wilson, Mark – Journal for Research in Mathematics Education, 1990
Summarizes a reanalysis of the data from an investigation of a test designed to measure a learning sequence in geometry based on the work of van Hiele (1986). Discusses the test based on the Rasch model. (YP)
Descriptors: Geometric Concepts, Geometry, Item Analysis, Mathematical Concepts
Smith, Douglas U. – 1978
This study examined the effects of certain item selection methods on the classification accuracy and classification consistency of criterion-referenced instruments. Three item response data sets, representing varying situations of instructional effectiveness, were simulated. Five methods of item selection were then applied to each data set for the…
Descriptors: Criterion Referenced Tests, Item Analysis, Item Sampling, Latent Trait Theory
Muthen, Bengt – 1986
A new extension of standard Item Response Theory (IRT) modeling of dichotomous items to include external variables is proposed. External variables may appear both as categorical grouping variables and as continuous variables; this requires the formulation of a model for the relationships between the external variables and the response…
Descriptors: Achievement Tests, Algebra, Computer Simulation, Grade 8
Reckase, Mark D. – 1978
Five comparisons were made relative to the quality of estimates of ability parameters and item calibrations obtained from the one-parameter and three-parameter logistic models. The results indicate: (1) The three-parameter model fit the test data better in all cases than did the one-parameter model. For simulation data sets, multi-factor data were…
Descriptors: Comparative Analysis, Goodness of Fit, Item Analysis, Mathematical Models
Faggen, Jane – 1978
Formulas are presented for decision reliability and for classification validity for mastery/nonmastery decisions based on criterion referenced tests. Two item parameters are used: the probability of a master answering an item correctly, and the probability of a nonmaster answering an item incorrectly. The theory explores the relationships of…
Descriptors: Bayesian Statistics, Criterion Referenced Tests, Item Analysis, Item Banks
Gleser, Leon Jay – 1971
An attempt is made to indicate why the concept of "true score" naturally leads to the belief that test validity must increase with an increase in test and/or average item reliability, and why this is correct for the classical single-factor model first introduced by Spearman. The statistical model used by Loevinger is introduced to…
Descriptors: Factor Analysis, Item Analysis, Mathematical Models, Measurement Techniques
Green, Donald Ross – 1976
During the past few years the problem of bias in testing has become an increasingly important issue. In most research, bias refers to the fair use of tests and has thus been defined in terms of an outside criterion measure of the performance being predicted by the test. Recently however, there has been growing interest in assessing bias when such…
Descriptors: Achievement Tests, Item Analysis, Mathematical Models, Minority Groups
Muraki, Eiji; Engelhard, George, Jr. – 1985
Recent developments in dichotomous factor analysis based on multidimensional item response models (Bock and Aitkin, 1981; Muthen, 1978) provide an effective method for exploring the dimensionality of questionnaire items. Implemented in the TESTFACT program, this "full information" item factor analysis accounts not only for the pairwise…
Descriptors: Elementary Education, Estimation (Mathematics), Factor Analysis, Item Analysis

Secolsky, Charles – Journal of Educational Measurement, 1983
A model is presented using examinee judgements in detecting ambiguous/misinterpreted items on teacher-made criterion-referenced tests. A computational example and guidelines for constructing domain categories and interpreting the indices are presented. (Author/PN)
Descriptors: Criterion Referenced Tests, Higher Education, Item Analysis, Mathematical Models
Harris, Chester W.; And Others – 1977
The implications of a mathematical model of test scores are explored where the data are limited to a random sample of items without replacement from an indefinitely large population or item domain in which items are scored either zero or one. The purpose is to obtain an unbiased estimate of a student's proportion of items correct in the item…
Descriptors: Academic Achievement, Achievement Tests, Annotated Bibliographies, Bibliographies