Johnson, Matthew S.; Sinharay, Sandip – Applied Psychological Measurement, 2005
For complex educational assessments, there is an increasing use of item families, which are groups of related items. Calibration or scoring in an assessment involving item families requires models that can take into account the dependence structure inherent among the items that belong to the same item family. This article extends earlier works in…
Descriptors: National Competency Tests, Markov Processes, Bayesian Statistics
Sinharay, Sandip; Johnson, Matthew S.; Williamson, David M. – Journal of Educational and Behavioral Statistics, 2003
Item families, which are groups of related items, are becoming increasingly popular in complex educational assessments. For example, in automatic item generation (AIG) systems, a test may consist of multiple items generated from each of a number of item models. Item calibration or scoring for such an assessment requires fitting models that can…
Descriptors: Test Items, Markov Processes, Educational Testing, Probability
Johnson, Matthew S.; Sinharay, Sandip – 2003
For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…
Descriptors: Bayesian Statistics, Constructed Response, Educational Assessment, Estimation (Mathematics)
Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I. – 2002
This paper explores the application of a technique for hierarchical item response theory (IRT) calibration of complex constructed response tasks that has promise both as a calibration tool and as a means of evaluating the isomorphic equivalence of complex constructed response tasks. Isomorphic tasks are explicitly and rigorously designed to be…
Descriptors: Bayesian Statistics, Constructed Response, Estimation (Mathematics), Evaluation Methods
Using Data Augmentation and Markov Chain Monte Carlo for the Estimation of Unfolding Response Models
Johnson, Matthew S.; Junker, Brian W. – Journal of Educational and Behavioral Statistics, 2003
Unfolding response models, a class of item response theory (IRT) models that assume a unimodal item response function (IRF), are often used for the measurement of attitudes. Verhelst and Verstralen (1993) and Andrich and Luo (1993) independently developed unfolding response models by relating the observed responses to a more common monotone IRT…
Descriptors: Markov Processes, Item Response Theory, Computation, Data Analysis
Johnson, Matthew S.; Jenkins, Frank – ETS Research Report Series, 2005
Large-scale educational assessments such as the National Assessment of Educational Progress (NAEP) sample examinees to whom an exam will be administered. In most situations the sampling design is not a simple random sample and must be accounted for in the estimating model. After reviewing the current operational estimation procedure for NAEP, this…
Descriptors: Bayesian Statistics, Hierarchical Linear Modeling, National Competency Tests, Sampling