Showing all 8 results
Peer reviewed
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Journal of Educational Measurement, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Peer reviewed
Wang, Changjiang; Gierl, Mark J. – Journal of Educational Measurement, 2011
The purpose of this study is to apply the attribute hierarchy method (AHM) to a subset of SAT critical reading items and illustrate how the method can be used to promote cognitive diagnostic inferences. The AHM is a psychometric procedure for classifying examinees' test item responses into a set of attribute mastery patterns associated with…
Descriptors: Reading Comprehension, Test Items, Critical Reading, Protocol Analysis
Peer reviewed
Lee, Guemin – Journal of Educational Measurement, 2002
Studied the effects of items, passages, contents, themes, and types of passages on the reliability and standard errors of measurement for complex reading comprehension tests using seven different generalizability theory models. Results suggest that passages and themes should be taken into account when evaluating the reliability of test scores for…
Descriptors: Error of Measurement, Generalizability Theory, Models, Reading Comprehension
Peer reviewed
Thissen, David; And Others – Journal of Educational Measurement, 1989
An approach to scoring reading comprehension based on the concept of the testlet is described, using models developed for items in multiple categories. The model is illustrated using data from 3,866 examinees. Application of testlet scoring to multiple category models developed for individual items is discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Mathematical Models
Peer reviewed
Wardrop, James L.; And Others – Journal of Educational Measurement, 1982
A structure for describing different approaches to testing is generated by identifying five dimensions along which tests differ: test uses, item generation, item revision, assessment of precision, and validation. These dimensions are used to profile tests of reading comprehension. Only norm-referenced achievement tests had an inference system…
Descriptors: Achievement Tests, Comparative Analysis, Educational Testing, Models
Peer reviewed
Haertel, Edward – Journal of Educational Measurement, 1984
Multiple-choice reading comprehension items from a conventional, norm-referenced reading comprehension test were successfully analyzed using a simple latent class model. Results support the use of latent class, state mastery models with more heterogeneous item pools than has been previously advocated. (Author/PN)
Descriptors: Grade 4, Intermediate Grades, Item Analysis, Latent Trait Theory
Peer reviewed
Sireci, Stephen G.; And Others – Journal of Educational Measurement, 1991
Calculating the reliability of a testlet-based test is demonstrated using data from 1,812 males and 2,216 females taking the Scholastic Aptitude Test verbal section and 3,866 examinees taking another reading test. Traditional reliabilities calculated on reading comprehension tests constructed of four testlets provided substantial overestimates.…
Descriptors: College Entrance Examinations, Equations (Mathematics), Estimation (Mathematics), High School Students
Peer reviewed
Haertel, Edward H. – Journal of Educational Measurement, 1989
A method is presented for using a restricted latent class model--a binary skills model--to determine skills required by a set of test items. Results from the method's application to reading achievement data for about 63,000 fourth graders indicate that the model offers useful information on test structure and examinee ability. (TJH)
Descriptors: Achievement Tests, Elementary School Students, Factor Structure, Grade 4