Cao, Mengyang; Song, Q. Chelsea; Tay, Louis – International Journal of Testing, 2018
There is a growing use of noncognitive assessments around the world, and recent research has posited an ideal point response process underlying such measures. A critical issue is whether the typical use of dominance approaches (e.g., average scores, factor analysis, and Samejima's graded response model) in scoring such measures is adequate.…
Descriptors: Comparative Analysis, Item Response Theory, Factor Analysis, Models
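To make the dominance-versus-ideal-point distinction concrete, here is a minimal sketch contrasting a monotone dominance item response function with a single-peaked ideal point one. The functions below are simplified stand-ins (a 2PL-style curve and a Gaussian-shaped peak in the spirit of GGUM-type models), not the specific models compared in the article:

```python
import numpy as np

def dominance_irf(theta, b, a=1.0):
    """Dominance (cumulative) response process: endorsement probability
    rises monotonically with the latent trait, as in 2PL/graded models."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ideal_point_irf(theta, delta, a=1.0):
    """Simplified single-peaked ideal point process: endorsement is highest
    when the person's trait level is closest to the item location delta.
    (A toy stand-in for GGUM-style models, not the GGUM itself.)"""
    return np.exp(-a * (theta - delta) ** 2)

theta = np.linspace(-3, 3, 7)
print("theta:      ", theta)
print("dominance:  ", np.round(dominance_irf(theta, b=0.0), 3))        # monotone increasing
print("ideal point:", np.round(ideal_point_irf(theta, delta=0.0), 3))  # peaks at delta
```

The practical point of the contrast: for an ideal point item, a strong disagreement can come from respondents either far above or far below the item's location, which dominance scoring will misread.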
Bradshaw, Laine P.; Madison, Matthew J. – International Journal of Testing, 2016
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic…
Descriptors: Classification, Models, Simulation, Psychometrics
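The invariance property the abstract refers to has a one-line algebraic illustration under the Rasch model (my addition, not from the article): since the log-odds of a correct response is theta − b, the log-odds difference between two items equals b2 − b1 at every ability level, so item comparisons do not depend on which examinees are sampled:

```python
def rasch_logit(theta, b):
    """Log-odds of a correct response under the Rasch model: theta - b."""
    return theta - b

b1, b2 = -0.5, 1.2  # difficulties of two hypothetical items

# Whatever ability level we evaluate at, the item comparison is identical:
for theta in [-2.0, 0.0, 2.0]:
    diff = rasch_logit(theta, b1) - rasch_logit(theta, b2)
    print(f"theta={theta:+.1f}: logit difference = {diff:.2f} (always b2 - b1 = {b2 - b1:.2f})")
```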
Zhang, Mo; Williamson, David M.; Breyer, F. Jay; Trapani, Catherine – International Journal of Testing, 2012
This article describes two separate, related studies that provide insight into the effectiveness of "e-rater" score calibration methods based on different distributional targets. In the first study, we developed and evaluated a new type of "e-rater" scoring model that was cost-effective and applicable under conditions of absent human rating and…
Descriptors: Automation, Scoring, Models, Essay Tests
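The abstract mentions calibrating automated scores to distributional targets. As a hedged illustration of the general idea (a simple sketch, not ETS's actual e-rater calibration procedure), the code below linearly rescales machine scores so their mean and standard deviation match a target human-score distribution:

```python
import numpy as np

def calibrate_to_target(machine_scores, target_mean, target_sd):
    """Linear rescaling so calibrated scores match a target mean and
    standard deviation (one simple kind of distributional target).
    Illustrative only; not the published e-rater method."""
    m, s = machine_scores.mean(), machine_scores.std()
    return target_mean + (machine_scores - m) * (target_sd / s)

rng = np.random.default_rng(0)
raw = rng.normal(loc=3.1, scale=0.6, size=1000)  # hypothetical raw machine scores
calibrated = calibrate_to_target(raw, target_mean=3.5, target_sd=0.9)
print(round(calibrated.mean(), 2), round(calibrated.std(), 2))  # ~3.5, ~0.9
```

A calibration of this form is attractive precisely when human ratings are absent for new essays: only the target distribution's parameters, not per-essay human scores, are needed.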
Lim, Gad S.; Geranpayeh, Ardeshir; Khalifa, Hanan; Buckendahl, Chad W. – International Journal of Testing, 2013
Standard setting theory has largely developed with reference to a typical situation, determining a level or levels of performance for one exam for one context. However, standard setting is now being used with international reference frameworks, where some parameters and assumptions of classical standard setting do not hold. We consider the…
Descriptors: Standard Setting (Scoring), Validity, Models, Language Tests
Sinharay, Sandip; Johnson, Matthew S. – International Journal of Testing, 2008
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996, 2002). They have the potential to produce large numbers of high-quality items at reduced cost. This article introduces data from an…
Descriptors: College Entrance Examinations, Case Studies, Test Items, Models
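To make the "item model" idea concrete: a toy sketch in which one template generates many isomorphic items by randomizing surface features. The template and number ranges here are invented for illustration and are not taken from the article:

```python
import random

def rate_problem_item(rng):
    """One hypothetical item model: a distance/rate/time word problem.
    Every draw yields a structurally equivalent (isomorphic) item."""
    speed = rng.randrange(40, 80, 5)  # mph
    hours = rng.randrange(2, 6)
    stem = f"A train travels at {speed} mph for {hours} hours. How far does it go?"
    return stem, speed * hours        # item text and its answer key

rng = random.Random(42)
for _ in range(3):
    stem, key = rate_problem_item(rng)
    print(stem, "->", key)
```

Because all generated items share one underlying structure, they can plausibly be treated as psychometrically exchangeable, which is what drives the cost savings the abstract describes.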
Rupp, Andre A. – International Journal of Testing, 2003
Item response theory (IRT) has become one of the most popular scoring frameworks for measurement data. IRT models are used frequently in computerized adaptive testing, cognitively diagnostic assessment, and test equating. This article reviews two of the most popular software packages for IRT model estimation, BILOG-MG (Zimowski, Muraki, Mislevy, &…
Descriptors: Test Items, Adaptive Testing, Item Response Theory, Computer Software
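As a quick illustration of the kind of input data that IRT estimation packages such as those reviewed here consume (a hedged sketch; all parameter values are invented), the following simulates a binary item-response matrix under the 2PL model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 500, 10
theta = rng.normal(size=(n_persons, 1))  # latent abilities
a = rng.uniform(0.7, 1.8, size=n_items)  # item discriminations
b = rng.normal(size=n_items)             # item difficulties

p = 1 / (1 + np.exp(-a * (theta - b)))   # 2PL response probabilities
responses = (rng.random((n_persons, n_items)) < p).astype(int)
print(responses[:5])                     # persons-by-items matrix for estimation
```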