| Publication Date | Count |
| --- | --- |
| In 2025 | 0 |
| Since 2024 | 1 |
| Since 2021 (last 5 years) | 1 |
| Since 2016 (last 10 years) | 1 |
| Since 2006 (last 20 years) | 2 |
| Descriptor | Count |
| --- | --- |
| Sample Size | 3 |
| Scoring | 3 |
| Test Length | 3 |
| Item Response Theory | 2 |
| Accuracy | 1 |
| Algorithms | 1 |
| Computation | 1 |
| Computer Simulation | 1 |
| Estimation (Mathematics) | 1 |
| Evaluation Methods | 1 |
| Item Bias | 1 |
| Author | Count |
| --- | --- |
| Ankenmann, Robert D. | 1 |
| Chenchen Ma | 1 |
| Chun Wang | 1 |
| Gongjun Xu | 1 |
| Hoyle, Rick H. | 1 |
| Jing Ouyang | 1 |
| Nay, Sandra | 1 |
| Stone, Clement A. | 1 |
| Yang, Chongming | 1 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 2 |
| Journal Articles | 1 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
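For reference, a compensatory multidimensional two-parameter logistic model is one standard MIRT formulation; the notation below is generic and not taken from this particular paper:

\[ P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{\exp(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i + d_j)}{1 + \exp(\mathbf{a}_j^{\top} \boldsymbol{\theta}_i + d_j)} \]

where \(\boldsymbol{\theta}_i\) is respondent \(i\)'s latent trait vector, \(\mathbf{a}_j\) is item \(j\)'s vector of discrimination (slope) parameters, and \(d_j\) is an intercept; calibration estimates the item parameters and scoring estimates \(\boldsymbol{\theta}_i\).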
Yang, Chongming; Nay, Sandra; Hoyle, Rick H. – Applied Psychological Measurement, 2010
Lengthy scales or testlets pose certain challenges for structural equation modeling (SEM) if all the items are included as indicators of a latent construct. Three general approaches to modeling lengthy scales in SEM (parceling, latent scoring, and shortening) have been reviewed and evaluated. A hypothetical population model is simulated containing…
Descriptors: Structural Equation Models, Measures (Individuals), Sample Size, Item Response Theory
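As a point of reference for the parceling approach named in the abstract, here is a minimal sketch in Python; the response matrix, parcel assignments, and sample size are invented for illustration, not taken from the study:

```python
import numpy as np

# Hypothetical data: 200 respondents answering 12 Likert-type items (1-5) on one scale.
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(200, 12))

# Parceling: average disjoint subsets of items into a few composite indicators,
# which then replace the individual items as indicators of the latent construct.
parcel_assignments = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
parcels = np.column_stack([items[:, idx].mean(axis=1) for idx in parcel_assignments])

print(parcels.shape)  # (200, 3): three parcels stand in for twelve items in the SEM
```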
Ankenmann, Robert D.; Stone, Clement A. – 1992
Effects of test length, sample size, and assumed ability distribution were investigated in a multiple replication Monte Carlo study under the 1-parameter (1P) and 2-parameter (2P) logistic graded model with five score levels. Accuracy and variability of item parameter and ability estimates were examined. Monte Carlo methods were used to evaluate…
Descriptors: Computer Simulation, Estimation (Mathematics), Item Bias, Mathematical Models
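A small sketch of how a single replication in this kind of Monte Carlo study might generate graded responses under a two-parameter logistic graded (Samejima-type) model with five score levels; the sample size, ability distribution, and item parameters below are arbitrary placeholders rather than the conditions used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_items, n_cats = 500, 20, 5            # five score levels per item
theta = rng.normal(0.0, 1.0, n_persons)            # assumed ability distribution
a = rng.uniform(0.8, 2.0, n_items)                 # 2P model: item discriminations
b = np.sort(rng.normal(0.0, 1.0, (n_items, n_cats - 1)), axis=1)  # ordered thresholds

# Graded model: P(X >= k | theta) = logistic(a * (theta - b_k)) for k = 1..4.
cum = 1.0 / (1.0 + np.exp(-a[None, :, None] * (theta[:, None, None] - b[None, :, :])))

# Category probabilities are differences of adjacent cumulative curves.
upper = np.concatenate([np.ones((n_persons, n_items, 1)), cum], axis=2)
lower = np.concatenate([cum, np.zeros((n_persons, n_items, 1))], axis=2)
probs = upper - lower

# Draw one graded response (scored 0-4) per person-item cell.
u = rng.random((n_persons, n_items, 1))
responses = (u > np.cumsum(probs, axis=2)).sum(axis=2)
# Item and ability estimates recovered from 'responses' would then be compared
# with the generating values to assess accuracy and variability across replications.
```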