Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 2
  Since 2016 (last 10 years): 3
  Since 2006 (last 20 years): 7

Descriptor
  Monte Carlo Methods: 8
  Psychometrics: 8
  Test Items: 8
  Item Response Theory: 6
  Error of Measurement: 3
  Markov Processes: 3
  Models: 3
  Accuracy: 2
  Computer Assisted Testing: 2
  Difficulty Level: 2
  Evaluation Methods: 2

Source
  Psychometrika: 2
  Annenberg Institute for…: 1
  Educational Sciences: Theory…: 1
  Journal of Educational…: 1
  Journal of Psychoeducational…: 1
  ProQuest LLC: 1

Author
  AlGhamdi, Hannan M.: 1
  Bacon, Tina P.: 1
  Bilir, Mustafa Kuzey: 1
  Fox, J. P.: 1
  Hoshino, Takahiro: 1
  James S. Kim: 1
  Jiao, Hong: 1
  Jin, Ying: 1
  Joshua B. Gilbert: 1
  Kamata, Akihito: 1
  Klein Entink, R. H.: 1

Publication Type
  Reports - Research: 6
  Journal Articles: 5
  Dissertations/Theses -…: 1
  Reports - Evaluative: 1
  Speeches/Meeting Papers: 1

Education Level
  Early Childhood Education: 1
  Elementary Education: 1
  Grade 3: 1
  Primary Education: 1

Location
  Saudi Arabia: 1

Assessments and Surveys
  Cognitive Abilities Test: 1
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for detecting Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
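The kind of within-outcome heterogeneity the abstract refers to can be illustrated with a small simulation (this is a hypothetical sketch, not the authors' method): item responses are generated under a Rasch-type model in which the treatment shifts the logit on some items but not others, so an aggregate score masks item-level variation in the effect.

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_items = 2000, 10
treat = rng.integers(0, 2, n)          # random treatment assignment
theta = rng.normal(0, 1, n)            # latent ability

# Assumed item-level treatment effects: the intervention helps on the
# first five items and does nothing on the rest.
item_effects = np.array([0.5] * 5 + [0.0] * 5)
difficulty = rng.normal(0, 1, n_items)

# Rasch-type response probabilities with an item-specific treatment shift
logits = (theta[:, None] - difficulty[None, :]
          + treat[:, None] * item_effects[None, :])
responses = rng.random((n, n_items)) < 1 / (1 + np.exp(-logits))

# Naive per-item effect estimate: treated-minus-control proportion correct
per_item = responses[treat == 1].mean(0) - responses[treat == 0].mean(0)
print(per_item.round(2))
```

The per-item gaps are visibly larger for the first five items, which is exactly the structure a total-score analysis would average away.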
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
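A bare-bones CAT simulation of the sort the abstract describes can be sketched as follows. The 165-item pool size matches the abstract, but the 2PL parameter ranges, maximum-information selection rule, and grid-based EAP ability update are assumptions standing in for the study's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 165-item pool with 2PL parameters
a = rng.uniform(0.8, 2.0, 165)   # discriminations
b = rng.normal(0, 1, 165)        # difficulties

def p_correct(theta, a, b):
    return 1 / (1 + np.exp(-a * (theta - b)))

def simulate_cat(true_theta, test_length=20):
    """Administer items by maximum Fisher information, re-estimating
    ability after each response with a simple grid-based EAP."""
    grid = np.linspace(-4, 4, 161)
    theta_hat, used, resp = 0.0, [], []
    for _ in range(test_length):
        p = p_correct(theta_hat, a, b)
        info = a**2 * p * (1 - p)        # 2PL item information
        info[used] = -np.inf             # never reuse an item
        item = int(np.argmax(info))
        used.append(item)
        resp.append(rng.random() < p_correct(true_theta, a[item], b[item]))
        logpost = -0.5 * grid**2         # standard-normal prior
        for i, r in zip(used, resp):
            pg = p_correct(grid, a[i], b[i])
            logpost += np.log(pg) if r else np.log(1 - pg)
        w = np.exp(logpost - logpost.max())
        theta_hat = float((grid * w).sum() / w.sum())
    return theta_hat

est = simulate_cat(1.0)
print(round(est, 2))
```

Running many replications of `simulate_cat` across a grid of true abilities is what produces the bias and standard-error summaries a CAT evaluation reports.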
Sengul Avsar, Asiye; Tavsancil, Ezel – Educational Sciences: Theory and Practice, 2017
This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets--three different test lengths (10, 20 and 30 items), three sample distributions (normal, right and left skewed) and three samples sizes (100, 250 and 500)--were generated by conducting 20…
Descriptors: Test Items, Psychometrics, Nonparametric Statistics, Item Response Theory
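The crossed simulation design in the abstract (three test lengths, three trait distributions, three sample sizes) can be sketched like this; the graded-response-style generating model and parameter ranges are assumptions, not the study's actual settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_theta(n, shape):
    """Latent trait samples: normal, right-skewed, or left-skewed (standardized)."""
    if shape == "normal":
        x = rng.normal(size=n)
    elif shape == "right":
        x = rng.gamma(2.0, size=n)
    else:
        x = -rng.gamma(2.0, size=n)
    return (x - x.mean()) / x.std()

def graded_data(theta, n_items, n_cats=4):
    """Polytomous responses from a graded-response-style model."""
    a = rng.uniform(1.0, 2.0, n_items)
    thresh = np.sort(rng.normal(0, 1, (n_items, n_cats - 1)), axis=1)
    # P(X >= k) for k = 1..n_cats-1; decreasing in k because thresholds are sorted
    p_ge = 1 / (1 + np.exp(-a[None, :, None]
                           * (theta[:, None, None] - thresh[None, :, :])))
    u = rng.random((len(theta), n_items, 1))
    return (u < p_ge).sum(axis=2)    # category scores 0..n_cats-1

datasets = {}
for n_items in (10, 20, 30):
    for shape in ("normal", "right", "left"):
        for n in (100, 250, 500):
            datasets[(n_items, shape, n)] = graded_data(sample_theta(n, shape), n_items)

print(len(datasets))   # 27 condition cells
```

Each of the 27 cells would then be replicated (the abstract mentions 20 replications) and fed to the NIRT analyses under comparison.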
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
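The dual-dependence structure described here, person dependence from cluster sampling plus item dependence from testlets, can be generated with two extra random effects on top of a Rasch model. The cluster counts, testlet sizes, and variance components below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

n_schools, per_school = 30, 20          # cluster sampling of examinees
n_testlets, per_testlet = 5, 4          # testlet-based test: 20 items

school = np.repeat(np.arange(n_schools), per_school)
n = len(school)

# Local person dependence: ability shares a school-level component
u_school = rng.normal(0, 0.5, n_schools)
theta = u_school[school] + rng.normal(0, 1, n)

testlet = np.repeat(np.arange(n_testlets), per_testlet)
b = rng.normal(0, 1, n_testlets * per_testlet)

# Local item dependence: person-by-testlet random effects
gamma = rng.normal(0, 0.5, (n, n_testlets))

logits = theta[:, None] - b[None, :] + gamma[:, testlet]
y = (rng.random(logits.shape) < 1 / (1 + np.exp(-logits))).astype(int)
print(y.shape)
```

Ignoring either random effect when calibrating such data is what induces the biased item and person parameter estimates this line of research studies.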
Bilir, Mustafa Kuzey – ProQuest LLC, 2009
This study uses a new psychometric model (mixture item response theory-MIMIC model) that simultaneously estimates differential item functioning (DIF) across manifest groups and latent classes. Current DIF detection methods investigate DIF from only one side, either across manifest groups (e.g., gender, ethnicity, etc.), or across latent classes…
Descriptors: Test Items, Testing Programs, Markov Processes, Psychometrics
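The two "sides" of DIF the abstract contrasts can be made concrete with simulated data in which one item functions differently across an observed group and another only across an unobserved latent class (a hypothetical illustration, not the MIMIC model itself).

```python
import numpy as np

rng = np.random.default_rng(7)

n, n_items = 4000, 10
manifest = rng.integers(0, 2, n)   # observed group, e.g. gender
latent = rng.integers(0, 2, n)     # unobserved latent class
theta = rng.normal(0, 1, n)
b = rng.normal(0, 1, n_items)

logits = theta[:, None] - b[None, :]
logits[:, 0] -= 0.6 * manifest     # item 0: DIF across the manifest group
logits[:, 1] -= 0.6 * latent       # item 1: DIF across the latent class only
y = rng.random((n, n_items)) < 1 / (1 + np.exp(-logits))

# Naive group comparisons: the manifest split reveals item 0's DIF but
# misses item 1's, which only shows up when split by the latent class.
gap_manifest = np.abs(y[manifest == 0].mean(0) - y[manifest == 1].mean(0))
gap_latent = np.abs(y[latent == 0].mean(0) - y[latent == 1].mean(0))
print(gap_manifest.round(2))
print(gap_latent.round(2))
```

A method that conditions on only one grouping variable is blind to the other gap, which is the motivation for estimating both sources simultaneously.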
Klein Entink, R. H.; Fox, J. P.; van der Linden, W. J. – Psychometrika, 2009
Response times on test items are easily collected in modern computerized testing. When collecting both (binary) responses and (continuous) response times on test items, it is possible to measure the accuracy and speed of test takers. To study the relationships between these two constructs, the model is extended with a multivariate multilevel…
Descriptors: Test Items, Markov Processes, Item Response Theory, Measurement Techniques
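Joint modeling of binary responses and continuous response times can be sketched with correlated person parameters for ability and speed, Rasch responses, and lognormal response times. The specific variances, the speed-ability correlation, and the lognormal form are assumptions in the spirit of this literature, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(3)

n, n_items = 1000, 20
# Correlated person parameters: ability (theta) and speed (tau)
rho = -0.4   # assumed: faster examinees are slightly less accurate
cov = np.array([[1.0, rho], [rho, 1.0]])
theta, tau = rng.multivariate_normal([0, 0], cov, n).T

b = rng.normal(0, 1, n_items)         # item difficulty
beta = rng.normal(4.0, 0.3, n_items)  # item time intensity (log-seconds)

# Rasch responses and lognormal response times
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
responses = rng.random((n, n_items)) < p
log_rt = beta[None, :] - tau[:, None] + rng.normal(0, 0.3, (n, n_items))

# Observed-level link between accuracy and slowness induced by rho
score = responses.mean(1)
mean_log_rt = log_rt.mean(1)
r = np.corrcoef(score, mean_log_rt)[0, 1]
print(round(r, 2))
```

The positive score/log-time correlation here is entirely driven by the latent `rho`, which is the kind of cross-level relationship the multivariate multilevel extension is built to estimate.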
Miyazaki, Kei; Hoshino, Takahiro; Mayekawa, Shin-ichi; Shigemasu, Kazuo – Psychometrika, 2009
This study proposes a new item parameter linking method for the common-item nonequivalent groups design in item response theory (IRT). Previous studies assumed that examinees are randomly assigned to either test form. However, examinees can frequently select their own test forms and tests often differ according to examinees' abilities. In such…
Descriptors: Test Format, Item Response Theory, Test Items, Test Bias
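For context on what "item parameter linking" computes in the common-item design, here is classical mean/sigma linking on a set of anchor-item difficulties (a standard baseline, not the new method this paper proposes; the scale constants and noise level are assumptions).

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical anchor-item difficulties on the reference form's scale
b_ref = rng.normal(0, 1, 15)

# The same items calibrated on the new form, whose theta scale relates to
# the reference scale by theta_ref = A * theta_new + B (A = 0.8, B = 0.5),
# plus a little calibration error
A_true, B_true = 0.8, 0.5
b_new = (b_ref - B_true) / A_true + rng.normal(0, 0.05, 15)

# Mean/sigma linking: recover the scale transformation from anchor items
A_hat = b_ref.std() / b_new.std()
B_hat = b_ref.mean() - A_hat * b_new.mean()
print(round(A_hat, 2), round(B_hat, 2))
```

When examinees self-select test forms, as the abstract notes, the groups taking each form are nonequivalent in ways that can bias such linking constants, which is the problem the proposed method targets.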
Kromrey, Jeffrey D.; Bacon, Tina P. – 1992
A Monte Carlo study was conducted to estimate the small sample standard errors and statistical bias of psychometric statistics commonly used in the analysis of achievement tests. The statistics examined in this research were: (1) the index of item difficulty; (2) the index of item discrimination; (3) the corrected item-total point-biserial…
Descriptors: Achievement Tests, Comparative Analysis, Difficulty Level, Estimation (Mathematics)
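The Monte Carlo design described, estimating small-sample standard errors and bias of classical item statistics, can be sketched directly: draw repeated small samples from a large simulated population and summarize the sampling distribution of the item difficulty (p-value) and the corrected item-total point-biserial. The Rasch generating model and sample sizes are assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# A large simulated "population" of 0/1 item responses under a Rasch model
N, n_items = 100_000, 25
theta = rng.normal(0, 1, N)
b = rng.uniform(-1, 1, n_items)
pop = (rng.random((N, n_items))
       < 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))).astype(float)

def item_stats(data, item=0):
    """Classical item statistics: difficulty (p-value) and the corrected
    item-total point-biserial discrimination."""
    diff = data[:, item].mean()
    rest = data.sum(1) - data[:, item]          # total score minus the item
    disc = np.corrcoef(data[:, item], rest)[0, 1]
    return diff, disc

pop_diff, pop_disc = item_stats(pop)

# Monte Carlo: repeated small samples give empirical standard errors and bias
n_reps, n_small = 1000, 50
draws = np.array([item_stats(pop[rng.choice(N, n_small, replace=False)])
                  for _ in range(n_reps)])
se = draws.std(0)
bias = draws.mean(0) - np.array([pop_diff, pop_disc])
print("SE:", se.round(3), "bias:", bias.round(3))
```

The empirical standard deviation of the replicated statistics is the Monte Carlo standard-error estimate, and the gap between their mean and the population value is the small-sample bias.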