Emily A. Brown – ProQuest LLC, 2024
Previous research on the measurement of computational thinking, particularly as a learning progression in K-12, has been limited. This study applies a multidimensional item response theory (IRT) model to a newly developed measure of computational thinking that uses both selected-response and open-ended polytomous items to establish…
Descriptors: Models, Computation, Thinking Skills, Item Response Theory
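As a minimal sketch of the kind of model involved (hypothetical parameters, not the study's actual instrument), a compensatory multidimensional 2PL can handle the selected-response items while a multidimensional graded response model handles the open-ended polytomous ones:

```python
import math

def mirt_2pl(theta, a, d):
    """Compensatory multidimensional 2PL: probability of a correct
    dichotomous response given ability vector theta, discrimination
    vector a, and intercept d."""
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))

def graded_response_probs(theta, a, thresholds):
    """Multidimensional graded response model for a polytomous item:
    category probabilities are differences of cumulative 2PL curves.
    thresholds must be strictly increasing."""
    z = sum(ai * ti for ai, ti in zip(a, theta))
    cum = [1.0] + [1.0 / (1.0 + math.exp(-(z - t))) for t in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]
```

Each polytomous item contributes one probability per score category, and the categories always sum to one, which is what lets mixed item formats share a common latent ability space.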
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
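The pairing DeCarlo describes can be roughly illustrated (this is a generic sketch, not the paper's exact specification): a 2PL IRT curve scores an open-ended item, while a Gaussian signal detection choice model gives the probability of picking the correct alternative on an m-alternative multiple-choice item:

```python
import math

def irt_2pl(theta, a, b):
    """2PL IRT probability of a correct open-ended response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sdt_mc_correct(d_prime, m=4, lo=-8.0, hi=8.0, n=4000):
    """P(correct) for an m-alternative MC item under a Gaussian signal
    detection choice model: the signal alternative's strength must exceed
    all m-1 noise alternatives (trapezoidal numerical integration)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = h if 0 < i < n else h / 2.0  # trapezoid endpoint weights
        total += w * norm_pdf(x - d_prime) * norm_cdf(x) ** (m - 1)
    return total
```

A sanity check of the SDT model: with zero discriminability (d' = 0) the examinee is guessing, so the correct-response probability collapses to 1/m, which is the chance level the choice model builds in.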
Chung, Seungwon; Cai, Li – Journal of Educational and Behavioral Statistics, 2021
In the research reported here, we propose a new method for scale alignment and test scoring in the context of supporting students with disabilities. In educational assessment, students from these special populations take modified tests because of a demonstrated disability that requires more assistance than standard testing accommodation. Updated…
Descriptors: Students with Disabilities, Scoring, Achievement Tests, Test Items
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
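A common first-pass screen for the uniform DIF the abstract discusses is the Mantel-Haenszel common odds ratio over score-matched strata. The sketch below is generic (the counts are invented for illustration), not the authors' procedure:

```python
def mantel_haenszel_dif(strata):
    """Mantel-Haenszel common odds ratio for uniform DIF screening.
    Each stratum (examinees matched on total score) holds the 2x2 counts
    (ref_correct, ref_wrong, focal_correct, focal_wrong).
    A ratio far from 1.0 flags the item for review."""
    num = 0.0
    den = 0.0
    for rc, rw, fc, fw in strata:
        n = rc + rw + fc + fw
        num += rc * fw / n
        den += rw * fc / n
    return num / den

# Toy strata: the focal group answers correctly less often at every
# matched score level, suggesting uniform DIF against the focal group.
strata = [(30, 10, 20, 20), (40, 5, 30, 15), (45, 2, 40, 7)]
odds_ratio = mantel_haenszel_dif(strata)
```

The paper's point is precisely that a flagged item is not automatically biased: an inflated odds ratio can also arise when the fitted IRT model is misspecified for one of the groups.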
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Lin, Jing-Wen; Yu, Ruan-Ching – Asia Pacific Journal of Education, 2022
Modelling ability is one of the essential elements of the latest educational reforms, and the Trends in International Mathematics and Science Study (TIMSS) is a curriculum-based assessment that allows educational systems worldwide to inspect curricular influences. The aims of this study were to examine the role of modelling ability in the…
Descriptors: Grade 8, Educational Change, Cross Cultural Studies, Test Items
von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
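The tree structure behind an IRTree model can be made concrete with one common decomposition of a 5-point Likert response into binary pseudo-items (a standard textbook tree for extreme response style, not necessarily the exact tree in this paper):

```python
def irtree_pseudo_items(response):
    """Decompose a 5-point Likert response (1..5) into the binary
    pseudo-items of a three-node IRTree for extreme response style:
      m = midpoint chosen (1) or not (0)
      d = direction: agree side (1) vs disagree side (0); None at midpoint
      e = extremity: endpoint chosen (1) vs moderate (0); None at midpoint
    Each pseudo-item then gets its own IRT submodel."""
    if response == 3:
        return {"m": 1, "d": None, "e": None}
    return {
        "m": 0,
        "d": 1 if response > 3 else 0,
        "e": 1 if response in (1, 5) else 0,
    }
```

The mixture extension described in the abstract would, on top of this, let different latent classes of respondents follow different node orderings rather than forcing one tree on everyone.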
Yanan Feng – ProQuest LLC, 2021
This dissertation aims to investigate the effect size measures of differential item functioning (DIF) detection in the context of cognitive diagnostic models (CDMs). A variety of DIF detection techniques have been developed in the context of CDMs. However, most of the DIF detection procedures focus on the null hypothesis significance test. Few…
Descriptors: Effect Size, Item Response Theory, Cognitive Measurement, Models
Oluwalana, Olasumbo O. – ProQuest LLC, 2019
A primary purpose of cognitive diagnosis models (CDMs) is to classify examinees based on their attribute patterns. The Q-matrix (Tatsuoka, 1985), a common component of all CDMs, specifies the relationship between the set of required dichotomous attributes and the test items. Since a Q-matrix is often developed by content-knowledge experts and can…
Descriptors: Classification, Validity, Test Items, International Assessment
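The role the Q-matrix plays can be shown with the DINA model, one of the simplest CDMs (a hedged sketch with made-up parameters, not the dissertation's validation method):

```python
def dina_prob(alpha, q_row, slip, guess):
    """DINA model: P(correct) for an examinee with binary attribute
    pattern alpha on an item whose required attributes are the item's
    Q-matrix row. Mastering every required attribute gives 1 - slip;
    missing any required attribute drops the probability to guess."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return 1.0 - slip if eta else guess

# Toy Q-matrix: 2 items x 3 attributes (hypothetical)
Q = [[1, 1, 0],
     [0, 1, 1]]
alpha = [1, 1, 0]  # examinee masters attributes 1 and 2 only
p_items = [dina_prob(alpha, q, slip=0.1, guess=0.2) for q in Q]
```

Because classification hinges entirely on which Q-matrix entries are 1, a single misspecified entry changes which examinees the model expects to answer correctly, which is why empirical Q-matrix validation matters.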
Weber, Melissa R.; Lotyczewski, Bohdan S.; Montes, Guillermo; Hightower, A. Dirk; Allan, Marjorie – Journal of Psychoeducational Assessment, 2017
The factor structure of the Teacher-Child Rating Scale (T-CRS 2.1) was examined using confirmatory factor analysis (CFA). A cross-sectional study was carried out on 68,497 children in prekindergarten through Grade 10. Item reduction was carried out based on modification indices, standardized residual covariance, and standardized factor loadings. A…
Descriptors: Rating Scales, Factor Structure, Children, Test Items
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) offers many advantages over its predecessor, classical test theory (CTT), such as item parameters that do not change across samples and ability estimates that do not depend on the particular items administered. These advantages, though, come at the cost of assumptions that must be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
George, Ann Cathrice; Robitzsch, Alexander – Applied Measurement in Education, 2018
This article presents a new perspective on measuring gender differences in the large-scale assessment study Trends in International Mathematics and Science Study (TIMSS). The suggested empirical model is directly based on the theoretical competence model of the domain mathematics and thus includes the interaction between content and cognitive sub-competencies.…
Descriptors: Achievement Tests, Elementary Secondary Education, Mathematics Achievement, Mathematics Tests
Kogar, Esin Yilmaz; Kelecioglu, Hülya – Journal of Education and Learning, 2017
The purpose of this research is to first estimate the item and ability parameters and the standard error values related to those parameters obtained from Unidimensional Item Response Theory (UIRT), bifactor (BIF) and Testlet Response Theory models (TRT) in the tests including testlets, when the number of testlets, number of independent items, and…
Descriptors: Item Response Theory, Models, Mathematics Tests, Test Items
Arenson, Ethan A.; Karabatsos, George – Grantee Submission, 2017
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, which is strictly monotone in person ability. Such assumptions can be overly restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Descriptors: Bayesian Statistics, Item Response Theory, Nonparametric Statistics, Models
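The restriction being relaxed is the parametric *shape* of the curve, not its monotonicity. A simple non-Bayesian way to see what a shape-free monotone item characteristic curve looks like is the pool-adjacent-violators algorithm (an illustrative stand-in, not the authors' Bayesian nonparametric model):

```python
def isotonic_icc(prop_correct):
    """Pool-adjacent-violators algorithm (PAVA): turns raw proportion-
    correct values at increasing ability levels into a monotone
    nondecreasing item characteristic curve, with no logistic or
    normal-ogive shape assumption."""
    blocks = []  # each block: [mean, total_weight, count]
    for p in prop_correct:
        blocks.append([p, 1.0, 1])
        # pool adjacent blocks while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    fitted = []
    for m, _, c in blocks:
        fitted.extend([m] * c)
    return fitted
```

Where the raw proportions dip (e.g. 0.5 followed by 0.4), PAVA pools the neighbors to their average, so the fitted curve rises with ability while tracking the data as closely as monotonicity allows.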