Publication Date
  In 2025: 0
  Since 2024: 3
  Since 2021 (last 5 years): 7
  Since 2016 (last 10 years): 14
  Since 2006 (last 20 years): 18
Author
  Lee, Young-Sun: 2
  Park, Yoon Soo: 2
  Arenson, Ethan A.: 1
  Bolt, Daniel M.: 1
  Chengyu Cui: 1
  Choi, Kyong Mi: 1
  Chun Wang: 1
  Daniel M. Bolt: 1
  Dirlik, Ezgi Mor: 1
  Fan, Xitao: 1
  George, Ann Cathrice: 1
Publication Type
  Journal Articles: 13
  Reports - Research: 12
  Dissertations/Theses -…: 3
  Reports - Evaluative: 2
  Reports - Descriptive: 1
Education Level
  Elementary Secondary Education: 13
  Elementary Education: 6
  Grade 8: 6
  Junior High Schools: 5
  Middle Schools: 5
  Secondary Education: 5
  Grade 4: 3
  Intermediate Grades: 2
  Grade 3: 1
  Grade 7: 1
Location
  United States: 3
  Singapore: 2
  South Korea: 2
  Asia: 1
  Germany: 1
  Hong Kong: 1
  Japan: 1
  Massachusetts: 1
  Minnesota: 1
  Taiwan: 1
Assessments and Surveys
  Trends in International…: 18
  Big Five Inventory: 1
Lawrence T. DeCarlo – Educational and Psychological Measurement, 2024
A psychological framework for different types of items commonly used with mixed-format exams is proposed. A choice model based on signal detection theory (SDT) is used for multiple-choice (MC) items, whereas an item response theory (IRT) model is used for open-ended (OE) items. The SDT and IRT models are shown to share a common conceptualization…
Descriptors: Test Format, Multiple Choice Tests, Item Response Theory, Models
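For readers unfamiliar with how the two model families differ, the sketch below contrasts a standard two-parameter logistic IRT probability with a simple signal-detection choice rule for a multiple-choice item. It is a minimal illustration, not DeCarlo's proposed model; the parameter values and the max-strength choice rule are assumptions chosen for clarity.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT: probability of success on an open-ended item."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def sdt_choice(theta, d_prime, n_options=4, n_sim=100_000, seed=0):
    """Monte Carlo probability of picking the keyed option of a multiple-choice
    item under a simple signal-detection choice rule: the keyed option has
    perceived strength theta * d_prime plus standard normal noise, each
    distractor has noise only, and the examinee chooses the strongest option."""
    rng = np.random.default_rng(seed)
    keyed = theta * d_prime + rng.standard_normal(n_sim)
    distractors = rng.standard_normal((n_sim, n_options - 1))
    return np.mean(keyed > distractors.max(axis=1))

theta = 1.0                             # examinee ability
print(irt_2pl(theta, a=1.2, b=0.3))     # IRT side (open-ended item)
print(sdt_choice(theta, d_prime=0.8))   # SDT side (multiple-choice item)
```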
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
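Before asking whether observed DIF reflects bias or model misspecification, one first has to flag it; a common screen is the Mantel-Haenszel odds ratio computed over matched strata. The sketch below is not the IRT-based analysis in the article: the group ability difference, the noisy matching variable, and the strata are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6000
group = rng.integers(0, 2, n)                 # 0 = reference, 1 = focal
theta = rng.standard_normal(n) - 0.3 * group  # ability difference (impact, not DIF)

# Studied item: identical parameters in both groups. Adding e.g. "- 0.4 * group"
# inside the logit would inject real uniform DIF.
p_item = 1.0 / (1.0 + np.exp(-(theta - 0.1)))
y = rng.binomial(1, p_item)

# Matching variable: coarse strata based on an error-prone ability proxy
# (in practice, rest-score strata).
stratum = np.digitize(theta + rng.normal(scale=0.7, size=n), [-1.0, 0.0, 1.0])

num = den = 0.0
for s in np.unique(stratum):
    m = stratum == s
    a = np.sum((y == 1) & (group == 0) & m)   # reference, correct
    b = np.sum((y == 0) & (group == 0) & m)   # reference, incorrect
    c = np.sum((y == 1) & (group == 1) & m)   # focal, correct
    d = np.sum((y == 0) & (group == 1) & m)   # focal, incorrect
    t = m.sum()
    num += a * d / t
    den += b * c / t

# Values near 1 indicate no DIF. With coarse, error-prone matching and a true
# ability difference, some departure from 1 can appear even though the item
# parameters are identical -- the kind of artifact question the article raises.
print("Mantel-Haenszel common odds ratio:", round(num / den, 3))
```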
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
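To make the class of model concrete, the snippet below evaluates category probabilities for one polytomous item under a two-dimensional graded response parameterization. This is only the likelihood building block, not the estimation algorithm the abstract refers to, and every parameter value is invented.

```python
import numpy as np

def mgrm_probs(theta, a, thresholds):
    """Category probabilities for one polytomous item under a multidimensional
    graded response parameterization: P(X >= k) = logistic(a'theta - b_k)."""
    z = theta @ a                                          # latent composite per examinee
    p_ge = 1.0 / (1.0 + np.exp(-(z[:, None] - thresholds)))  # P(X >= 1..K-1)
    p_ge = np.hstack([np.ones((len(z), 1)), p_ge, np.zeros((len(z), 1))])
    return p_ge[:, :-1] - p_ge[:, 1:]                      # P(X = 0..K-1)

theta = np.array([[0.5, -0.2], [1.0, 1.0]])   # two examinees, two latent dimensions
a = np.array([1.2, 0.7])                      # discrimination on each dimension
thresholds = np.array([-1.0, 0.0, 1.2])       # ordered thresholds for 4 categories
print(mgrm_probs(theta, a, thresholds).round(3))
```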
Ma, Wenchao; de la Torre, Jimmy – Journal of Educational and Behavioral Statistics, 2019
Solving a constructed-response item usually requires successfully performing a sequence of tasks. Each task could involve different attributes, and those required attributes may be "condensed" in various ways to produce the responses. The sequential generalized deterministic input noisy "and" gate model is a general cognitive…
Descriptors: Test Items, Cognitive Measurement, Models, Hypothesis Testing
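The sequential structure the abstract describes can be summarized in a few lines: the score on a constructed-response item is the number of consecutive tasks completed. The sketch below turns per-step success probabilities into category probabilities; how those step probabilities are produced (e.g., by a DINA- or G-DINA-type rule over required attributes) is left abstract, and the numbers are invented.

```python
import numpy as np

def sequential_probs(step_probs):
    """Category probabilities for a constructed-response item scored 0..K,
    where step_probs[j] is the probability of completing task j given that
    all earlier tasks were completed (a sequential / continuation-ratio
    structure, as in sequential cognitive diagnosis models)."""
    step_probs = np.asarray(step_probs, dtype=float)
    K = len(step_probs)
    probs = np.empty(K + 1)
    reach = 1.0                                  # probability of reaching the current step
    for k in range(K):
        probs[k] = reach * (1 - step_probs[k])   # fail at task k+1 -> score k
        reach *= step_probs[k]
    probs[K] = reach                             # completed every task -> full credit
    return probs

# Example: step success probabilities that a DINA-type rule might produce,
# e.g. 1 - slip when the required attributes are mastered, guess otherwise.
print(sequential_probs([0.9, 0.8, 0.2]))   # probabilities of scores 0..3
```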
Lin, Jing-Wen; Yu, Ruan-Ching – Asia Pacific Journal of Education, 2022
Modelling ability is one of the essential elements of the latest educational reforms, and Trends in International Mathematics and Science Study (TIMSS) is a curriculum-based assessment which allows educational systems worldwide to inspect the curricular influences. The aims of this study were to examine the role of modelling ability in the…
Descriptors: Grade 8, Educational Change, Cross Cultural Studies, Test Items

von Davier, Matthias; Tyack, Lillian; Khorramdel, Lale – Educational and Psychological Measurement, 2023
Automated scoring of free drawings or images as responses has yet to be used in large-scale assessments of student achievement. In this study, we propose artificial neural networks to classify these types of graphical responses from a TIMSS 2019 item. We compare the classification accuracy of convolutional and feed-forward approaches. Our…
Descriptors: Scoring, Networks, Artificial Intelligence, Elementary Secondary Education
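As a rough illustration of the kind of classifier involved, here is a small convolutional network (in PyTorch) that maps a 64x64 grayscale response image to score categories. The architecture, image size, and three-category output are assumptions made for the sketch, not the network used in the study.

```python
import torch
import torch.nn as nn

class DrawingClassifier(nn.Module):
    """A small convolutional network mapping a 64x64 grayscale drawing to
    score categories (e.g., correct / partially correct / incorrect)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

model = DrawingClassifier()
scores = model(torch.randn(8, 1, 64, 64))   # a batch of 8 response images
print(scores.shape)                          # torch.Size([8, 3])
```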
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items
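The tree idea is easiest to see without the mixture layer: the sketch below scores a four-category Likert item with one node for response direction (driven by the content trait) and one for extremity (driven by an extreme-response-style trait). The two-node tree, trait values, and thresholds are illustrative assumptions; the article's mixture of response processes is not modeled here.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def irtree_4pt(theta_content, theta_ers, b_dir=0.0, b_ext=0.0):
    """Category probabilities (1..4) for a Likert item under a two-node IRTree:
    node 1 picks the response direction (content trait), node 2 picks an
    extreme vs. moderate category (extreme-response-style trait)."""
    p_agree = logistic(theta_content - b_dir)
    p_ext = logistic(theta_ers - b_ext)
    return np.array([
        (1 - p_agree) * p_ext,        # 1 = strongly disagree
        (1 - p_agree) * (1 - p_ext),  # 2 = disagree
        p_agree * (1 - p_ext),        # 3 = agree
        p_agree * p_ext,              # 4 = strongly agree
    ])

print(irtree_4pt(theta_content=0.8, theta_ers=1.5).round(3))
```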
Yanan Feng – ProQuest LLC, 2021
This dissertation aims to investigate the effect size measures of differential item functioning (DIF) detection in the context of cognitive diagnostic models (CDMs). A variety of DIF detection techniques have been developed in the context of CDMs. However, most of the DIF detection procedures focus on the null hypothesis significance test. Few…
Descriptors: Effect Size, Item Response Theory, Cognitive Measurement, Models
Oluwalana, Olasumbo O. – ProQuest LLC, 2019
A primary purpose of cognitive diagnosis models (CDMs) is to classify examinees based on their attribute patterns. The Q-matrix (Tatsuoka, 1985), a common component of all CDMs, specifies the relationship between the set of required dichotomous attributes and the test items. Since a Q-matrix is often developed by content-knowledge experts and can…
Descriptors: Classification, Validity, Test Items, International Assessment
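A Q-matrix and the classification logic built on it fit in a few lines. The toy example below lists, for every possible attribute pattern, the ideal response vector implied by a conjunctive (DINA-type) condensation rule; the 4-item, 3-attribute Q-matrix is invented and stands in for an expert-specified one.

```python
import numpy as np
from itertools import product

# A toy Q-matrix: 4 items by 3 attributes (1 = the item requires the attribute).
Q = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 1, 1]])

# Under a conjunctive (DINA-type) rule, each of the 2^K attribute patterns
# implies an "ideal" response vector; classification compares observed
# responses against these ideal patterns.
patterns = np.array(list(product([0, 1], repeat=Q.shape[1])))
ideal = np.all(patterns[:, None, :] >= Q[None, :, :], axis=2).astype(int)
for alpha, eta in zip(patterns, ideal):
    print(alpha, "->", eta)
```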
Dirlik, Ezgi Mor – International Journal of Progressive Education, 2019
Item response theory (IRT) has many advantages over its predecessor, classical test theory (CTT), such as item parameters that do not change across samples and ability estimates that do not depend on the particular items administered. To obtain these advantages, however, several assumptions must be met: unidimensionality, normality, and local independence. However, it is not…
Descriptors: Comparative Analysis, Nonparametric Statistics, Item Response Theory, Models
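One way to see what a nonparametric IRT analysis buys is to trace an empirical item characteristic curve without assuming a logistic form. The sketch below bins simulated examinees by rest score and reports the observed proportion correct per bin, in the spirit of Mokken or kernel-smoothing approaches; the data-generating parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_items = 3000, 15
theta = rng.standard_normal(n)
b = rng.uniform(-1.5, 1.5, n_items)
X = rng.binomial(1, 1.0 / (1.0 + np.exp(-(theta[:, None] - b))))

# Nonparametric ICC for item 0: proportion correct within rest-score groups,
# with no logistic (or other parametric) form imposed.
rest = X[:, 1:].sum(axis=1)
for s in range(0, n_items, 3):
    m = rest == s
    if m.any():
        print(f"rest score {s:2d}: P(correct) = {X[m, 0].mean():.2f}  (n={m.sum()})")
```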
George, Ann Cathrice; Robitzsch, Alexander – Applied Measurement in Education, 2018
This article presents a new perspective on measuring gender differences in the large-scale assessment study Trends in International Mathematics and Science Study (TIMSS). The suggested empirical model is directly based on the theoretical competence model of the domain mathematics and thus includes the interaction between content and cognitive sub-competencies.…
Descriptors: Achievement Tests, Elementary Secondary Education, Mathematics Achievement, Mathematics Tests
Arenson, Ethan A.; Karabatsos, George – Grantee Submission, 2017
Item response models typically assume that the item characteristic (step) curves follow a logistic or normal cumulative distribution function, both of which are strictly monotone functions of person test ability. Such assumptions can be overly restrictive for real item response data. We propose a simple and more flexible Bayesian nonparametric IRT model…
Descriptors: Bayesian Statistics, Item Response Theory, Nonparametric Statistics, Models
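To see why a single logistic curve can be restrictive, compare it with a monotone item characteristic curve built as a weighted mixture of logistic CDFs, which can produce plateaus and asymmetric slopes. This is only a stand-in for the flexibility a Bayesian nonparametric specification allows, not the authors' model; the weights, locations, and scales are invented.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def flexible_icc(theta, weights, locations, scales):
    """A monotone but non-logistic ICC formed as a finite mixture of logistic
    CDFs -- a crude stand-in for a flexibly specified item response curve."""
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    comps = logistic(np.subtract.outer(theta, locations) / scales)
    return comps @ weights

theta = np.linspace(-3, 3, 7)
print(flexible_icc(theta, weights=[0.6, 0.4],
                   locations=[-1.0, 1.5], scales=[0.3, 0.5]).round(3))
print(logistic(theta).round(3))   # a single-logistic ICC, for comparison
```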
Choi, Kyong Mi; Lee, Young-Sun; Park, Yoon Soo – EURASIA Journal of Mathematics, Science & Technology Education, 2015
International trended assessments have long attempted to provide instructional information to educational researchers and classroom teachers. Studies have shown that traditional methods of item analysis have not provided specific information that can be directly applicable to improve student performance. To this end, cognitive diagnosis models…
Descriptors: International Assessment, Mathematics Tests, Grade 8, Models
Jin, Ying; Kang, Minsoo – Large-scale Assessments in Education, 2016
Background: The current study compared four differential item functioning (DIF) methods to examine how well they account for dual dependency (i.e., person and item clustering effects) simultaneously, using a simulation study; this question has not been studied sufficiently in the current DIF literature. The four methods compared are logistic…
Descriptors: Comparative Analysis, Test Bias, Simulation, Regression (Statistics)
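Of the methods compared, logistic regression DIF is the most compact to sketch: regress the item response on the matching score, group, and their interaction, then read uniform and non-uniform DIF off the group and interaction coefficients. The simulation below (using statsmodels) injects a small uniform DIF effect; sample sizes and effect sizes are arbitrary, and the clustering corrections the study examines are not included.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 4000
group = rng.integers(0, 2, n)                  # 0 = reference, 1 = focal
theta = rng.standard_normal(n)
match = theta + rng.normal(scale=0.5, size=n)  # observed matching score (proxy)

# Studied item with a small uniform DIF effect (-0.3 logit shift for the focal group).
p = 1.0 / (1.0 + np.exp(-(theta - 0.2 - 0.3 * group)))
y = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([match, group, match * group]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)   # [const, match, group, match*group]:
                    # group -> uniform DIF, match*group -> non-uniform DIF
```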
Oon, Pey-Tee; Fan, Xitao – International Journal of Science Education, 2017
Students' attitude towards science (SAS) is often a subject of investigation in science education research, and rating-scale surveys are commonly used to study it. The present study illustrates how Rasch analysis can be used to provide psychometric information about SAS rating scales. The analyses were conducted on a 20-item SAS scale used in an…
Descriptors: Item Response Theory, Psychometrics, Attitude Measures, Rating Scales
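For reference, the rating-scale flavor of the Rasch model that such analyses typically rely on reduces to a short expression: each category probability is proportional to the exponentiated sum of step terms. The values below (an item location and three thresholds for a four-category item) are invented, and this is the model formula only, not the fit and diagnostic output a full Rasch analysis reports.

```python
import numpy as np

def rsm_probs(theta, delta, taus):
    """Rasch rating scale model: P(X = k) is proportional to
    exp(sum_{j<=k} (theta - delta - tau_j)), with the k = 0 term fixed at exp(0)."""
    steps = theta - delta - np.asarray(taus, dtype=float)     # one term per threshold
    num = np.exp(np.concatenate([[0.0], np.cumsum(steps)]))
    return num / num.sum()

# A four-category attitude item with location 0.2 and centered thresholds.
print(rsm_probs(theta=0.5, delta=0.2, taus=[-1.0, 0.0, 1.0]).round(3))
```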