Publication Date
In 2025 (1)
Since 2024 (2)
Since 2021, last 5 years (2)
Since 2016, last 10 years (6)
Since 2006, last 20 years (18)

Descriptor
Models (20)
Test Items (20)
Item Response Theory (10)
Comparative Analysis (6)
Difficulty Level (4)
Foreign Countries (4)
Goodness of Fit (4)
Item Analysis (4)
Psychometrics (4)
Classification (3)
Computer Software (3)

Source
International Journal of… (20)

Author
Baghaei, Purya (2)
Ackerman, Terry (1)
Aryadoust, Vahid (1)
Bradshaw, Laine P. (1)
Bryant, Damon U. (1)
Cassady, Jerrell C. (1)
Chen, Shyh-Huei (1)
Childs, Ruth A. (1)
D'Agostino, Jerome (1)
De Boeck, Paul (1)
Fu, Yanyan (1)

Publication Type
Journal Articles (20)
Reports - Research (12)
Reports - Evaluative (6)
Book/Product Reviews (1)
Reports - Descriptive (1)

Education Level
Higher Education (4)
Elementary Education (3)
Grade 4 (3)
Postsecondary Education (3)
High Schools (2)
Secondary Education (2)
Elementary Secondary Education (1)
Grade 3 (1)
Intermediate Grades (1)

Location
United States (2)
Argentina (1)
Arizona (1)
Belgium (1)
Canada (1)
China (1)
Iran (1)
Malaysia (1)
Massachusetts (1)
Minnesota (1)
Philippines (1)

Assessments and Surveys
Graduate Record Examinations (1)
International English… (1)
National Assessment of… (1)
Program for International… (1)
Test of English for… (1)
Trends in International… (1)
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of Item Response Theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple-choice (MC) and mixed-format tests within the common-item nonequivalent groups design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
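The study above compares multidimensional linking methods for a bifactor model; those procedures are not reproduced here. As a minimal unidimensional analogue, the sketch below shows mean/mean linking of common-item parameters under a 2PL model, where the scale transformation is theta_old = A * theta_new + B:

```python
import numpy as np

def mean_mean_constants(a_old, b_old, a_new, b_new):
    """Mean/mean linking constants (A, B) placing the new form's theta
    scale onto the old form's scale, estimated from common-item 2PL
    discriminations (a) and difficulties (b) on each form."""
    a_old, b_old = np.asarray(a_old, float), np.asarray(b_old, float)
    a_new, b_new = np.asarray(a_new, float), np.asarray(b_new, float)
    A = a_new.mean() / a_old.mean()        # ratio of mean discriminations
    B = b_old.mean() - A * b_new.mean()    # aligns mean difficulties
    return A, B
```

Mean/sigma and characteristic-curve linking methods differ only in how A and B are estimated from the common items.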
Xiaowen Liu – International Journal of Testing, 2024
Differential item functioning (DIF) often arises from multiple sources. Within the context of multidimensional item response theory, this study examined DIF items with varying secondary dimensions using the three DIF methods: SIBTEST, Mantel-Haenszel, and logistic regression. The effect of the number of secondary dimensions on DIF detection rates…
Descriptors: Item Analysis, Test Items, Item Response Theory, Correlation
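Of the three DIF methods the abstract names, the Mantel-Haenszel procedure is the simplest to sketch: responses to the studied item are stratified by total score, and a common odds ratio is pooled across strata. This is an illustrative implementation, not the study's own code:

```python
def mantel_haenszel_or(tables):
    """tables: iterable of (A, B, C, D) counts per total-score stratum,
    where A/B = reference group correct/incorrect and C/D = focal group
    correct/incorrect on the studied item. Returns the Mantel-Haenszel
    common odds-ratio estimate; values far from 1.0 flag uniform DIF."""
    num = sum(A * D / (A + B + C + D) for A, B, C, D in tables)
    den = sum(B * C / (A + B + C + D) for A, B, C, D in tables)
    return num / den
```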
Fu, Yanyan; Strachan, Tyler; Ip, Edward H.; Willse, John T.; Chen, Shyh-Huei; Ackerman, Terry – International Journal of Testing, 2020
This research examined correlation estimates between latent abilities when using the two-dimensional and three-dimensional compensatory and noncompensatory item response theory models. Simulation study results showed that the recovery of the latent correlation was best when the test contained 100% of simple structure items for all models and…
Descriptors: Item Response Theory, Models, Test Items, Simulation
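The compensatory/noncompensatory distinction in the abstract can be made concrete with the two response functions. In the compensatory model a single logit sums across dimensions, so strength on one dimension can offset weakness on another; in the noncompensatory model per-dimension probabilities multiply, so every dimension must be adequate. A minimal sketch (parameters are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_compensatory(thetas, a, d=0.0):
    """Compensatory MIRT: P = sigmoid(sum_k a_k * theta_k + d)."""
    return sigmoid(sum(ak * tk for ak, tk in zip(a, thetas)) + d)

def p_noncompensatory(thetas, a, b):
    """Noncompensatory MIRT: P = prod_k sigmoid(a_k * (theta_k - b_k))."""
    p = 1.0
    for ak, bk, tk in zip(a, b, thetas):
        p *= sigmoid(ak * (tk - bk))
    return p
```

With thetas = (3, -3), the compensatory probability is 0.5 (the dimensions cancel), while the noncompensatory probability is near zero because the weak dimension caps performance.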
Kim, Kyung Yong; Lim, Euijin; Lee, Won-Chan – International Journal of Testing, 2019
For passage-based tests, items that belong to a common passage often violate the local independence assumption of unidimensional item response theory (UIRT). In this case, ignoring local item dependence (LID) and estimating item parameters using a UIRT model could be problematic because doing so might result in inaccurate parameter estimates,…
Descriptors: Item Response Theory, Equated Scores, Test Items, Models
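A common screen for the local item dependence the abstract describes is Yen's Q3 statistic: after removing the modeled ability effect, residuals for items sharing a passage remain correlated. The paper's equating procedures are not reproduced here; this is only the diagnostic:

```python
import numpy as np

def q3(responses, expected):
    """responses, expected: persons x items arrays of observed 0/1 scores
    and model-expected probabilities. Returns the item-by-item residual
    correlation matrix; large off-diagonal entries suggest local item
    dependence (e.g., among items tied to a common passage)."""
    resid = np.asarray(responses, float) - np.asarray(expected, float)
    return np.corrcoef(resid, rowvar=False)  # columns = items
```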
Ravand, Hamdollah; Baghaei, Purya – International Journal of Testing, 2020
More than three decades after their introduction, diagnostic classification models (DCMs) do not seem to have been implemented in educational systems for the purposes for which they were devised. Most DCM research is either methodological, aimed at model development and refinement, or retrofits existing nondiagnostic tests and, in the latter case, basically…
Descriptors: Classification, Models, Diagnostic Tests, Test Construction
Wang, Ting; Li, Min; Thummaphan, Phonraphee; Ruiz-Primo, Maria Araceli – International Journal of Testing, 2017
Contextualized items have been widely used in science testing. Despite the common use of item contexts, the influence of a chosen context on the reliability and validity of score inferences remains unclear. We focused on sequential cues of contextual information, referring to the order of events or descriptions presented in item contexts. We…
Descriptors: Science Tests, Cues, Difficulty Level, Test Items
Jurich, Daniel P.; Bradshaw, Laine P. – International Journal of Testing, 2014
The assessment of higher-education student learning outcomes is an important component in understanding the strengths and weaknesses of academic and general education programs. This study illustrates the application of diagnostic classification models, a burgeoning set of statistical models, in assessing student learning outcomes. To facilitate…
Descriptors: College Outcomes Assessment, Classification, Statistical Analysis, Models
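The diagnostic classification models applied in the study above come in several forms; the DINA ("deterministic inputs, noisy AND") model is the canonical simple case and is sketched below for illustration only (it is not necessarily the authors' chosen model):

```python
def dina_p_correct(alpha, q, slip, guess):
    """DINA model response probability.
    alpha: examinee's attribute-mastery vector (0/1).
    q:     item's required-attribute vector (0/1) from the Q-matrix.
    An examinee who has mastered every required attribute answers
    correctly unless they slip; otherwise they can only guess."""
    mastered_all = all(a >= qk for a, qk in zip(alpha, q))
    return 1.0 - slip if mastered_all else guess
```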
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
Descriptors: Test Format, Item Response Theory, Models, Test Items
Ong, Yoke Mooi; Williams, Julian; Lamprianou, Iasonas – International Journal of Testing, 2015
The purpose of this article is to explore crossing differential item functioning (DIF) in a test drawn from a national examination of mathematics for 11-year-old pupils in England. An empirical dataset was analyzed to explore DIF by gender in a mathematics assessment. A two-step process involving the logistic regression (LR) procedure for…
Descriptors: Mathematics Tests, Gender Differences, Test Bias, Test Items
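Crossing DIF, the focus of the study above, means the two groups' item characteristic curves intersect, so the item favors different groups at different ability levels; in the logistic regression procedure it surfaces as a significant ability-by-group interaction. A toy illustration with made-up 2PL parameters (equal difficulty, unequal discrimination, so the curves cross at theta = 0):

```python
import math

def p_2pl(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical groups: same difficulty, different discrimination.
def p_reference(theta):
    return p_2pl(theta, a=1.0, b=0.0)

def p_focal(theta):
    return p_2pl(theta, a=2.0, b=0.0)
```

Below theta = 0 the item favors the reference group; above it, the focal group, which is exactly the pattern uniform-DIF methods can miss.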
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
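An "item model" in the sense of the abstract is a template whose slots are filled to generate many isomorphic items. Real automatic item generation systems layer cognitive and psychometric constraints on top; this toy template is purely illustrative:

```python
import random

def generate_item(rng):
    """Instantiate one item from a hypothetical template: slots a and b
    are sampled, and the answer key is computed from the same values."""
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    stem = f"A box holds {a} rows of {b} pencils. How many pencils in all?"
    return stem, a * b

rng = random.Random(0)          # seeded for reproducible generation
items = [generate_item(rng) for _ in range(3)]
```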
Lee, Young-Sun; Park, Yoon Soo; Taylan, Didem – International Journal of Testing, 2011
Studies of international mathematics achievement such as the Trends in International Mathematics and Science Study (TIMSS) have employed classical test theory and item response theory to rank individuals along a latent ability continuum. Although these approaches have provided insights into comparisons between countries, they have yet to examine how specific…
Descriptors: Mathematics Achievement, Achievement Tests, Models, Cognitive Measurement
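The classical test theory side of the comparison above typically starts from two item statistics: proportion correct (difficulty) and the item-total point-biserial correlation (discrimination). A minimal sketch on a made-up 0/1 score matrix:

```python
import numpy as np

def item_stats(scores):
    """scores: persons x items 0/1 matrix. Returns (p_values, point-
    biserial correlations of each item with the total score)."""
    scores = np.asarray(scores, float)
    p = scores.mean(axis=0)                 # proportion correct per item
    total = scores.sum(axis=1)              # each person's total score
    pbis = np.array([np.corrcoef(scores[:, j], total)[0, 1]
                     for j in range(scores.shape[1])])
    return p, pbis
```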
Svetina, Dubravka; Gorin, Joanna S.; Tatsuoka, Kikumi K. – International Journal of Testing, 2011
As a construct definition, the current study develops a cognitive model describing the knowledge, skills, and abilities measured by critical reading test items on a high-stakes assessment used for selection decisions in the United States. Additionally, in order to establish generalizability of the construct meaning to other similarly structured…
Descriptors: Reading Tests, Reading Comprehension, Critical Reading, Test Items
Sinharay, Sandip; Johnson, Matthew S. – International Journal of Testing, 2008
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996, 2002). They have the potential to produce large numbers of high-quality items at reduced cost. This article introduces data from an…
Descriptors: College Entrance Examinations, Case Studies, Test Items, Models
D'Agostino, Jerome; Karpinski, Aryn; Welsh, Megan – International Journal of Testing, 2011
After a test is developed, most content validation analyses shift from ascertaining domain definition to studying domain representation and relevance because the domain is assumed to be set once a test exists. We present an approach that allows for the examination of alternative domain structures based on extant test items. In our example based on…
Descriptors: Expertise, Test Items, Mathematics Tests, Factor Analysis
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when the data fit the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
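The ISI in the abstract above compares item information functions under the Rasch model; the exact similarity index is not given here, so only the information function itself is sketched. For a Rasch item, I(theta) = P(theta) * (1 - P(theta)), which peaks at 0.25 when theta equals the item difficulty:

```python
import math

def rasch_info(theta, b):
    """Rasch item information at ability theta for item difficulty b:
    I(theta) = P * (1 - P), maximized (0.25) at theta == b."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)
```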