Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 7
Since 2006 (last 20 years): 32
Descriptor
Simulation: 32
Test Items: 32
Item Response Theory: 23
Sample Size: 10
Test Length: 10
Comparative Analysis: 9
Models: 9
Educational Testing: 7
Measurement: 7
Test Bias: 7
Error of Measurement: 6
Source
ProQuest LLC: 32
Author
Carroll, Ian A.: 1
Carvajal-Espinoza, Jorge E.: 1
Chen, Tzu-An: 1
Deng, Nina: 1
Sauder, Derek: 1
Diakow, Ronli Phyllis: 1
Esen, Ayse: 1
Fager, Meghan L.: 1
He, Yong: 1
Keiffer, Elizabeth Ann: 1
Kroopnick, Marc Howard: 1
Publication Type
Dissertations/Theses -…: 32
Education Level
Elementary Secondary Education: 2
Elementary Education: 1
Assessments and Surveys
Advanced Placement…: 1
Trends in International…: 1
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Testlets are very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing in which test forms are created in real time, tailored to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing
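The entry above turns on adaptive item selection. As an illustrative sketch only (not this dissertation's procedure, and with hypothetical function names), maximum-information item selection under a Rasch model can be written as:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, item_bank[i]))
```

Under the Rasch model, information peaks where item difficulty matches the current ability estimate, which is why CAT tends to administer items near the examinee's provisional theta.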
Derek Sauder – ProQuest LLC, 2020
The Rasch model is commonly used to calibrate multiple choice items. However, the sample sizes needed to estimate the Rasch model can be difficult to attain (e.g., consider a small testing company trying to pretest new items). With small sample sizes, auxiliary information besides the item responses may improve estimation of the item parameters…
Descriptors: Item Response Theory, Sample Size, Computation, Test Length
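For orientation, Rasch item calibration reduces to maximizing a simple likelihood. The sketch below (hypothetical names, abilities treated as known for simplicity — not the auxiliary-information approach the abstract alludes to) estimates a single item's difficulty by Newton-Raphson:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_difficulty(thetas, responses, iters=50):
    """ML estimate of one Rasch item difficulty, abilities treated as known.
    Newton-Raphson on the log-likelihood in b."""
    b = 0.0
    for _ in range(iters):
        ps = [rasch_p(t, b) for t in thetas]
        grad = sum(p - x for p, x in zip(ps, responses))  # dlogL/db
        hess = -sum(p * (1.0 - p) for p in ps)            # d2logL/db2
        b -= grad / hess
    return b
```

With small samples this estimate is noisy (and diverges if all responses are correct or all incorrect), which is exactly the setting where auxiliary information can help.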
Fager, Meghan L. – ProQuest LLC, 2019
Recent research in multidimensional item response theory has introduced within-item interaction effects between latent dimensions in the prediction of item responses. The objective of this study was to extend this research to bifactor models to include an interaction effect between the general and specific latent variables measured by an item.…
Descriptors: Test Items, Item Response Theory, Factor Analysis, Simulation
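The interaction idea above can be made concrete with a small sketch (hypothetical names; a 2PL-type response function, not necessarily the study's exact parameterization): the general-by-specific product term is simply added to the linear predictor.

```python
import math

def bifactor_interaction_p(theta_g, theta_s, a_g, a_s, a_int, d):
    """Bifactor item response probability with a general-by-specific
    interaction term (a_int) in the linear predictor."""
    z = a_g * theta_g + a_s * theta_s + a_int * theta_g * theta_s + d
    return 1.0 / (1.0 + math.exp(-z))
```

Setting a_int to zero recovers the ordinary bifactor item model, so the interaction effect is testable as a nested comparison.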
Esen, Ayse – ProQuest LLC, 2017
Detecting differential item functioning (DIF) is an early and critical step in investigating possible bias between groups (e.g., males vs. females). Many early DIF studies focused only on two-group comparisons. However, there are many cases where more than two groups exist: cross-cultural studies are administered in many countries, and any…
Descriptors: Test Bias, Cross Cultural Studies, Ethnicity, Error Patterns
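As background for the two-group case the abstract describes, one standard DIF statistic is the Mantel-Haenszel common odds ratio across score strata. A minimal sketch (illustrative, not this dissertation's multi-group method):

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across score strata.
    Each stratum: (ref_correct, ref_wrong, focal_correct, focal_wrong).
    Values near 1.0 indicate no DIF for the item."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den
```

Stratifying on total score matches examinees of comparable ability before comparing groups, so a ratio far from 1.0 flags group differences beyond ability.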
Carroll, Ian A. – ProQuest LLC, 2017
Relative to adaptive testing itself, item exposure control is a nascent concept, emerging only in the last two to three decades as a practical issue in high-stakes computerized adaptive tests. This study aims to implement a new strategy in item exposure control by incorporating the standard error of the ability estimate into…
Descriptors: Test Items, Computer Assisted Testing, Selection, Adaptive Testing
Wang, Keyin – ProQuest LLC, 2017
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
Descriptors: Comparative Analysis, Computer Assisted Testing, Adaptive Testing, Test Items
Steinkamp, Susan Christa – ProQuest LLC, 2017
For test scores that rely on accurate estimation of ability via an IRT model, use and interpretation depend on the assumption that the IRT model fits the data. Examinees who do not put forth full effort in answering test questions, have prior knowledge of test content, or do not approach a test with the intent of answering…
Descriptors: Test Items, Item Response Theory, Scores, Test Wiseness
Lamsal, Sunil – ProQuest LLC, 2015
Different estimation procedures have been developed for the unidimensional three-parameter item response theory (IRT) model. These techniques include marginal maximum likelihood estimation, fully Bayesian estimation using Markov chain Monte Carlo simulation techniques, and Metropolis-Hastings Robbins-Monro estimation. With each…
Descriptors: Item Response Theory, Monte Carlo Methods, Maximum Likelihood Statistics, Markov Processes
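The model all of those estimators target is the 3PL item characteristic curve. A small sketch (hypothetical names; simulation only, no estimation) of the curve and a response generator of the kind such comparison studies rely on:

```python
import math
import random

def p3pl(theta, a, b, c):
    """Three-parameter logistic ICC: guessing floor c, slope a, difficulty b."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def simulate_responses(thetas, items, seed=0):
    """Simulate a 0/1 response matrix from the 3PL model.
    items is a list of (a, b, c) tuples."""
    rng = random.Random(seed)
    return [[int(rng.random() < p3pl(t, *it)) for it in items] for t in thetas]
```

The lower asymptote c is what makes 3PL estimation hard: it is weakly identified from data, which is one reason the fully Bayesian and MH-RM approaches are attractive.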
Shin, Hyo Jeong – ProQuest LLC, 2015
This dissertation comprises three papers that propose and apply psychometric models to deal with complexities and challenges in large-scale assessments, focusing on modeling rater effects and complex learning progressions. In particular, the three papers investigate extensions and applications of multilevel and multidimensional item response…
Descriptors: Item Response Theory, Psychometrics, Models, Measurement
Zheng, Chunmei – ProQuest LLC, 2013
Educational and psychological constructs are normally multifaceted: the measured construct is defined and measured by a set of related subdomains. A bifactor model can accurately describe such data with both the measured construct and the related subdomains. However, a limitation of the bifactor model is the orthogonality…
Descriptors: Educational Testing, Measurement Techniques, Test Items, Models
Xiang, Rui – ProQuest LLC, 2013
A key issue in cognitive diagnostic models (CDMs) is the correct identification of the Q-matrix, which indicates the relationship between attributes and test items. Previous CDMs typically assumed a known Q-matrix provided by domain experts, such as those who developed the questions. However, misspecifications of the Q-matrix have been discovered in the past…
Descriptors: Diagnostic Tests, Cognitive Processes, Matrices, Test Items
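To make the Q-matrix's role concrete, here is a minimal sketch of the DINA model, one common CDM (illustrative only; the dissertation's model may differ, and the names are hypothetical):

```python
def dina_p(alpha, q_row, slip, guess):
    """DINA model: P(correct) for an examinee with attribute-mastery
    vector alpha on an item whose Q-matrix row is q_row.
    Mastering every required attribute gives 1 - slip; otherwise guess."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1.0 - slip) if eta else guess
```

Because the response probability hinges entirely on which attributes q_row declares as required, a misspecified Q-matrix row directly distorts classification — the problem the study addresses.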
He, Yong – ProQuest LLC, 2013
Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods extensively focus on detection and elimination of outlying common items, which…
Descriptors: Test Items, Regression (Statistics), Simulation, Comparative Analysis
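For context on the equating design mentioned above, one standard way common items place two forms on a shared IRT scale is mean-sigma linking. A minimal sketch (illustrative; not the regression-based method this dissertation proposes):

```python
import statistics

def mean_sigma_link(b_old, b_new):
    """Mean-sigma linking constants (A, B) mapping new-form item
    difficulties onto the old form's scale via b_old ≈ A * b_new + B,
    computed from the common items' difficulty estimates."""
    A = statistics.pstdev(b_old) / statistics.pstdev(b_new)
    B = statistics.mean(b_old) - A * statistics.mean(b_new)
    return A, B
```

Outlying or inconsistent common-item estimates pull A and B directly, which is why detection and treatment of such items matters for equated scores.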
Sen, Rohini – ProQuest LLC, 2012
In the last five decades, research on the uses of response time has extended into the field of psychometrics (Schnipke & Scrams, 1999; van der Linden, 2006; van der Linden, 2007), where interest has centered on the usefulness of response-time information in item calibration and person measurement within an item response theory framework…
Descriptors: Structural Equation Models, Reaction Time, Item Response Theory, Computation
MacDonald, George T. – ProQuest LLC, 2014
A simulation study was conducted to explore the performance of the linear logistic test model (LLTM) when the relationships between items and cognitive components were misspecified. Factors manipulated included percent of misspecification (0%, 1%, 5%, 10%, and 15%), form of misspecification (under-specification, balanced misspecification, and…
Descriptors: Simulation, Item Response Theory, Models, Test Items
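The LLTM's core idea is compact enough to state in code: each item's Rasch difficulty is decomposed into a weighted sum of cognitive-component effects, with the item-by-component matrix saying which components apply. A sketch (hypothetical names):

```python
def lltm_difficulty(q_row, etas):
    """LLTM: item difficulty as a weighted sum of cognitive-component
    effects etas, with q_row indicating the components the item requires."""
    return sum(q * eta for q, eta in zip(q_row, etas))
```

Misspecifying q_row — the factor manipulated in the simulation above — changes which etas contribute, biasing the reconstructed difficulties.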
Topczewski, Anna Marie – ProQuest LLC, 2013
Developmental score scales represent the performance of students along a continuum: as students learn more, they move higher along that continuum. Unidimensional item response theory (UIRT) vertical scaling has become a commonly used method for creating developmental score scales. Research has shown that UIRT vertical scaling methods can be…
Descriptors: Item Response Theory, Scaling, Scores, Student Development