Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
Arikan, Serkan; Aybek, Eren Can – Educational Measurement: Issues and Practice, 2022
Many scholars compared various item discrimination indices in real or simulated data. Item discrimination indices, such as item-total correlation, item-rest correlation, and IRT item discrimination parameter, provide information about individual differences among all participants. However, there are tests that aim to select a very limited number…
Descriptors: Monte Carlo Methods, Item Analysis, Correlation, Individual Differences
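The discrimination indices named in this abstract can be illustrated concretely. The sketch below (illustrative data and function names, not from the study) computes the item-total correlation and the corrected item-rest correlation for a small 0/1 response matrix; the difference between the two is that the rest score excludes the item itself, removing the item's spurious correlation with its own contribution to the total.

```python
import numpy as np

# Hypothetical 0/1 response matrix: 6 examinees x 4 items (illustrative only).
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])

def item_total_correlation(x, item):
    """Pearson correlation between an item's scores and the total score."""
    return np.corrcoef(x[:, item], x.sum(axis=1))[0, 1]

def item_rest_correlation(x, item):
    """Correlation with the total score excluding the item itself."""
    rest = x.sum(axis=1) - x[:, item]
    return np.corrcoef(x[:, item], rest)[0, 1]
```

Because the item contributes to its own total, the item-total correlation is typically inflated relative to the item-rest correlation, which is why the corrected form is often preferred for item analysis.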
Bazaldua, Diego A. Luna; Lee, Young-Sun; Keller, Bryan; Fellers, Lauren – Asia Pacific Education Review, 2017
The performance of various classical test theory (CTT) item discrimination estimators has been compared in the literature using both empirical and simulated data, resulting in mixed results regarding the preference of some discrimination estimators over others. This study analyzes the performance of various item discrimination estimators in CTT:…
Descriptors: Test Items, Monte Carlo Methods, Item Response Theory, Correlation
Wu, Yi-Fang – ProQuest LLC, 2015
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…
Descriptors: Item Response Theory, Test Items, Accuracy, Computation
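The three-parameter logistic (3PL) model referenced here has a standard item response function. A minimal sketch (parameter values illustrative; the 1.7 scaling constant is the conventional normal-ogive approximation factor):

```python
import math

def p_3pl(theta, a, b, c):
    """3PL item response function:
    P(theta) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))
    a: discrimination, b: difficulty, c: pseudo-guessing (lower asymptote).
    """
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))
```

At theta = b the exponent is zero, so the probability is exactly midway between the guessing floor c and 1, i.e. c + (1 - c)/2; as theta decreases the curve approaches c rather than zero, which is the model's way of accounting for guessing on multiple-choice items.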
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context is one that has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
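The "common formulae for converting factor" loadings mentioned in the abstract presumably refer to the standard mapping between a factor model fit to tetrachoric correlations and normal-ogive IRT parameters. A hedged sketch of that conversion, assuming standardized loadings (lambda) and thresholds (tau):

```python
import math

def loading_to_discrimination(lam):
    """Normal-ogive discrimination from a standardized factor loading:
    a = lambda / sqrt(1 - lambda^2). Assumes |lambda| < 1.
    """
    return lam / math.sqrt(1.0 - lam * lam)

def threshold_to_difficulty(tau, lam):
    """Normal-ogive difficulty from an item threshold and loading:
    b = tau / lambda.
    """
    return tau / lam
```

For example, a loading of 0.6 maps to a discrimination of 0.6 / sqrt(1 - 0.36) = 0.75 in the normal-ogive metric; multiplying by 1.7 would place it on the logistic metric.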
Samejima, Fumiko – 1986
Item analysis data fitting the normal ogive model were simulated in order to investigate the problems encountered when applying the three-parameter logistic model. Binary item tests containing 10 and 35 items were created, and Monte Carlo methods simulated the responses of 2,000 and 500 examinees. Item parameters were obtained using Logist 5.…
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Item Analysis
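The kind of Monte Carlo data generation described here — simulating binary responses under the normal ogive model — can be sketched as follows (item parameters and sample sizes are illustrative, not those of the study):

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_normal_ogive(theta, a, b):
    """Normal ogive item response function: P(theta) = Phi(a * (theta - b))."""
    return normal_cdf(a * (theta - b))

def simulate_responses(n_examinees, items, seed=0):
    """Draw abilities from N(0, 1) and sample a Bernoulli response per item."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_examinees):
        theta = rng.gauss(0.0, 1.0)
        data.append([1 if rng.random() < p_normal_ogive(theta, a, b) else 0
                     for (a, b) in items])
    return data

# Illustrative (a, b) pairs: an easy, a medium, and a hard item.
items = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.7)]
data = simulate_responses(500, items)
```

With simulated data like this, estimated item parameters (e.g., from a 3PL calibration) can be compared against the known generating values, which is the basic logic of the recovery study described in the abstract.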
Hisama, Kay K.; And Others – 1977
The optimal test length, using predictive validity as a criterion, depends on two major factors: appropriate item difficulty rather than the total number of items, and the method used to score the test. These conclusions were reached when responses to a 100-item multi-level test of reading comprehension from 136 non-native speakers of…
Descriptors: College Students, Difficulty Level, English (Second Language), Foreign Students