Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 10 |
Since 2006 (last 20 years) | 24 |
Descriptor
Models | 31 |
Scoring | 31 |
Simulation | 23 |
Item Response Theory | 14 |
Test Items | 13 |
Computer Simulation | 9 |
Scores | 8 |
Comparative Analysis | 7 |
Computer Assisted Testing | 7 |
Evaluation Methods | 7 |
Performance Based Assessment | 6 |
Author
Barnes, Tiffany, Ed. | 2 |
Feng, Mingyu, Ed. | 2 |
Aybek, Eren Can | 1 |
Bejar, Isaac I. | 1 |
Bradshaw, Laine P. | 1 |
Breyer, F. Jay | 1 |
Burket, George | 1 |
Cai, Li | 1 |
Carlson, James E. | 1 |
Chen, Li-Sue | 1 |
Chi, Min, Ed. | 1 |
Audience
Researchers | 1 |
Assessments and Surveys
National Assessment of… | 1 |
Program for International… | 1 |
Torrance Tests of Creative… | 1 |
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item and a test taker's attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
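The parametric baseline that nonparametric approaches like OS are compared against can be sketched with Samejima's graded response model, the standard IRT model for polytomously scored items. The parameter values below are illustrative, not taken from the study:

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Samejima's graded response model: P(score = k | theta).

    theta: latent ability; a: discrimination; thresholds: ordered b_k.
    """
    # Cumulative probabilities P(score >= k) for k = 1..K-1
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(thresholds))))
    # Category probabilities are differences of adjacent cumulatives
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower

# A hypothetical 4-category item with three ordered thresholds
probs = grm_category_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0])
```

The differencing of adjacent cumulative curves is what guarantees the category probabilities are nonnegative and sum to one, provided the thresholds are ordered.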
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
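The expected/observed distinction can be illustrated for the ability parameter of the two-parameter logistic (2PL) model. In the sketch below (item parameters are made up), expected information is the familiar sum of a²P(1−P), and observed information is approximated as the negative second derivative of the log-likelihood; for the 2PL ability parameter the two coincide for any response pattern, which is part of why the choice only matters for more complex models and for item parameters:

```python
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def expected_info(theta, a, b):
    # Expected (Fisher) information for theta under the 2PL
    p = p2pl(theta, a, b)
    return np.sum(a**2 * p * (1 - p))

def observed_info(theta, a, b, y, h=1e-4):
    # Observed information: negative second derivative of the
    # log-likelihood, approximated by a central difference
    def ll(t):
        p = p2pl(t, a, b)
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return -(ll(theta + h) - 2 * ll(theta) + ll(theta - h)) / h**2

a = np.array([1.0, 1.5, 0.8])   # hypothetical discriminations
b = np.array([-0.5, 0.0, 1.0])  # hypothetical difficulties
y = np.array([1, 0, 1])         # one observed response pattern
print(expected_info(0.3, a, b), observed_info(0.3, a, b, y))
```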
James Soland; Megan Kuhfeld – Annenberg Institute for School Reform at Brown University, 2020
Survey respondents use different response styles when they use the categories of the Likert scale differently despite having the same true score on the construct of interest. For example, respondents may be more likely to use the extremes of the response scale independent of their true score. Research already shows that differing response styles…
Descriptors: Social Emotional Learning, Scores, Likert Scales, Surveys
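The core idea, that identical true scores can yield different observed Likert responses, is easy to simulate. A toy sketch, with a made-up `spread` parameter standing in for response style:

```python
import numpy as np

rng = np.random.default_rng(42)

def likert(latent, spread):
    # Map a continuous latent response to a 1-5 Likert category.
    # spread > 1 mimics an extreme response style: the same latent
    # value is pushed toward the endpoints of the scale.
    z = 3.0 + spread * latent          # center the scale at category 3
    return np.clip(np.rint(z), 1, 5).astype(int)

true_score = rng.normal(0.0, 1.0, size=1000)   # same construct for both styles
midpoint_user = likert(true_score, spread=0.8)
extreme_user = likert(true_score, spread=1.6)
# Extreme styles inflate the variance of observed scores despite
# identical true scores, distorting scale means and comparisons
print(midpoint_user.var(), extreme_user.var())
```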
Bradshaw, Laine P.; Madison, Matthew J. – International Journal of Testing, 2016
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic…
Descriptors: Classification, Models, Simulation, Psychometrics
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce the simulation and live CAT software to the related researchers. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
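The basic CAT loop the article describes — select the most informative item at the current ability estimate, observe a response, re-estimate ability — can be sketched generically. This is a minimal dichotomous (2PL) illustration with a hypothetical item bank and grid-search maximum likelihood, not the algorithm of any particular CAT package:

```python
import numpy as np

def p2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info(theta, a, b):
    # Fisher information of a 2PL item at ability theta
    p = p2pl(theta, a, b)
    return a**2 * p * (1 - p)

rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 50)        # hypothetical item bank
b = rng.uniform(-2.0, 2.0, 50)
true_theta = 0.7
grid = np.linspace(-4, 4, 161)

administered, responses = [], []
theta_hat = 0.0
for _ in range(10):
    # 1. pick the unused item with maximum information at theta_hat
    available = [i for i in range(50) if i not in administered]
    item = max(available, key=lambda i: info(theta_hat, a[i], b[i]))
    # 2. simulate the examinee's response from the true ability
    y = rng.random() < p2pl(true_theta, a[item], b[item])
    administered.append(item)
    responses.append(int(y))
    # 3. re-estimate ability by grid-search maximum likelihood
    idx = np.array(administered)
    P = p2pl(grid[:, None], a[idx], b[idx])
    ll = np.sum(np.where(np.array(responses), np.log(P), np.log(1 - P)), axis=1)
    theta_hat = grid[np.argmax(ll)]

print(theta_hat)
```

Real CAT systems add exposure control, content balancing, and stopping rules on top of this selection/estimation cycle.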
Yang, Ji Seung; Zheng, Xiaying – Journal of Educational and Behavioral Statistics, 2018
The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (IRT) package that is available from Stata v.14, 2015. Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment, we review the IRT package from…
Descriptors: Item Response Theory, Item Analysis, Computer Software, Statistical Analysis
Falk, Carl F.; Cai, Li – Grantee Submission, 2015
In this paper, we present a flexible full-information approach to modeling multiple user-defined response styles across multiple constructs of interest. The model is based on a novel parameterization of the multidimensional nominal response model that separates estimation of overall item slopes from the scoring functions (indicating the order of…
Descriptors: Response Style (Tests), Item Response Theory, Outcome Measures, Models
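The parameterization idea — an overall item slope multiplied by a category scoring function — can be sketched for a single item of the nominal response model. All values below are hypothetical:

```python
import numpy as np

def nominal_response_probs(theta, slope, scoring, intercepts):
    """Nominal response model with the slope/scoring-function split:
    each category's effective slope is overall slope * scoring value."""
    z = slope * np.asarray(scoring) * theta + np.asarray(intercepts)
    ez = np.exp(z - z.max())          # numerically stable softmax
    return ez / ez.sum()

# Hypothetical 4-category item with an ordered scoring function 0..3;
# holding the scoring function fixed makes the overall slope comparable
# across items, which is the point of separating the two
probs = nominal_response_probs(theta=1.0, slope=1.1,
                               scoring=[0, 1, 2, 3],
                               intercepts=[0.0, 0.3, 0.1, -0.4])
```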
Gautam, Dipesh; Swiecki, Zachari; Shaffer, David W.; Graesser, Arthur C.; Rus, Vasile – International Educational Data Mining Society, 2017
Virtual internships are online simulations of professional practice where students play the role of interns at a fictional company. During virtual internships, participants complete activities and then submit write-ups in the form of short-answer digital notebook entries. Prior work used classifiers trained on participant data to automatically…
Descriptors: Computer Simulation, Internship Programs, Semantics, College Students
Steiner, Peter M.; Kim, Yongnam – Society for Research on Educational Effectiveness, 2014
In contrast to randomized experiments, the estimation of unbiased treatment effects from observational data requires an analysis that conditions on all confounding covariates. Conditioning on covariates can be done via standard parametric regression techniques or nonparametric matching like propensity score (PS) matching. The regression or…
Descriptors: Observation, Research Methodology, Test Bias, Regression (Statistics)
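The propensity-score matching step the abstract contrasts with regression adjustment can be illustrated with a toy data set: estimate each unit's probability of treatment from the confounder, then pair each treated unit with the nearest control on that score. The data-generating values and the hand-rolled logistic fit are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)                                     # single confounder
t = (rng.random(n) < 1 / (1 + np.exp(-x))).astype(int)     # selection on x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)                 # true effect = 2

# 1. estimate propensity scores with a hand-rolled logistic regression
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - t) * x)   # gradient step on the slope
    b -= 0.1 * np.mean(p - t)         # gradient step on the intercept
ps = 1 / (1 + np.exp(-(w * x + b)))

# 2. nearest-neighbor matching on the propensity score (with replacement)
treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
matches = control[np.abs(ps[treated][:, None] - ps[control]).argmin(axis=1)]

# 3. ATT: mean outcome difference between treated units and their matches
att = np.mean(y[treated] - y[matches])
print(att)
```

A naive difference in means would be biased upward here because treated units have systematically higher x; matching on the propensity score removes that confounding.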
Shin, Hyo Jeong – ProQuest LLC, 2015
This dissertation is comprised of three papers that propose and apply psychometric models to deal with complexities and challenges in large-scale assessments, focusing on modeling rater effects and complex learning progressions. In particular, three papers investigate extensions and applications of multilevel and multidimensional item response…
Descriptors: Item Response Theory, Psychometrics, Models, Measurement
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas – Applied Psychological Measurement, 2011
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
Descriptors: Simulation, Reliability, Measurement, Psychology
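Two of the single-administration coefficients the framework covers, Cronbach's alpha and Guttman's lambda-2, are direct functions of the item covariance matrix (the latent class reliability coefficient itself requires fitting a latent class model and is not sketched here). Simulated parallel-items data for illustration:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def guttman_lambda2(scores):
    k = scores.shape[1]
    c = np.cov(scores, rowvar=False)
    off = c - np.diag(np.diag(c))             # off-diagonal covariances
    total_var = scores.sum(axis=1).var(ddof=1)
    return (off.sum() + np.sqrt(k / (k - 1) * (off**2).sum())) / total_var

rng = np.random.default_rng(7)
true = rng.normal(size=(500, 1))
items = true + rng.normal(scale=1.0, size=(500, 6))  # 6 parallel items
alpha, lam2 = cronbach_alpha(items), guttman_lambda2(items)
print(alpha, lam2)
```

Guttman's inequality guarantees lambda-2 is never below alpha, which is one reason the general framework treats alpha as a lower bound among these methods.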
Suh, Youngsuk; Cho, Sun-Joo; Wollack, James A. – Journal of Educational Measurement, 2012
In the presence of test speededness, the parameters of item response theory models can be poorly estimated due to conditional dependencies among items, particularly for end-of-test items (i.e., speeded items). This article conducted a systematic comparison of five item-calibration procedures--a two-parameter logistic (2PL) model, a…
Descriptors: Response Style (Tests), Timed Tests, Test Items, Item Response Theory
Dobria, Lidia – ProQuest LLC, 2011
Performance assessments rely on the expert judgment of raters for the measurement of the quality of responses, and raters unavoidably introduce error in the scoring process. Defined as the tendency of a rater to assign higher or lower ratings, on average, than those assigned by other raters, even after accounting for differences in examinee…
Descriptors: Simulation, Performance Based Assessment, Performance Tests, Scoring
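The severity effect defined here — a rater systematically scoring lower (or higher) than other raters of the same examinees — is simple to simulate in a fully crossed design. The severity values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_examinees, n_raters = 300, 4
true_quality = rng.normal(size=n_examinees)
severity = np.array([0.5, 0.0, -0.3, -0.2])   # hypothetical rater effects
# Fully crossed design: every rater scores every examinee; a severe
# rater subtracts their severity from the deserved score, plus noise
ratings = true_quality[:, None] - severity[None, :] + rng.normal(
    scale=0.4, size=(n_examinees, n_raters))

# Because all raters saw the same examinees, each rater's mean deviation
# from the grand mean recovers their severity (up to centering)
est_severity = ratings.mean() - ratings.mean(axis=0)
print(est_severity)
```

In operational settings the design is rarely fully crossed, which is why models such as many-facet Rasch are used to disentangle severity from examinee ability.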
Wothke, Werner; Burket, George; Chen, Li-Sue; Gao, Furong; Shu, Lianghua; Chia, Mike – Journal of Educational and Behavioral Statistics, 2011
It has been known for some time that, under item response theory (IRT) models, the likelihood function of a respondent's ability may have multiple modes, flat modes, or both. These conditions, often associated with guessing on multiple-choice (MC) questions, can introduce uncertainty and bias to ability estimation by maximum likelihood…
Descriptors: Educational Assessment, Item Response Theory, Computation, Maximum Likelihood Statistics
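The multimodality problem can be reproduced with the three-parameter logistic (3PL) model, whose guessing asymptote is what creates it. Below, a hypothetical aberrant pattern (one easy item wrong, three hard items right) yields an ability likelihood with more than one local maximum over the grid; the item parameters are made up for illustration:

```python
import numpy as np

def p3pl(theta, a, b, c):
    # Three-parameter logistic: lower asymptote c models guessing
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

# Hypothetical aberrant pattern: one easy item answered wrong,
# three hard items answered right (plausible under guessing)
a = np.array([2.0, 2.0, 2.0, 2.0])
b = np.array([-1.0, 1.5, 1.5, 1.5])
c = np.full(4, 0.2)
y = np.array([0, 1, 1, 1])

grid = np.linspace(-4, 4, 161)
P = p3pl(grid[:, None], a, b, c)
ll = np.sum(np.where(y, np.log(P), np.log(1 - P)), axis=1)

# Count local maxima of the log-likelihood (grid edges included)
padded = np.concatenate(([-np.inf], ll, [-np.inf]))
modes = np.sum((padded[1:-1] > padded[:-2]) & (padded[1:-1] > padded[2:]))
print(modes)   # more than one mode -> ambiguous ML ability estimate
```

With a 2PL (set c = 0) the same pattern produces a single mode, since the 2PL log-likelihood in ability is concave.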
Feng, Mingyu, Ed.; Käser, Tanja, Ed.; Talukdar, Partha, Ed. – International Educational Data Mining Society, 2023
The Indian Institute of Science is proud to host the fully in-person sixteenth iteration of the International Conference on Educational Data Mining (EDM) during July 11-14, 2023. EDM is the annual flagship conference of the International Educational Data Mining Society. The theme of this year's conference is "Educational data mining for…
Descriptors: Information Retrieval, Data Analysis, Computer Assisted Testing, Cheating