Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model for tests using testlets consisting of MC items. The MC-CDT model uses the original examinees' responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
Shu, Tian; Luo, Guanzhong; Luo, Zhaosheng; Yu, Xiaofeng; Guo, Xiaojun; Li, Yujun – Journal of Educational and Behavioral Statistics, 2023
Cognitive diagnosis models (CDMs) are the statistical framework for cognitive diagnostic assessment in education and psychology. They generally assume that subjects' latent attributes are dichotomous--mastery or nonmastery, which seems quite deterministic. As an alternative to dichotomous attribute mastery, attention is drawn to the use of a…
Descriptors: Cognitive Measurement, Models, Diagnostic Tests, Accuracy
Jang, Yoonsun; Cohen, Allan S. – Educational and Psychological Measurement, 2020
A nonconverged Markov chain can potentially lead to invalid inferences about model parameters. The purpose of this study was to assess the effect of a nonconverged Markov chain on the estimation of parameters for mixture item response theory models using a Markov chain Monte Carlo algorithm. A simulation study was conducted to investigate the…
Descriptors: Markov Processes, Item Response Theory, Accuracy, Inferences
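The nonconvergence concern in the Jang and Cohen abstract is commonly checked with the Gelman-Rubin potential scale reduction factor (R-hat) computed across parallel chains. The sketch below is a generic illustration of that diagnostic, not code from the cited study; the simulated "chains" are placeholders standing in for MCMC draws of a model parameter.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for parallel MCMC chains.

    chains: array of shape (n_chains, n_draws) for a single parameter.
    Values near 1.0 suggest the chains have mixed; values well above
    ~1.1 are a common rule-of-thumb flag for nonconvergence.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
mixed = rng.normal(0.0, 1.0, size=(4, 2000))            # four chains, same target
stuck = mixed + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain off target
print(gelman_rubin(mixed))   # close to 1.0
print(gelman_rubin(stuck))   # well above 1.1: flags nonconvergence
```

Inference from a chain whose R-hat is still large is exactly the "invalid inferences" risk the abstract describes: the draws do not yet represent the posterior.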
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Bezirhan, Ummugul; von Davier, Matthias; Grabovsky, Irina – Educational and Psychological Measurement, 2021
This article presents a new approach to the analysis of how students answer tests and how they allocate resources in terms of time on task and revisiting previously answered questions. Previous research has shown that in high-stakes assessments, most test takers do not end the testing session early, but rather spend all of the time they were…
Descriptors: Response Style (Tests), Accuracy, Reaction Time, Ability
Batley, Prathiba Natesan; Minka, Tom; Hedges, Larry Vernon – Grantee Submission, 2020
Immediacy is one of the necessary criteria to show strong evidence of treatment effect in single case experimental designs (SCEDs). With the exception of Natesan and Hedges (2017) no inferential statistical tool has been used to demonstrate or quantify it until now. We investigate and quantify immediacy by treating the change-points between the…
Descriptors: Bayesian Statistics, Monte Carlo Methods, Statistical Inference, Markov Processes
Fox, Jean-Paul; Marianti, Sukaesi – Journal of Educational Measurement, 2017
Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person-fit tests…
Descriptors: Accuracy, Reaction Time, Statistics, Test Items
Yildiz, Mustafa – ProQuest LLC, 2017
Student misconceptions have been studied for decades from a curricular/instructional perspective and from the assessment/test level perspective. Numerous misconception assessment tools have been developed in order to measure students' misconceptions relative to the correct content. Often, these tools are used to make a variety of educational…
Descriptors: Misconceptions, Students, Item Response Theory, Models
Faucon, Louis; Kidzinski, Lukasz; Dillenbourg, Pierre – International Educational Data Mining Society, 2016
Large-scale experiments are often expensive and time consuming. Although Massive Online Open Courses (MOOCs) provide a solid and consistent framework for learning analytics, MOOC practitioners are still reluctant to risk resources in experiments. In this study, we suggest a methodology for simulating MOOC students, which allows estimation of…
Descriptors: Markov Processes, Monte Carlo Methods, Bayesian Statistics, Online Courses
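The Faucon, Kidzinski, and Dillenbourg abstract pairs Markov processes with Monte Carlo methods to simulate students. A minimal sketch of that general idea follows; the three engagement states and all transition probabilities here are purely illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical weekly engagement states for a simulated MOOC student.
STATES = ["active", "lurking", "dropped"]
TRANSITIONS = {
    "active":  {"active": 0.70, "lurking": 0.20, "dropped": 0.10},
    "lurking": {"active": 0.30, "lurking": 0.40, "dropped": 0.30},
    "dropped": {"active": 0.00, "lurking": 0.00, "dropped": 1.00},  # absorbing
}

def simulate_student(weeks, rng):
    """Run one student through the Markov chain; True if not dropped out."""
    state = "active"
    for _ in range(weeks):
        probs = TRANSITIONS[state]
        state = rng.choices(STATES, weights=[probs[s] for s in STATES])[0]
    return state != "dropped"

def retention_rate(n_students, weeks, seed=0):
    """Monte Carlo estimate of the share of students still enrolled."""
    rng = random.Random(seed)
    kept = sum(simulate_student(weeks, rng) for _ in range(n_students))
    return kept / n_students

print(retention_rate(10_000, weeks=8))
```

Because "dropped" is absorbing, the estimated retention rate falls as the horizon grows, which is the kind of aggregate quantity such simulations let practitioners estimate before committing resources to a live experiment.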
Martin-Fernandez, Manuel; Revuelta, Javier – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Descriptors: Bayesian Statistics, Item Response Theory, Models, Comparative Analysis
Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua – Journal of Educational Measurement, 2015
The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…
Descriptors: Reaction Time, Test Items, Accuracy, Models
Kuo, Tzu-Chun – ProQuest LLC, 2015
Item response theory (IRT) has gained an increasing popularity in large-scale educational and psychological testing situations because of its theoretical advantages over classical test theory. Unidimensional graded response models (GRMs) are useful when polytomous response items are designed to measure a unified latent trait. They are limited in…
Descriptors: Item Response Theory, Bayesian Statistics, Computation, Models
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
Wu, Yi-Fang – ProQuest LLC, 2015
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and…
Descriptors: Item Response Theory, Test Items, Accuracy, Computation
Mossman, Douglas; Wygant, Dustin B.; Gervais, Roger O. – Psychological Assessment, 2012
Psychologists frequently use symptom validity tests (SVTs) to help determine whether evaluees' test performance or reported symptoms accurately represent their true functioning and capability. Most studies evaluating the accuracy of SVTs have used either known-group comparisons or simulation designs, but these approaches have well-known…
Descriptors: Accuracy, Classification, Validity, Psychological Testing