Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 9
Since 2016 (last 10 years): 20
Since 2006 (last 20 years): 37
Descriptor
Markov Processes: 46
Test Items: 46
Monte Carlo Methods: 38
Item Response Theory: 30
Models: 25
Bayesian Statistics: 23
Computation: 13
Reaction Time: 12
Accuracy: 11
Goodness of Fit: 10
Computer Assisted Testing: 7
Author
Jiao, Hong: 4
Sinharay, Sandip: 4
Chang, Hua-Hua: 2
Douglas, Jeffrey A.: 2
Johnson, Matthew S.: 2
Luo, Yong: 2
Man, Kaiwen: 2
Mislevy, Robert J.: 2
Patz, Richard J.: 2
Wang, Shudong: 2
Yao, Lihua: 2
Publication Type
Journal Articles: 38
Reports - Research: 31
Reports - Evaluative: 6
Dissertations/Theses -…: 4
Reports - Descriptive: 4
Collected Works - Proceedings: 1
Numerical/Quantitative Data: 1
Speeches/Meeting Papers: 1
Education Level
Secondary Education: 5
Higher Education: 4
Postsecondary Education: 4
Junior High Schools: 2
Middle Schools: 2
Elementary Education: 1
Elementary Secondary Education: 1
Grade 4: 1
Grade 8: 1
Grade 9: 1
High Schools: 1
Location
Taiwan: 2
Botswana: 1
Canada: 1
Chile: 1
Georgia Republic: 1
Germany: 1
Malaysia: 1
Norway: 1
Philippines: 1
Poland: 1
Russia: 1
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests using testlets consisting of MC items. The MC-CDT model uses examinees' original responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
Xiao, Yue; He, Qiwei; Veldkamp, Bernard; Liu, Hongyun – Journal of Computer Assisted Learning, 2021
The response process of problem-solving items contains rich information about respondents' behaviours and cognitive processes in digital tasks, but extracting that information is a major challenge. The aim of the study is to use a data-driven approach to explore the latent states and state transitions underlying the problem-solving process to reflect…
Descriptors: Problem Solving, Competence, Markov Processes, Test Wiseness
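The abstract does not spell out the model, but latent states and state transitions in process data are commonly formalized as a hidden Markov model; the sketch below shows that generic formulation only, not necessarily the authors' specification.

```latex
% Generic first-order hidden Markov formulation (illustrative; the paper's
% exact specification is not given in the abstract). S_t is the latent state
% at action step t, O_t the observed action, \pi the initial-state
% distribution, A the state-transition matrix, and B_k the emission
% distribution of state k.
P(S_1 = k) = \pi_k, \qquad
P(S_{t+1} = l \mid S_t = k) = A_{kl}, \qquad
P(O_t = o \mid S_t = k) = B_k(o)
```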
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Sinharay, Sandip; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
Response time models (RTMs) are of increasing interest in educational and psychological testing. This article focuses on the lognormal model for response times, which is one of the most popular RTMs. Several existing statistics for testing normality and the fit of factor analysis models are repurposed for testing the fit of the lognormal model. A…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Factor Analysis
Sinharay, Sandip; van Rijn, Peter – Grantee Submission, 2020
Response-time models are of increasing interest in educational and psychological testing. This paper focuses on the lognormal model for response times (van der Linden, 2006), which is one of the most popular response-time models. Several existing statistics for testing normality and the fit of factor-analysis models are repurposed for testing the…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Factor Analysis
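Both versions of this paper concern the same lognormal response-time model (van der Linden, 2006); for reference, its usual parameterization is shown below.

```latex
% Lognormal response-time model in its usual parameterization (for reference):
% T_{ij} is the response time of person i on item j, \tau_i the person's
% speed, \lambda_j the item's time intensity, and \alpha_j an item-specific
% precision (discrimination) parameter.
\ln T_{ij} = \lambda_j - \tau_i + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim N\!\bigl(0, \alpha_j^{-2}\bigr)
```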
Luo, Yong; Liang, Xinya – Measurement: Interdisciplinary Research and Perspectives, 2019
Current methods that simultaneously model differential testlet functioning (DTLF) and differential item functioning (DIF) constrain the variances of latent ability and testlet effects to be equal between the focal and the reference groups. Such a constraint can be stringent and unrealistic with real data. In this study, we propose a multigroup…
Descriptors: Test Items, Item Response Theory, Test Bias, Models
Qiao, Xin; Jiao, Hong – Journal of Educational Measurement, 2021
This study proposes an explanatory cognitive diagnostic model (CDM) that jointly incorporates item responses and response times (RTs) and includes item covariates related to both. The joint modeling of item responses and RTs is intended to provide more information for cognitive diagnosis, while item covariates can be used to predict…
Descriptors: Cognitive Measurement, Models, Reaction Time, Test Items
Patel, Nirmal; Sharma, Aditya; Shah, Tirth; Lomas, Derek – Journal of Educational Data Mining, 2021
Process Analysis is an emerging approach for discovering meaningful knowledge in temporal educational data. The study presented in this paper shows how we used Process Analysis methods on National Assessment of Educational Progress (NAEP) test data to model and predict student test-taking behavior. Our process-oriented data exploration…
Descriptors: Learning Analytics, National Competency Tests, Evaluation Methods, Prediction
Zhan, Peida; Jiao, Hong; Man, Kaiwen; Wang, Lijun – Journal of Educational and Behavioral Statistics, 2019
In this article, we systematically introduce the Just Another Gibbs Sampler (JAGS) software program for fitting common Bayesian cognitive diagnosis models (CDMs), including the deterministic inputs, noisy "and" gate model; the deterministic inputs, noisy "or" gate model; the linear logistic model; the reduced reparameterized unified…
Descriptors: Bayesian Statistics, Computer Software, Models, Test Items
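For orientation, the first CDM listed, the deterministic inputs, noisy "and" gate (DINA) model, has the standard textbook form below; the sketch is that textbook formulation, not the authors' JAGS code.

```latex
% Standard DINA model (textbook formulation, shown for orientation):
% \alpha_{ik} indicates whether examinee i has mastered attribute k,
% q_{jk} is the Q-matrix entry for item j, and s_j, g_j are the item's
% slipping and guessing parameters.
\eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}, \qquad
P(Y_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}}
```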
Ross, Matthew M.; Wright, A. Michelle – Journal of Education for Business, 2022
We use Markov chain Monte Carlo (MCMC) analysis to construct a three-question math quiz that assesses key skills needed for introductory finance. We begin with data collected from a ten-question criterion-referenced math quiz given to 314 undergraduates on the first day of class. The MCMC analysis identifies the top three questions for predicting overall course…
Descriptors: Mathematics Tests, Markov Processes, Monte Carlo Methods, Introductory Courses
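The abstract does not detail the MCMC setup; purely as an illustration of the general approach, a Bayesian logistic regression relating per-item quiz scores to a binary course outcome can be sampled with a random-walk Metropolis algorithm, and items whose coefficients sit far from zero would be the natural candidates to keep. All names and data below are hypothetical placeholders, not the authors' analysis.

```python
# Minimal illustrative sketch (not the authors' analysis): random-walk
# Metropolis sampling for a Bayesian logistic regression that relates
# per-item quiz scores (columns of X) to a binary course outcome y.
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(beta, X, y, prior_sd=2.0):
    """Bernoulli log-likelihood plus an independent N(0, prior_sd^2) prior."""
    logits = X @ beta
    loglik = np.sum(y * logits - np.log1p(np.exp(logits)))
    logprior = -0.5 * np.sum((beta / prior_sd) ** 2)
    return loglik + logprior

def metropolis(X, y, n_iter=5000, step=0.05):
    """Random-walk Metropolis; returns posterior draws of beta."""
    beta = np.zeros(X.shape[1])
    draws = np.empty((n_iter, X.shape[1]))
    current_lp = log_posterior(beta, X, y)
    for t in range(n_iter):
        proposal = beta + rng.normal(scale=step, size=beta.size)
        proposal_lp = log_posterior(proposal, X, y)
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            beta, current_lp = proposal, proposal_lp
        draws[t] = beta
    return draws

# Hypothetical data: 314 students, intercept plus ten 0/1 item scores.
X = np.column_stack([np.ones(314), rng.integers(0, 2, size=(314, 10))])
y = rng.integers(0, 2, size=314)  # placeholder pass/fail course outcome
draws = metropolis(X, y)
print(draws[1000:].mean(axis=0))  # posterior means after burn-in
```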
Trendtel, Matthias; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
A multidimensional Bayesian item response model is proposed for modeling item position effects. The first dimension corresponds to the ability that is to be measured; the second dimension represents a factor, called persistence, that allows for individual differences in item position effects. This model allows for nonlinear item position effects on…
Descriptors: Bayesian Statistics, Item Response Theory, Test Items, Test Format
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2019
With the development of technology-enhanced learning platforms, eye-tracking biometric indicators can be recorded simultaneously with students' item responses. In the current study, visual fixation, an essential eye-tracking indicator, is modeled to reflect the degree of test engagement when a test taker solves a set of test questions. Three…
Descriptors: Test Items, Eye Movements, Models, Regression (Statistics)
Luo, Yong; Dimitrov, Dimiter M. – Educational and Psychological Measurement, 2019
Plausible values can be used to either estimate population-level statistics or compute point estimates of latent variables. While it is well known that five plausible values are usually sufficient for accurate estimation of population-level statistics in large-scale surveys, the minimum number of plausible values needed to obtain accurate latent…
Descriptors: Item Response Theory, Monte Carlo Methods, Markov Processes, Outcome Measures
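For context on the population-level side of this abstract (standard plausible-value background, not a result of the article): plausible values are random draws from each examinee's posterior distribution of the latent variable, and per-draw statistics are typically combined with Rubin's rules, as sketched below.

```latex
% Standard plausible-value machinery (background sketch):
% draws \theta_i^{(m)} \sim p(\theta_i \mid y_i), m = 1, ..., M, and a
% statistic T computed on each set of draws is combined across the M sets.
\bar{T} = \frac{1}{M} \sum_{m=1}^{M} T^{(m)}, \qquad
V = \underbrace{\frac{1}{M}\sum_{m=1}^{M} U^{(m)}}_{\text{within}}
  + \Bigl(1 + \frac{1}{M}\Bigr)
    \underbrace{\frac{1}{M-1}\sum_{m=1}^{M} \bigl(T^{(m)} - \bar{T}\bigr)^{2}}_{\text{between}}
```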
Fox, Jean-Paul; Marianti, Sukaesi – Journal of Educational Measurement, 2017
Response accuracy and response time data can be analyzed with a joint model to measure ability and speed of working, while accounting for relationships between item and person characteristics. In this study, person-fit statistics are proposed for joint models to detect aberrant response accuracy and/or response time patterns. The person-fit tests…
Descriptors: Accuracy, Reaction Time, Statistics, Test Items
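The abstract does not give the statistics themselves; one generic construction for the response-time side, shown purely as an illustration and not necessarily the authors' exact statistic, sums squared standardized log-time residuals under the lognormal model, which is approximately chi-squared when the model holds.

```latex
% Generic response-time person-fit construction (illustration only):
% under the lognormal model, each standardized log-time residual is roughly
% standard normal, so their squared sum over the J items taken by person i
% is approximately chi-squared; large values flag aberrant RT patterns.
l_i^{t} = \sum_{j=1}^{J} \alpha_j^{2}\bigl(\ln t_{ij} - (\lambda_j - \tau_i)\bigr)^{2},
\qquad l_i^{t} \;\dot{\sim}\; \chi^{2}_{J}
```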