Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor (latent) variable. The random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
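The IRT models discussed in this entry are built on item response functions. A minimal sketch of the two-parameter logistic (2PL) response function, one of the most common IRT models (the parameter values below are purely illustrative, not from the article):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function:
    probability that a person with ability theta answers an item
    correctly, given item discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A person of average ability (theta = 0) on an item of average
# difficulty (b = 0) has a 50% chance of a correct response.
print(p_correct(0.0, 1.0, 0.0))  # 0.5
```

Each additional examinee contributes one new ability parameter theta, which is the incidental parameter problem the abstract refers to.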
Yang Du; Susu Zhang – Journal of Educational and Behavioral Statistics, 2025
Item compromise has long posed challenges in educational measurement, jeopardizing both the validity and the security of continuous tests. Detecting compromised items is therefore crucial to address this concern. The present literature on compromised item detection reveals two notable gaps: First, the majority of existing methods are based upon…
Descriptors: Item Response Theory, Item Analysis, Bayesian Statistics, Educational Assessment
Shu, Tian; Luo, Guanzhong; Luo, Zhaosheng; Yu, Xiaofeng; Guo, Xiaojun; Li, Yujun – Journal of Educational and Behavioral Statistics, 2023
Cognitive diagnosis models (CDMs) are the statistical framework for cognitive diagnostic assessment in education and psychology. They generally assume that subjects' latent attributes are dichotomous (mastery or nonmastery), which seems quite deterministic. As an alternative to dichotomous attribute mastery, attention is drawn to the use of a…
Descriptors: Cognitive Measurement, Models, Diagnostic Tests, Accuracy
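The abstract does not specify which CDM; the DINA model below is one common example of the dichotomous-mastery assumption it describes, used purely as an illustration (all parameter values are made up):

```python
def dina_prob(alpha, q, slip, guess):
    """DINA model: probability of a correct response given a subject's
    dichotomous mastery vector alpha, an item's required-attribute
    vector q, and the item's slip and guess parameters. A subject who
    has mastered every required attribute answers correctly unless
    they 'slip'; otherwise they can only 'guess'."""
    mastered_all = all(a >= r for a, r in zip(alpha, q))
    return (1 - slip) if mastered_all else guess

# Masters both required attributes: correct with probability 1 - slip.
print(dina_prob([1, 1], [1, 1], slip=0.1, guess=0.2))  # 0.9
# Lacks one required attribute: reduced to guessing.
print(dina_prob([1, 0], [1, 1], slip=0.1, guess=0.2))  # 0.2
```

The all-or-nothing jump between 0.9 and 0.2 is exactly the deterministic behavior the article proposes an alternative to.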
Wang, Chun; Xu, Gongjun; Shang, Zhuoran; Kuncel, Nathan – Journal of Educational and Behavioral Statistics, 2018
Modern web-based technology has greatly popularized computer-administered testing, also known as online testing. When these online tests are administered continuously within a certain "testing window," many items are likely to be exposed and compromised, posing a test security concern. In addition, if the testing time is limited,…
Descriptors: Computer Assisted Testing, Cheating, Guessing (Tests), Item Response Theory
Polanin, Joshua R.; Hennessy, Emily A.; Tanner-Smith, Emily E. – Journal of Educational and Behavioral Statistics, 2017
Meta-analysis is a statistical technique that allows an analyst to synthesize effect sizes from multiple primary studies. To estimate meta-analysis models, the open-source statistical environment R is quickly becoming a popular choice. The meta-analytic community has contributed to this growth by developing numerous packages specific to…
Descriptors: Meta Analysis, Open Source Technology, Computer Software, Effect Size
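This article reviews R packages; as a language-agnostic sketch of the core computation those packages implement, a fixed-effect (inverse-variance weighted) pooled effect size can be written as follows (the effect sizes and variances are invented for illustration):

```python
def fixed_effect_pool(effects, variances):
    """Fixed-effect meta-analysis: pool study effect sizes by
    inverse-variance weighting. Returns the pooled estimate and
    its variance, the basic building block of meta-analysis."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical studies: more precise studies get more weight.
est, var = fixed_effect_pool([0.2, 0.5, 0.3], [0.04, 0.01, 0.02])
print(est)  # 0.4
```

Real packages layer random-effects models, moderator analyses, and diagnostics on top of this weighting scheme.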
Chen, Ping – Journal of Educational and Behavioral Statistics, 2017
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Descriptors: Test Items, Item Response Theory, Test Construction, Adaptive Testing
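The core idea behind the online calibration methods named above is to treat the operational ability estimates as fixed and fit only the new item's parameters. A minimal unidimensional sketch, estimating a Rasch difficulty by Newton-Raphson (data illustrative; the article's M-OEM and related methods handle the multidimensional case):

```python
import math

def calibrate_difficulty(thetas, responses, iters=25):
    """Estimate a new item's Rasch difficulty b by maximum likelihood,
    holding the examinees' ability estimates (thetas) fixed -- the
    basic step underlying online calibration of new items."""
    b = 0.0
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-(th - b))) for th in thetas]
        grad = sum(pi - x for pi, x in zip(p, responses))   # d loglik / db
        hess = -sum(pi * (1.0 - pi) for pi in p)            # d2 loglik / db2
        b -= grad / hess                                    # Newton step
    return b

# Half the examinees at theta = 0 answered correctly, so b = 0.
print(calibrate_difficulty([0.0, 0.0, 0.0, 0.0], [1, 1, 0, 0]))
```

In an EM-cycle method, these fixed abilities would themselves be posterior expectations, re-estimated between cycles.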
Zwick, Rebecca; Ye, Lei; Isham, Steven – Journal of Educational and Behavioral Statistics, 2012
This study demonstrates how the stability of Mantel-Haenszel (MH) DIF (differential item functioning) methods can be improved by integrating information across multiple test administrations using Bayesian updating (BU). The authors conducted a simulation that showed that this approach, which is based on earlier work by Zwick, Thayer, and Lewis,…
Descriptors: Test Bias, Computation, Statistical Analysis, Bayesian Statistics
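The MH DIF method in this entry rests on the Mantel-Haenszel common odds ratio computed across score strata. A minimal sketch of that statistic (the 2x2 stratum counts below are made-up illustrations, not data from the study):

```python
def mh_common_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across 2x2 tables.
    Each stratum is a tuple (a, b, c, d):
      a = reference group correct,  b = reference group incorrect,
      c = focal group correct,      d = focal group incorrect.
    Values near 1 suggest no DIF on the item."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two score strata with identical odds in both groups -> ratio 1 (no DIF).
strata = [(40, 10, 40, 10), (20, 20, 20, 20)]
print(mh_common_odds_ratio(strata))  # 1.0
```

Bayesian updating, as studied in the article, stabilizes this statistic by pooling evidence across administrations rather than computing it from a single one.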
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2013
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current RT modeling focuses mainly on parametric models, which have the…
Descriptors: Reaction Time, Computer Assisted Testing, Test Items, Accuracy
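The abstract notes that current RT modeling is mainly parametric; the lognormal RT model is a standard parametric example of what such models look like (this sketch is illustrative background, not the authors' proposed method):

```python
import math

def lognormal_rt_density(t, tau, beta, alpha):
    """Density of the lognormal response-time model: person speed tau,
    item time intensity beta, item time-discrimination alpha. Faster
    persons (larger tau) shift the distribution toward shorter times t."""
    z = alpha * (math.log(t) - (beta - tau))
    return (alpha / (t * math.sqrt(2.0 * math.pi))) * math.exp(-0.5 * z * z)
```

A parametric form like this is convenient but rigid, which is the limitation the article's truncated sentence is pointing at.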