Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 7 |
Descriptor
Test Length | 7 |
Test Items | 5 |
Adaptive Testing | 4 |
Item Response Theory | 4 |
Computer Assisted Testing | 2 |
Difficulty Level | 2 |
Error of Measurement | 2 |
Identification | 2 |
Sample Size | 2 |
Accuracy | 1 |
Adolescents | 1 |
Source
Journal of Educational and Behavioral Statistics | 7
Author
Cheng, Ying | 1 |
DeMars, Christine E. | 1 |
Diao, Qi | 1 |
Douglas, Jeffrey A. | 1 |
Finkelman, Matthew | 1 |
Hong, Maxwell | 1 |
Lang, Joseph B. | 1 |
Patton, Jeffrey M. | 1 |
Wang, Chun | 1 |
Xiong, Xinhui | 1 |
Yu, Albert | 1 |
Publication Type
Journal Articles | 7 |
Reports - Research | 7 |
Assessments and Surveys
National Longitudinal Study… | 1 |
Lang, Joseph B. – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with the statistical detection of copying on multiple-choice exams. As an alternative to existing permutation- and model-based copy-detection approaches, a simple randomization p-value (RP) test is proposed. The RP test, which is based on an intuitive match-score statistic, makes no assumptions about the distribution of…
Descriptors: Identification, Cheating, Multiple Choice Tests, Item Response Theory
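The abstract above is cut off before the test is fully defined, so the following is only a rough sketch of a randomization p-value built on a match-score statistic. The group-frequency null model and the randomization_pvalue helper are illustrative assumptions, not Lang's construction; with real answer-copying data the inputs would be the selected options (letters) rather than 0/1 scores, but the match score works the same either way.

```python
import numpy as np

def match_score(copier, source):
    """Number of items on which two response vectors select the same option."""
    return int(np.sum(np.asarray(copier) == np.asarray(source)))

def randomization_pvalue(copier, source, group_responses, n_draws=10000, seed=0):
    """Randomization p-value for a match-score copying statistic.

    Null model (an illustrative assumption, not the article's construction):
    the suspected copier answers each item independently, with option
    probabilities equal to their empirical frequencies in the examinee group.
    The p-value is the proportion of simulated response vectors that match the
    source at least as often as the observed one.
    """
    rng = np.random.default_rng(seed)
    group = np.asarray(group_responses)        # shape: (n_examinees, n_items)
    n_items = group.shape[1]
    observed = match_score(copier, source)
    hits = 0
    for _ in range(n_draws):
        simulated = np.array([rng.choice(group[:, j]) for j in range(n_items)])
        hits += match_score(simulated, source) >= observed
    return (hits + 1) / (n_draws + 1)          # add-one correction avoids p = 0
```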
Yu, Albert; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2023
We propose a new item response theory growth model with item-specific learning parameters, or ISLP, and two variations of this model. In the ISLP model, either items or blocks of items have their own learning parameters. This model may be used to improve the efficiency of learning in a formative assessment. We show ways that the ISLP model's…
Descriptors: Item Response Theory, Learning, Markov Processes, Monte Carlo Methods
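A minimal simulation sketch of an IRT growth model with item-specific learning parameters, assuming a Rasch-type form in which an item's difficulty drops by its own learning rate at each re-exposure. The functional form and the simulate_islp helper are assumptions for illustration, not the published ISLP specification.

```python
import numpy as np

def simulate_islp(theta, difficulty, learn_rate, n_occasions, seed=0):
    """Simulate repeated responses under a Rasch-like growth model in which
    each item becomes easier with exposure at an item-specific rate.

    P(correct) at occasion t uses difficulty b_j - gamma_j * t, i.e. the item's
    difficulty drops by its own learning parameter gamma_j each time it is
    re-administered (an illustrative functional form).
    """
    rng = np.random.default_rng(seed)
    n_items = len(difficulty)
    responses = np.zeros((n_occasions, n_items), dtype=int)
    for t in range(n_occasions):
        b_t = difficulty - learn_rate * t                 # item-specific learning
        p = 1.0 / (1.0 + np.exp(-(theta - b_t)))          # Rasch probability
        responses[t] = rng.binomial(1, p)
    return responses
```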
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
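A rough sketch of the iterate-and-cleanse idea using the standardized log-likelihood person-fit statistic l_z. The crude Rasch refitting, the grid-based theta estimates, and the -1.645 cutoff are illustrative choices standing in for the calibration the study would use.

```python
import numpy as np

def lz_statistic(responses, p):
    """Standardized log-likelihood person-fit statistic l_z for one examinee,
    given 0/1 responses and model-implied probabilities of a correct answer."""
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - e) / np.sqrt(v)

def iterative_cleanse(data, cutoff=-1.645, max_iter=10):
    """Iteratively drop examinees whose l_z falls below `cutoff`, refitting a
    crude Rasch approximation on the retained examinees after each pass."""
    keep = np.ones(data.shape[0], dtype=bool)
    grid = np.linspace(-4, 4, 81)
    for _ in range(max_iter):
        sub = data[keep]
        p_bar = sub.mean(axis=0).clip(0.01, 0.99)
        b = -np.log(p_bar / (1 - p_bar))                  # Rasch-style difficulties
        flagged_any = False
        for i in np.flatnonzero(keep):
            probs = 1 / (1 + np.exp(-(grid[:, None] - b)))     # (grid, items)
            loglik = (data[i] * np.log(probs)
                      + (1 - data[i]) * np.log(1 - probs)).sum(axis=1)
            p_i = probs[np.argmax(loglik)]                # coarse ML theta estimate
            if lz_statistic(data[i], p_i) < cutoff:
                keep[i] = False
                flagged_any = True
        if not flagged_any:
            break
    return keep
```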
van der Linden, Wim J.; Xiong, Xinhui – Journal of Educational and Behavioral Statistics, 2013
Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…
Descriptors: Adaptive Testing, Heuristics, Test Length, Reaction Time
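The additive constraints themselves are not reproduced in the abstract; below is a minimal heuristic in the same spirit, assuming a lognormal response-time model and a simple per-item share of the remaining time budget when selecting the next adaptive item. The function names and the budget rule are illustrative, not the article's shadow-test formulation.

```python
import numpy as np

def expected_rt(beta, alpha, tau):
    """Expected response time under a lognormal RT model:
    ln T ~ N(beta - tau, 1 / alpha**2)."""
    return np.exp(beta - tau + 0.5 / alpha**2)

def pick_next_item(info, beta, alpha, tau_hat, time_left, items_left, administered):
    """Greedy heuristic: among unadministered items, take the most informative
    one whose expected time fits the per-item share of the remaining budget."""
    budget_per_item = time_left / max(items_left, 1)
    candidates = [
        j for j in range(len(info))
        if j not in administered
        and expected_rt(beta[j], alpha[j], tau_hat) <= budget_per_item
    ]
    if not candidates:                     # relax the time constraint if nothing fits
        candidates = [j for j in range(len(info)) if j not in administered]
    return max(candidates, key=lambda j: info[j])
```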
Wang, Chun – Journal of Educational and Behavioral Statistics, 2014
Many latent traits in social sciences display a hierarchical structure, such as intelligence, cognitive ability, or personality. Usually a second-order factor is linearly related to a group of first-order factors (also called domain abilities in cognitive ability measures), and the first-order factors directly govern the actual item responses.…
Descriptors: Measurement, Accuracy, Item Response Theory, Adaptive Testing
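A small simulation sketch of the hierarchical structure described: a second-order factor linearly related to first-order (domain) factors, which in turn generate 2PL item responses. All parameter choices are arbitrary and for illustration only.

```python
import numpy as np

def simulate_higher_order_irt(n_persons, lambdas, items_per_domain, seed=0):
    """Simulate responses under a higher-order IRT structure: a general
    (second-order) factor drives the domain (first-order) factors linearly,
    and each domain factor drives 2PL responses to its own items."""
    rng = np.random.default_rng(seed)
    theta_g = rng.standard_normal(n_persons)                  # second-order factor
    blocks = []
    for lam in lambdas:
        # first-order factor: linear in theta_g plus unique variance
        theta_d = lam * theta_g + np.sqrt(1 - lam**2) * rng.standard_normal(n_persons)
        a = rng.uniform(0.8, 2.0, items_per_domain)            # discriminations
        b = rng.normal(0.0, 1.0, items_per_domain)             # difficulties
        p = 1 / (1 + np.exp(-a * (theta_d[:, None] - b)))
        blocks.append(rng.binomial(1, p))
    return np.concatenate(blocks, axis=1)
```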
DeMars, Christine E. – Journal of Educational and Behavioral Statistics, 2009
The Mantel-Haenszel (MH) and logistic regression (LR) differential item functioning (DIF) procedures have inflated Type I error rates when there are large mean group differences, short tests, and large sample sizes. When there are large group differences in mean score, groups matched on the observed number-correct score differ on true score,…
Descriptors: Regression (Statistics), Test Bias, Error of Measurement, True Scores
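For reference, a bare-bones Mantel-Haenszel DIF computation matched on an observed score, the procedure whose Type I error behavior the article examines. The stratum handling and continuity correction follow the standard textbook form; the helper name and input layout are arbitrary.

```python
import numpy as np

def mantel_haenszel_dif(responses, group, item, total):
    """Mantel-Haenszel DIF analysis for one item.

    responses: 0/1 matrix (persons x items); group: 0 = reference, 1 = focal;
    item: column index of the studied item; total: matching variable (e.g. the
    observed number-correct score).  Returns the MH chi-square (with
    continuity correction) and the common odds ratio.
    """
    y = responses[:, item]
    chi_num = chi_den = or_num = or_den = 0.0
    for s in np.unique(total):
        m = total == s
        a = np.sum(m & (group == 0) & (y == 1))   # reference, correct
        b = np.sum(m & (group == 0) & (y == 0))   # reference, incorrect
        c = np.sum(m & (group == 1) & (y == 1))   # focal, correct
        d = np.sum(m & (group == 1) & (y == 0))   # focal, incorrect
        n = a + b + c + d
        if n < 2 or (a + c) == 0 or (b + d) == 0:
            continue                               # stratum carries no information
        e_a = (a + b) * (a + c) / n
        v_a = (a + b) * (c + d) * (a + c) * (b + d) / (n**2 * (n - 1))
        chi_num += a - e_a
        chi_den += v_a
        or_num += a * d / n
        or_den += b * c / n
    chi_sq = (abs(chi_num) - 0.5) ** 2 / chi_den
    odds_ratio = or_num / or_den                  # ETS delta scale: -2.35 * ln(OR)
    return chi_sq, odds_ratio
```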
Finkelman, Matthew – Journal of Educational and Behavioral Statistics, 2008
Sequential mastery testing (SMT) has been researched as an efficient alternative to paper-and-pencil testing for pass/fail examinations. One popular method for determining when to cease examination in SMT is the truncated sequential probability ratio test (TSPRT). This article introduces the application of stochastic curtailment in SMT to shorten…
Descriptors: Mastery Tests, Sequential Approach, Computer Assisted Testing, Adaptive Testing
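A minimal sketch of the truncated SPRT baseline that stochastic curtailment is meant to shorten, assuming a 2PL model with known item parameters and non-mastery/mastery points theta0 and theta1. The curtailment step itself (stopping early once the eventual decision is effectively certain) is not shown.

```python
import numpy as np

def sprt_mastery(responses, a, b, theta0, theta1, alpha=0.05, beta=0.05, max_items=30):
    """Truncated sequential probability ratio test for a pass/fail decision.

    After each item, the 2PL log-likelihood ratio of the mastery point (theta1)
    against the non-mastery point (theta0) is compared with the usual SPRT
    boundaries; if no decision is reached by max_items, the sign of the ratio
    decides.  Item parameters a, b are assumed known.
    """
    upper = np.log((1 - beta) / alpha)       # pass boundary
    lower = np.log(beta / (1 - alpha))       # fail boundary
    llr = 0.0
    for j, u in enumerate(responses[:max_items]):
        p1 = 1 / (1 + np.exp(-a[j] * (theta1 - b[j])))
        p0 = 1 / (1 + np.exp(-a[j] * (theta0 - b[j])))
        llr += u * np.log(p1 / p0) + (1 - u) * np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass", j + 1
        if llr <= lower:
            return "fail", j + 1
    return ("pass" if llr >= 0 else "fail"), min(len(responses), max_items)
```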