Showing all 14 results
Peer reviewed
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Starting from the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of a correct response p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
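The abstract is cut off before the formula itself appears, but the standard large-sample result it builds on can be stated as illustrative background. This is not the article's closed formula, only the familiar square-root error rates that follow from maximal item information at medium difficulty:

```latex
% Illustrative background only -- the article's own closed formula is truncated
% above. For a Rasch item, one response carries information I = p(1 - p),
% which at medium difficulty (p = 1/2) attains its maximum, I = 1/4. With N
% respondents per item and n items per person, the usual asymptotic errors are
\[
  \operatorname{SE}(\hat{b}) \approx \frac{1}{\sqrt{N/4}} = \frac{2}{\sqrt{N}},
  \qquad
  \operatorname{SE}(\hat{\theta}) \approx \frac{2}{\sqrt{n}} .
\]
```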
Peer reviewed
Ames, Allison J.; Leventhal, Brian C.; Ezike, Nnamdi C. – Measurement: Interdisciplinary Research and Perspectives, 2020
Data simulation and Monte Carlo simulation studies are important skills for researchers and practitioners of educational and psychological measurement, but there are few resources on the topic specific to item response theory. Even fewer resources exist on the statistical software techniques to implement simulation studies. This article presents…
Descriptors: Monte Carlo Methods, Item Response Theory, Simulation, Computer Software
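Since the Ames, Leventhal, and Ezike abstract concerns implementing IRT simulation studies in software, a minimal sketch may help. The 2PL model and the parameter distributions below are illustrative assumptions, not the article's code:

```python
# A minimal sketch of IRT data simulation: draw person and item parameters,
# compute 2PL response probabilities, and sample binary responses.
import numpy as np

rng = np.random.default_rng(seed=1)

n_persons, n_items = 1000, 20
theta = rng.normal(0.0, 1.0, n_persons)          # abilities ~ N(0, 1)
a = rng.uniform(0.5, 2.0, n_items)               # discriminations
b = rng.normal(0.0, 1.0, n_items)                # difficulties

# 2PL response probabilities: P(X = 1) = 1 / (1 + exp(-a(theta - b)))
p = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
responses = (rng.uniform(size=p.shape) < p).astype(int)  # simulated 0/1 data

print(responses.mean(axis=0))  # empirical item p-values
```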
Peer reviewed
Andersson, Björn – Journal of Educational Measurement, 2016
In observed-score equipercentile equating, the goal is to make scores on two scales or tests measuring the same construct comparable by matching the percentiles of the respective score distributions. If the tests consist of different items with multiple categories for each item, a suitable model for the responses is a polytomous item response…
Descriptors: Equated Scores, Item Response Theory, Error of Measurement, Tests
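The equipercentile idea in Andersson's abstract, matching percentile ranks across two score distributions, can be sketched directly. The sketch below works on raw empirical distributions and skips the continuization and polytomous IRT modeling the article is actually about:

```python
# A rough sketch of observed-score equipercentile equating: map each score on
# form X to the form-Y score with the same percentile rank.
import numpy as np

def equipercentile(x_scores, y_scores):
    """Equate form-X scores onto the form-Y scale via matched percentile ranks."""
    grid = np.unique(x_scores)
    # percentile rank of each distinct X score (proportion at or below)
    pr = np.searchsorted(np.sort(x_scores), grid, side="right") / len(x_scores)
    # invert the empirical Y distribution at those ranks
    y_sorted = np.sort(y_scores)
    idx = np.clip((pr * len(y_sorted)).astype(int) - 1, 0, len(y_sorted) - 1)
    return dict(zip(grid.tolist(), y_sorted[idx].tolist()))

rng = np.random.default_rng(0)
x = rng.binomial(40, 0.55, 2000)   # scores on the easier form
y = rng.binomial(40, 0.45, 2000)   # scores on the harder form
print(equipercentile(x, y)[22])    # form-Y equivalent of an X score of 22
```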
Peer reviewed
Tay, Louis; Drasgow, Fritz – Educational and Psychological Measurement, 2012
Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted χ²/df statistic proposed by Drasgow and colleagues; because of problems with that method, a new approach for assessing the goodness of fit of an item response theory model was also developed. It has been previously recommended that mean adjusted…
Descriptors: Test Length, Monte Carlo Methods, Goodness of Fit, Item Response Theory
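For readers unfamiliar with the statistic Tay and Drasgow examine, here is a hedged sketch of a mean adjusted χ²/df computation for item pairs. The rescaling to a reference sample size of 3,000 follows the commonly cited form of the adjustment; the degrees of freedom and other details surely differ from the article's implementation:

```python
# Sketch: compare observed vs. model-implied frequencies of the four
# (0,0),(0,1),(1,0),(1,1) patterns for each item pair, rescale the chi-square
# to a reference N of 3,000, and average chi-square/df over pairs.
import numpy as np
from itertools import combinations

def pair_chi2_df(responses, p_model, ref_n=3000):
    """responses: (N, J) 0/1 data; p_model: (N, J) model probabilities."""
    n, j = responses.shape
    ratios = []
    for i, k in combinations(range(j), 2):
        obs = np.zeros((2, 2))
        exp = np.zeros((2, 2))
        for u in (0, 1):
            for v in (0, 1):
                obs[u, v] = np.sum((responses[:, i] == u) & (responses[:, k] == v))
                pi = p_model[:, i] if u else 1 - p_model[:, i]
                pk = p_model[:, k] if v else 1 - p_model[:, k]
                exp[u, v] = np.sum(pi * pk)   # assumes local independence
        chi2 = np.sum((obs - exp) ** 2 / exp)
        df = 3.0                              # 4 cells - 1 (illustrative choice)
        adj = (ref_n / n) * (chi2 - df) + df  # rescale to the reference sample size
        ratios.append(adj / df)
    return float(np.mean(ratios))

rng = np.random.default_rng(2)
p_model = rng.uniform(0.3, 0.9, size=(500, 6))
responses = (rng.uniform(size=p_model.shape) < p_model).astype(int)
print(round(pair_chi2_df(responses, p_model), 2))  # near 1.0 when the model fits
```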
Peer reviewed
Kruyen, Peter M.; Emons, Wilco H. M.; Sijtsma, Klaas – International Journal of Testing, 2012
Personnel selection shows an enduring need for short stand-alone tests consisting of, say, 5 to 15 items. Despite their efficiency, short tests are more vulnerable to measurement error than longer versions. Consequently, the question arises to what extent reducing test length degrades decision quality through the increased impact of…
Descriptors: Measurement, Personnel Selection, Decision Making, Error of Measurement
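The mechanism behind Kruyen, Emons, and Sijtsma's concern, that shorter tests are less reliable, is captured by the standard Spearman-Brown prophecy formula, shown here as a worked example (a textbook result, not taken from the article):

```python
# Illustrative only: the Spearman-Brown prophecy formula shows how reliability
# falls as a test is shortened, which is the mechanism behind the article's
# concern about decision quality in 5- to 15-item tests.
def spearman_brown(rho_full, k):
    """Reliability of a test shortened/lengthened by factor k (k < 1 = shorter)."""
    return k * rho_full / (1 + (k - 1) * rho_full)

# A 40-item test with reliability .90, cut to 10 items (k = 0.25):
print(round(spearman_brown(0.90, 10 / 40), 3))  # ~0.692
```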
Peer reviewed
Paek, Insu; Wilson, Mark – Educational and Psychological Measurement, 2011
This study elaborates on the Rasch differential item functioning (DIF) model formulation in the marginal maximum likelihood estimation context. The Rasch DIF model's performance was also examined and compared with the Mantel-Haenszel (MH) procedure under small-sample and short-test-length conditions through simulations. The theoretically known…
Descriptors: Test Bias, Test Length, Statistical Inference, Geometric Concepts
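As background to the comparison in Paek and Wilson's abstract, a sketch of the Mantel-Haenszel procedure: each total-score stratum contributes a 2x2 group-by-correctness table for the studied item, and the common odds ratio is mapped to the ETS delta scale. The counts below are made up:

```python
# Mantel-Haenszel DIF sketch. Each stratum is a tuple of counts:
# (reference correct, reference wrong, focal correct, focal wrong).
import math

def mh_odds_ratio(strata):
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

strata = [(30, 10, 20, 20), (50, 15, 35, 25), (60, 5, 50, 10)]
alpha = mh_odds_ratio(strata)
# ETS delta scale; |values| of 1.5 or more are conventionally flagged as large DIF
print(round(-2.35 * math.log(alpha), 2))
```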
Peer reviewed
Finkelman, Matthew David – Applied Psychological Measurement, 2010
In sequential mastery testing (SMT), assessment via computer is used to classify examinees into one of two mutually exclusive categories. Unlike paper-and-pencil tests, SMT can use variable-length stopping rules. One approach to shortening variable-length tests is stochastic curtailment, which halts examination if the probability…
Descriptors: Mastery Tests, Computer Assisted Testing, Adaptive Testing, Test Length
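Finkelman's abstract describes stochastic curtailment: halt once the eventual classification is already nearly certain. A simplified sketch for a fixed cut score follows, with an assumed success probability p_hat standing in for the model-based predictive probability:

```python
# Simplified stochastic curtailment: stop early when the eventual pass/fail
# outcome is nearly certain. p_hat and gamma are illustrative assumptions.
from math import comb

def p_pass(s, m, n, c, p_hat):
    """P(final correct >= c | s correct after m of n items), Binomial(n - m, p_hat)."""
    need = c - s                      # additional correct answers still required
    rem = n - m
    if need <= 0:
        return 1.0
    if need > rem:
        return 0.0
    return sum(comb(rem, k) * p_hat**k * (1 - p_hat)**(rem - k)
               for k in range(need, rem + 1))

n, c, gamma = 30, 20, 0.95
s, m = 18, 22                          # 18 correct after 22 items
pp = p_pass(s, m, n, c, p_hat=0.6)
if pp >= gamma or pp <= 1 - gamma:
    print("curtail:", "pass likely" if pp >= gamma else "fail likely", round(pp, 3))
```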
Peer reviewed
Finkelman, Matthew – Journal of Educational and Behavioral Statistics, 2008
Sequential mastery testing (SMT) has been researched as an efficient alternative to paper-and-pencil testing for pass/fail examinations. One popular method for determining when to cease examination in SMT is the truncated sequential probability ratio test (TSPRT). This article introduces the application of stochastic curtailment in SMT to shorten…
Descriptors: Mastery Tests, Sequential Approach, Computer Assisted Testing, Adaptive Testing
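The truncated SPRT that this article builds on has a compact form: accumulate the log-likelihood ratio between "master" and "nonmaster" response probabilities and stop at Wald's bounds, forcing a decision at truncation. The p0/p1/alpha/beta values below are illustrative:

```python
# Truncated sequential probability ratio test (TSPRT) for mastery classification.
import math

def tsprt(responses, p0=0.5, p1=0.7, alpha=0.05, beta=0.05, max_items=40):
    lo, hi = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
    llr = 0.0
    for t, x in enumerate(responses[:max_items], start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= hi:
            return "master", t
        if llr <= lo:
            return "nonmaster", t
    # truncation: force a decision at the end by the sign of the statistic
    return ("master" if llr > 0 else "nonmaster"), min(len(responses), max_items)

print(tsprt([1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]))
```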
Peer reviewed
Chang, Yuan-chin Ivan – Psychometrika, 2005
In this paper, we apply sequential one-sided confidence interval estimation procedures with beta-protection to adaptive mastery testing. The procedures of fixed-width and fixed proportional accuracy confidence interval estimation can be viewed as extensions of one-sided confidence interval procedures. It can be shown that the adaptive mastery…
Descriptors: Mastery Tests, Probability, Intervals, Testing
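Chang's beta-protection procedures are more refined than this, but the basic confidence-interval stopping logic in adaptive mastery testing looks like the following sketch (generic normal-theory interval; the estimate, standard error, cutoff, and z value are assumed):

```python
# Classify once a one-sided confidence bound for ability clears the cutoff;
# otherwise keep testing. Not the article's beta-protection procedure.
def classify(theta_hat, se, cut=0.0, z=1.645):
    if theta_hat - z * se > cut:
        return "master"        # lower confidence bound above the cutoff
    if theta_hat + z * se < cut:
        return "nonmaster"     # upper confidence bound below the cutoff
    return "continue testing"

print(classify(theta_hat=0.55, se=0.30))   # -> continue testing
print(classify(theta_hat=0.55, se=0.25))   # -> master (0.55 - 0.41 > 0)
```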
Peer reviewed
Hendrawan, Irene; Glas, Cees A. W.; Meijer, Rob R. – Applied Psychological Measurement, 2005
The effect of person misfit under an item response theory model on a mastery/nonmastery decision was investigated, as was whether classification precision can be improved by identifying misfitting respondents using person-fit statistics. A simulation study was conducted to investigate the probability of a correct…
Descriptors: Probability, Statistics, Test Length, Simulation
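One common person-fit statistic of the kind Hendrawan, Glas, and Meijer investigate is l_z, the standardized log-likelihood of a response pattern; the truncated abstract does not confirm their exact choice, so treat this as a generic sketch:

```python
# The l_z person-fit statistic: standardized log-likelihood of a response
# pattern under the fitted IRT model. Large negative values flag misfit.
import numpy as np

def lz(x, p):
    """x: 0/1 responses; p: model probabilities of a correct response."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - e) / np.sqrt(v)

p = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
print(round(lz([1, 1, 1, 1, 0, 0, 0, 0], p), 2))  # Guttman-like pattern: fits well
print(round(lz([0, 0, 0, 0, 1, 1, 1, 1], p), 2))  # reversed pattern: misfit
```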
Epstein, Kenneth I.; Steinheiser, Frederick H., Jr. – 1978
A multiparameter, programmable model was developed to examine the interactive influence of certain parameters on the probability of deciding that an examinee had attained a specified degree of mastery. It was applied within the simulated context of performance testing of military trainees. These parameters included: (1) the number of assumed…
Descriptors: Academic Ability, Bayesian Statistics, Cutting Scores, Hypothesis Testing
Steinheiser, Frederick H., Jr. – 1976
A computer simulation of Bayes' Theorem was conducted in order to determine the probability that an examinee was a master conditional upon his test score. The inputs were: number of mastery states assumed, test length, prior expectation of masters in the examinee population, and conditional probability of a master getting a randomly selected test…
Descriptors: Bayesian Statistics, Classification, Computer Programs, Criterion Referenced Tests
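A worked sketch of the Bayes'-theorem classification described in this and the preceding Epstein and Steinheiser entry: the posterior probability that an examinee is a master given a test score. The binomial likelihoods, prior, and response probabilities below are illustrative assumptions, not values from either report:

```python
# Posterior probability of mastery given s correct out of n items, assuming
# binomial likelihoods for masters and nonmasters and a prior mastery rate.
from math import comb

def p_master_given_score(s, n, prior=0.6, p_master=0.8, p_nonmaster=0.5):
    like_m = comb(n, s) * p_master**s * (1 - p_master)**(n - s)
    like_n = comb(n, s) * p_nonmaster**s * (1 - p_nonmaster)**(n - s)
    return prior * like_m / (prior * like_m + (1 - prior) * like_n)

# 14 correct out of 20 items:
print(round(p_master_given_score(14, 20), 3))
```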
Nandakumar, Ratna; Yu, Feng – 1994
DIMTEST is a statistical test procedure for assessing essential unidimensionality of binary test item responses. The test statistic T used for testing the null hypothesis of essential unidimensionality is a nonparametric statistic. That is, there is no particular parametric distribution assumed for the underlying ability distribution or for the…
Descriptors: Ability, Content Validity, Correlation, Nonparametric Statistics
Kim, Seock-Ho; And Others – 1992
Hierarchical Bayes procedures were compared for estimating item and ability parameters in item response theory. Simulated data sets from the two-parameter logistic model were analyzed using three different hierarchical Bayes procedures: (1) the joint Bayesian with known hyperparameters (JB1); (2) the joint Bayesian with information hyperpriors…
Descriptors: Ability, Bayesian Statistics, Comparative Analysis, Equations (Mathematics)
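As a much-simplified taste of Bayesian estimation in the two-parameter logistic model, here is EAP ability estimation on a grid with a N(0,1) prior and known item parameters. The hierarchical joint procedures compared in the paper (JB1 and the others) estimate item parameters as well and are far more involved:

```python
# Expected a posteriori (EAP) ability estimation under the 2PL model with
# known item parameters; a simplification, not the paper's joint procedures.
import numpy as np

def eap_theta(x, a, b, grid=np.linspace(-4, 4, 81)):
    p = 1 / (1 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
    like = np.prod(np.where(x[None, :] == 1, p, 1 - p), axis=1)
    prior = np.exp(-grid**2 / 2)                 # N(0, 1) up to a constant
    post = like * prior
    return np.sum(grid * post) / np.sum(post)    # posterior mean of theta

a = np.array([1.0, 1.2, 0.8, 1.5])
b = np.array([-0.5, 0.0, 0.5, 1.0])
print(round(eap_theta(np.array([1, 1, 0, 0]), a, b), 2))
```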