Showing 1 to 15 of 17 results
Peer reviewed
Socha, Alan; DeMars, Christine E. – Applied Psychological Measurement, 2013
The software program DIMTEST can be used to assess the unidimensionality of item scores. The software allows the user to specify a guessing parameter. Using simulated data, the effects of guessing parameter specification for use with the ATFIND procedure for empirically deriving the Assessment Subtest (AT; that is, a subtest composed of items that…
Descriptors: Item Response Theory, Computer Software, Guessing (Tests), Simulation
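For context, the guessing parameter that DIMTEST lets the user specify plays the role of the lower asymptote c in the three-parameter logistic (3PL) model. A minimal Python sketch of that response function (logistic form without the 1.7 scaling constant; the function name is illustrative, not DIMTEST's API):

    import numpy as np

    def p_correct_3pl(theta, a, b, c):
        # 3PL response probability; c is the lower-asymptote
        # ("pseudo-guessing") parameter that DIMTEST lets the user set.
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # Even an examinee far below the item's difficulty retains a floor of c:
    print(p_correct_3pl(theta=-3.0, a=1.2, b=0.5, c=0.2))  # ~0.21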
Peer reviewed
Chiu, Ting-Wei; Camilli, Gregory – Applied Psychological Measurement, 2013
Guessing behavior is an issue discussed widely with regard to multiple choice tests. Its primary effect is on number-correct scores for examinees at lower levels of proficiency. This is a systematic error or bias, which increases observed test scores. Guessing also can inflate random error variance. Correction or adjustment for guessing formulas…
Descriptors: Item Response Theory, Guessing (Tests), Multiple Choice Tests, Error of Measurement
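For readers unfamiliar with them, correction-for-guessing formulas are classically of the form FS = R - W/(k - 1), where R and W are the numbers of right and wrong answers and k is the number of options per item. A minimal Python sketch (function name illustrative):

    def formula_score(num_right, num_wrong, num_options):
        # Classical correction for guessing: each wrong answer is treated
        # as a failed random guess among num_options alternatives, so
        # wrong answers are penalized by 1/(num_options - 1).
        return num_right - num_wrong / (num_options - 1)

    # 40 right and 10 wrong on 4-option items: 40 - 10/3 = 36.67
    print(formula_score(40, 10, 4))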
Peer reviewed
Wise, Steven L.; DeMars, Christine E. – Applied Psychological Measurement, 2009
Attali (2005) recently demonstrated that Cronbach's coefficient [alpha] estimate of reliability for number-right multiple-choice tests will tend to be deflated by speededness, rather than inflated as is commonly believed and taught. Although the methods, findings, and conclusions of Attali (2005) are correct, his article may inadvertently invite a…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Reliability, Computation
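For reference, Cronbach's coefficient alpha for an examinee-by-item score matrix is alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores). A minimal Python sketch, assuming complete data (function name illustrative):

    import numpy as np

    def cronbach_alpha(scores):
        # Coefficient alpha for an examinees-by-items score matrix,
        # assuming complete data.
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                          # number of items
        item_vars = scores.var(axis=0, ddof=1)       # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Three examinees, four dichotomous items:
    print(cronbach_alpha([[1, 1, 1, 0], [1, 0, 0, 0], [1, 1, 1, 1]]))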
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models designed specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
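In the unidimensional normal-ogive case, the common conversion formulae referenced here take a standardized loading lambda and threshold tau to discrimination a = lambda / sqrt(1 - lambda^2) and difficulty b = tau / lambda; the multidimensional versions the study examines generalize these. A minimal Python sketch under that unidimensional assumption (function name illustrative):

    import numpy as np

    def loading_to_irt(loading, threshold):
        # Normal-ogive conversion (unidimensional case):
        # a = lambda / sqrt(1 - lambda^2), b = tau / lambda.
        a = loading / np.sqrt(1.0 - loading ** 2)
        b = threshold / loading
        return a, b

    print(loading_to_irt(0.6, 0.3))  # a = 0.75, b = 0.5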
Peer reviewed
Woods, Carol M. – Applied Psychological Measurement, 2008
In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…
Descriptors: Simulation, Item Response Theory, Monte Carlo Methods, Comparative Analysis
Peer reviewed
Finch, Holmes; Habing, Brian – Applied Psychological Measurement, 2007
This Monte Carlo study compares the ability of the parametric bootstrap version of DIMTEST with three goodness-of-fit tests calculated from a fitted NOHARM model to detect violations of the assumption of unidimensionality in testing data. The effectiveness of the procedures was evaluated for different numbers of items, numbers of examinees,…
Descriptors: Guessing (Tests), Testing, Statistics, Monte Carlo Methods
Peer reviewed
Martin, Ernesto San; del Pino, Guido; De Boeck, Paul – Applied Psychological Measurement, 2006
An ability-based guessing model is formulated and applied to several data sets from educational tests in language and in mathematics. The model is formulated so that the probability of a correct guess depends not only on the item but also on the ability of the individual, weighted with a general discrimination parameter. By so…
Descriptors: Guessing (Tests), Probability, Mathematics Tests, Language Tests
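As a rough illustration of the idea (the functional forms below are assumptions chosen for exposition, not the authors' exact equations), the success probability can be decomposed into knowing the answer plus, failing that, an ability-weighted guess:

    import numpy as np

    def p_correct_ability_guessing(theta, b, alpha, g):
        # Decomposition: P(correct) = P(know) + (1 - P(know)) * P(guess),
        # where the guessing probability itself rises with ability theta,
        # weighted by a discrimination parameter alpha.
        # Functional forms here are illustrative assumptions.
        p_know = 1.0 / (1.0 + np.exp(-(theta - b)))
        p_guess = 1.0 / (1.0 + np.exp(-(alpha * theta + g)))
        return p_know + (1.0 - p_know) * p_guess

    # A low-ability examinee still guesses, but less successfully
    # than a high-ability one:
    print(p_correct_ability_guessing(-2.0, b=0.0, alpha=0.5, g=-1.0))  # ~0.22
    print(p_correct_ability_guessing(2.0, b=0.0, alpha=0.5, g=-1.0))   # ~0.94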
Peer reviewed
Frary, Robert B. – Applied Psychological Measurement, 1980
Six scoring methods for assigning weights to right or wrong responses according to various instructions given to test takers are analyzed with respect to expected chance scores and the effect of various levels of information and misinformation. Three of the methods provide feedback to the test taker. (Author/CTM)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scores
Peer reviewed
Garcia-Perez, Miguel A.; Frary, Robert B. – Applied Psychological Measurement, 1989
Simulation techniques were used to generate conventional test responses and track the proportion of alternatives examinees could classify independently before and after taking the test. Finite-state scores were compared with these actual values and with number-correct and formula scores. Finite-state scores proved useful. (TJH)
Descriptors: Comparative Analysis, Computer Simulation, Guessing (Tests), Mathematical Models
Peer reviewed
van der Ven, A. H. G. S.; Gremmen, F. M. – Applied Psychological Measurement, 1992
A statistical test of the knowledge-or-random-guessing model is presented. A version of the model is introduced in which it is assumed that the alternatives can be ordered along a Guttman scale. Three examples illustrate its application to data from a total of 590 college students. (Author/SLD)
Descriptors: Achievement Tests, College Students, Equations (Mathematics), Guessing (Tests)
Peer reviewed
Kane, Michael; Moloney, James – Applied Psychological Measurement, 1978
The answer-until-correct (AUC) procedure requires that examinees respond to a multiple-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability under the AUC procedure and under the zero-one scoring procedure. (Author/CTM)
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
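One common answer-until-correct scoring rule (an illustrative choice; the paper's modified Horst model may score differently) awards (k - r)/(k - 1) points for a correct answer on the r-th attempt at a k-option item, in contrast to conventional zero-one scoring:

    def auc_item_score(attempt, num_options):
        # A common answer-until-correct rule: a correct answer on the
        # r-th attempt at a k-option item earns (k - r) / (k - 1) points,
        # so full credit on the first try and none on the last.
        return (num_options - attempt) / (num_options - 1)

    def zero_one_score(attempt):
        # Conventional zero-one scoring: credit only for first-try success.
        return 1 if attempt == 1 else 0

    print(auc_item_score(2, 4))  # second try on a 4-option item: 2/3 credit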
Peer reviewed
Bock, R. Darrell; And Others – Applied Psychological Measurement, 1988
A method of item factor analysis is described, which is based on Thurstone's multiple-factor model and implemented by marginal maximum likelihood estimation and the EM algorithm. Also assessed are the statistical significance of successive factors added to the model, provisions for guessing and omitted items, and Bayes constraints. (TJH)
Descriptors: Algorithms, Bayesian Statistics, Equations (Mathematics), Estimation (Mathematics)
Peer reviewed
Poizner, Sharon B.; And Others – Applied Psychological Measurement, 1978
Binary, probability, and ordinal scoring procedures for multiple-choice items were examined. In two situations, it was found that both the probability and ordinal scoring systems were more reliable than the binary scoring method. (Author/CTM)
Descriptors: Confidence Testing, Guessing (Tests), Higher Education, Multiple Choice Tests
Peer reviewed
Jensema, Carl J. – Applied Psychological Measurement, 1977
Owen's Bayesian tailored testing method is introduced along with a brief review of its derivation. The characteristics of a good item bank are outlined and explored in terms of their influence on the Bayesian tailoring process. (Author/RC)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Oriented Programs
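Owen's method maintains an approximately normal posterior for ability with closed-form updates after each item. The sketch below substitutes a simple grid-based posterior and a nearest-difficulty selection rule to illustrate the tailoring loop; it is not Owen's exact procedure, and the function names are illustrative:

    import numpy as np

    def posterior_update(prior, theta_grid, a, b, c, response):
        # Grid-based Bayesian ability update after one 3PL item.
        # Owen's method uses closed-form normal approximations instead;
        # this is only an illustrative stand-in.
        p = c + (1.0 - c) / (1.0 + np.exp(-a * (theta_grid - b)))
        likelihood = p if response == 1 else (1.0 - p)
        posterior = prior * likelihood
        return posterior / posterior.sum()

    def pick_next_item(theta_hat, item_difficulties):
        # Tailoring step: administer the item whose difficulty is
        # closest to the current ability estimate.
        return int(np.argmin(np.abs(np.asarray(item_difficulties) - theta_hat)))

    grid = np.linspace(-4, 4, 161)
    post = np.full_like(grid, 1.0 / grid.size)       # flat prior
    post = posterior_update(post, grid, a=1.0, b=0.0, c=0.2, response=1)
    theta_hat = float((grid * post).sum())           # posterior mean
    print(pick_next_item(theta_hat, [-1.0, 0.0, 1.0]))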
Peer reviewed
Waller, Michael I. – Applied Psychological Measurement, 1989
The fit of the three-parameter model to data from tests of cognitive ability was compared with the fit of the Ability Removing Random Guessing model of M. I. Waller (1973). Three cohorts of about 1,000 children each (fourth, seventh, and tenth graders) were administered examinations in 11 content areas. (SLD)
Descriptors: Children, Cognitive Ability, Comparative Analysis, Elementary School Students