Showing all 10 results
Peer reviewed
PDF on ERIC
Kim, Sooyeon; Livingston, Samuel A. – ETS Research Report Series, 2017
The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…
Descriptors: Accuracy, Test Theory, Test Reliability, Adaptive Testing
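The classical alternate-forms reliability that Kim and Livingston estimate is, at its core, the correlation between scores on two parallel forms of the same test. A minimal simulation sketch of that idea (not their MST procedure; the examinee count, error variance, and continuous-score simplification are assumptions for illustration):

```python
import random
import statistics

random.seed(0)

def simulate_alternate_forms(n_examinees=1000, error_sd=0.5):
    # Each examinee has one true ability; each parallel form adds
    # independent measurement error to that ability.
    pairs = []
    for _ in range(n_examinees):
        theta = random.gauss(0, 1)
        form_a = theta + random.gauss(0, error_sd)
        form_b = theta + random.gauss(0, error_sd)
        pairs.append((form_a, form_b))
    return pairs

def pearson(pairs):
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Alternate-forms reliability estimate: correlation of the two forms.
# Theoretical value here is 1 / (1 + error_sd**2) = 0.8.
r = pearson(simulate_alternate_forms())
```

With true-score variance 1 and error variance 0.25, the classical reliability is 1/1.25 = 0.8, and the sample correlation should land near it.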
Peer reviewed
PDF on ERIC
Kogar, Hakan – International Journal of Assessment Tools in Education, 2018
The aim of this simulation study was to determine the relationship between true latent scores and estimated latent scores by including various control variables and different statistical models. The study also aimed to compare the statistical models and determine the effects of different distribution types, response formats and sample sizes on latent…
Descriptors: Simulation, Context Effect, Computation, Statistical Analysis
Peer reviewed
Direct link
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
Deng, Nina – ProQuest LLC, 2011
Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…
Descriptors: Item Response Theory, Test Theory, Computation, Classification
Peer reviewed
PDF on ERIC
Zhang, Jinming – ETS Research Report Series, 2004
This paper extends the theory of conditional covariances to polytomous items. It has been mathematically proven that under some mild conditions, commonly assumed in the analysis of response data, the conditional covariance of two items, dichotomously or polytomously scored, is positive if the two items are dimensionally homogeneous and negative…
Descriptors: Test Items, Test Theory, Correlation, National Competency Tests
Epstein, Kenneth I.; Knerr, Claramae S. – 1976
The literature on criterion referenced testing is full of discussions concerning whether classical measurement techniques are appropriate, whether variance is necessary, whether new indices of reliability are needed, and the like. What appears to be lacking, however, is a clear and simple discussion of why the problems occur. This paper suggests…
Descriptors: Career Development, Criterion Referenced Tests, Item Analysis, Item Sampling
Weitzman, R. A. – 1982
The goal of this research was to predict from a recruit's responses to the Armed Services Vocational Aptitude Battery (ASVAB) items whether the recruit would pass the Armed Forces Qualification Test (AFQT). The data consisted of the responses (correct/incorrect) of 1,020 Navy recruits to 200 items of the ASVAB together with the scores of these…
Descriptors: Adults, Armed Forces, Computer Oriented Programs, Computer Simulation
Marshall, J. Laird – 1976
A summary is provided of the rationale for questioning the applicability of classical reliability measures to criterion referenced tests; an extension of the classical theory of true and error scores to incorporate a theory of dichotomous decisions; a presentation of the mean split-half coefficient of agreement, a single-administration test index…
Descriptors: Career Development, Computer Programs, Criterion Referenced Tests, Decision Making
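Marshall's mean split-half coefficient of agreement is a single-administration decision-consistency index: split the test into random halves, classify each examinee as master or nonmaster on each half, record the proportion classified the same way, and average over many splits. A generic sketch in that spirit (the cutoff, split count, and response-generation scheme are illustrative assumptions, not Marshall's specification):

```python
import random

random.seed(1)

def split_half_agreement(responses, cutoff_prop=0.7, n_splits=50):
    """Mean proportion of examinees given the same mastery classification
    by two random halves of the test, averaged over many random splits."""
    n_items = len(responses[0])
    items = list(range(n_items))
    totals = []
    for _ in range(n_splits):
        random.shuffle(items)
        half_a, half_b = items[: n_items // 2], items[n_items // 2 :]
        cut_a = cutoff_prop * len(half_a)
        cut_b = cutoff_prop * len(half_b)
        agree = sum(
            (sum(r[i] for i in half_a) >= cut_a)
            == (sum(r[i] for i in half_b) >= cut_b)
            for r in responses
        )
        totals.append(agree / len(responses))
    return sum(totals) / n_splits

# Simulated dichotomous responses: each examinee answers each of 20 items
# correctly with an ability-dependent probability.
abilities = [random.uniform(0.3, 0.95) for _ in range(200)]
examinees = [
    [1 if random.random() < p else 0 for _ in range(20)]
    for p in abilities
]
agreement = split_half_agreement(examinees)
```

Examinees far from the cutoff are classified consistently by almost any split, so the index mainly reflects how many examinees sit near the mastery cut.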
Yen, Wendy M. – 1979
Three test-analysis models were used to analyze three types of simulated test score data plus the results of eight achievement tests. Chi-square goodness-of-fit statistics were used to evaluate the appropriateness of the models to the four kinds of data. Data were generated to simulate the responses of 1,000 students to 36 pseudo-items by…
Descriptors: Achievement Tests, Correlation, Goodness of Fit, Item Analysis
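Yen's model evaluation rests on the Pearson chi-square goodness-of-fit statistic, which compares observed frequencies against the frequencies a model predicts. A minimal sketch of the statistic itself (the toy frequency tables are invented for illustration; they are not Yen's data):

```python
def chi_square_statistic(observed, expected):
    """Pearson chi-square goodness-of-fit: sum over cells of (O - E)^2 / E.
    Larger values indicate worse agreement between data and model."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Toy example: observed score-group counts vs. model-predicted counts.
observed = [18, 22, 30, 20, 10]
expected = [20, 20, 25, 25, 10]
x2 = chi_square_statistic(observed, expected)
# x2 = 0.2 + 0.2 + 1.0 + 1.0 + 0.0 = 2.4
```

In practice the statistic is referred to a chi-square distribution whose degrees of freedom depend on the number of cells and fitted parameters.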
Sarvela, Paul D. – 1986
Four discrimination indices were compared, using score distributions which were normal, bimodal, and negatively skewed. The score distributions were systematically varied to represent the common circumstances of a military training situation using criterion-referenced mastery tests. Three 20-item tests were administered to 110 simulated subjects.…
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Analysis, Mastery Tests
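Sarvela's report does not name its four indices in this snippet, but one classic discrimination index of the kind such comparisons include is the upper-lower (U-L) index: the difference in an item's proportion-correct between the highest- and lowest-scoring groups. A sketch under that assumption, with invented toy data:

```python
def upper_lower_discrimination(item_responses, total_scores, frac=0.27):
    """Upper-lower (U-L) discrimination index: item p-value in the top
    score group minus item p-value in the bottom score group."""
    ranked = sorted(zip(total_scores, item_responses), reverse=True)
    k = max(1, int(frac * len(ranked)))
    upper = [resp for _, resp in ranked[:k]]
    lower = [resp for _, resp in ranked[-k:]]
    return sum(upper) / k - sum(lower) / k

# Toy data: 10 examinees; the item is answered correctly mostly by
# high total scorers, so it should discriminate strongly.
scores = [19, 18, 17, 16, 15, 9, 8, 7, 6, 5]
item = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
d = upper_lower_discrimination(item, scores, frac=0.3)
# Upper 3 all correct, lower 3 all incorrect, so d = 1.0
```

Values near +1 indicate an item that separates high and low scorers sharply; values near 0 (common in mastery testing, where variance is restricted) motivate the alternative indices such studies compare.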