Showing 1 to 15 of 40 results
Peer reviewed
Kim, Stella Y.; Lee, Won-Chan – Journal of Educational Measurement, 2023
The current study proposed several variants of simple-structure multidimensional item response theory equating procedures. Four distinct sets of data were used to demonstrate the feasibility of the proposed equating methods for two different equating designs: a random groups design and a common-item nonequivalent groups design. Findings indicated some…
Descriptors: Item Response Theory, Equated Scores, Monte Carlo Methods, Research Methodology
Peer reviewed
Kylie Gorney; Sandip Sinharay – Journal of Educational Measurement, 2025
Although there exists an extensive amount of research on subscores and their properties, limited research has been conducted on categorical subscores and their interpretations. In this paper, we focus on the claim of Feinberg and von Davier that categorical subscores are useful for remediation and instructional purposes. We investigate this claim…
Descriptors: Tests, Scores, Test Interpretation, Alternative Assessment
Peer reviewed
Wong, Cheow Cher – Journal of Educational Measurement, 2015
Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
Descriptors: Item Response Theory, Error of Measurement, True Scores, Equated Scores
Peer reviewed
Li, Deping; Jiang, Yanlin; von Davier, Alina A. – Journal of Educational Measurement, 2012
This study investigates a sequence of item response theory (IRT) true score equatings based on various scale transformation approaches and evaluates equating accuracy and consistency over time. The results show that the biases and sample variances for the IRT true score equating (both direct and indirect) are quite small (except for the mean/sigma…
Descriptors: True Scores, Equated Scores, Item Response Theory, Accuracy
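The mean/sigma method mentioned in this abstract is a linear scale transformation for IRT linking: a slope is taken from the ratio of standard deviations of common-item difficulty estimates, and an intercept from their means. A minimal sketch (the difficulty values below are illustrative, not from the study):

```python
import statistics as st

def mean_sigma_transform(b_old, b_new):
    """Mean/sigma scale linking: find A, B such that
    A * b_old + B places old-form difficulties on the
    new form's scale (A from SDs, B from means)."""
    A = st.stdev(b_new) / st.stdev(b_old)
    B = st.mean(b_new) - A * st.mean(b_old)
    return A, B

# Illustrative common-item difficulty estimates on two forms
b_old = [-1.2, -0.3, 0.4, 1.1]
b_new = [-1.0, -0.1, 0.7, 1.6]
A, B = mean_sigma_transform(b_old, b_new)
print(round(A, 3), round(B, 3))
```

The abstract notes that this method produced larger biases than the other transformations studied, which is consistent with its sensitivity to outlying difficulty estimates.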
Peer reviewed
Moses, Tim – Journal of Educational Measurement, 2012
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Descriptors: Error of Measurement, Prediction, Regression (Statistics), True Scores
Peer reviewed
Gierl, Mark J.; Cui, Ying; Zhou, Jiawen – Journal of Educational Measurement, 2009
The attribute hierarchy method (AHM) is a psychometric procedure for classifying examinees' test item responses into a set of structured attribute patterns associated with different components from a cognitive model of task performance. Results from an AHM analysis yield information on examinees' cognitive strengths and weaknesses. Hence, the AHM…
Descriptors: Test Items, True Scores, Psychometrics, Algebra
Peer reviewed
Lee, Guemin – Journal of Educational Measurement, 2000
Presents and illustrates an appropriate formula for correction for attenuation for situations in which one measure includes the other as a part. The formula can be used to compute the true-score correlation between a total test and a part test. (SLD)
Descriptors: Correlation, True Scores
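For context, the classical correction for attenuation, which Lee's part-whole formula refines, estimates the true-score correlation from an observed correlation and the two reliabilities. A minimal sketch of the classical version only (the article's part-whole variant is not reproduced here):

```python
import math

def disattenuated_correlation(r_xy, rel_x, rel_y):
    """Classical correction for attenuation:
    r_TxTy = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed correlation .60 with reliabilities .80 and .90
print(round(disattenuated_correlation(0.60, 0.80, 0.90), 3))  # 0.707
```

The classical formula assumes the two measures share no parts; when one test contains the other, the error terms are correlated, which is why a corrected formula is needed.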
Peer reviewed
Monahan, Patrick O.; Lee, Won-Chan; Ankenmann, Robert D. – Journal of Educational Measurement, 2007
A Monte Carlo simulation technique for generating dichotomous item scores is presented that implements (a) a psychometric model with different explicit assumptions than traditional parametric item response theory (IRT) models, and (b) item characteristic curves without restrictive assumptions concerning mathematical form. The four-parameter beta…
Descriptors: True Scores, Psychometrics, Monte Carlo Methods, Correlation
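As a generic illustration of the Monte Carlo step common to such techniques (not the four-parameter beta method itself), simulating dichotomous item scores amounts to evaluating each item characteristic curve at an examinee's trait value and drawing a Bernoulli response. All names below are hypothetical:

```python
import math
import random

def simulate_responses(thetas, iccs, seed=42):
    """For each examinee trait value theta and each item
    characteristic curve (a function theta -> P(correct)),
    draw a dichotomous 0/1 response."""
    rng = random.Random(seed)
    return [[1 if rng.random() < icc(t) else 0 for icc in iccs]
            for t in thetas]

# Two illustrative logistic ICCs; any monotone curve would do,
# which is the flexibility the abstract emphasizes
iccs = [lambda t: 1 / (1 + math.exp(-(t - 0.0))),  # difficulty 0
        lambda t: 1 / (1 + math.exp(-(t - 1.0)))]  # difficulty 1
data = simulate_responses([-1.0, 0.0, 1.0, 2.0], iccs)
print(data)  # 4 examinees x 2 items of 0/1 scores
```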
Peer reviewed
Marks, Edmond; Lindsay, Carl A. – Journal of Educational Measurement, 1972
Examines the effects of four parameters on the accuracy of test equating under a relaxed definition of test form equivalence. The four parameters studied were sample size, test form length, test form reliability, and the correlation between true scores of the test forms to be equated. (CK)
Descriptors: Scores, Test Interpretation, Test Reliability, Test Results
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1987
Four procedures are discussed for obtaining a confidence interval when answer-until-correct scoring is used in multiple choice tests. Simulated data show that the choice of procedure depends upon sample size. (GDC)
Descriptors: Computer Simulation, Multiple Choice Tests, Sample Size, Scoring
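The abstract does not detail the four procedures. As a generic example of a confidence interval for a proportion-correct score whose behavior depends on sample size, here is the Wilson score interval; this is an assumption for illustration, not one of the procedures Wilcox compared:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion;
    more accurate than the Wald interval at small n."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 18 items correct out of 25
lo, hi = wilson_interval(18, 25)
print(round(lo, 3), round(hi, 3))
```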
Peer reviewed
Wilcox, Rand R. – Journal of Educational Measurement, 1982
A new model for measuring misinformation is suggested. A modification of Wilcox's strong true-score model is indicated for certain situations, since it corrects for guessing without assuming that guessing is random. (Author/GK)
Descriptors: Achievement Tests, Guessing (Tests), Mathematical Models, Scoring Formulas
Peer reviewed
Kane, Michael T.; And Others – Journal of Educational Measurement, 1976
This discussion illustrates the application of generalizability theory to a design commonly employed in the collection of evaluation data and provides a detailed analysis of the dependability of student evaluations of college teaching. (RC)
Descriptors: Course Evaluation, Student Evaluation of Teacher Performance, Test Reliability, True Scores
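For readers unfamiliar with generalizability theory, the dependability of averaged ratings is typically summarized by a generalizability coefficient built from estimated variance components. A minimal one-facet sketch (the design and values are illustrative assumptions, not those of the article):

```python
def g_coefficient(var_person, var_residual, n_conditions):
    """One-facet generalizability coefficient (E rho^2):
    universe-score variance over itself plus the residual
    error variance averaged over n_conditions."""
    return var_person / (var_person + var_residual / n_conditions)

# Instructor variance 4.0, residual 6.0, averaged over 10 raters
print(round(g_coefficient(4.0, 6.0, 10), 3))
```

The coefficient rises toward 1 as more conditions (here, student raters) are averaged, which is the sense in which the design determines dependability.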
Peer reviewed
Livingston, Samuel A. – Journal of Educational Measurement, 1973
Comments on a study by Harris, who presented formulas for the variance of errors of estimation (of a true score from an observed score) and the variance of errors of prediction (of an observed score from an observed score on a parallel test). (Author/RK)
Descriptors: Criterion Referenced Tests, Measurement, Norm Referenced Tests, Test Reliability
Peer reviewed
Kim, Dong-In; Brennan, Robert; Kolen, Michael – Journal of Educational Measurement, 2005
Four equating methods (3PL true score equating, 3PL observed score equating, beta 4 true score equating, and beta 4 observed score equating) were compared using four equating criteria: first-order equity (FOE), second-order equity (SOE), conditional-mean-squared-error (CMSE) difference, and the equi-percentile equating property. True score…
Descriptors: True Scores, Psychometrics, Equated Scores, Item Response Theory
Peer reviewed
Kupermintz, Haggai – Journal of Educational Measurement, 2004
A decision-theoretic approach to the question of reliability in categorically scored examinations is explored. The concepts of true scores and errors are discussed as they deviate from conventional psychometric definitions, and measurement error in categorical scores is cast in terms of misclassifications. A reliability measure based on…
Descriptors: Test Reliability, Error of Measurement, Psychometrics, Test Theory
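As a generic illustration of casting error in categorical scores as misclassification (not the article's specific reliability measure), chance-corrected agreement between two parallel classifications can be summarized with Cohen's kappa:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two categorical
    classifications of the same examinees."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2
    return (observed - expected) / (1 - expected)

# Illustrative pass/fail classifications from two parallel forms
a = ["pass", "pass", "fail", "pass", "fail", "fail"]
b = ["pass", "fail", "fail", "pass", "fail", "fail"]
print(round(cohens_kappa(a, b), 3))
```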