Showing all 8 results
Peer reviewed
Li, Feifei – ETS Research Report Series, 2017
An information-correction method for testlet-based tests is introduced. This method takes advantage of both generalizability theory (GT) and item response theory (IRT). The measurement error for the examinee proficiency parameter is often underestimated when a unidimensional conditional-independence IRT model is specified for a testlet dataset. By…
Descriptors: Item Response Theory, Generalizability Theory, Tests, Error of Measurement
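Note: The underestimation described in this abstract can be reproduced with a small simulation. The sketch below is illustrative only and is not the information-correction method from the report; the Rasch form, the item difficulties, and the testlet-effect size are all assumptions chosen for the example. It generates responses with a random testlet effect, estimates ability with a unidimensional model that assumes conditional independence, and compares the model-based standard error with the empirical variability of the estimates.

```python
# Minimal sketch (not the report's information-correction method): the
# conditional-independence SE understates the true variability of ability
# estimates when items within a testlet share a common effect.
import numpy as np

rng = np.random.default_rng(0)

n_testlets, items_per_testlet = 5, 6          # 30 items in 5 testlets (assumed)
n_items = n_testlets * items_per_testlet
b = rng.normal(0.0, 1.0, n_items)             # item difficulties, treated as known
testlet_sd = 0.8                              # assumed size of the testlet effect
theta_true = 0.0                              # fixed true ability
grid = np.linspace(-4, 4, 401)                # grid for ML estimation of theta

def simulate_once():
    gamma = rng.normal(0.0, testlet_sd, n_testlets)        # testlet effects
    eta = theta_true + np.repeat(gamma, items_per_testlet) - b
    return (rng.random(n_items) < 1 / (1 + np.exp(-eta))).astype(float)

def mle_and_se(x):
    # Unidimensional Rasch likelihood, conditional independence assumed.
    p = 1 / (1 + np.exp(-(grid[:, None] - b[None, :])))
    loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    theta_hat = grid[np.argmax(loglik)]
    p_hat = 1 / (1 + np.exp(-(theta_hat - b)))
    info = np.sum(p_hat * (1 - p_hat))                     # test information
    return theta_hat, 1 / np.sqrt(info)

estimates, model_ses = zip(*(mle_and_se(simulate_once()) for _ in range(2000)))
print("mean model-based SE :", np.mean(model_ses))         # too small
print("empirical SD of MLEs:", np.std(estimates))          # larger
```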
Peer reviewed
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
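Note: The "additive combination of factor and error variance" in this abstract can be written out directly. The sketch below assumes the "idealized variables" weight matrix B = L(L'L)^-1 sometimes associated with Harman; that identification, the loadings, and the one-factor structure are assumptions for illustration, not the article's derivation.

```python
# Minimal sketch, assuming Harman-type weights B = L (L'L)^-1: the predictor
# f_hat = B'x = B'(L f + e) = f + B'e, so its variance splits additively into
# a factor part and an error part.
import numpy as np

L = np.array([[0.8], [0.7], [0.6], [0.5]])        # assumed loadings, one factor
psi = 1.0 - (L ** 2).ravel()                       # unique (error) variances
Psi = np.diag(psi)

B = L @ np.linalg.inv(L.T @ L)                     # Harman-type weights (assumed form)
var_factor = (B.T @ L @ L.T @ B).item()            # factor contribution (= 1 here)
var_error = (B.T @ Psi @ B).item()                 # error contribution
print("factor part:", var_factor, "error part:", var_error)
```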
Peer reviewed
Stevens, Joseph J.; Aleamoni, Lawrence M. – Educational and Psychological Measurement, 1986
Prior standardization of scores when an aggregate score is formed has been criticized. This article presents a demonstration of the effects of differential weighting of aggregate components that clarifies the need for prior standardization. The role of standardization in statistics and the use of aggregate scores in research are discussed.…
Descriptors: Correlation, Error of Measurement, Factor Analysis, Raw Scores
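Note: The need for prior standardization can be seen in a short numeric example (an illustrative sketch, not the article's demonstration; the components and their scales are invented). When raw components with very different standard deviations are added with nominally equal weights, the component with the larger spread dominates the composite; z-standardizing first makes the nominal weights the effective ones.

```python
# Illustrative sketch: nominal weights vs. effective contributions to a composite.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
a = rng.normal(50, 10, n)        # component with SD 10 (assumed scale)
b = rng.normal(5, 1, n)          # component with SD 1 (assumed scale)

raw = a + b                                                # equal weights on raw scores
z = (a - a.mean()) / a.std() + (b - b.mean()) / b.std()    # standardized first

print("corr(raw composite, a):", np.corrcoef(raw, a)[0, 1])   # near 1.0
print("corr(raw composite, b):", np.corrcoef(raw, b)[0, 1])   # near 0.1
print("corr(z composite, a)  :", np.corrcoef(z, a)[0, 1])     # ~0.71
print("corr(z composite, b)  :", np.corrcoef(z, b)[0, 1])     # ~0.71
```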
Peer reviewed
Ferrando, Pere J.; Lorenzo, Urbano – Educational and Psychological Measurement, 1998
A program for obtaining ability estimates and their standard errors under a variety of psychometric models is documented. The general models considered are (1) classical test theory; (2) item factor analysis for continuous censored responses; and (3) unidimensional and multidimensional item response theory graded response models. (SLD)
Descriptors: Ability, Error of Measurement, Estimation (Mathematics), Factor Analysis
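Note: As a small companion to the kind of output such a program produces, the sketch below computes a maximum-likelihood ability estimate and its standard error under a two-parameter logistic IRT model with known item parameters, taking the SE as 1/sqrt(test information). It is a generic illustration, not the program documented in the article; the item parameters and response pattern are invented.

```python
# Minimal sketch: ML ability estimate and standard error under a 2PL model
# with known item parameters (not the documented program).
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])     # assumed discriminations
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])   # assumed difficulties
x = np.array([1, 1, 1, 0, 0], dtype=float)  # one examinee's responses (invented)

grid = np.linspace(-4, 4, 801)
p = 1 / (1 + np.exp(-a * (grid[:, None] - b)))          # P(correct | theta)
loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
theta_hat = grid[np.argmax(loglik)]

p_hat = 1 / (1 + np.exp(-a * (theta_hat - b)))
info = np.sum(a ** 2 * p_hat * (1 - p_hat))             # test information
se = 1 / np.sqrt(info)
print(f"theta = {theta_hat:.2f}, SE = {se:.2f}")
```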
Peer reviewed
McDonald, Roderick P. – Alberta Journal of Educational Research, 2003
The concept of a behavior domain is a reasonable and essential foundation for psychometric work based on true score theory, the linear model of common factor analysis, and the nonlinear models of item response theory. Investigators applying these models to test data generally treat the true scores or factors or traits as abstractive psychological…
Descriptors: Factor Analysis, Error of Measurement, True Scores, Psychometrics
Stewart, E. Elizabeth – 1981
Context effects are defined as being influences on test performance associated with the content of successively presented test items or sections. Four types of context effects are identified: (1) direct context effects (practice effects) which occur when performance on items is affected by the examinee having been exposed to similar types of…
Descriptors: Context Effect, Data Collection, Error of Measurement, Evaluation Methods
Thompson, Bruce; Borrello, Gloria M. – 1987
Attitude measures frequently produce distributions of item scores that attenuate interitem correlations and thus also distort findings regarding the factor structure underlying the items. An actual data set involving 260 adult subjects' responses to 55 items on the Love Relationships Scale is employed to illustrate empirical methods for…
Descriptors: Adults, Analysis of Covariance, Attitude Measures, Correlation
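Note: The attenuation mechanism this abstract refers to is easy to reproduce with a generic simulation (a sketch only, unrelated to the Love Relationships Scale data; the latent correlation and scoring thresholds are assumptions). Two latent item responses with a known correlation are coarsened into a few skewed categories, and the Pearson correlation of the observed scores falls below the latent correlation.

```python
# Generic sketch of attenuation: coarse, skewed item scoring lowers the
# observed Pearson correlation relative to the latent correlation.
import numpy as np

rng = np.random.default_rng(2)
rho = 0.6                                       # assumed latent correlation
cov = [[1.0, rho], [rho, 1.0]]
latent = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)

# Score each latent response on a skewed 3-point scale.
cuts = [0.5, 1.5]                               # thresholds pushing scores low
items = np.digitize(latent, cuts)               # observed item scores 0/1/2

print("latent correlation  :", np.corrcoef(latent.T)[0, 1])   # ~0.60
print("observed correlation:", np.corrcoef(items.T)[0, 1])    # noticeably smaller
```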
Marsh, Herbert W.; Hocevar, Dennis – 1986
The advantages of applying confirmatory factor analysis (CFA) to multitrait-multimethod (MTMM) data are widely recognized. However, because CFA as traditionally applied to MTMM data incorporates single indicators of each scale (i.e., each trait/method combination), important weaknesses are the failure to: (1) correct appropriately for measurement…
Descriptors: Computer Software, Construct Validity, Correlation, Error of Measurement
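Note: The structure at issue can be written down directly. The sketch below builds the model-implied covariance of a correlated trait / correlated method CFA for a 3-trait x 3-method MTMM design; every loading, correlation, and uniqueness is an assumed value for illustration, and this is not the authors' analysis. With single indicators per trait-method combination, each observed variance mixes trait, method, and error variance, which is why correcting for measurement error requires the CFA decomposition.

```python
# Minimal sketch of the model-implied covariance in a correlated trait /
# correlated method CFA for a 3-trait x 3-method MTMM design (illustration only).
import numpy as np

n_traits, n_methods = 3, 3
p = n_traits * n_methods                      # 9 observed trait-method measures

# Each measure loads on its trait and on its method (assumed loadings).
Lt = np.zeros((p, n_traits))
Lm = np.zeros((p, n_methods))
for t in range(n_traits):
    for m in range(n_methods):
        i = t * n_methods + m
        Lt[i, t] = 0.7                        # trait loading
        Lm[i, m] = 0.4                        # method loading

Phi_t = np.full((n_traits, n_traits), 0.3) + 0.7 * np.eye(n_traits)     # trait correlations
Phi_m = np.full((n_methods, n_methods), 0.2) + 0.8 * np.eye(n_methods)  # method correlations
Theta = np.eye(p) * (1 - 0.7**2 - 0.4**2)     # unique (error) variances

Sigma = Lt @ Phi_t @ Lt.T + Lm @ Phi_m @ Lm.T + Theta
print("diagonal (trait + method + error variance):", np.round(np.diag(Sigma), 2))
print("same trait, different methods:", round(Sigma[0, 1], 2))   # convergent validity
print("different traits, same method:", round(Sigma[0, 3], 2))   # method contamination
```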