Showing 1 to 15 of 39 results
Peer reviewed
Phillip K. Wood – Structural Equation Modeling: A Multidisciplinary Journal, 2024
The logistic and confined exponential curves are frequently used in studies of growth and learning. These models, which are nonlinear in their parameters, can be estimated using structural equation modeling software. This paper proposes a single combined model, a weighted combination of both models. Mplus, Proc Calis, and lavaan code for the model…
Descriptors: Structural Equation Models, Computation, Computer Software, Weighted Scores
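For orientation, a common textbook parameterization of the two curves named above and a weighted combination of the kind the abstract describes; the specific functional forms and constraints used in the article may differ:

\[
f_{\mathrm{logistic}}(t) = \frac{a}{1 + e^{-b\,(t - c)}}, \qquad
f_{\mathrm{confined}}(t) = a\left(1 - e^{-k t}\right), \qquad
y(t) = w\, f_{\mathrm{logistic}}(t) + (1 - w)\, f_{\mathrm{confined}}(t) + \varepsilon, \quad 0 \le w \le 1 .
\]

In SEM software these curve parameters (and the weight w) would typically be estimated as model parameters, possibly with random effects across persons.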
Peer reviewed
Chiu, Loren Z. F.; Daehlin, Torstein E. – Measurement in Physical Education and Exercise Science, 2020
Males (n = 29) and females (n = 34) performed vertical jumps. Jump height was estimated from force platform data using five numerical methods and compared using intraclass correlation (ρ), and linear and rank regression standard error of estimate ("SEE"). Take-off velocity plus center of mass height at take-off and mechanical work…
Descriptors: Physical Activities, Scientific Concepts, Computation, Motion
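As background on one of the numerical methods compared above, a minimal Python sketch of the take-off-velocity approach, assuming a vertical ground-reaction-force record from quiet standing through take-off; the function name, the simple rectangular integration, and the sampling details are illustrative, not the authors' code:

```python
import numpy as np

def jump_height_takeoff_velocity(force_n, mass_kg, fs_hz, g=9.81):
    """Estimate jump height from a vertical ground reaction force record.

    Impulse-momentum: net force divided by mass, integrated over time, gives
    center-of-mass velocity; height gained after take-off is v^2 / (2g).
    `force_n` should span quiet standing up to the instant of take-off.
    """
    accel = force_n / mass_kg - g          # net vertical acceleration (m/s^2)
    dt = 1.0 / fs_hz
    velocity = np.cumsum(accel) * dt       # rectangular integration of acceleration
    v_takeoff = velocity[-1]               # velocity at the final (take-off) sample
    return v_takeoff ** 2 / (2.0 * g)      # projectile height of the center of mass
```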
Peer reviewed
Marcoulides, Katerina M. – Measurement: Interdisciplinary Research and Perspectives, 2019
Longitudinal data analysis has received widespread interest throughout educational, behavioral, and social science research, with latent growth curve modeling currently being one of the most popular methods of analysis. Despite the popularity of latent growth curve modeling, limited attention has been directed toward understanding the issues of…
Descriptors: Reliability, Longitudinal Studies, Growth Models, Structural Equation Models
Peer reviewed
Nicewander, W. Alan – Educational and Psychological Measurement, 2018
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
Descriptors: Error of Measurement, Correlation, Sample Size, Computation
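The correction discussed above has a standard closed form, shown here as a short Python function; it assumes the reliability of each variable is known or estimated separately:

```python
def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction for attenuation: the observed correlation divided
    by the square root of the product of the two variables' reliabilities."""
    return r_xy / (rel_x * rel_y) ** 0.5
```

For example, disattenuate(0.42, 0.80, 0.70) returns roughly 0.56.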
Peer reviewed
DeMars, Christine – Applied Measurement in Education, 2015
In generalizability theory studies in large-scale testing contexts, sometimes a facet is very sparsely crossed with the object of measurement. For example, when assessments are scored by human raters, it may not be practical to have every rater score all students. Sometimes the scoring is systematically designed such that the raters are…
Descriptors: Educational Assessment, Measurement, Data, Generalizability Theory
Peer reviewed
Culpepper, Steven Andrew – Applied Psychological Measurement, 2013
A classic topic in the fields of psychometrics and measurement has been the impact of the number of scale categories on test score reliability. This study builds on previous research by further articulating the relationship between item response theory (IRT) and classical test theory (CTT). Equations are presented for comparing the reliability and…
Descriptors: Item Response Theory, Reliability, Scores, Error of Measurement
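A small simulation in the spirit of the question above: coarsen continuous item responses into k ordered categories and watch coefficient alpha change with k. The data-generating model, the number of items, and the equal-probability cut points are illustrative assumptions, not the article's equations:

```python
import numpy as np

def alpha(scores):
    """Cronbach's alpha for an (n_persons x n_items) score matrix."""
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    k = scores.shape[1]
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
true = rng.normal(size=(1000, 1))                     # one common trait
items = true + rng.normal(scale=1.0, size=(1000, 8))  # 8 continuous item responses

for k in (2, 3, 5, 7):
    cuts = np.quantile(items, np.linspace(0, 1, k + 1)[1:-1])
    categorized = np.digitize(items, cuts)             # collapse into k categories
    print(k, round(alpha(categorized), 3))
```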
Grochowalski, Joseph H. – ProQuest LLC, 2015
Component Universe Score Profile analysis (CUSP) is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP is a combination of generalizability…
Descriptors: Computation, Psychometrics, Profiles, Scores
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Structural Equation Modeling: A Multidisciplinary Journal, 2012
A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…
Descriptors: Predictive Validity, Reliability, Structural Equation Models, Measures (Individuals)
Peer reviewed
Moses, Tim – Journal of Educational Measurement, 2012
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
Descriptors: Error of Measurement, Prediction, Regression (Statistics), True Scores
Peer reviewed
Fife, Dustin A.; Mendoza, Jorge L.; Terry, Robert – Educational and Psychological Measurement, 2012
Though much research and attention has been directed at assessing the correlation coefficient under range restriction, the assessment of reliability under range restriction has been largely ignored. This article uses item response theory to simulate dichotomous item-level data to assess the robustness of KR-20 (α), ω, and test-retest…
Descriptors: Reliability, Computation, Comparative Analysis, Item Response Theory
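For reference, the KR-20 coefficient examined above has a simple closed form for complete dichotomous data; a short Python version, with no handling of missing responses or of the range restriction that is the article's focus:

```python
import numpy as np

def kr20(responses):
    """KR-20 reliability for an (n_persons x n_items) matrix of 0/1 item scores."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                     # proportion correct per item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)
```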
Peer reviewed
Beauducel, Andre – Applied Psychological Measurement, 2013
The problem of factor score indeterminacy implies that the factor and the error scores cannot be completely disentangled in the factor model. It is therefore proposed to compute Harman's factor score predictor that contains an additive combination of factor and error variance. This additive combination is discussed in the framework of classical…
Descriptors: Factor Analysis, Predictor Variables, Reliability, Error of Measurement
Peer reviewed
Wei, Xin; Haertel, Edward – Educational Measurement: Issues and Practice, 2011
Contemporary educational accountability systems, including state-level systems prescribed under No Child Left Behind as well as those envisioned under the "Race to the Top" comprehensive assessment competition, rely on school-level summaries of student test scores. The precision of these score summaries is almost always evaluated using models that…
Descriptors: Scores, Reliability, Computation, Generalizability Theory
Peer reviewed
Jackman, M. Grace-Anne; Leite, Walter L.; Cochrane, David J. – Structural Equation Modeling: A Multidisciplinary Journal, 2011
This Monte Carlo simulation study investigated methods of forming product indicators for the unconstrained approach for latent variable interaction estimation when the exogenous factors are measured by large and unequal numbers of indicators. Product indicators were created based on multiplying parcels of the larger scale by indicators of the…
Descriptors: Computation, Statistical Data, Structural Equation Models, Statistical Analysis
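A minimal sketch of what multiplying parcels of one scale by indicators of the other can look like in code, assuming mean-centered indicators and a simple round-robin parceling rule; the matching strategies actually compared in the study are not reproduced here:

```python
import numpy as np

def product_indicators(x, z, n_parcels):
    """Form product indicators for a latent interaction term.

    x : (n, p) indicators of the larger factor; z : (n, q) indicators of the
    smaller factor. Columns of x are averaged into `n_parcels` round-robin
    parcels, and each mean-centered parcel is multiplied by a mean-centered
    z indicator (cycled if there are fewer z indicators than parcels).
    """
    x = x - x.mean(axis=0)
    z = z - z.mean(axis=0)
    parcels = [x[:, i::n_parcels].mean(axis=1) for i in range(n_parcels)]
    return np.column_stack([p * z[:, j % z.shape[1]]
                            for j, p in enumerate(parcels)])
```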
Peer reviewed
Padilla, Miguel A.; Veprinsky, Anna – Educational and Psychological Measurement, 2012
Issues with correlation attenuation due to measurement error are well documented. More than a century ago, Spearman proposed a correction for attenuation. However, this correction has seen very little use since it can potentially inflate the true correlation beyond one. In addition, very little confidence interval (CI) research has been done for…
Descriptors: Correlation, Error of Measurement, Sampling, Statistical Inference
Peer reviewed
Pan, Tianshu; Yin, Yue – Psychological Methods, 2012
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)^2 and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
Descriptors: Error of Measurement, Geometric Concepts, Tests, Structural Equation Models
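One way to see where the 2(SEM)^2 benchmark quoted above comes from, under the usual classical test theory assumptions (zero-mean errors, uncorrelated with true scores and with each other, common error variance SEM^2):

\[
X_1 = T_1 + E_1,\quad X_2 = T_2 + E_2
\;\Longrightarrow\;
\mathrm{MSD} = E\!\left[(X_1 - X_2)^2\right]
= E\!\left[(T_1 - T_2)^2\right] + 2\,\mathrm{SEM}^2 \;\ge\; 2\,\mathrm{SEM}^2 ,
\]

with equality only when the two true scores coincide.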