Showing 1 to 15 of 47 results
Peer reviewed
Direct link
Rebekka Kupffer; Susanne Frick; Eunike Wetzel – Educational and Psychological Measurement, 2024
The multidimensional forced-choice (MFC) format is an alternative to rating scales in which participants rank items according to how well the items describe them. Currently, little is known about how to detect careless responding in MFC data. The aim of this study was to adapt a number of indices used for rating scales to the MFC format and…
Descriptors: Measurement Techniques, Alternative Assessment, Rating Scales, Questionnaires
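A hedged illustration of the kind of adaptation this abstract describes: a longstring-style flag, common for rating scales, recast for ranked blocks by counting how often the exact same rank pattern repeats across consecutive blocks. The index, function name, and simulated data are illustrative assumptions, not necessarily among the indices the article evaluates.

```python
# Hypothetical "longstring-style" index for ranked (forced-choice) blocks:
# longest run of consecutive blocks with the exact same rank pattern.
# An adaptation in the spirit of rating-scale indices, not the article's own.
import numpy as np

def max_repeated_pattern(rank_blocks):
    """rank_blocks: array of shape (n_blocks, block_size) holding within-block ranks."""
    longest = current = 1
    for prev, cur in zip(rank_blocks[:-1], rank_blocks[1:]):
        current = current + 1 if np.array_equal(prev, cur) else 1
        longest = max(longest, current)
    return longest

rng = np.random.default_rng(0)
careless = np.tile([1, 2, 3], (10, 1))                                # same ranking in all 10 triplets
attentive = np.array([rng.permutation([1, 2, 3]) for _ in range(10)])
print(max_repeated_pattern(careless), max_repeated_pattern(attentive))
```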
Peer reviewed
Direct link
Manuel T. Rein; Jeroen K. Vermunt; Kim De Roover; Leonie V. D. E. Vogelsmeier – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Researchers often study dynamic processes of latent variables in everyday life, such as the interplay of positive and negative affect over time. An intuitive approach is to first estimate the measurement model of the latent variables, then compute factor scores, and finally use these factor scores as observed scores in vector autoregressive…
Descriptors: Measurement Techniques, Factor Analysis, Scores, Validity
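A minimal sketch of the two-step workflow this abstract describes, on simulated data: estimate a measurement model, compute factor scores, then fit a vector autoregressive (VAR) model to those scores as if they were observed. The library choices (scikit-learn's FactorAnalysis, statsmodels' VAR) and dimensions are illustrative assumptions.

```python
# Two-step workflow on simulated data: (1) fit a measurement model,
# (2) compute factor scores, (3) fit a VAR treating the scores as observed.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
items = rng.normal(size=(200, 6))                 # 200 time points x 6 items (placeholder data)

fa = FactorAnalysis(n_components=2, random_state=0)
factor_scores = fa.fit_transform(items)           # steps 1-2: measurement model + factor scores

var_results = VAR(factor_scores).fit(maxlags=1)   # step 3: lag-1 dynamics between the factors
print(var_results.params)
```

Treating the factor scores as error-free observed variables in the final step is exactly the simplification whose consequences this line of work scrutinizes.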
Peer reviewed
Direct link
Bogaert, Jasper; Loh, Wen Wei; Rosseel, Yves – Educational and Psychological Measurement, 2023
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error…
Descriptors: Factor Analysis, Regression (Statistics), Structural Equation Models, Error of Measurement
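A hedged sketch of why naive factor score regression needs correction, using simulated data: replacing the latent predictor with an error-prone factor score attenuates the structural slope, and a simple reliability-based disattenuation recovers it. This illustrates the general problem only; the article's own correction procedure is not reproduced here.

```python
# Simulated illustration: an error-prone factor score attenuates the structural
# slope (true value 0.5); a reliability-based disattenuation recovers it.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
eta = rng.normal(size=n)                                    # latent predictor
y = 0.5 * eta + rng.normal(scale=0.8, size=n)               # structural relation, slope 0.5

fs = eta + rng.normal(scale=np.sqrt(1 / 0.7 - 1), size=n)   # factor score, reliability ~ 0.7
reliability = np.var(eta, ddof=1) / np.var(fs, ddof=1)

naive_slope = np.cov(fs, y)[0, 1] / np.var(fs, ddof=1)      # biased toward zero
corrected_slope = naive_slope / reliability                 # simple disattenuation
print(round(naive_slope, 2), round(corrected_slope, 2))
```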
Peer reviewed
PDF on ERIC
Haberman, Shelby J. – ETS Research Report Series, 2020
Best linear prediction (BLP) and penalized best linear prediction (PBLP) are techniques for combining sources of information to produce task scores, section scores, and composite test scores. The report examines issues to consider in operational implementation of BLP and PBLP in testing programs administered by ETS [Educational Testing Service].
Descriptors: Prediction, Scores, Tests, Testing Programs
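A small numpy sketch of the best linear prediction idea under simulated inputs: the weights that best predict a target score from several section scores solve Cov(X) w = Cov(X, T), and a penalized variant shrinks those weights. The ridge-style penalty shown is only an illustration, not the report's PBLP specification.

```python
# Simulated illustration of BLP weights and a ridge-style penalized variant.
import numpy as np

rng = np.random.default_rng(3)
true_score = rng.normal(size=2000)
sections = np.column_stack([true_score + rng.normal(scale=s, size=2000)
                            for s in (0.5, 0.8, 1.2)])        # three noisy section scores

Sxx = np.cov(sections, rowvar=False)                          # Cov(X)
sxt = np.array([np.cov(sections[:, j], true_score)[0, 1] for j in range(3)])  # Cov(X, T)

w_blp = np.linalg.solve(Sxx, sxt)                             # BLP weights
w_pblp = np.linalg.solve(Sxx + 0.5 * np.eye(3), sxt)          # penalized weights (illustrative only)
print(w_blp, w_pblp)
```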
Peer reviewed
PDF on ERIC
Forrow, Lauren; Starling, Jennifer; Gill, Brian – Regional Educational Laboratory Mid-Atlantic, 2023
The Every Student Succeeds Act requires states to identify schools with low-performing student subgroups for Targeted Support and Improvement or Additional Targeted Support and Improvement. Random differences between students' true abilities and their test scores, also called measurement error, reduce the statistical reliability of the performance…
Descriptors: At Risk Students, Low Achievement, Error of Measurement, Measurement Techniques
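A minimal sketch of the shrinkage logic behind this kind of stabilization, with invented numbers: a subgroup's observed proficiency rate is pulled toward a prior (here, an assumed statewide rate) in proportion to its reliability, so small, noisy subgroups are stabilized most. The prior, variance components, and weighting are illustrative assumptions, not PDE's actual model.

```python
# Shrinkage sketch with invented numbers: observed subgroup rates are pulled
# toward an assumed statewide rate in proportion to their reliability.
statewide_rate = 0.55          # assumed prior mean
between_school_var = 0.010     # assumed variance of true subgroup rates

def stabilize(observed_rate, n_students):
    sampling_var = observed_rate * (1 - observed_rate) / n_students
    reliability = between_school_var / (between_school_var + sampling_var)
    return reliability * observed_rate + (1 - reliability) * statewide_rate

print(stabilize(0.20, 12))     # small subgroup: pulled well toward 0.55
print(stabilize(0.20, 400))    # large subgroup: stays close to 0.20
```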
Peer reviewed
PDF on ERIC
Regional Educational Laboratory Mid-Atlantic, 2023
This Snapshot highlights key findings from a study that used Bayesian stabilization to improve the reliability (long-term stability) of subgroup proficiency measures that the Pennsylvania Department of Education (PDE) uses to identify schools for Targeted Support and Improvement (TSI) or Additional Targeted Support and Improvement (ATSI). The…
Descriptors: At Risk Students, Low Achievement, Error of Measurement, Measurement Techniques
Peer reviewed
PDF on ERIC
Regional Educational Laboratory Mid-Atlantic, 2023
The "Stabilizing Subgroup Proficiency Results to Improve the Identification of Low-Performing Schools" study used Bayesian stabilization to improve the reliability (long-term stability) of subgroup proficiency measures that the Pennsylvania Department of Education (PDE) uses to identify schools for Targeted Support and Improvement (TSI)…
Descriptors: At Risk Students, Low Achievement, Error of Measurement, Measurement Techniques
Peer reviewed
PDF on ERIC
Sekercioglu, Güçlü – International Online Journal of Education and Teaching, 2018
Empirical evidence of measurement invariance across independent samples of a population implies that the factor structure of a measurement tool is equal across these samples; in other words, it measures the intended psychological trait within the same structure. In this case, the evidence of construct validity would be strengthened within the…
Descriptors: Factor Analysis, Error of Measurement, Factor Structure, Construct Validity
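A brief sketch of the model-comparison step typically used in invariance testing: a model with equality constraints across samples (for example, equal loadings) is compared with a less constrained model via a chi-square difference test. The fit statistics below are placeholders, not results from the article.

```python
# Chi-square difference test between nested multigroup models; the fit
# statistics are placeholders, not values from the article.
from scipy.stats import chi2

chisq_free, df_free = 112.4, 48        # loadings free across samples (placeholder)
chisq_equal, df_equal = 121.9, 54      # loadings constrained equal (placeholder)

delta_chisq = chisq_equal - chisq_free
delta_df = df_equal - df_free
p_value = chi2.sf(delta_chisq, delta_df)
print(p_value)   # non-significant here, so the equality constraints would be retained
```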
Peer reviewed
Direct link
Cho, Sun-Joo; Preacher, Kristopher J. – Educational and Psychological Measurement, 2016
Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…
Descriptors: Error of Measurement, Error Correction, Multivariate Analysis, Hierarchical Linear Modeling
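A simplified, single-level illustration (not the article's multilevel model) of the covariate measurement-error problem raised here: when the pretest is measured with error, adjustment under-controls for baseline differences and a spurious group effect can appear even when none exists. All values are simulated.

```python
# Single-level simulation (simplified, not the article's multilevel model):
# with an error-prone pretest, covariate adjustment under-controls for the
# baseline difference and a spurious group effect appears.
import numpy as np

rng = np.random.default_rng(4)
n = 4000
group = np.repeat([0, 1], n // 2)
pre_true = rng.normal(size=n) + 0.5 * group          # groups differ at pretest
post = pre_true + rng.normal(scale=0.5, size=n)      # no true group effect on posttest
pre_obs = pre_true + rng.normal(scale=0.7, size=n)   # pretest measured with error

X = np.column_stack([np.ones(n), pre_obs, group])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)
print(coef[2])   # noticeably nonzero, driven entirely by covariate measurement error
```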
Peer reviewed
Direct link
Methe, Scott A.; Briesch, Amy M.; Hulac, David – Assessment for Effective Intervention, 2015
At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…
Descriptors: Measurement Techniques, Error of Measurement, Mathematics Curriculum, Curriculum Based Assessment
Peer reviewed
Direct link
Chen, Chia-ling; Shen, I-hsuan; Chen, Chung-yao; Wu, Ching-yi; Liu, Wen-Yu; Chung, Chia-ying – Research in Developmental Disabilities: A Multidisciplinary Journal, 2013
This study examined criterion-related validity and clinimetric properties of the pediatric balance scale ("PBS") in children with cerebral palsy (CP). Forty-five children with CP (age range: 19-77 months) and their parents participated in this study. At baseline and at follow-up, Pearson correlation coefficients were used to determine…
Descriptors: Measurement, Measures (Individuals), Correlation, Cerebral Palsy
Peer reviewed
Direct link
Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas – Psychometrika, 2013
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…
Descriptors: Item Response Theory, Statistical Inference, Probability, Psychometrics
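A small sketch of the manifest monotonicity check implied here, on simulated item responses: for each item, the proportion of positive responses should be nondecreasing across levels of the rest score (the total score excluding that item). The data-generating model below is an illustrative assumption.

```python
# Manifest monotonicity check on simulated dichotomous items: for a given item,
# the proportion of positive responses should be (roughly) nondecreasing across
# levels of the rest score (total score excluding that item).
import numpy as np

rng = np.random.default_rng(5)
theta = rng.normal(size=1000)                              # latent variable
difficulty = np.linspace(-1, 1, 5)
probs = 1 / (1 + np.exp(-(theta[:, None] - difficulty)))   # Rasch-like response probabilities
items = (rng.random((1000, 5)) < probs).astype(int)

item = 0
rest_score = items.sum(axis=1) - items[:, item]
props = [items[rest_score == r, item].mean() for r in np.unique(rest_score)]
print(props)   # small dips may be sampling noise; systematic decreases would be violations
```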
Peer reviewed
Direct link
Reardon, Sean F.; Ho, Andrew D. – Journal of Educational and Behavioral Statistics, 2015
In an earlier paper, we presented methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. We demonstrated that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
Reardon, Sean F.; Ho, Andrew D. – Grantee Submission, 2015
Ho and Reardon (2012) present methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. They demonstrate that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
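A hedged sketch of one probability-based gap estimate used with coarsened, ordered-category scores: estimate the probability that a randomly drawn student from one group outscores a student from the other (splitting ties), then map it to an effect-size-like metric via V = sqrt(2)·Φ⁻¹(P). The category counts are invented, and this is a simplified version of the approach rather than the authors' full estimator.

```python
# Invented category counts for two groups, in 4 ordered proficiency levels.
import numpy as np
from scipy.stats import norm

counts_a = np.array([10, 30, 40, 20])
counts_b = np.array([25, 35, 30, 10])
p_a = counts_a / counts_a.sum()
p_b = counts_b / counts_b.sum()

# P(randomly drawn A-student is in a higher category than a B-student), ties split
p_above = sum(p_a[i] * p_b[j] for i in range(4) for j in range(4) if i > j)
p_tie = float(np.sum(p_a * p_b))
p_superior = p_above + 0.5 * p_tie

gap_v = np.sqrt(2) * norm.ppf(p_superior)   # effect-size-like gap metric
print(round(p_superior, 3), round(gap_v, 3))
```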
Peer reviewed
Direct link
Burt, Keith B.; Obradovic, Jelena – Developmental Review, 2013
The purpose of this paper is to review major statistical and psychometric issues impacting the study of psychophysiological reactivity and discuss their implications for applied developmental researchers. We first cover traditional approaches such as the observed difference score (DS) and the observed residual score (RS), including a review of…
Descriptors: Measurement Techniques, Psychometrics, Data Analysis, Researchers
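A short sketch contrasting the two observed-score approaches named in this abstract, on simulated reactivity data: the difference score (DS) is the task-period score minus baseline, while the residual score (RS) is the part of the task-period score not linearly predicted by baseline; only the latter is uncorrelated with baseline by construction. The physiological framing and values are illustrative.

```python
# Simulated reactivity data: difference score (DS) vs. residual score (RS).
import numpy as np

rng = np.random.default_rng(6)
baseline = rng.normal(70, 10, size=300)                    # e.g., baseline heart rate
task = 30 + 0.7 * baseline + rng.normal(0, 5, size=300)    # task-period heart rate

ds = task - baseline                                       # observed difference score

slope, intercept = np.polyfit(baseline, task, 1)
rs = task - (intercept + slope * baseline)                 # observed residual score

print(np.corrcoef(ds, baseline)[0, 1])   # DS typically correlates with baseline
print(np.corrcoef(rs, baseline)[0, 1])   # RS is uncorrelated with baseline by construction
```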