Rebekka Kupffer; Susanne Frick; Eunike Wetzel – Educational and Psychological Measurement, 2024
The multidimensional forced-choice (MFC) format is an alternative to rating scales in which participants rank items according to how well the items describe them. Currently, little is known about how to detect careless responding in MFC data. The aim of this study was to adapt a number of indices used for rating scales to the MFC format and…
Descriptors: Measurement Techniques, Alternative Assessment, Rating Scales, Questionnaires
Manuel T. Rein; Jeroen K. Vermunt; Kim De Roover; Leonie V. D. E. Vogelsmeier – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Researchers often study dynamic processes of latent variables in everyday life, such as the interplay of positive and negative affect over time. An intuitive approach is to first estimate the measurement model of the latent variables, then compute factor scores, and finally use these factor scores as observed scores in vector autoregressive…
Descriptors: Measurement Techniques, Factor Analysis, Scores, Validity
Bogaert, Jasper; Loh, Wen Wei; Rosseel, Yves – Educational and Psychological Measurement, 2023
Factor score regression (FSR) is widely used as a convenient alternative to traditional structural equation modeling (SEM) for assessing structural relations between latent variables. But when latent variables are simply replaced by factor scores, biases in the structural parameter estimates often have to be corrected, due to the measurement error…
Descriptors: Factor Analysis, Regression (Statistics), Structural Equation Models, Error of Measurement
Haberman, Shelby J. – ETS Research Report Series, 2020
Best linear prediction (BLP) and penalized best linear prediction (PBLP) are techniques for combining sources of information to produce task scores, section scores, and composite test scores. The report examines issues to consider in operational implementation of BLP and PBLP in testing programs administered by ETS [Educational Testing Service].
Descriptors: Prediction, Scores, Tests, Testing Programs
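Best linear prediction, as studied in this report, combines correlated component scores into a composite using weights that minimize squared prediction error. As a rough illustration of the general technique (the simulation setup is an assumption for demonstration, not ETS's operational implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three section scores that all measure a common "true" score,
# with different amounts of measurement error per section.
n = 5000
true_score = rng.normal(size=n)
sections = np.column_stack([
    true_score + rng.normal(scale=s, size=n)
    for s in (0.5, 0.8, 1.2)
])

# BLP weights solve Cov(X, X) w = Cov(X, y): more reliable sections
# (smaller error) receive larger weights.
cov_xx = np.cov(sections, rowvar=False)
cov_xy = np.array([np.cov(sections[:, j], true_score)[0, 1]
                   for j in range(sections.shape[1])])
weights = np.linalg.solve(cov_xx, cov_xy)

composite = sections @ weights
print(weights)
print(np.corrcoef(composite, true_score)[0, 1])
```

By construction the composite correlates with the true score at least as strongly as any single section does, which is the practical appeal of BLP for composite test scores.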
Forrow, Lauren; Starling, Jennifer; Gill, Brian – Regional Educational Laboratory Mid-Atlantic, 2023
The Every Student Succeeds Act requires states to identify schools with low-performing student subgroups for Targeted Support and Improvement or Additional Targeted Support and Improvement. Random differences between students' true abilities and their test scores, also called measurement error, reduce the statistical reliability of the performance…
Descriptors: At Risk Students, Low Achievement, Error of Measurement, Measurement Techniques
Regional Educational Laboratory Mid-Atlantic, 2023
This Snapshot highlights key findings from a study that used Bayesian stabilization to improve the reliability (long-term stability) of subgroup proficiency measures that the Pennsylvania Department of Education (PDE) uses to identify schools for Targeted Support and Improvement (TSI) or Additional Targeted Support and Improvement (ATSI). The…
Descriptors: At Risk Students, Low Achievement, Error of Measurement, Measurement Techniques
Regional Educational Laboratory Mid-Atlantic, 2023
The "Stabilizing Subgroup Proficiency Results to Improve the Identification of Low-Performing Schools" study used Bayesian stabilization to improve the reliability (long-term stability) of subgroup proficiency measures that the Pennsylvania Department of Education (PDE) uses to identify schools for Targeted Support and Improvement (TSI)…
Descriptors: At Risk Students, Low Achievement, Error of Measurement, Measurement Techniques
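The stabilization used in these reports is, in general terms, Bayesian shrinkage: a noisy subgroup proficiency rate is pulled toward a prior mean in proportion to its sampling error, so small subgroups are stabilized most. A minimal empirical-Bayes sketch of that idea (the function, parameters, and numbers are illustrative assumptions, not PDE's actual model):

```python
import numpy as np

def stabilize(p_obs, n, prior_mean, prior_var):
    """Shrink an observed subgroup proficiency rate toward a prior mean.

    Small subgroups (large sampling variance) are pulled hard toward the
    prior; large subgroups keep most of their observed rate.
    """
    sampling_var = p_obs * (1 - p_obs) / n           # binomial noise
    weight = prior_var / (prior_var + sampling_var)  # reliability of p_obs
    return weight * p_obs + (1 - weight) * prior_mean

# A 10-student subgroup is shrunk far more than a 200-student one.
print(stabilize(0.20, 10, prior_mean=0.50, prior_var=0.01))
print(stabilize(0.20, 200, prior_mean=0.50, prior_var=0.01))
```

The shrinkage weight is the classic reliability ratio (signal variance over total variance), which is why stabilized subgroup measures are more stable from year to year.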
Sekercioglu, Güçlü – International Online Journal of Education and Teaching, 2018
Empirical evidence of measurement invariance across independent samples from a population implies that the factor structure of a measurement tool is equal across these samples; in other words, it measures the intended psychological trait within the same structure. In this case, the evidence of construct validity would be strengthened within the…
Descriptors: Factor Analysis, Error of Measurement, Factor Structure, Construct Validity
Cho, Sun-Joo; Preacher, Kristopher J. – Educational and Psychological Measurement, 2016
Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…
Descriptors: Error of Measurement, Error Correction, Multivariate Analysis, Hierarchical Linear Modeling
Methe, Scott A.; Briesch, Amy M.; Hulac, David – Assessment for Effective Intervention, 2015
At present, it is unclear whether math curriculum-based measurement (M-CBM) procedures provide a dependable measure of student progress in math computation because support for its technical properties is based largely upon a body of correlational research. Recent investigations into the dependability of M-CBM scores have found that evaluating…
Descriptors: Measurement Techniques, Error of Measurement, Mathematics Curriculum, Curriculum Based Assessment
Chen, Chia-ling; Shen, I-hsuan; Chen, Chung-yao; Wu, Ching-yi; Liu, Wen-Yu; Chung, Chia-ying – Research in Developmental Disabilities: A Multidisciplinary Journal, 2013
This study examined criterion-related validity and clinimetric properties of the pediatric balance scale ("PBS") in children with cerebral palsy (CP). Forty-five children with CP (age range: 19-77 months) and their parents participated in this study. At baseline and at follow-up, Pearson correlation coefficients were used to determine…
Descriptors: Measurement, Measures (Individuals), Correlation, Cerebral Palsy
Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas – Psychometrika, 2013
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores,…
Descriptors: Item Response Theory, Statistical Inference, Probability, Psychometrics
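The manifest monotonicity this abstract refers to can be checked directly in data: for each item, the proportion of positive responses should be nondecreasing in the rest score (the total on the remaining items). A minimal sketch of such a check on simulated Rasch-type data (the function name and thresholds are illustrative assumptions, not the authors' procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate dichotomous responses from a Rasch-type model (monotone by design).
n_persons, n_items = 2000, 10
theta = rng.normal(size=n_persons)              # latent trait
b = np.linspace(-1.5, 1.5, n_items)             # item difficulties
prob = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((n_persons, n_items)) < prob).astype(int)

def manifest_monotone(responses, item, min_group=50):
    """Check that P(item positive | rest score) is nondecreasing."""
    rest = responses.sum(axis=1) - responses[:, item]
    props = [responses[rest == r, item].mean()
             for r in np.unique(rest)
             if (rest == r).sum() >= min_group]  # skip sparse groups
    return all(p1 <= p2 for p1, p2 in zip(props, props[1:]))

print(manifest_monotone(responses, item=0))
```

In practice, sampling noise means such raw checks are combined with formal tests or smoothing, but the sketch shows the observable consequence of latent monotonicity that the paper exploits.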
Reardon, Sean F.; Ho, Andrew D. – Journal of Educational and Behavioral Statistics, 2015
In an earlier paper, we presented methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. We demonstrated that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
Reardon, Sean F.; Ho, Andrew D. – Grantee Submission, 2015
Ho and Reardon (2012) present methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. They demonstrate that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data…
Descriptors: Achievement Gap, Performance Factors, Educational Practices, Scores
Burt, Keith B.; Obradovic, Jelena – Developmental Review, 2013
The purpose of this paper is to review major statistical and psychometric issues impacting the study of psychophysiological reactivity and discuss their implications for applied developmental researchers. We first cover traditional approaches such as the observed difference score (DS) and the observed residual score (RS), including a review of…
Descriptors: Measurement Techniques, Psychometrics, Data Analysis, Researchers