Publication Date
In 2025: 2
Since 2024: 6
Since 2021 (last 5 years): 25
Since 2016 (last 10 years): 71
Since 2006 (last 20 years): 218
Descriptor
Error of Measurement: 342
Computation: 58
Statistical Analysis: 52
Evaluation Methods: 51
Item Response Theory: 51
Measurement Techniques: 46
Reliability: 46
Scores: 37
Research Methodology: 34
Test Construction: 34
Correlation: 33
Location
New York: 7
Australia: 2
Germany: 2
New Mexico: 2
North America: 2
Pennsylvania: 2
Tennessee: 2
Texas: 2
United States: 2
Africa: 1
Asia: 1
Laws, Policies, & Programs
No Child Left Behind Act 2001: 2
Guaranteed Student Loan…: 1
Race to the Top: 1
Susan K. Johnsen – Gifted Child Today, 2025
The author provides information about reliability: the areas educators should examine in determining whether an assessment is consistent and trustworthy for use, and how reliability evidence should be interpreted when making decisions about students. Reliability areas discussed in the column include internal consistency, test-retest or stability, inter-scorer…
Descriptors: Test Reliability, Academically Gifted, Student Evaluation, Error of Measurement
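Two of the reliability areas named in the column lend themselves to a quick numerical illustration. The sketch below uses invented scores and plain Pearson correlations as rough stand-ins for formal test-retest and inter-scorer coefficients; nothing here is drawn from the article itself.

```python
# Minimal sketch: test-retest (stability) and inter-scorer agreement,
# each approximated with a Pearson correlation over invented scores.
import numpy as np

# Same ten students scored on two occasions (test-retest / stability).
time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18, 10, 16])
time2 = np.array([13, 14, 10, 19, 18, 12, 13, 17, 11, 15])
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Same ten responses rated independently by two scorers (inter-scorer).
scorer_a = np.array([3, 4, 2, 5, 4, 3, 4, 5, 2, 3])
scorer_b = np.array([3, 4, 3, 5, 4, 3, 3, 5, 2, 4])
inter_scorer_r = np.corrcoef(scorer_a, scorer_b)[0, 1]

print(f"test-retest r  = {test_retest_r:.2f}")
print(f"inter-scorer r = {inter_scorer_r:.2f}")
```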
Carlos Cinelli; Andrew Forney; Judea Pearl – Sociological Methods & Research, 2024
Many students of statistics and econometrics express frustration with the way a problem known as "bad control" is treated in the traditional literature. The issue arises when the addition of a variable to a regression equation produces an unintended discrepancy between the regression coefficient and the effect that the coefficient is…
Descriptors: Regression (Statistics), Robustness (Statistics), Error of Measurement, Testing Problems
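One textbook instance of "bad control" is adjusting for a variable that is itself affected by both the treatment and the outcome. The simulation below is a generic illustration of that pattern under assumed data-generating values; it is not the authors' example.

```python
# Adding the collider z (caused by both x and y) to the regression pulls the
# coefficient on x away from the true causal effect of 2.0.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)      # true effect of x on y is 2.0
z = x + y + rng.normal(size=n)        # "bad control": caused by both x and y

def ols(design, response):
    """Ordinary least squares via numpy's least-squares solver."""
    coef, *_ = np.linalg.lstsq(design, response, rcond=None)
    return coef

X_good = np.column_stack([np.ones(n), x])      # y ~ x
X_bad = np.column_stack([np.ones(n), x, z])    # y ~ x + z

print("y ~ x     coefficient on x:", round(ols(X_good, y)[1], 3))  # close to 2.0
print("y ~ x + z coefficient on x:", round(ols(X_bad, y)[1], 3))   # biased
```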
Johan Lyrvall; Zsuzsa Bakk; Jennifer Oser; Roberto Di Mari – Structural Equation Modeling: A Multidisciplinary Journal, 2024
We present a bias-adjusted three-step estimation approach for multilevel latent class (LC) models with covariates. The proposed approach involves (1) fitting a single-level measurement model while ignoring the multilevel structure, (2) assigning units to latent classes, and (3) fitting the multilevel model with the covariates while controlling for…
Descriptors: Hierarchical Linear Modeling, Statistical Bias, Error of Measurement, Simulation
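As a rough illustration of the three enumerated steps, the sketch below runs the plain, uncorrected three-step workflow on simulated data, with a Gaussian mixture standing in for the latent class measurement model. It deliberately omits both the multilevel structure and the bias adjustment for classification error that the proposed approach adds, so it shows only the skeleton of the procedure.

```python
# Uncorrected three-step sketch: (1) fit a measurement model, (2) assign
# units to classes, (3) relate assignments to a covariate. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
n = 2_000
covariate = rng.normal(size=n)
true_class = (rng.random(n) < 1 / (1 + np.exp(-covariate))).astype(int)
indicators = rng.normal(loc=true_class[:, None] * 1.5, size=(n, 4))

# Step 1: fit a single-level measurement model to the indicators.
mixture = GaussianMixture(n_components=2, random_state=0).fit(indicators)

# Step 2: assign each unit to its modal latent class.
assigned = mixture.predict(indicators)

# Step 3: relate assigned membership to the covariate (the article's approach
# would fit a multilevel model here and correct for classification error).
step3 = LogisticRegression().fit(covariate.reshape(-1, 1), assigned)
print("covariate effect on assigned class:", step3.coef_.ravel())
```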
So, Julia Wai-Yin – Assessment Update, 2023
In this article, Julia So discusses the purpose of program assessment, four common missteps of program assessment and reporting, and how to prevent them. The four common missteps of program assessment and reporting she has observed are: (1) unclear or ambiguous program goals; (2) measurement error of program goals and outcomes; (3) incorrect unit…
Descriptors: Program Evaluation, Community Colleges, Evaluation Methods, Objectives
Davidson, Allison; Gundlach, Ellen – International Journal of Mathematical Education in Science and Technology, 2023
A disadvantage to online clothes shopping is the inability to try on clothing to test the fit. A class project is discussed where students consult with the CEO of an online menswear clothing company to explore ways in which an online clothing customer can be assured of a superior fit by developing statistical models based on a shopper's height and…
Descriptors: Internet, Retailing, Prediction, Clothing
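The kind of model students are asked to build can be sketched with a short least-squares fit. The abstract's second predictor is truncated, so the use of weight below is an assumption, and all measurements are invented.

```python
# Predict a garment measurement (chest) from height and an assumed second
# predictor (weight) using ordinary least squares on invented data.
import numpy as np

rng = np.random.default_rng(2)
n = 200
height_cm = rng.normal(178, 7, size=n)
weight_kg = rng.normal(82, 10, size=n)     # assumed predictor, not in abstract
chest_cm = 0.35 * height_cm + 0.45 * weight_kg + rng.normal(0, 2, size=n)

X = np.column_stack([np.ones(n), height_cm, weight_kg])
coef, *_ = np.linalg.lstsq(X, chest_cm, rcond=None)

new_customer = np.array([1.0, 183.0, 88.0])  # intercept term, height, weight
print("predicted chest (cm):", round(float(new_customer @ coef), 1))
```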
Huang, Francis L. – Journal of Educational and Behavioral Statistics, 2022
The presence of clustered data is common in the sociobehavioral sciences. One approach that specifically deals with clustered data but has seen little use in education is the generalized estimating equations (GEEs) approach. We provide a background on GEEs, discuss why it is appropriate for the analysis of clustered data, and provide worked…
Descriptors: Multivariate Analysis, Computation, Correlation, Error of Measurement
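A minimal GEE fit for clustered data might look like the sketch below, which assumes the statsmodels formula interface and invented column names (score, treatment, school); it is not the worked example from the article.

```python
# GEE with an exchangeable working correlation: students nested in schools.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_schools, per_school = 40, 25
school = np.repeat(np.arange(n_schools), per_school)
treatment = rng.integers(0, 2, size=school.size)
school_effect = rng.normal(0, 2, size=n_schools)[school]  # induces clustering
score = 50 + 3 * treatment + school_effect + rng.normal(0, 5, size=school.size)
df = pd.DataFrame({"score": score, "treatment": treatment, "school": school})

model = smf.gee("score ~ treatment", groups="school", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())
```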
Philipp Sterner; Kim De Roover; David Goretzko – Structural Equation Modeling: A Multidisciplinary Journal, 2025
When comparing relations and means of latent variables, it is important to establish measurement invariance (MI). Most methods to assess MI are based on confirmatory factor analysis (CFA). Recently, new methods have been developed based on exploratory factor analysis (EFA); most notably, as extensions of multi-group EFA, researchers introduced…
Descriptors: Error of Measurement, Measurement Techniques, Factor Analysis, Structural Equation Models
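For readers less familiar with measurement invariance, the nested constraints usually tested can be written compactly as below; this is conventional multi-group factor-model notation, not notation taken from the article.

```latex
% x_{ig}: observed scores for unit i in group g; \Lambda_g: loadings;
% \tau_g: intercepts; \xi_{ig}: latent factors; \varepsilon_{ig}: errors.
\begin{align*}
  x_{ig} &= \tau_g + \Lambda_g\,\xi_{ig} + \varepsilon_{ig}, \qquad g = 1,\dots,G\\
  \text{configural: } & \text{same zero/non-zero loading pattern in every group}\\
  \text{metric: } & \Lambda_1 = \dots = \Lambda_G\\
  \text{scalar: } & \Lambda_1 = \dots = \Lambda_G \ \text{ and } \ \tau_1 = \dots = \tau_G
\end{align*}
```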
Raykov, Tenko; Marcoulides, George A. – Measurement: Interdisciplinary Research and Perspectives, 2023
This article outlines a readily applicable procedure for point and interval estimation of the population discrepancy between reliability and the popular Cronbach's coefficient alpha for unidimensional multi-component measuring instruments with uncorrelated errors, which are widely used in behavioral and social research. The method is developed…
Descriptors: Measurement, Test Reliability, Measurement Techniques, Error of Measurement
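Coefficient alpha itself, the quantity whose gap from true reliability the article estimates, is straightforward to compute; the sketch below uses invented item scores and the standard alpha formula, not the article's estimation procedure.

```python
# Cronbach's alpha for a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(4)
true_score = rng.normal(size=500)
items = true_score[:, None] + rng.normal(scale=0.8, size=(500, 5))  # 5 items
print("alpha:", round(cronbach_alpha(items), 3))
```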
Pogrow, Stanley – Phi Delta Kappan, 2023
Educators who are urged to use evidence-based practices to improve instruction often end up disappointed at the results, which fall short of those touted in the research and by the What Works Clearinghouse. Stanley Pogrow explains how common strategies researchers use to demonstrate evidence of success, such as statistical significance of or…
Descriptors: Evidence Based Practice, Instructional Improvement, Educational Research, Error of Measurement
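One pattern behind the concern is easy to reproduce: with a large enough sample, a practically negligible difference is still "statistically significant." The simulation below uses invented data and an arbitrary effect size.

```python
# A tiny effect (Cohen's d near 0.02) becomes highly significant at n = 200,000.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 200_000
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2)
print(f"p-value = {p_value:.2e}, Cohen's d = {cohens_d:.3f}")
```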
Teck Kiang Tan – Practical Assessment, Research & Evaluation, 2024
Procedures for carrying out factorial invariance testing to validate a construct are well developed, ensuring that the construct can reliably be used across groups for comparison and analysis, yet they remain largely restricted to the frequentist approach. This motivates an update to incorporate the growing Bayesian approach for carrying out the Bayesian…
Descriptors: Bayesian Statistics, Factor Analysis, Programming Languages, Reliability
Noma, Hisashi; Hamura, Yasuyuki; Gosho, Masahiko; Furukawa, Toshi A. – Research Synthesis Methods, 2023
Network meta-analysis has been an essential methodology of systematic reviews for comparative effectiveness research. The restricted maximum likelihood (REML) method is one of the current standard inference methods for multivariate, contrast-based meta-analysis models, but recent studies have revealed the resultant confidence intervals of average…
Descriptors: Network Analysis, Meta Analysis, Regression (Statistics), Error of Measurement
Dan Soriano; Eli Ben-Michael; Peter Bickel; Avi Feller; Samuel D. Pimentel – Grantee Submission, 2023
Assessing sensitivity to unmeasured confounding is an important step in observational studies, which typically estimate effects under the assumption that all confounders are measured. In this paper, we develop a sensitivity analysis framework for balancing weights estimators, an increasingly popular approach that solves an optimization problem to…
Descriptors: Statistical Analysis, Computation, Mathematical Formulas, Monte Carlo Methods
Kelly, Matthew Gardner; Farrie, Danielle – Educational Researcher, 2023
This brief describes how several commonly used per-pupil funding measures derived from federal data include passthrough funding in the numerator but exclude students attached to this funding from the denominator, artificially inflating per-pupil ratios. Three forms of passthrough funding for students not educated by the school district where they…
Descriptors: Educational Finance, Expenditure per Student, Data Use, Error of Measurement
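The arithmetic behind the inflation is simple enough to show with made-up numbers: passthrough dollars stay in the numerator while the students they follow are dropped from the denominator.

```python
# Toy per-pupil calculation with invented figures.
total_spending = 120_000_000    # includes $12M passed through to other schools
passthrough = 12_000_000        # follows 1,000 students educated elsewhere
district_students = 9_000       # students the district itself educates
passthrough_students = 1_000

inflated = total_spending / district_students
drop_the_dollars = (total_spending - passthrough) / district_students
keep_the_students = total_spending / (district_students + passthrough_students)

print(f"inflated per-pupil:           ${inflated:,.0f}")          # $13,333
print(f"exclude passthrough dollars:  ${drop_the_dollars:,.0f}")  # $12,000
print(f"include passthrough students: ${keep_the_students:,.0f}") # $12,000
```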
Demarest, Leila; Langer, Arnim – Sociological Methods & Research, 2022
While conflict event data sets are increasingly used in contemporary conflict research, important concerns persist regarding the quality of the collected data. Such concerns are not necessarily new. Yet, because the methodological debate and evidence on potential errors remains scattered across different subdisciplines of social sciences, there is…
Descriptors: Guidelines, Research Methodology, Conflict, Social Science Research
Raykov, Tenko; DiStefano, Christine; Calvocoressi, Lisa; Volker, Martin – Educational and Psychological Measurement, 2022
A class of effect size indices are discussed that evaluate the degree to which two nested confirmatory factor analysis models differ from each other in terms of fit to a set of observed variables. These descriptive effect measures can be used to quantify the impact of parameter restrictions imposed in an initially considered model and are free…
Descriptors: Effect Size, Models, Measurement Techniques, Factor Analysis