Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 26 |
| Since 2022 (last 5 years) | 132 |
| Since 2017 (last 10 years) | 320 |
| Since 2007 (last 20 years) | 709 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Statistical Bias | 1399 |
| Statistical Analysis | 363 |
| Error of Measurement | 300 |
| Computation | 231 |
| Sampling | 224 |
| Research Methodology | 217 |
| Research Problems | 197 |
| Sample Size | 193 |
| Comparative Analysis | 185 |
| Correlation | 171 |
| Simulation | 157 |
Location
| Location | Records |
| --- | --- |
| Australia | 15 |
| Netherlands | 14 |
| North Carolina | 13 |
| Germany | 12 |
| United States | 12 |
| California | 11 |
| Texas | 11 |
| United Kingdom | 10 |
| Canada | 9 |
| New York | 8 |
| United Kingdom (England) | 8 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards with or without Reservations | 1 |
Francis Huang; Brian Keller – Large-scale Assessments in Education, 2025
Missing data are common in large-scale assessments (LSAs). A typical approach to handling missing data in LSAs is listwise deletion, despite decades of research showing that this approach can be suboptimal, yielding biased estimates. To help researchers account for missing data, we provide a tutorial using R and…
Descriptors: Research Problems, Data Analysis, Statistical Bias, International Assessment
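The bias that listwise deletion can introduce is easy to demonstrate. The sketch below is not the authors' R tutorial code; it is a minimal Python illustration with an invented score distribution and an invented missingness mechanism in which low scorers are more likely to be missing, so the mean of the complete cases overstates the population mean:

```python
import random
import statistics

random.seed(42)

# Simulated "population" of test scores (invented distribution).
scores = [random.gauss(500, 100) for _ in range(10_000)]

# Missingness depends on the score itself: examinees scoring
# below 450 are missing with probability 0.6.
observed = []
for s in scores:
    p_missing = 0.6 if s < 450 else 0.0
    if random.random() > p_missing:
        observed.append(s)

true_mean = statistics.mean(scores)
listwise_mean = statistics.mean(observed)  # listwise deletion

print(f"true mean:     {true_mean:.1f}")
print(f"listwise mean: {listwise_mean:.1f}  (biased upward)")
```

Because the low end of the distribution is disproportionately dropped, the listwise mean lands well above the true mean; methods such as multiple imputation aim to avoid this distortion.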
Kelsey Nason; Christine DeMars – Journal of Educational Measurement, 2025
This study examined the widely used threshold of 0.2 for Yen's Q3, an index for violations of local independence. Specifically, a simulation was conducted to investigate whether Q3 values were related to the magnitude of bias in estimates of reliability, item parameters, and examinee ability. Results showed that Q3 values below the typical cut-off…
Descriptors: Item Response Theory, Statistical Bias, Test Reliability, Test Items
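Yen's Q3 itself is simple to compute once an IRT model has been fitted: it is the correlation, across examinees, of the residuals for a pair of items. A self-contained Python sketch follows; the responses and model-expected probabilities are made-up toy values, not data from the study:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def q3(resp_j, resp_k, exp_j, exp_k):
    """Yen's Q3 for items j and k: the correlation, across
    examinees i, of the residuals x_ij - P_j(theta_i)."""
    d_j = [x - p for x, p in zip(resp_j, exp_j)]
    d_k = [x - p for x, p in zip(resp_k, exp_k)]
    return pearson(d_j, d_k)

# Toy 0/1 responses and model-expected probabilities for six
# examinees (hypothetical values for illustration only).
resp_j = [1, 0, 1, 1, 0, 1]
resp_k = [1, 0, 1, 0, 0, 1]
exp_j = [0.9, 0.2, 0.7, 0.8, 0.3, 0.6]
exp_k = [0.8, 0.1, 0.6, 0.5, 0.4, 0.7]
print(f"Q3(j, k) = {q3(resp_j, resp_k, exp_j, exp_k):.3f}")
```

A large positive Q3 for an item pair suggests residual dependence that the fitted model did not absorb; the study above questions how the conventional 0.2 cut-off relates to actual bias.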
Sohaib Ahmad; Javid Shabbir – Measurement: Interdisciplinary Research and Perspectives, 2025
This study suggests a generalized class of estimators for the population proportion under simple random sampling that uses auxiliary attributes. The bias and mean squared errors (MSEs) are derived to a first-degree approximation. The validity of the suggested and existing estimators is assessed via an empirical investigation. The performance of…
Descriptors: Computation, Sampling, Data Collection, Data Analysis
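The general idea behind such estimators can be illustrated with the classic ratio-type estimator of a proportion, which rescales the sample proportion by the known population proportion of a correlated auxiliary attribute. The Monte Carlo sketch below uses an invented population, not the authors' estimator class, and compares empirical MSEs:

```python
import random
import statistics

random.seed(1)

# Hypothetical population: study attribute y is strongly
# associated with an auxiliary attribute a whose population
# proportion is assumed known.
N = 5000
pop = []
for _ in range(N):
    a = random.random() < 0.5
    y = random.random() < (0.8 if a else 0.1)
    pop.append((y, a))

P_a = sum(a for _, a in pop) / N  # known auxiliary proportion
P_y = sum(y for y, _ in pop) / N  # target of estimation

def draw_estimates(n=200, reps=2000):
    """Repeated simple random samples; return both estimators."""
    usual, ratio = [], []
    for _ in range(reps):
        s = random.sample(pop, n)
        p_y = sum(y for y, _ in s) / n
        p_a = sum(a for _, a in s) / n
        usual.append(p_y)
        if p_a > 0:
            ratio.append(p_y * P_a / p_a)  # ratio-type estimator
    return usual, ratio

def mse(est):
    return statistics.mean((e - P_y) ** 2 for e in est)

usual, ratio = draw_estimates()
print(f"empirical MSE, usual: {mse(usual):.6f}")
print(f"empirical MSE, ratio: {mse(ratio):.6f}")
```

When the auxiliary attribute is sufficiently correlated with the study attribute, the ratio-type estimator's empirical MSE falls below that of the ordinary sample proportion, which is the kind of gain the abstract's empirical investigation quantifies.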
Abdul Haq; Muhammad Usman; Manzoor Khan – Measurement: Interdisciplinary Research and Perspectives, 2024
Measurement errors may significantly distort the properties of an estimator. In this paper, estimators of the finite population variance that incorporate the variance of a measurement-error component are developed under stratified random sampling, using information on the first and second raw moments of the study variable. Additionally, combined…
Descriptors: Sampling, Error of Measurement, Evaluation Methods, Statistical Bias
Hans-Peter Piepho; Johannes Forkman; Waqas Ahmed Malik – Research Synthesis Methods, 2024
Checking for possible inconsistency between direct and indirect evidence is an important task in network meta-analysis. Recently, an evidence-splitting (ES) model has been proposed that allows separating direct and indirect evidence in a network and hence assessing inconsistency. A salient feature of this model is that the variance for…
Descriptors: Maximum Likelihood Statistics, Evidence, Networks, Meta Analysis
Bo Zhang; Jing Luo; Susu Zhang; Tianjun Sun; Don C. Zhang – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Oblique bifactor models, where group factors are allowed to correlate with one another, are commonly used. However, the lack of research on the statistical properties of oblique bifactor models renders the statistical validity of empirical findings questionable. Therefore, the present study took the first step to examine the statistical properties…
Descriptors: Correlation, Predictor Variables, Monte Carlo Methods, Statistical Bias
Yi Feng – Asia Pacific Education Review, 2024
Causal inference is a central topic in education research, though it often relies on observational studies, which makes causal identification methodologically challenging. This manuscript introduces causal graphs as a powerful language for elucidating causal theories and an effective tool for causal identification analysis. It discusses…
Descriptors: Causal Models, Graphs, Educational Research, Educational Researchers
Kuan-Yu Jin; Yi-Jhen Wu; Ming Ming Chiu – Measurement: Interdisciplinary Research and Perspectives, 2025
Many education tests and psychological surveys elicit respondent views of similar constructs across scenarios (e.g., story followed by multiple choice questions) by repeating common statements across scales (one-statement-multiple-scale, OSMS). However, a respondent's earlier responses to the common statement can affect later responses to it…
Descriptors: Administrator Surveys, Teacher Surveys, Responses, Test Items
Damaris D. E. Carlisle – Sage Research Methods Cases, 2025
This case study explores the use of large language models (LLMs) as analytical partners for data exploration and interpretation. Grounded in original research, it navigates the intricacies of using LLMs for uncovering themes from datasets. The study tackles various methodological and practical challenges encountered during the research process…
Descriptors: Artificial Intelligence, Natural Language Processing, Data Analysis, Data Interpretation
Liang, Qianru; de la Torre, Jimmy; Law, Nancy – Journal of Educational and Behavioral Statistics, 2023
To expand the use of cognitive diagnosis models (CDMs) to longitudinal assessments, this study proposes a bias-corrected three-step estimation approach for latent transition CDMs with covariates by integrating a general CDM and a latent transition model. The proposed method can be used to assess changes in attribute mastery status and attribute…
Descriptors: Cognitive Measurement, Models, Statistical Bias, Computation
Hsin-Yun Lee; You-Lin Chen; Li-Jen Weng – Journal of Experimental Education, 2024
The second version of Kaiser's Measure of Sampling Adequacy (MSA₂) has been widely applied to assess the factorability of data in psychological research. The MSA₂ is defined at the population level, and little is known about its behavior in finite samples. If estimated MSA₂s are biased due to sampling errors,…
Descriptors: Error of Measurement, Reliability, Sampling, Statistical Bias
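Kaiser's overall MSA is computable directly from a correlation matrix: it compares squared correlations with squared partial (anti-image) correlations, which are obtained from the inverse of the correlation matrix. A minimal stdlib-only Python sketch follows; the 3×3 correlation matrix is a hypothetical toy example:

```python
def inverse(m):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(m)
    a = [row[:] + [float(i == j) for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def overall_msa(R):
    """Kaiser's overall MSA: sum of squared off-diagonal
    correlations over that sum plus the sum of squared partial
    correlations. The partial correlation is -S_jk/sqrt(S_jj*S_kk)
    with S = R^-1; the sign vanishes on squaring."""
    n = len(R)
    S = inverse(R)
    r2 = sum(R[j][k] ** 2
             for j in range(n) for k in range(n) if j != k)
    q2 = sum(S[j][k] ** 2 / (S[j][j] * S[k][k])
             for j in range(n) for k in range(n) if j != k)
    return r2 / (r2 + q2)

# Toy correlation matrix for three items (hypothetical values).
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
print(f"overall MSA = {overall_msa(R):.3f}")
```

Because the sample correlation matrix fed into this calculation is itself noisy, the estimated MSA inherits sampling error, which is exactly the finite-sample behavior the study investigates.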
Liyang Sun; Eli Ben-Michael; Avi Feller – Grantee Submission, 2024
The synthetic control method (SCM) is a popular approach for estimating the impact of a treatment on a single unit with panel data. Two challenges arise with higher frequency data (e.g., monthly versus yearly): (1) achieving excellent pre-treatment fit is typically more challenging; and (2) overfitting to noise is more likely. Aggregating data…
Descriptors: Evaluation Methods, Comparative Analysis, Computation, Data Analysis
Timothy R. Konold; Elizabeth A. Sanders – Measurement: Interdisciplinary Research and Perspectives, 2024
Compared to traditional confirmatory factor analysis (CFA), exploratory structural equation modeling (ESEM) has been shown to result in less structural parameter bias when cross-loadings (CLs) are present. However, when model fit is reasonable for CFA (over ESEM), CFA should be preferred on the basis of parsimony. Using simulations, the current…
Descriptors: Structural Equation Models, Factor Analysis, Factor Structure, Goodness of Fit
A. R. Georgeson – Structural Equation Modeling: A Multidisciplinary Journal, 2025
There is increasing interest in using factor scores in structural equation models, and numerous methodological papers have addressed the topic. Nevertheless, sum scores, which are computed by adding up item responses, continue to be ubiquitous in practice. It is therefore important to compare simulation results involving factor scores to…
Descriptors: Structural Equation Models, Scores, Factor Analysis, Statistical Bias
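The difference between the two scoring schemes is easy to see in a one-factor simulation: sum scores weight every item equally, while factor scores weight items by how strongly they load. The Python sketch below uses invented loadings and a Bartlett-type weighting (loading divided by unique variance), which is one of several factor-scoring methods, not necessarily the one examined in the paper:

```python
import random

random.seed(7)

# One-factor model x_j = lam_j * eta + e_j (hypothetical loadings).
lams = [0.9, 0.7, 0.5, 0.3]
psis = [1 - l ** 2 for l in lams]  # unique variances

data, etas = [], []
for _ in range(1000):
    eta = random.gauss(0, 1)
    data.append([l * eta + random.gauss(0, p ** 0.5)
                 for l, p in zip(lams, psis)])
    etas.append(eta)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Sum score: unit weights. Bartlett-type factor score: weight
# each item by loading / unique variance, so weakly loading
# items count for less.
w = [l / p for l, p in zip(lams, psis)]
sum_scores = [sum(x) for x in data]
fac_scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in data]

print(f"r(sum score, eta)    = {pearson(sum_scores, etas):.3f}")
print(f"r(factor score, eta) = {pearson(fac_scores, etas):.3f}")
```

With loadings this uneven, the weighted factor score tracks the true latent variable more closely than the sum score, which is why simulation findings for one scoring method need not transfer to the other.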
Timothy R. Konold; Elizabeth A. Sanders; Kelvin Afolabi – Structural Equation Modeling: A Multidisciplinary Journal, 2025
Measurement invariance (MI) is an essential part of validity evidence concerned with ensuring that tests function similarly across groups, contexts, and time. Most evaluations of MI involve multigroup confirmatory factor analyses (MGCFA) that assume simple structure. However, recent research has shown that constraining non-target indicators to…
Descriptors: Evaluation Methods, Error of Measurement, Validity, Monte Carlo Methods