Showing all 14 results
Peer reviewed
Penaloza, Roberto V.; Berends, Mark – Sociological Methods & Research, 2022
To measure "treatment" effects, social science researchers typically rely on nonexperimental data. In education, school and teacher effects on students are often measured through value-added models (VAMs) that are not fully understood. We propose a framework that relates to the education production function in its most flexible form and…
Descriptors: Data, Value Added Models, Error of Measurement, Correlation
Peer reviewed
De Raadt, Alexandra; Warrens, Matthijs J.; Bosker, Roel J.; Kiers, Henk A. L. – Educational and Psychological Measurement, 2019
Cohen's kappa coefficient is commonly used for assessing agreement between classifications of two raters on a nominal scale. Three variants of Cohen's kappa that can handle missing data are presented. Data are considered missing if one or both ratings of a unit are missing. We study how well the variants estimate the kappa value for complete data…
Descriptors: Interrater Reliability, Data, Statistical Analysis, Statistical Bias
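The entry above centers on Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected from the two raters' marginal proportions. The snippet does not spell out the three missing-data variants the article studies; the sketch below only illustrates the baseline statistic with the simplest handling of missing ratings (listwise deletion), so the function name and strategy are this listing's own illustration, not the authors' method.

```python
import numpy as np

def cohens_kappa_listwise(r1, r2):
    """Cohen's kappa for two raters on a nominal scale.

    Units where either rating is None are dropped (listwise deletion);
    this is only the simplest way to handle missing ratings and is not
    necessarily one of the three variants compared in the article.
    """
    pairs = [(a, b) for a, b in zip(r1, r2) if a is not None and b is not None]
    r1 = np.array([a for a, _ in pairs], dtype=object)
    r2 = np.array([b for _, b in pairs], dtype=object)
    categories = set(r1) | set(r2)
    p_o = np.mean(r1 == r2)                                              # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Toy example with one missing rating in each list
print(cohens_kappa_listwise(["a", "b", "a", "b", None, "a"],
                            ["a", "b", "b", "b", "a", None]))
```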
Peer reviewed
PDF on ERIC
Qian, Jiahe – ETS Research Report Series, 2020
The finite population correction (FPC) factor is often used to adjust variance estimators for survey data sampled from a finite population without replacement. As a replicated resampling approach, the jackknife approach is usually implemented without the FPC factor incorporated in its variance estimates. A paradigm is proposed to compare the…
Descriptors: Computation, Sampling, Data, Statistical Analysis
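For context, under simple random sampling without replacement the FPC enters the usual variance estimator of the sample mean as the factor (1 − n/N):

\[
\widehat{\operatorname{Var}}(\bar{y}) \;=\; \Bigl(1 - \frac{n}{N}\Bigr)\,\frac{s^2}{n},
\]

with sample size n, population size N, and sample variance s². The abstract's point is that standard jackknife replication produces its variance estimates without this factor, which motivates the proposed comparison paradigm. The notation here is standard survey-sampling usage, not taken from the report itself.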
Peer reviewed
Rocabado, Guizella A.; Komperda, Regis; Lewis, Jennifer E.; Barbera, Jack – Chemistry Education Research and Practice, 2020
As the field of chemistry education moves toward greater inclusion and increased participation by underrepresented minorities, standards for investigating the differential impacts and outcomes of learning environments have to be considered. While quantitative methods may not be capable of generating the in-depth nuances of qualitative methods,…
Descriptors: Chemistry, Science Education, Inclusion, Equal Education
Peer reviewed
Miles, Andrew – Sociological Methods & Research, 2016
Obtaining predictions from regression models fit to multiply imputed data can be challenging because treatments of multiple imputation seldom give clear guidance on how predictions can be calculated, and because available software often does not have built-in routines for performing the necessary calculations. This research note reviews how…
Descriptors: Prediction, Regression (Statistics), Data, Surveys
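As a hedged sketch of the general recipe (not the research note's own code): fit the same regression to each of the m completed datasets, generate predictions from each fit, and pool with Rubin's rules, taking the mean across imputations as the point prediction and adding (1 + 1/m) times the between-imputation variance to the within-imputation prediction variance for uncertainty. The data below are random placeholders standing in for imputed datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_predict(X, y, X_new):
    """Fit ordinary least squares on one completed dataset and predict."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X_new @ beta

# Placeholder stand-ins for m completed (imputed) datasets; in practice these
# would come from an imputation routine, not from random draws.
m = 5
completed = [(rng.normal(size=(200, 3)), rng.normal(size=200)) for _ in range(m)]
X_new = rng.normal(size=(10, 3))

preds = np.stack([ols_predict(X, y, X_new) for X, y in completed])  # shape (m, 10)

pooled_prediction = preds.mean(axis=0)    # Rubin's rules: point estimate
between_var = preds.var(axis=0, ddof=1)   # B: between-imputation variance
# Total variance = mean within-imputation prediction variance + (1 + 1/m) * B
extra_uncertainty = (1 + 1 / m) * between_var
```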
Peer reviewed
Petrosino, Anthony J.; Mann, Michele J. – Journal of College Science Teaching, 2018
Although data modeling, the use of statistical reasoning to investigate questions about the world, is central to both mathematics and science, it is rarely emphasized in K-16 instruction. The current work focuses on developing thinking about data modeling with undergraduates in general and preservice teachers in…
Descriptors: Undergraduate Students, Preservice Teachers, Mathematical Models, Data
Peer reviewed
Magis, David; De Boeck, Paul – Educational and Psychological Measurement, 2012
The identification of differential item functioning (DIF) is often performed by means of statistical approaches that consider the raw scores as proxies for the ability trait level. One of the most popular approaches, the Mantel-Haenszel (MH) method, belongs to this category. However, replacing the ability level by the simple raw score is a source…
Descriptors: Test Bias, Data, Error of Measurement, Raw Scores
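For orientation, the MH method the abstract refers to pools 2 × 2 tables (group × item correctness) across strata of the matching variable, here the raw total score, which is exactly the proxy the authors question. A minimal sketch with illustrative variable names, not the article's own implementation:

```python
import numpy as np

def mh_odds_ratio(correct, group, matching_score):
    """Mantel-Haenszel common odds ratio for one studied item.

    correct        : 0/1 responses to the studied item
    group          : 'ref' or 'focal' label for each examinee
    matching_score : matching variable (here the raw total score)
    """
    correct = np.asarray(correct)
    group = np.asarray(group)
    score = np.asarray(matching_score)
    num, den = 0.0, 0.0
    for k in np.unique(score):
        s = score == k
        A = np.sum(s & (group == "ref") & (correct == 1))    # reference, correct
        B = np.sum(s & (group == "ref") & (correct == 0))    # reference, incorrect
        C = np.sum(s & (group == "focal") & (correct == 1))  # focal, correct
        D = np.sum(s & (group == "focal") & (correct == 0))  # focal, incorrect
        N = A + B + C + D
        if N > 0:
            num += A * D / N
            den += B * C / N
    return num / den

# ETS delta scale: delta_MH = -2.35 * ln(alpha_MH); values near 0 suggest little DIF
```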
Peer reviewed
PDF on ERIC
Schochet, Peter Z. – Society for Research on Educational Effectiveness, 2014
Randomized controlled trials (RCTs) are considered the "gold standard" for evaluating an intervention's effectiveness. Recently, the federal government has placed increased emphasis on the use of opportunistic experiments. A key criterion for conducting opportunistic experiments, however, is that there is relatively easy access to data…
Descriptors: Randomized Controlled Trials, Outcomes of Treatment, Intervention, Program Effectiveness
Peer reviewed
PDF on ERIC
Elosua, Paula – Psicologica: International Journal of Methodology and Experimental Psychology, 2011
Assessing measurement equivalence in the framework of the common factor linear model (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence of the parameters of a measurement model across different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…
Descriptors: Measurement, Models, Data, Factor Analysis
Warachan, Boonyasit – ProQuest LLC, 2011
The objective of this research was to determine the robustness and statistical power of three different methods for testing the hypothesis that ordinal samples of five and seven Likert categories come from equal populations. The three methods are the two sample t-test with equal variances, the Mann-Whitney test, and the Kolmogorov-Smirnov test. In…
Descriptors: Statistical Analysis, Likert Scales, Hypothesis Testing, Data
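The three procedures compared in the dissertation are all available in SciPy; the sketch below runs them on illustrative 5-category Likert samples and does not reproduce the actual simulation design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative 5-category Likert samples
x = rng.integers(1, 6, size=50)
y = rng.integers(1, 6, size=50)

print(stats.ttest_ind(x, y, equal_var=True))              # two-sample t test, equal variances
print(stats.mannwhitneyu(x, y, alternative="two-sided"))  # Mann-Whitney test
print(stats.ks_2samp(x, y))                               # Kolmogorov-Smirnov two-sample test
```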
Peer reviewed
Finch, W. Holmes; French, Brian F. – Structural Equation Modeling: A Multidisciplinary Journal, 2011
The purpose of this simulation study was to assess the performance of latent variable models that take into account the complex sampling mechanism that often underlies data used in educational, psychological, and other social science research. Analyses were conducted using the multiple indicator multiple cause (MIMIC) model, which is a flexible…
Descriptors: Causal Models, Computation, Data, Sampling
Peer reviewed
Gemici, Sinan; Bednarz, Alice; Lim, Patrick – International Journal of Training Research, 2012
Quantitative research in vocational education and training (VET) is routinely affected by missing or incomplete information. However, the handling of missing data in published VET research is often sub-optimal, leading to a real risk of generating results that can range from being slightly biased to being plain wrong. Given that the growing…
Descriptors: Vocational Education, Educational Research, Data, Statistical Analysis
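One commonly recommended alternative to ad hoc deletion in this literature is multiple imputation by chained equations; as a minimal sketch, using scikit-learn's IterativeImputer (a choice of this listing for illustration, not a method named in the article):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates the estimator)
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
X[rng.random(X.shape) < 0.1] = np.nan   # inject ~10% missing values for illustration

# sample_posterior=True draws from the predictive distribution, so re-running
# with different random_state values yields multiple completed datasets
imputed_sets = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
    for s in range(5)
]
```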
Peer reviewed
PDF on ERIC
What Works Clearinghouse, 2014
This "What Works Clearinghouse Procedures and Standards Handbook (Version 3.0)" provides a detailed description of the standards and procedures of the What Works Clearinghouse (WWC). The remaining chapters of this Handbook are organized to take the reader through the basic steps that the WWC uses to develop a review protocol, identify…
Descriptors: Educational Research, Guides, Intervention, Classification
Peer reviewed
Pullin, Andrew S.; Knight, Teri M. – New Directions for Evaluation, 2009
To use environmental program evaluation to increase effectiveness, predictive power, and resource allocation efficiency, evaluators need good data. Data require sufficient credibility in terms of fitness for purpose and quality to develop the necessary evidence base. The authors examine elements of data credibility using experience from critical…
Descriptors: Data, Credibility, Conservation (Environment), Program Evaluation