Showing 1 to 15 of 63 results
Peer reviewed
Direct link
Xiong Luo – International Journal of Web-Based Learning and Teaching Technologies, 2024
However, although existing models for evaluating university effectiveness offer a large number of modeling solutions, it is difficult, when evaluating innovative paths, to objectively assess dynamic coefficients given the differences among the precision ideological and political work systems of different types of universities…
Descriptors: Educational Research, Ideology, Political Issues, Models
Peer reviewed
Direct link
Boris Forthmann; Benjamin Goecke; Roger E. Beaty – Creativity Research Journal, 2025
Human ratings are ubiquitous in creativity research. Yet, the process of rating responses to creativity tasks -- typically several hundred or thousands of responses per rater -- is often time-consuming and expensive. Planned missing data designs, where raters only rate a subset of the total number of responses, have recently been proposed as one…
Descriptors: Creativity, Research, Researchers, Research Methodology
Peer reviewed
Direct link
Reagan Mozer; Luke Miratrix – Society for Research on Educational Effectiveness, 2023
Background: For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require each document first be manually coded for constructs of interest by trained human raters. These hand-coded scores are then used as a measured outcome for an impact analysis, with the average scores of the treatment group…
Descriptors: Artificial Intelligence, Coding, Randomized Controlled Trials, Research Methodology
Peer reviewed
Direct link
Cosemans, Tim; Rosseel, Yves; Gelper, Sarah – Educational and Psychological Measurement, 2022
Exploratory graph analysis (EGA) is a commonly applied technique intended to help social scientists discover latent variables. Yet, the results can be influenced by the methodological decisions the researcher makes along the way. In this article, we focus on the choice regarding the number of factors to retain: We compare the performance of the…
Descriptors: Social Science Research, Research Methodology, Graphs, Factor Analysis
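The core idea behind exploratory graph analysis can be illustrated with a deliberately simplified sketch (not the authors' implementation, which uses a regularized partial-correlation network and community detection): treat variables as nodes, connect pairs whose absolute correlation exceeds a threshold, and read each connected component as one candidate latent dimension. The function name and threshold below are my own illustrative choices.

```python
import numpy as np

def count_components(data, threshold=0.3):
    """Crude EGA-flavored sketch: variables are nodes, edges connect pairs
    whose absolute correlation exceeds `threshold`, and each connected
    component of the resulting graph is counted as one candidate factor.
    (Real EGA estimates a regularized partial-correlation network and
    applies a community-detection algorithm such as walktrap.)"""
    R = np.corrcoef(data, rowvar=False)          # variable-by-variable correlations
    p = R.shape[0]
    adj = (np.abs(R) > threshold) & ~np.eye(p, dtype=bool)
    seen, components = set(), 0
    for start in range(p):                        # depth-first component count
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(np.flatnonzero(adj[v]))
    return components
```

On data simulated from two well-separated factors, this count recovers the number of factors; the article's point is precisely that such methodological choices (network estimation, clustering rule) can change the answer.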
Peer reviewed
Direct link
Elsenbroich, Corinna; Badham, Jennifer – International Journal of Social Research Methodology, 2023
Agent-based models combine data and theory during both development and use of the model. As models have become increasingly data driven, it is easy to start thinking of agent-based modelling as an empirical method, akin to statistical modelling, and reduce the role of theory. We argue that both types of information are important where the past is…
Descriptors: Models, Futures (of Society), Research Methodology, Systems Approach
Peer reviewed
Direct link
Harel, Daphna; Steele, Russell J. – Journal of Educational and Behavioral Statistics, 2018
Collapsing categories is a commonly used data reduction technique; however, to date there do not exist principled methods to determine whether collapsing categories is appropriate in practice. With ordinal responses under the partial credit model, when collapsing categories, the true model for the collapsed data is no longer a partial credit…
Descriptors: Matrices, Models, Item Response Theory, Research Methodology
Peer reviewed
Direct link
Lee, Daniel Y.; Harring, Jeffrey R.; Stapleton, Laura M. – Journal of Experimental Education, 2019
Respondent attrition is a common problem in national longitudinal panel surveys. To make full use of the data, weights are provided to account for attrition. Weight adjustments are based on sampling design information and data from the base year; information from subsequent waves is typically not utilized. Alternative methods to address bias from…
Descriptors: Longitudinal Studies, Research Methodology, Research Problems, Data Analysis
Peer reviewed
Direct link
Nicole Bohme Carnegie; Masataka Harada; Jennifer L. Hill – Journal of Research on Educational Effectiveness, 2016
A major obstacle to developing evidenced-based policy is the difficulty of implementing randomized experiments to answer all causal questions of interest. When using a nonexperimental study, it is critical to assess how much the results could be affected by unmeasured confounding. We present a set of graphical and numeric tools to explore the…
Descriptors: Randomized Controlled Trials, Simulation, Evidence Based Practice, Barriers
Peer reviewed
Direct link
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre – Journal of Speech, Language, and Hearing Research, 2018
Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…
Descriptors: Behavioral Science Research, Research Methodology, Statistical Analysis, Repetition
Peer reviewed
Direct link
Schoemann, Alexander M.; Miller, Patrick; Pornprasertmanit, Sunthud; Wu, Wei – International Journal of Behavioral Development, 2014
Planned missing data designs allow researchers to increase the amount and quality of data collected in a single study. Unfortunately, the effect of planned missing data designs on power is not straightforward. Under certain conditions using a planned missing design will increase power, whereas in other situations using a planned missing design…
Descriptors: Monte Carlo Methods, Simulation, Sample Size, Research Design
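The abstract's point that planned missingness affects power in non-obvious ways can be checked with a Monte Carlo simulation of the kind the entry's descriptors mention. The sketch below is my own minimal illustration (function name, effect size, and design are assumptions, not taken from the article): simulate a two-group comparison, delete a planned fraction of outcomes completely at random, and estimate power as the rejection rate.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(n=200, effect=0.3, missing_rate=0.3, reps=1000):
    """Monte Carlo power estimate for a two-group mean comparison when a
    planned fraction of outcome values is missing completely at random."""
    hits = 0
    for _ in range(reps):
        group = rng.integers(0, 2, size=n)            # random assignment
        y = effect * group + rng.standard_normal(n)   # outcome with true effect
        keep = rng.random(n) >= missing_rate          # planned MCAR deletion
        g0, g1 = y[keep & (group == 0)], y[keep & (group == 1)]
        # Welch-style t statistic with a normal critical value (approximation)
        se = np.sqrt(g0.var(ddof=1) / len(g0) + g1.var(ddof=1) / len(g1))
        if abs((g1.mean() - g0.mean()) / se) > 1.96:
            hits += 1
    return hits / reps

full = simulate_power(missing_rate=0.0)
planned = simulate_power(missing_rate=0.3)
```

In this simple design the planned missingness only reduces power; the article's contribution is characterizing the conditions (e.g., multi-form designs collecting more variables per budget) under which it can instead increase power.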
Peer reviewed
Direct link
Buchanan, Taylor L.; Lohse, Keith R. – Measurement in Physical Education and Exercise Science, 2016
We surveyed researchers in the health and exercise sciences to explore different areas and magnitudes of bias in researchers' decision making. Participants were presented with scenarios (testing a central hypothesis with p = 0.06 or p = 0.04) in a random order and surveyed about what they would do in each scenario. Participants showed significant…
Descriptors: Researchers, Attitudes, Statistical Significance, Bias
Peer reviewed
PDF on ERIC | Download full text
Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen – Society for Research on Educational Effectiveness, 2013
To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials that are being planned and conducted make it increasingly feasible to study "cross-site" variation in impacts. Important…
Descriptors: Research Methodology, Policy, Evaluation Research, Randomized Controlled Trials
Peer reviewed
PDF on ERIC | Download full text
Moraveji, Behjat; Jafarian, Koorosh – International Journal of Education and Literacy Studies, 2014
The aim of this paper is to introduce new imputation algorithms for estimating missing values in large official-statistics data sets during data pre-processing, including the handling of outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…
Descriptors: Mathematics, Computation, Robustness (Statistics), Regression (Statistics)
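The general scheme behind iterative model-based imputation can be sketched as follows. This is a toy version I wrote for illustration, not the IRMI algorithm itself: it initializes missing cells with column means and then cycles through the incomplete columns, re-imputing each from an ordinary least-squares regression on the others, whereas IRMI uses robust regression fits to resist outliers.

```python
import numpy as np

def iterative_impute(X, n_iter=10):
    """Toy iterative model-based imputation: fill missing cells with column
    means, then repeatedly regress each incomplete column on the other
    columns (OLS here; IRMI itself substitutes robust estimators) and
    replace the missing cells with the fitted values."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                   # step 1: mean initialization
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):                       # step 2: cyclic regression updates
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(len(X)), others])  # design with intercept
            obs = ~miss[:, j]
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X
```

Observed cells are never altered; only the missing cells are updated, and the cycling lets columns that are missing in the same row inform each other across iterations.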
Peer reviewed
PDF on ERIC | Download full text
Ferron, John; Van den Noortgate, Wim; Beretvas, Tasha; Moeyaert, Mariola; Ugille, Maaike; Petit-Bois, Merlande; Baek, Eun Kyeng – Society for Research on Educational Effectiveness, 2013
Single-case or single-subject experimental designs (SSED) are used to evaluate the effect of one or more treatments on a single case. Although SSED studies are growing in popularity, the results are in theory case-specific. One systematic and statistical approach for combining single-case data within and across studies is multilevel modeling. The…
Descriptors: Comparative Analysis, Intervention, Experiments, Research Methodology
Peer reviewed
Direct link
Geiser, Christian; Lockhart, Ginger – Psychological Methods, 2012
Latent state-trait (LST) analysis is frequently applied in psychological research to determine the degree to which observed scores reflect stable person-specific effects, effects of situations and/or person-situation interactions, and random measurement error. Most LST applications use multiple repeatedly measured observed variables as indicators…
Descriptors: Psychological Studies, Simulation, Measurement, Error of Measurement