Showing 1 to 15 of 223 results
Peer reviewed
Direct link
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely used to identify latent factors underlying observed variables. This paper presents a comprehensive comparative study of two widely used methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
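For readers who want to see the two retention rules the abstract names side by side, here is a minimal sketch (Python/NumPy, simulated data; the bass-ackward method is not shown): the K1 rule retains factors whose correlation-matrix eigenvalues exceed 1, while Horn's parallel analysis retains only the leading eigenvalues that exceed a chosen percentile of eigenvalues from random data of the same shape.

```python
import numpy as np

def k1_and_parallel(data, n_sims=500, percentile=95, seed=0):
    """Factors retained by the K1 rule (eigenvalues > 1) and by
    Horn's parallel analysis (leading eigenvalues exceeding the
    given percentile of random-data eigenvalues)."""
    n, p = data.shape
    eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    k1 = int(np.sum(eig > 1.0))

    rng = np.random.default_rng(seed)
    rand = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    exceeds = eig > threshold
    pa = int(np.argmax(~exceeds)) if not exceeds.all() else p
    return k1, pa

# Hypothetical example: 300 observations on 8 pure-noise variables.
# K1 typically over-extracts on noise; parallel analysis retains ~0.
x = np.random.default_rng(1).standard_normal((300, 8))
print(k1_and_parallel(x))
```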
Peer reviewed
Direct link
Ian Greener – International Journal of Social Research Methodology, 2024
This paper argues for three aspects of tolerance with respect to QCA research: tolerance toward different approaches to QCA; producing QCA research with tolerance (work that is resistant to criticism); and clarity from QCA researchers about the tolerance of the solutions they present, especially in terms of calibration and truth…
Descriptors: Qualitative Research, Research Methodology, Comparative Analysis, Research Design
Peer reviewed
Direct link
Douglas B. Downey – Review of Educational Research, 2024
A small subset of education studies analyzes school data collected seasonally (separating the summer from the school year). At first, this work was primarily known for documenting learning loss in the summers, but scholars have since recognized that observing how inequality changes between summer and school periods provides leverage for…
Descriptors: Data Collection, Educational Research, Learning Analytics, Cognitive Ability
Peer reviewed
PDF on ERIC Download full text
Walter M. Stroup; Anthony Petrosino; Corey Brady; Karen Duseau – North American Chapter of the International Group for the Psychology of Mathematics Education, 2023
Tests of statistical significance often play a decisive role in establishing the empirical warrant of evidence-based research in education. The results from pattern-based assessment items, as introduced in this paper, are categorical and multimodal and do not immediately support the use of measures of central tendency as typically related to…
Descriptors: Statistical Significance, Comparative Analysis, Research Methodology, Evaluation Methods
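Because pattern-based item results are categorical, contingency-table tests are one standard way to assess significance without means. The sketch below (SciPy, entirely hypothetical counts) illustrates that general idea; it is not the specific procedure the authors develop.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: students in two classrooms choosing each of
# four response patterns (A-D) on a pattern-based assessment item.
counts = [
    [34, 12, 40, 14],  # classroom 1
    [20, 25, 30, 25],  # classroom 2
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```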
Peer reviewed
Direct link
Abutabenjeh, Sawsan; Jaradat, Raed – Teaching Public Administration, 2018
Research design is a critical topic that is central to research studies in science, social science, and many other disciplines. After identifying the research topic and formulating questions, selecting the appropriate design is perhaps the most important decision a researcher makes. Currently, there is a plethora of literature presenting multiple…
Descriptors: Research Design, Research Methodology, Comparative Analysis, Public Administration
Peer reviewed
Direct link
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol – Educational Measurement: Issues and Practice, 2016
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
Descriptors: Test Bias, Research Methodology, Evaluation Methods, Models
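The module's mixture models compare item profiles across latent classes. As a simpler point of reference, the hedged sketch below simulates manifest-group uniform DIF and detects it with the standard logistic-regression DIF test (item response regressed on a matching score, group, and their interaction); the simulated data and variable names are hypothetical, and this is not the mixture-model procedure itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)        # manifest group membership (0/1)
ability = rng.standard_normal(n)

# Simulate one item with uniform DIF: harder for group 1 at equal ability.
logit = 1.2 * ability - 0.6 * group
resp = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# In practice the matching variable is a total or rest score;
# here the simulated ability stands in for it.
X = sm.add_constant(pd.DataFrame({
    "score": ability,
    "group": group,
    "score_x_group": ability * group,   # nonzero => nonuniform DIF
}))
fit = sm.Logit(resp, X).fit(disp=0)
print(fit.params)   # a nonzero 'group' coefficient flags uniform DIF
```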
Peer reviewed
Direct link
Turner, David A. – Compare: A Journal of Comparative and International Education, 2017
In his proposal for comparative education, Marc-Antoine Jullien de Paris argues that the comparative method offers a viable alternative to the experimental method. In an experiment, the scientist can manipulate the variables in such a way that he or she can see any possible combination of variables at will. In comparative education, or in…
Descriptors: Comparative Education, Comparative Analysis, Research Methodology, Predictor Variables
Peer reviewed
Direct link
Thiem, Alrik – American Journal of Evaluation, 2017
The search for necessary and sufficient causes of some outcome of interest, referred to as "configurational comparative research," has long been one of the main preoccupations of evaluation scholars and practitioners. However, only the last three decades have witnessed the evolution of a set of formal methods that are sufficiently…
Descriptors: Qualitative Research, Research Methodology, Comparative Analysis, Tutorial Programs
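Configurational comparative methods such as QCA quantify how well the data support "X is sufficient (or necessary) for Y" with consistency scores. A minimal sketch of the standard fuzzy-set formulas follows, using hypothetical calibrated membership scores; it illustrates the scoring step only, not a full truth-table minimization.

```python
import numpy as np

def consistency_sufficiency(x, y):
    """Consistency of 'X is sufficient for Y':
    sum(min(x_i, y_i)) / sum(x_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / x.sum()

def consistency_necessity(x, y):
    """Consistency of 'X is necessary for Y':
    sum(min(x_i, y_i)) / sum(y_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.minimum(x, y).sum() / y.sum()

# Hypothetical calibrated memberships for five cases
x = [0.9, 0.8, 0.6, 0.2, 0.1]
y = [1.0, 0.7, 0.7, 0.4, 0.0]
print(consistency_sufficiency(x, y))
print(consistency_necessity(x, y))
```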
Peer reviewed
PDF on ERIC Download full text
Chase, Manisha Kaur – Research & Practice in Assessment, 2020
Traditional classroom assessment practice often leaves students out of the conversation, exacerbating the unequal power distribution in the classroom. Students' perception of their classrooms as autonomy-inhibiting is known to influence their psychosocial wellbeing as well as their academic achievement. This is especially relevant in STEM fields where marginalized…
Descriptors: STEM Education, Pilot Projects, Intervention, Pretests Posttests
Peer reviewed
Direct link
St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D. – Journal of Educational and Behavioral Statistics, 2016
We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…
Descriptors: Research Methodology, Randomized Controlled Trials, Comparative Analysis, Time
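As a rough illustration of the CITS logic (not the authors' data or specification): fit a regression with pre/post and treated/comparison indicators; the treated-by-post interaction estimates the intervention's level effect net of the comparison series' own pre/post change. Everything below, including the effect size of 3, is simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for treated in (0, 1):
    for t in range(-8, 8):                   # 8 periods before/after the interruption
        post = int(t >= 0)
        y = (50 + 0.5 * t                    # shared trend
             + 2.0 * post                    # shock common to both series
             + treated * (1.0 + 3.0 * post)  # group offset + true effect of 3
             + rng.normal(0, 1))
        rows.append(dict(y=y, t=t, post=post, treated=treated))
df = pd.DataFrame(rows)

m = smf.ols("y ~ t + post + treated + t:treated + post:treated", data=df).fit()
print(m.params)   # 'post:treated' is the CITS level-effect estimate (~3)
```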
Peer reviewed
PDF on ERIC Download full text
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
Peer reviewed
Direct link
Maggin, Daniel M.; Briesch, Amy M.; Chafouleas, Sandra M.; Ferguson, Tyler D.; Clark, Courtney – Journal of Behavioral Education, 2014
The use of single-case research methods for validating academic and behavioral interventions has gained considerable attention in recent years. As such, there has been a proliferation of methods for evaluating whether, and to what extent, primary research reports provide evidence of intervention effectiveness. Despite the recent interest in…
Descriptors: Scoring Rubrics, Research Methodology, Intervention, Evaluation Methods
Peer reviewed
Direct link
Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick – Journal of Special Education, 2015
Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…
Descriptors: Standards, Research Methodology, Comparative Analysis, Experiments
Peer reviewed
Direct link
Berliner, David C. – Teachers College Record, 2015
Trying to understand PISA is analogous to the parable of the blind men and the elephant. There are many facets of the PISA program, and thus many ways to both applaud and critique this ambitious international program of assessment that has gained enormous importance in the crafting of contemporary educational policy. One of the facets discussed in…
Descriptors: Achievement Tests, Standardized Tests, Educational Assessment, Educational Indicators
Peer reviewed
Direct link
Lei, Wu; Qing, Fang; Zhou, Jin – International Journal of Distance Education Technologies, 2016
User evaluations of resources on a recommender system are usually limited, which produces an extremely sparse user rating matrix and greatly reduces the accuracy of personalized recommendation, especially for new users or new items. This paper presents a recommendation method based on rating prediction using causal association rules.…
Descriptors: Causal Models, Attribution Theory, Correlation, Evaluation Methods
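A toy version of rating prediction from association rules, under stated assumptions (hypothetical data and thresholds; the paper's specific causal-rule algorithm is not reproduced here): binarize ratings into "liked" events, score pairwise rules "liked A => liked B" by confidence over users who rated both items, and predict an unrated item when a sufficiently confident rule fires from an item the user already liked.

```python
import numpy as np

# Hypothetical 0-5 rating matrix (0 = unrated), users x items
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 4, 0, 1],
    [1, 0, 2, 5, 4],
    [0, 1, 1, 4, 5],
    [5, 5, 4, 0, 0],
])
liked = R >= 4                      # binarize ratings into "liked" events

def rule_confidence(a, b):
    """Confidence of the rule 'liked a => liked b', computed only
    over users who rated both items."""
    both_rated = (R[:, a] > 0) & (R[:, b] > 0)
    antecedent = liked[:, a] & both_rated
    if antecedent.sum() == 0:
        return 0.0
    return (antecedent & liked[:, b]).sum() / antecedent.sum()

def predict_liked(user, item, min_conf=0.5):
    """Predict that `user` likes unrated `item` if any sufficiently
    confident rule fires from an item the user already liked."""
    fired = [rule_confidence(a, item)
             for a in range(R.shape[1])
             if a != item and liked[user, a]]
    return max(fired, default=0.0) >= min_conf

print(predict_liked(user=4, item=3))  # user 4's unrated item 3
```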