Publication Date
In 2025: 2
Since 2024: 6
Since 2021 (last 5 years): 43
Since 2016 (last 10 years): 374
Since 2006 (last 20 years): 1115
Descriptor
Evaluation Methods: 1884
Statistical Analysis: 1884
Foreign Countries: 389
Research Methodology: 307
Comparative Analysis: 287
Models: 246
Program Evaluation: 240
Student Evaluation: 226
Qualitative Research: 221
Higher Education: 211
Correlation: 201
Audience
Researchers: 66
Practitioners: 40
Administrators: 20
Teachers: 15
Policymakers: 7
Students: 7
Counselors: 3
Media Staff: 1
Support Staff: 1
Location
United Kingdom: 33
Australia: 32
Turkey: 28
Florida: 23
California: 20
Iran: 19
United States: 18
Canada: 16
Netherlands: 14
Germany: 13
Pennsylvania: 13
What Works Clearinghouse Rating
Does not meet standards: 2
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely used to identify latent factors underlying observed variables. This paper presents a comprehensive comparative study of two common methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
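As a minimal sketch of the two classical retention rules compared above (the simulation setup and function below are illustrative, not taken from the paper): the K1 rule retains factors whose eigenvalues exceed 1, while parallel analysis retains those whose eigenvalues exceed the mean eigenvalues of correlation matrices computed from random data of the same dimensions.

import numpy as np

def retained_factors(data, n_sims=200, seed=0):
    """Compare the K1 rule and parallel analysis on one data set.

    Both rules inspect eigenvalues of the sample correlation matrix;
    parallel analysis replaces the fixed cutoff of 1 with mean
    eigenvalues from uncorrelated random data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    # K1 (Kaiser) rule: retain factors with eigenvalue greater than 1.
    k1 = int(np.sum(eigvals > 1.0))

    # Parallel analysis: reference eigenvalues from random normal data.
    ref = np.zeros((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        ref[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    pa = int(np.sum(eigvals > ref.mean(axis=0)))
    return k1, pa

# Toy example: 3 correlated blocks of 4 indicators each (true k = 3).
rng = np.random.default_rng(1)
factors = rng.standard_normal((300, 3))
loadings = np.kron(np.eye(3), np.ones((1, 4)))  # each factor loads on 4 items
data = factors @ loadings * 0.8 + rng.standard_normal((300, 12)) * 0.6
print(retained_factors(data))  # both rules should recover 3 factors here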
Elayne P. Colón; Lori M. Dassa; Thomas M. Dana; Nathan P. Hanson – Action in Teacher Education, 2024
To meet accreditation expectations, teacher preparation programs must demonstrate their candidates are evaluated using summative assessment tools that yield sound, reliable, and valid data. These tools are primarily used by the clinical experience team -- university supervisors and mentor teachers. Institutional beliefs regarding best practices…
Descriptors: Student Teachers, Teacher Interns, Evaluation Methods, Interrater Reliability
Yan Xia; Xinchang Zhou – Educational and Psychological Measurement, 2025
Parallel analysis has been considered one of the most accurate methods for determining the number of factors in factor analysis. One major advantage of parallel analysis over traditional factor retention methods (e.g., Kaiser's rule) is that it addresses the sampling variability of eigenvalues obtained from the identity matrix, representing the…
Descriptors: Factor Analysis, Statistical Analysis, Evaluation Methods, Sampling
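A small simulation (illustrative, not from the article) makes the sampling-variability point concrete: for uncorrelated variables the population correlation matrix is the identity and every population eigenvalue equals 1, yet sample eigenvalues scatter above and below 1, which is why a fixed cutoff of 1 misleads and parallel analysis compares against simulated reference eigenvalues instead.

import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10  # sample size and number of variables (arbitrary)

# Population: independent variables, so the correlation matrix is the
# identity and all population eigenvalues are exactly 1.
largest, smallest = [], []
for _ in range(1000):
    x = rng.standard_normal((n, p))
    ev = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))
    largest.append(ev[-1])
    smallest.append(ev[0])

print("mean largest sample eigenvalue :", np.mean(largest))   # well above 1
print("mean smallest sample eigenvalue:", np.mean(smallest))  # well below 1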
Lanqin Zheng; Zichen Huang; Yang Liu – Journal of Learning for Development, 2024
In recent years, the growing prevalence of blended and online learning has highlighted instructional design concerns, especially in STEM instructional design. Existing studies have often adopted observations, questionnaires, or interviews to evaluate STEM instructional design plans. However, there is still a lack of quantitative, measurable, and…
Descriptors: STEM Education, Preservice Teachers, Information Transfer, Statistical Analysis
Akdere, Mesut; Jiang, Yeling; Lobo, Flavio Destri – European Journal of Training and Development, 2022
Purpose: As new technologies such as immersive and augmented platforms emerge, training approaches are also transforming. The virtual reality (VR) platform provides a completely immersive learning experience for simulated training. Despite the increased prevalence of these technologies, the extant literature is lagging behind in terms of evaluating…
Descriptors: Training, Computer Simulation, Educational Technology, Program Evaluation
Sims, Sam; Anders, Jake; Zieger, Laura – Journal of Research on Educational Effectiveness, 2022
Comparative interrupted time series (CITS) designs evaluate impact by modeling the relative deviation from trends between a treatment group and a comparison group after an intervention. The broad applicability of the design means it is widely used in education research. Like all non-experimental evaluation methods, however, the internal validity of a given…
Descriptors: Validity, Comparative Analysis, Statistical Analysis, Intervention
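A minimal CITS sketch, assuming a statsmodels formula workflow with invented column names and simulated data: group-specific levels and trends are modeled, and the intervention effect is read off the interaction terms that capture the treatment group's post-intervention change in level and slope relative to the comparison group.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
periods = np.arange(20)            # e.g., 20 school terms
cut = 10                           # intervention occurs after period 10
rows = []
for treated in (0, 1):
    for t in periods:
        post = int(t >= cut)
        effect = post * treated * (3.0 + 0.4 * (t - cut))   # true level and slope change
        y = 50 + 0.5 * t + 2.0 * treated + effect + rng.normal(0, 1)
        rows.append(dict(y=y, time=t, treated=treated, post=post))
df = pd.DataFrame(rows)
df["time_since"] = np.maximum(df["time"] - cut, 0)

# CITS: group-specific levels and trends plus post-intervention level and
# slope changes; treated:post and treated:time_since carry the estimated effect.
model = smf.ols(
    "y ~ time + treated + treated:time + post + time_since"
    " + treated:post + treated:time_since",
    data=df,
).fit()
print(model.params[["treated:post", "treated:time_since"]])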
Schamberger, Tamara; Schuberth, Florian; Henseler, Jörg – International Journal of Behavioral Development, 2023
Research in human development often relies on composites, that is, composed variables such as indices. Their composite nature renders these variables inaccessible to conventional factor-centric psychometric validation techniques such as confirmatory factor analysis (CFA). In the context of human development research, there is currently no…
Descriptors: Individual Development, Factor Analysis, Statistical Analysis, Structural Equation Models
Tan, Teck Kiang – Practical Assessment, Research & Evaluation, 2023
Researchers often have hypotheses concerning the state of affairs in the population from which they sampled their data to compare group means. The classical frequentist approach provides one way of carrying out hypothesis testing using ANOVA to state the null hypothesis that there is no difference in the means and proceed with multiple comparisons…
Descriptors: Comparative Analysis, Hypothesis Testing, Statistical Analysis, Guidelines
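For contrast with the informative-hypothesis approach described above, a minimal sketch of the classical frequentist workflow (the groups and data are invented): an omnibus one-way ANOVA followed by pairwise comparisons with a Bonferroni correction.

import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "A": rng.normal(10.0, 2.0, 40),
    "B": rng.normal(11.0, 2.0, 40),
    "C": rng.normal(12.5, 2.0, 40),
}

# Omnibus test of the null hypothesis that all group means are equal.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Follow-up pairwise comparisons with a Bonferroni correction.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {min(p * len(pairs), 1.0):.4f} (Bonferroni)")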
Holcomb, T. Scott; Lambert, Richard; Bottoms, Bryndle L. – Journal of Educational Supervision, 2022
In this study, various statistical indexes of agreement were calculated using empirical data from a group of evaluators (n = 45) of early childhood teachers. The group of evaluators rated ten fictitious teacher profiles using the North Carolina Teacher Evaluation Process (NCTEP) rubric. The exact and adjacent agreement percentages were calculated…
Descriptors: Interrater Reliability, Teacher Evaluation, Statistical Analysis, Early Childhood Teachers
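A minimal sketch of the two simplest indexes mentioned here, exact and adjacent agreement percentages for a pair of raters on an ordinal rubric (the ratings below are invented, not the NCTEP data).

import numpy as np

def agreement_percentages(rater_a, rater_b):
    """Exact agreement: identical ratings.
    Adjacent agreement: ratings within one scale point of each other."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    exact = np.mean(a == b) * 100
    adjacent = np.mean(np.abs(a - b) <= 1) * 100
    return exact, adjacent

# Ten fictitious profiles rated on a 1-5 rubric by two evaluators.
r1 = [3, 4, 2, 5, 3, 4, 1, 2, 4, 3]
r2 = [3, 5, 2, 4, 3, 4, 2, 2, 5, 3]
exact, adjacent = agreement_percentages(r1, r2)
print(f"exact = {exact:.0f}%, adjacent = {adjacent:.0f}%")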
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between…
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
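A small illustration of the counterfactual definition (all numbers invented): each unit's program effect is the difference between its potential outcome with the program and its potential outcome without it; because only one of the two is ever observed, evaluators estimate an average of such differences, here under random assignment.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Potential outcomes: y0 without the program, y1 with it.
y0 = rng.normal(50, 10, n)
y1 = y0 + rng.normal(5, 2, n)          # true individual effects around 5

true_ate = np.mean(y1 - y0)            # average effect; unobservable in practice

# In a real evaluation only one potential outcome per unit is observed.
assigned = rng.integers(0, 2, n)       # here: random assignment
observed = np.where(assigned == 1, y1, y0)
estimated_ate = observed[assigned == 1].mean() - observed[assigned == 0].mean()

print(f"true effect = {true_ate:.2f}, estimated = {estimated_ate:.2f}")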
Bonifay, Wes – Grantee Submission, 2022
Traditional statistical model evaluation typically relies on goodness-of-fit testing and quantifying model complexity by counting parameters. Both of these practices may result in overfitting and have thereby contributed to the generalizability crisis. The information-theoretic principle of minimum description length addresses both of these…
Descriptors: Statistical Analysis, Models, Goodness of Fit, Evaluation Methods
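A rough sketch of the minimum description length idea, using the crude two-part approximation (k/2)·log(n) minus the log-likelihood rather than a refined normalized-maximum-likelihood code, with a polynomial-regression example that is purely illustrative: the preferred model is the one that compresses the data best, so extra parameters are penalized without appealing to a goodness-of-fit cutoff.

import numpy as np

def two_part_mdl(x, y, degree):
    """Crude two-part description length for a polynomial fit:
    (k/2) * log(n) for the parameters plus the negative Gaussian
    log-likelihood of the residuals for the data given the model."""
    n = len(y)
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    sigma2 = np.mean(resid ** 2)
    neg_loglik = 0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                      # degree+1 coefficients plus the noise variance
    return 0.5 * k * np.log(n) + neg_loglik

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(0, 0.3, x.size)  # true degree is 2

for d in range(6):
    print(f"degree {d}: description length = {two_part_mdl(x, y, d):.1f}")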
Han, Jia-Xuan; Ma, Min-Yuan – Education Sciences, 2019
With the rapid development of online courses, digital learning has become a global trend. In this context, this study analyzed the affective cognition of the population with high uptake of online courses and explored which factors attract users to online courses. The key factors that affect consumers' usage of online courses and the…
Descriptors: Online Courses, Correlation, Teaching Methods, Educational Trends
Paul J. Dizona – ProQuest LLC, 2022
Missing data is a common challenge for researchers in almost any field. In particular, human participants in research do not always respond or return for assessments, leaving the researcher to rely on missing data methods. The most common methods (i.e., Multiple Imputation and Full Information Maximum Likelihood) assume that the…
Descriptors: Pretests Posttests, Research Design, Research Problems, Dropouts
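A minimal sketch of one of the methods named above, multiple imputation, using scikit-learn's IterativeImputer on invented pretest-posttest data (FIML is not shown, and the pooling below averages only point estimates rather than applying full Rubin's rules): several completed data sets are generated, each is analyzed, and the estimates are combined.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
pre = rng.normal(50, 10, n)
post = 5 + 0.9 * pre + rng.normal(0, 5, n)

# Make some posttest scores missing, here depending only on the observed pretest.
post_obs = post.copy()
low = pre < 45
post_obs[low] = np.where(rng.random(low.sum()) < 0.5, np.nan, post_obs[low])
X = np.column_stack([pre, post_obs])

# Multiple imputation: m completed data sets, analyze each, pool the estimates.
slopes = []
for m in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imputer.fit_transform(X)
    fit = LinearRegression().fit(completed[:, [0]], completed[:, 1])
    slopes.append(fit.coef_[0])
print("pooled slope estimate:", np.mean(slopes))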
Emma Somer; Carl Falk; Milica Miocevic – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Factor Score Regression (FSR) is increasingly employed as an alternative to structural equation modeling (SEM) in small samples. Despite its popularity in psychology, the performance of FSR in multigroup models with small samples remains relatively unknown. The goal of this study was to examine the performance of FSR, namely Croon's correction and…
Descriptors: Scores, Structural Equation Models, Comparative Analysis, Sample Size
Olanipekun, Oluwaseun L.; Zhao, JuLong; Wang, Rongdong; A. Sedory, Stephen; Singh, Sarjinder – Sociological Methods & Research, 2023
In carrying out surveys involving sensitive characteristics, randomized response models have been considered among the best techniques since they provide the maximum privacy protection to the respondents and procure honest responses. Over the years, researchers have carried out studies on the estimation of proportions of the population possessing…
Descriptors: Correlation, Smoking, Thinking Skills, Health Behavior
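A minimal sketch of the classic Warner randomized response design on which this literature builds (parameters and data invented): each respondent privately answers either the sensitive question or its complement, with a known probability p of receiving the direct question, so an individual answer reveals nothing definite, yet the population proportion is recoverable from the aggregate "yes" rate.

import numpy as np

rng = np.random.default_rng(0)
n = 5_000
true_pi = 0.20   # true proportion with the sensitive attribute (unknown in practice)
p = 0.70         # known probability of being asked the sensitive question directly

sensitive = rng.random(n) < true_pi
direct = rng.random(n) < p          # private randomization outcome per respondent

# A respondent says "yes" if asked the direct question and has the attribute,
# or asked the complement question and does not have it.
yes = np.where(direct, sensitive, ~sensitive)

# Warner (1965) estimator: lambda_hat = P(yes); pi_hat = (lambda_hat + p - 1) / (2p - 1).
lam_hat = yes.mean()
pi_hat = (lam_hat + p - 1) / (2 * p - 1)
print(f"estimated proportion = {pi_hat:.3f} (true = {true_pi})")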