Showing 1 to 15 of 307 results
Peer reviewed
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely utilized to identify latent factors underlying observed variables. This paper presents a comprehensive comparative study of two widely used methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
Paul J. Dizona – ProQuest LLC, 2022
Missing data is a common challenge for researchers in almost every field. In particular, human participants do not always respond or return for assessments, leaving the researcher to rely on missing data methods. The most common methods (i.e., Multiple Imputation and Full Information Maximum Likelihood) assume that the…
Descriptors: Pretests Posttests, Research Design, Research Problems, Dropouts
Peer reviewed
De Jesus Alvares Mendes Junior, Isaias; Alves, Maria Do Céu – Cogent Education, 2023
Several studies with quantitative, qualitative, or theoretical approaches have been carried out, focusing on the application of the Balanced Scorecard (BSC) in the educational sector. However, despite this literature, it is difficult to form an overview of the use of the BSC in this sector. In order to fill this gap, our research…
Descriptors: Evaluation Methods, Statistical Analysis, Research Reports, Databases
Peer reviewed
Heckman, Sarah; Carver, Jeffrey C.; Sherriff, Mark; Al-zubidy, Ahmed – ACM Transactions on Computing Education, 2022
Context: Computing Education Research (CER) is critical to help the computing education community and policy makers support the increasing population of students who need to learn computing skills for future careers. For a community to systematically advance knowledge about a topic, the members must be able to understand published work thoroughly…
Descriptors: Computer Science Education, Educational Research, Periodicals, Replication (Evaluation)
Peer reviewed
Gunduz, Ali; Gunduzalp, Cengiz; Kocka, Ömer; Goktas, Yüksel – Participatory Educational Research, 2023
This study investigates trends in doctoral dissertations produced in Türkiye in the field of Educational Technologies. A total of 292 doctoral dissertations completed between 2011 and 2020 were examined using a document analysis design. The dissertations were subjected to content analysis using the Dissertation…
Descriptors: Educational Technology, Educational Research, Trend Analysis, Content Analysis
Jacob M. Schauer; Kaitlyn G. Fitzgerald; Sarah Peko-Spicer; Mena C. R. Whalen; Rrita Zejnullahi; Larry V. Hedges – Grantee Submission, 2021
Several programs of research have sought to assess the replicability of scientific findings in different fields, including economics and psychology. These programs attempt to replicate several findings and use the results to say something about large-scale patterns of replicability in a field. However, little work has been done to understand the…
Descriptors: Statistical Analysis, Research Methodology, Evaluation Methods, Replication (Evaluation)
Peer reviewed
Mau, Steffen – International Studies in Sociology of Education, 2020
The process of quantification is a powerful development shaping many domains of life today. In the area of education, for example, performance measurement, testing and ranking have become common tools of governance. Quantification is not a neutral way of describing society, but a process of valorisation. It has three sociologically relevant…
Descriptors: Statistical Analysis, Social Influences, Research Methodology, Evaluation Methods
Peer reviewed
Margulieux, Lauren; Ketenci, Tuba Ayer; Decker, Adrienne – Computer Science Education, 2019
Background and context: The variables that researchers measure and how they measure them are central in any area of research, including computing education. Which research questions can be asked and how they are answered depends on measurement. Objective: To summarize the commonly used variables and measurements in computing education and to…
Descriptors: Measurement Techniques, Standards, Evaluation Methods, Computer Science Education
Peer reviewed
Kenneth R. Jones; Eugenia P. Gwynn; Allison M. Teeter – Journal of Human Sciences & Extension, 2019
This article provides insight into how an adequate approach to selecting methods can establish credible and actionable evidence. The authors offer strategies to effectively support Extension professionals, including program developers and evaluators, in being more deliberate when selecting appropriate qualitative and quantitative methods. In…
Descriptors: Evaluation Methods, Credibility, Evidence, Evaluation Criteria
Peer reviewed
Duncan, Dustin T.; Kapadia, Farzana; Kirchner, Thomas R.; Goedel, William C.; Brady, William J.; Halkitis, Perry N. – Journal of LGBT Youth, 2017
The study evaluated the acceptability of text message- and voice-based ecological momentary assessment (EMA) methods among a sample (N = 74) of young men who have sex with men (MSM). Almost all participants (96%) reported that they would be willing to accept texts on their…
Descriptors: Males, Homosexuality, Evaluation Methods, Telecommunications
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon – American Journal of Evaluation, 2018
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Descriptors: Bayesian Statistics, Evaluation Methods, Statistical Analysis, Hypothesis Testing
Peer reviewed
Ades, A. E.; Lu, Guobing; Dias, Sofia; Mayo-Wilson, Evan; Kounali, Daphne – Research Synthesis Methods, 2015
Objective: Trials often report several similar outcomes measured on different test instruments. We explored a method for synthesising treatment effect information both within and between trials and for reporting treatment effects on a common scale as an alternative to standardisation. Study design: We applied a procedure that simultaneously…
Descriptors: Research Methodology, Evaluation Methods, Metabolism, Accuracy
Peer reviewed
Pavelko, Stacey L.; Owens, Robert E., Jr. – Language, Speech, and Hearing Services in Schools, 2017
Purpose: The purpose of this study was to document whether mean length of utterance (MLU-S), total number of words (TNW), clauses per sentence (CPS), and/or words per sentence (WPS) demonstrated age-related changes in children with typical language and to document the average time to collect, transcribe, and analyze conversational…
Descriptors: Speech Acts, Sentences, Grammar, Children
Peer reviewed
Solomon, Benjamin G.; Howard, Taylor K.; Stein, Brit'ny L. – Journal of Behavioral Education, 2015
The use of single-case effect sizes (SCESs) has increased in the intervention literature. Meta-analyses based on single-case data have also increased in popularity. However, few researchers who have adopted these metrics have provided an adequate rationale for their selection. We review several important statistical assumptions that should be…
Descriptors: Effect Size, Intervention, Statistical Analysis, Evaluation Methods
Peer reviewed
Knook, Jorie; Eory, Vera; Brander, Matthew; Moran, Dominic – Journal of Agricultural Education and Extension, 2018
Purpose: Participatory extension programmes are widely used to promote change in the agricultural sector, and an important question is how best to measure the effectiveness of such programmes after implementation. This study seeks to understand the current state of practice through a review of ex post evaluations of participatory extension…
Descriptors: Extension Education, Agricultural Occupations, Program Evaluation, Program Effectiveness