Showing 1 to 15 of 49 results
Kylie L. Anglin – Annenberg Institute for School Reform at Brown University, 2025
Since 2018, institutions of higher education have been aware of the "enrollment cliff," which refers to expected declines in future enrollment. This paper attempts to describe how prepared Ohio institutions are for this future by looking at trends leading up to the anticipated decline. Using IPEDS data from 2012-2022, we analyze trends…
Descriptors: Validity, Artificial Intelligence, Models, Best Practices
Peer reviewed
Direct link
Parkkinen, Veli-Pekka; Baumgartner, Michael – Sociological Methods & Research, 2023
In recent years, proponents of configurational comparative methods (CCMs) have advanced various dimensions of robustness as instrumental to model selection. But these robustness considerations have not led to computable robustness measures, and they have typically been applied to the analysis of real-life data with unknown underlying causal…
Descriptors: Robustness (Statistics), Comparative Analysis, Causal Models, Models
Peer reviewed
Direct link
Marcoulides, Katerina M.; Yuan, Ke-Hai – International Journal of Research & Method in Education, 2020
Multilevel structural equation models (MSEM) are typically evaluated on the basis of goodness of fit indices. A problem with these indices is that they pertain to the entire model, reflecting simultaneously the degree of fit for all levels in the model. Consequently, in cases that lack model fit, it is unclear which level model is misspecified.…
Descriptors: Goodness of Fit, Structural Equation Models, Correlation, Inferences
Peer reviewed
PDF on ERIC | Download full text
Weidlich, Joshua; Gašević, Dragan; Drachsler, Hendrik – Journal of Learning Analytics, 2022
As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible or desirable. Instead, researchers often rely on observational data, based on which they…
Descriptors: Causal Models, Inferences, Learning Analytics, Comparative Analysis
Peer reviewed
Direct link
Marmolejo-Ramos, Fernando; Cousineau, Denis – Educational and Psychological Measurement, 2017
The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…
Descriptors: Hypothesis Testing, Bayesian Statistics, Evaluation Methods, Statistical Inference
Peer reviewed
PDF on ERIC | Download full text
Finch, Holmes – Practical Assessment, Research & Evaluation, 2022
Researchers in many disciplines work with ranking data. This data type is unique in that it is often deterministic in nature (the ranks of items "k"-1 determine the rank of item "k"), and the difference in a pair of rank scores separated by "k" units is equivalent regardless of the actual values of the two ranks in…
Descriptors: Data Analysis, Statistical Inference, Models, College Faculty
Peer reviewed
PDF on ERIC | Download full text
Troy, Jesse D.; Neely, Megan L.; Pomann, Gina-Maria; Grambow, Steven C.; Samsa, Gregory P. – Journal of Curriculum and Teaching, 2022
Student evaluation is a key consideration for educational program administrators because program success depends on students' ability to demonstrate successful development of core competencies. Student evaluations must therefore be aligned with learning objectives and overall program goals. Graduate level educational programs typically incorporate…
Descriptors: Student Evaluation, Evaluation Methods, Statistics Education, Alignment (Education)
Peer reviewed
Direct link
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
Peer reviewed
PDF on ERIC | Download full text
Hodkowski, Nicola M.; Gardner, Amber; Jorgensen, Cody; Hornbein, Peter; Johnson, Heather L.; Tzur, Ron – North American Chapter of the International Group for the Psychology of Mathematics Education, 2016
In this paper we examine the application of Tzur's (2007) fine-grained assessment to the design of an assessment measure of a particular multiplicative scheme, so that non-interview, good-enough data can be obtained (on a large scale) to support inferences about elementary students' reasoning. We outline three design principles that surfaced through our recent…
Descriptors: Elementary School Students, Mathematics Instruction, Multiplication, Thinking Skills
Peer reviewed
Direct link
Sloman, Steven A. – Cognitive Science, 2013
Judea Pearl won the 2010 Rumelhart Prize in computational cognitive science for his seminal contributions to the development of Bayes nets and causal Bayes nets, frameworks that are central to multiple domains of the computational study of mind. At the heart of the causal Bayes nets formalism is the notion of a counterfactual, a representation…
Descriptors: Causal Models, Cognitive Psychology, Cognitive Science, Cognitive Processes
Peer reviewed
Direct link
Callister Everson, Kimberlee; Feinauer, Erika; Sudweeks, Richard R. – Harvard Educational Review, 2013
In this article, the authors provide a methodological critique of the current standard of value-added modeling forwarded in educational policy contexts as a means of measuring teacher effectiveness. Conventional value-added estimates of teacher quality are attempts to determine to what degree a teacher would theoretically contribute, on average,…
Descriptors: Teacher Evaluation, Teacher Effectiveness, Evaluation Methods, Accountability
Funnell, Sue C.; Rogers, Patricia J. – Jossey-Bass, An Imprint of Wiley, 2011
Between good intentions and great results lies a program theory — not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. "Purposeful Program Theory" shows how to develop…
Descriptors: Models, Logical Thinking, Evaluation Methods, Program Evaluation
Levy, Roy – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2014
Digital games offer an appealing environment for assessing student proficiencies, including skills and misconceptions in a diagnostic setting. This paper proposes a dynamic Bayesian network modeling approach for observations of student performance from an educational video game. A Bayesian approach to model construction, calibration, and use in…
Descriptors: Video Games, Educational Games, Bayesian Statistics, Observation
Peer reviewed
Direct link
Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff – Career and Technical Education Research, 2012
Confidence intervals (CIs) and effect sizes are essential to encouraging meta-analytic thinking and accumulating research findings. CIs provide a range of plausible values for population parameters, with a degree of confidence that the parameter lies in that particular interval. CIs also indicate how precise the estimates are. Comparison…
Descriptors: Vocational Education, Effect Size, Intervals, Self Esteem
Peer reviewed
Direct link
Ruiz-Primo, Maria Araceli; Li, Min; Wills, Kellie; Giamellaro, Michael; Lan, Ming-Chih; Mason, Hillary; Sands, Deanna – Journal of Research in Science Teaching, 2012
The purpose of this article is to address a major gap in the instructional sensitivity literature on how to develop instructionally sensitive assessments. We propose an approach to developing and evaluating instructionally sensitive assessments in science and test this approach with one elementary life-science module. The assessment we developed…
Descriptors: Effect Size, Inferences, Student Centered Curriculum, Test Construction