Showing 1 to 15 of 34 results
Peer reviewed
Lu Qin; Shishun Zhao; Wenlai Guo; Tiejun Tong; Ke Yang – Research Synthesis Methods, 2024
Network meta-analysis is being applied increasingly widely, and its successful implementation requires that the direct and indirect comparison results be consistent. Proper detection of inconsistency is therefore often a key issue in network meta-analysis, as whether the results can be…
Descriptors: Meta Analysis, Network Analysis, Bayesian Statistics, Comparative Analysis
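The consistency requirement described in this abstract can be illustrated, in the simplest single-loop case, with Bucher's classic check on a direct versus an indirect estimate. The estimates, standard errors, and variable names below are invented for illustration and are not from the cited paper.

```python
import math

# Hypothetical summary estimates (e.g., log odds ratios) for one comparison
# loop: a "direct" A-vs-B estimate and an "indirect" one via A-vs-C and C-vs-B.
d_direct, se_direct = 0.40, 0.15      # assumed direct estimate and SE
d_indirect, se_indirect = 0.10, 0.20  # assumed indirect estimate and SE

# Bucher's method: the inconsistency factor is the difference between the two
# estimates, with variance equal to the sum of the variances (independence assumed).
diff = d_direct - d_indirect
se_diff = math.sqrt(se_direct**2 + se_indirect**2)
z = diff / se_diff

# Two-sided p-value from the standard normal: 2*(1 - Phi(|z|)) = erfc(|z|/sqrt(2)).
p = math.erfc(abs(z) / math.sqrt(2))
print(f"inconsistency = {diff:.2f}, z = {z:.2f}, p = {p:.3f}")
```

A small p here would flag disagreement between the direct and indirect evidence; with these invented numbers the loop looks consistent.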
Peer reviewed
Full text PDF available on ERIC
Tan, Teck Kiang – Practical Assessment, Research & Evaluation, 2023
Researchers comparing group means often have hypotheses concerning the state of affairs in the population from which they sampled their data. The classical frequentist approach provides one way of carrying out hypothesis testing using ANOVA to state the null hypothesis that there is no difference in the means and proceed with multiple comparisons…
Descriptors: Comparative Analysis, Hypothesis Testing, Statistical Analysis, Guidelines
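The classical frequentist route this abstract describes can be sketched end to end with a hand-rolled one-way ANOVA F statistic. The group data below are invented for illustration.

```python
# Three hypothetical group samples (invented numbers for illustration).
groups = [
    [4.1, 5.0, 4.6, 5.2],
    [5.8, 6.1, 5.5, 6.4],
    [4.9, 5.3, 5.1, 4.8],
]

k = len(groups)                        # number of groups
n = sum(len(g) for g in groups)        # total sample size
grand_mean = sum(x for g in groups for x in g) / n

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# F = (SSB / (k-1)) / (SSW / (n-k)); a large F casts doubt on equal means.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {f_stat:.2f}")
```

Rejecting the omnibus null would then normally be followed by the multiple comparisons the abstract mentions.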
Peer reviewed
Joo, Seang-Hwane; Lee, Philseok – Journal of Educational Measurement, 2022
This study proposes a new Bayesian differential item functioning (DIF) detection method using posterior predictive model checking (PPMC). Item fit measures including infit, outfit, observed score distribution (OSD), and Q1 were considered as discrepancy statistics for the PPMC DIF methods. The performance of the PPMC DIF method was…
Descriptors: Test Items, Bayesian Statistics, Monte Carlo Methods, Prediction
Peer reviewed
Simpson, Adrian – Educational Researcher, 2019
A recent paper uses Bayes factors to argue a large minority of rigorous, large-scale education RCTs are "uninformative." The definition of "uninformative" depends on the authors' hypothesis choices for calculating Bayes factors. These arguably overadjust for effect size inflation and involve a fixed prior distribution,…
Descriptors: Randomized Controlled Trials, Bayesian Statistics, Educational Research, Program Evaluation
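Simpson's point that the "uninformative" verdict hinges on the prior used in the Bayes factor can be illustrated with a toy normal model. The effect estimate, standard error, and prior scales below are assumptions for illustration, not values from the paper.

```python
import math

def normal_pdf(x, var):
    """Density of a mean-zero normal with variance var, evaluated at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

# Hypothetical trial summary: effect estimate 0.10 with standard error 0.05
# (z = 2, conventionally "significant"). Numbers invented for illustration.
effect, se = 0.10, 0.05

# BF01 compares H0: delta = 0 against H1: delta ~ N(0, tau^2).
# Marginally, the estimate is N(0, se^2) under H0 and N(0, se^2 + tau^2) under H1.
bf01 = {}
for tau in (0.05, 0.20, 1.00):
    bf01[tau] = normal_pdf(effect, se**2) / normal_pdf(effect, se**2 + tau**2)
    print(f"prior sd tau = {tau:.2f}:  BF01 = {bf01[tau]:.2f}")
```

With a narrow prior the Bayes factor favors the effect; with a diffuse prior the same data favor the null — the prior-sensitivity at the heart of the dispute.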
Peer reviewed
Wilcox, Rand R.; Serang, Sarfaraz – Educational and Psychological Measurement, 2017
The article provides perspectives on p values, null hypothesis testing, and alternative techniques in light of modern robust statistical methods. Null hypothesis testing and "p" values can provide useful information provided they are interpreted in a sound manner, which includes taking into account insights and advances that have…
Descriptors: Hypothesis Testing, Bayesian Statistics, Computation, Effect Size
Peer reviewed
Full text PDF available on ERIC
Henman, Paul; Brown, Scott D.; Dennis, Simon – Australian Universities' Review, 2017
In 2015, the Australian Government's Excellence in Research for Australia (ERA) assessment of research quality declined to rate 1.5 per cent of submissions from universities. The public debate focused on practices of gaming or "coding errors" within university submissions as the reason for this outcome. The issue was about the…
Descriptors: Rating Scales, Foreign Countries, Universities, Achievement Rating
Peer reviewed
Vrieze, Scott I. – Psychological Methods, 2012
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
Descriptors: Factor Analysis, Statistical Analysis, Psychology, Interviews
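The AIC/BIC trade-off this review discusses can be made concrete with a toy model-selection calculation under Gaussian errors. The residual sums of squares, sample size, and parameter counts below are invented for illustration.

```python
import math

def aic_bic(rss, n, k):
    # For Gaussian errors the maximized log-likelihood is, up to a constant,
    # -(n/2) * ln(RSS/n); both criteria use n * ln(RSS/n) plus a penalty term.
    base = n * math.log(rss / n)
    return base + 2 * k, base + k * math.log(n)

# Hypothetical fit summaries (invented): a 2-parameter and a 5-parameter
# regression of the same n = 50 observations.
n = 50
aic_small, bic_small = aic_bic(rss=120.0, n=n, k=2)
aic_big, bic_big = aic_bic(rss=96.0, n=n, k=5)

# ln(50) ≈ 3.9 > 2, so BIC charges more per extra parameter than AIC;
# with these numbers the two criteria disagree about which model to keep.
print(f"AIC prefers: {'big' if aic_big < aic_small else 'small'}")
print(f"BIC prefers: {'big' if bic_big < bic_small else 'small'}")
```

This disagreement reflects the criteria's different aims (predictive accuracy versus consistent selection of the true model), which is the contrast the article examines.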
Peer reviewed
Wang, Lijuan; Hamaker, Ellen; Bergeman, C. S. – Psychological Methods, 2012
Intra-individual variability over a short period of time may contain important information about how individuals differ from each other. In this article we begin by discussing diverse indicators for quantifying intra-individual variability and indicate their advantages and disadvantages. Then we propose an alternative method that models…
Descriptors: Evaluation Methods, Data Analysis, Individual Differences, Models
Peer reviewed
Lu, Hongjing; Chen, Dawn; Holyoak, Keith J. – Psychological Review, 2012
How can humans acquire relational representations that enable analogical inference and other forms of high-level reasoning? Using comparative relations as a model domain, we explore the possibility that bottom-up learning mechanisms applied to objects coded as feature vectors can yield representations of relations sufficient to solve analogy…
Descriptors: Inferences, Thinking Skills, Comparative Analysis, Models
Peer reviewed
Kuiper, Rebecca M.; Hoijtink, Herbert – Psychological Methods, 2010
This article discusses comparisons of means using exploratory and confirmatory approaches. Three methods are discussed: hypothesis testing, model selection based on information criteria, and Bayesian model selection. Throughout the article, an example is used to illustrate and evaluate the two approaches and the three methods. We demonstrate that…
Descriptors: Models, Testing, Hypothesis Testing, Probability
Peer reviewed
Perfors, Amy; Tenenbaum, Joshua B.; Griffiths, Thomas L.; Xu, Fei – Cognition, 2011
We present an introduction to Bayesian inference as it is used in probabilistic models of cognitive development. Our goal is to provide an intuitive and accessible guide to the "what", the "how", and the "why" of the Bayesian approach: what sorts of problems and data the framework is most relevant for, and how and why it may be useful for…
Descriptors: Bayesian Statistics, Cognitive Psychology, Inferences, Cognitive Development
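The "what" and "how" of Bayesian inference this tutorial introduces reduce, in the simplest case, to multiplying a prior by a likelihood and renormalizing. A minimal grid sketch, with invented coin-flip data:

```python
# Infer a coin's heads probability from 7 heads in 10 flips,
# starting from a uniform prior over a grid of candidate values.
grid = [i / 100 for i in range(101)]             # candidate values of theta
prior = [1.0 / len(grid)] * len(grid)            # uniform prior

heads, flips = 7, 10
likelihood = [t**heads * (1 - t)**(flips - heads) for t in grid]

# Bayes' rule: posterior ∝ prior × likelihood, then normalize to sum to 1.
unnorm = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

post_mean = sum(t * p for t, p in zip(grid, posterior))
print(f"posterior mean = {post_mean:.3f}")       # near (7+1)/(10+2) = 0.667
```

The same three-step recipe — prior, likelihood, normalize — underlies the far richer cognitive models the article surveys.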
Peer reviewed
Klugkist, Irene; van Wesel, Floryt; Bullens, Jessie – International Journal of Behavioral Development, 2011
Null hypothesis testing (NHT) is the most commonly used tool in empirical psychological research even though it has several known limitations. It is argued that since the hypotheses evaluated with NHT do not reflect the research-question or theory of the researchers, conclusions from NHT must be formulated with great modesty, that is, they cannot…
Descriptors: Psychological Studies, Hypothesis Testing, Researchers, Evaluation Methods
Peer reviewed
Jenkins, Melissa M.; Youngstrom, Eric A.; Youngstrom, Jennifer Kogos; Feeny, Norah C.; Findling, Robert L. – Psychological Assessment, 2012
Bipolar disorder is frequently clinically diagnosed in youths who do not actually satisfy Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision; DSM-IV-TR; American Psychiatric Association, 2000) criteria, yet cases that would satisfy full DSM-IV-TR criteria are often undetected clinically. Evidence-based assessment methods…
Descriptors: Evidence, Mental Health, Mental Disorders, Clinical Diagnosis
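Evidence-based assessment of the kind this abstract describes often rests on simple Bayesian updating of a base rate by a test's likelihood ratio. The prevalence, sensitivity, and specificity below are invented, not taken from the cited study.

```python
# Probability-nomogram arithmetic: update a base-rate prior with a
# diagnostic test's likelihood ratio. All numbers are hypothetical.
prior = 0.05                       # assumed base rate of the disorder
sens, spec = 0.80, 0.90            # assumed sensitivity and specificity
lr_pos = sens / (1 - spec)         # likelihood ratio for a positive result

prior_odds = prior / (1 - prior)
post_odds = prior_odds * lr_pos    # Bayes' rule on the odds scale
posterior = post_odds / (1 + post_odds)
print(f"P(disorder | positive test) = {posterior:.3f}")
```

Even a fairly accurate test leaves the post-test probability well below certainty when the base rate is low, which is why sequential, evidence-based updating matters in diagnosis.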
Peer reviewed
Maraun, Michael; Gabriel, Stephanie – Psychological Methods, 2010
In his article, "An Alternative to Null-Hypothesis Significance Tests," Killeen (2005) urged the discipline to abandon the practice of "p[subscript obs]"-based null hypothesis testing and to quantify the signal-to-noise characteristics of experimental outcomes with replication probabilities. He described the coefficient that he…
Descriptors: Hypothesis Testing, Statistical Inference, Probability, Statistical Significance
Peer reviewed
Ruscio, John – Assessment, 2009
Determining whether individuals belong to different latent classes (taxa) or vary along one or more latent factors (dimensions) has implications for assessment. For example, no instrument can simultaneously maximize the efficiency of categorical and continuous measurement. Methods such as taxometric analysis can test the relative fit of taxonic…
Descriptors: Classification, Measurement, Measurement Techniques, Evaluation Research