Showing 1 to 15 of 52 results
Peer reviewed
Fu, Qiang; Guo, Xin; Land, Kenneth C. – Sociological Methods & Research, 2020
Count responses with grouping and right censoring have long been used in surveys to study a variety of behaviors, status, and attitudes. Yet grouping or right-censoring decisions of count responses still rely on arbitrary choices made by researchers. We develop a new method for evaluating grouping and right-censoring decisions of count responses…
Descriptors: Surveys, Artificial Intelligence, Evaluation Methods, Probability
Peer reviewed
Deke, John; Finucane, Mariel; Thal, Daniel – National Center for Education Evaluation and Regional Assistance, 2022
BASIE is a framework for interpreting impact estimates from evaluations. It is an alternative to null hypothesis significance testing. This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting impact estimates, interpreting impact estimates, and conducting sensitivity analyses. The guide…
Descriptors: Bayesian Statistics, Educational Research, Data Interpretation, Hypothesis Testing
Greifer, Noah – ProQuest LLC, 2018
There has been some research in the use of propensity scores in the context of measurement error in the confounding variables; one recommended method is to generate estimates of the mis-measured covariate using a latent variable model, and to use those estimates (i.e., factor scores) in place of the covariate. I describe a simulation study…
Descriptors: Evaluation Methods, Probability, Scores, Statistical Analysis
Peer reviewed
Henman, Paul; Brown, Scott D.; Dennis, Simon – Australian Universities' Review, 2017
In 2015, the Australian Government's Excellence in Research for Australia (ERA) assessment of research quality declined to rate 1.5 per cent of submissions from universities. The public debate focused on practices of gaming or "coding errors" within university submissions as the reason for this outcome. The issue was about the…
Descriptors: Rating Scales, Foreign Countries, Universities, Achievement Rating
Peer reviewed
Porter, Kristin E. – Society for Research on Educational Effectiveness, 2016
In recent years, there has been increasing focus on the issue of multiple hypotheses testing in education evaluation studies. In these studies, researchers are typically interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time or across multiple treatment groups. When…
Descriptors: Hypothesis Testing, Intervention, Error Patterns, Evaluation Methods
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon – American Journal of Evaluation, 2018
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Descriptors: Bayesian Statistics, Evaluation Methods, Statistical Analysis, Hypothesis Testing
Peer reviewed
Kovalchik, Stephanie A.; Martino, Steven C.; Collins, Rebecca L.; Shadel, William G.; D'Amico, Elizabeth J.; Becker, Kirsten – Journal of Educational and Behavioral Statistics, 2018
Ecological momentary assessment (EMA) is a popular assessment method in psychology that aims to capture events, emotions, and cognitions in real time, usually repeatedly throughout the day. Because EMA typically involves more intensive monitoring than traditional assessment methods, missing data are commonly an issue and this missingness may bias…
Descriptors: Probability, Statistical Bias, Holistic Approach, Evaluation Methods
Peer reviewed
Haberman, Shelby J.; Lee, Yi-Hsuan – ETS Research Report Series, 2017
In investigations of unusual testing behavior, a common question is whether a specific pattern of responses occurs unusually often within a group of examinees. In many current tests, modern communication techniques can permit quite large numbers of examinees to share keys, or common response patterns, to the entire test. To address this issue,…
Descriptors: Student Evaluation, Testing, Item Response Theory, Maximum Likelihood Statistics
Peer reviewed
Ding, Peng; Feller, Avi; Miratrix, Luke – Society for Research on Educational Effectiveness, 2015
Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, is in contrast to much of the foundational research on causal inference. Linear models, for example, classically rely on constant treatment effect assumptions, or treatment effects defined by…
Descriptors: Causal Models, Randomized Controlled Trials, Statistical Analysis, Evaluation Methods
Peer reviewed
Chan, Wendy – Journal of Research on Educational Effectiveness, 2017
Recent methods to improve generalizations from nonrandom samples typically invoke assumptions such as the strong ignorability of sample selection, which is challenging to meet in practice. Although researchers acknowledge the difficulty in meeting this assumption, point estimates are still provided and used without considering alternative…
Descriptors: Generalization, Inferences, Probability, Educational Research
Dietrich, Cecile C.; Lichtenberger, Eric J. – Sage Research Methods Cases, 2016
We present a case study of the process through which a methodology was developed and applied to a quasi-experimental research study that employed propensity score matching. Methodological decisions are discussed and summarized, including an explanation of the approaches selected for each step in the study as well as rationales for these…
Descriptors: Test Construction, Quasiexperimental Design, Community Colleges, Fees
Peer reviewed
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods
Peer reviewed
Jacovidis, Jessica N.; Foelber, Kelly J.; Horst, S. Jeanne – Journal of Experimental Education, 2017
Often program administrators are interested in knowing how students benefit from participation in programs compared to students who do not participate. Such comparisons may be sullied by the fact that participants self-select into programs, resulting in differences between groups prior to programming. By controlling for…
Descriptors: Probability, Scores, Statistical Analysis, Student Evaluation
Peer reviewed
Shieh, Gwowen – Journal of Experimental Education, 2015
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Descriptors: Statistical Analysis, Sample Size, Computation, Effect Size
Ostrow, Korinn; Donnelly, Christopher; Heffernan, Neil – International Educational Data Mining Society, 2015
As adaptive tutoring systems grow increasingly popular for the completion of classwork and homework, it is crucial to assess the manner in which students are scored within these platforms. The majority of systems, including ASSISTments, return the binary correctness of a student's first attempt at solving each problem. Yet for many teachers,…
Descriptors: Intelligent Tutoring Systems, Scoring, Testing, Credits