Pardos, Zachary A.; Dailey, Matthew D.; Heffernan, Neil T. – International Journal of Artificial Intelligence in Education, 2011
The well-established, gold-standard approach to finding out what works in education research is to run a randomized controlled trial (RCT) using a standard pre-test and post-test design. RCTs have been used in the intelligent tutoring community for decades to determine which questions and tutorial feedback work best. Practically speaking, however,…
Descriptors: Feedback (Response), Intelligent Tutoring Systems, Pretests Posttests, Educational Research
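To make the pre-test/post-test design concrete, here is a minimal sketch of the standard analysis the abstract refers to: compute each student's learning gain and compare the two feedback conditions with a two-sample t-test. The data, condition names, and effect sizes are all hypothetical, and the paper itself argues for alternatives to exactly this kind of analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post scores for two tutorial-feedback conditions,
# 40 students each (assumed numbers, not from the paper)
pre_a, post_a = rng.normal(50, 10, 40), rng.normal(58, 10, 40)
pre_b, post_b = rng.normal(50, 10, 40), rng.normal(54, 10, 40)

# Learning gain per student, then a two-sample t-test on the gains
gain_a = post_a - pre_a
gain_b = post_b - pre_b
t, p = stats.ttest_ind(gain_a, gain_b)
print(f"mean gain A={gain_a.mean():.2f}, B={gain_b.mean():.2f}, "
      f"t={t:.2f}, p={p:.3f}")
```

Gain-score t-tests are the baseline against which the intelligent-tutoring literature typically measures newer approaches; the practical limitation the abstract gestures at is that each RCT of this form answers only one question at a time.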
Hardin, J. Michael; Anderson, Billie S.; Woodby, Lesa L.; Crawford, Myra A.; Russell, Toya V. – Evaluation Review, 2008
This article explores the statistical methodologies used in demonstration and effectiveness studies when the treatments are applied across multiple settings. The importance of evaluating these types of studies, and how to evaluate them, is discussed. As an alternative to standard methodology, the authors of this article offer an empirical binomial…
Descriptors: Bayesian Statistics, Alternative Assessment, Data Analysis, Statistical Studies
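The abstract is truncated, so the authors' exact model is not visible here; the sketch below is only a generic empirical-Bayes beta-binomial analysis of the kind suggested by the phrase "empirical binomial" in a multi-site setting. The site counts, the method-of-moments prior fit, and the shrinkage step are all assumptions for illustration, not the paper's method.

```python
import numpy as np

# Hypothetical per-site outcome counts: (successes, trials) at five
# demonstration sites (invented numbers)
sites = np.array([[18, 30], [25, 40], [9, 20], [30, 45], [12, 25]])
p_hat = sites[:, 0] / sites[:, 1]

# Empirical-Bayes step: fit a Beta(alpha, beta) prior to the observed
# site rates by the method of moments
m, v = p_hat.mean(), p_hat.var(ddof=1)
common = m * (1 - m) / v - 1          # alpha + beta
alpha, beta = m * common, (1 - m) * common

# Posterior mean per site: each raw rate is shrunk toward the pooled
# rate, with small sites shrunk more than large ones
post_mean = (sites[:, 0] + alpha) / (sites[:, 1] + alpha + beta)
for raw, shrunk in zip(p_hat, post_mean):
    print(f"raw={raw:.3f} -> shrunk={shrunk:.3f}")
```

The appeal of this family of models for multi-setting effectiveness studies is that sites with little data borrow strength from the ensemble instead of being judged on noisy raw rates.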