Publication Date
  In 2025: 0
  Since 2024: 5
  Since 2021 (last 5 years): 9
Descriptor
  Computer Software: 9
  Randomized Controlled Trials: 9
  Artificial Intelligence: 3
  Computation: 3
  Evaluation Methods: 3
  Sample Size: 3
  Statistical Analysis: 3
  Accuracy: 2
  Causal Models: 2
  Correlation: 2
  Error of Measurement: 2
Source
  Grantee Submission: 2
  Research Synthesis Methods: 2
  Annenberg Institute for School Reform at Brown University: 1
  Education Sciences: 1
  Journal of Computing in Higher Education: 1
  Journal of Research on Educational Effectiveness: 1
  Society for Research on Educational Effectiveness: 1
Author
  Adam Sales: 1
  Angela C. Webster: 1
  Angie Barba: 1
  Anna Lene Seidler: 1
  Ben W. Mol: 1
  Benjamin Kelcey: 1
  Chang Xu: 1
  Dong, Nianbo: 1
  Dung Pham: 1
  Feinn, Richard: 1
  Gerardo Sabater-Grande: 1
Publication Type
  Reports - Research: 7
  Journal Articles: 5
  Reports - Descriptive: 1
  Reports - Evaluative: 1
  Speeches/Meeting Papers: 1
  Tests/Questionnaires: 1
Education Level
  Higher Education: 1
  Postsecondary Education: 1
Location
  Connecticut: 1
  Massachusetts (Boston): 1
  New York: 1
  New York (New York): 1
  North Carolina: 1
  Oregon: 1
  Virginia: 1
Kylie E. Hunter; Mason Aberoumand; Sol Libesman; James X. Sotiropoulos; Jonathan G. Williams; Jannik Aagerup; Rui Wang; Ben W. Mol; Wentao Li; Angie Barba; Nipun Shrestha; Angela C. Webster; Anna Lene Seidler – Research Synthesis Methods, 2024
Increasing concerns about the trustworthiness of research have prompted calls to scrutinise studies' Individual Participant Data (IPD), but guidance on how to do this was lacking. To address this, we developed the IPD Integrity Tool to screen randomised controlled trials (RCTs) for integrity issues. Development of the tool involved a literature…
Descriptors: Integrity, Randomized Controlled Trials, Participant Characteristics, Computer Software
Nianbo Dong; Benjamin Kelcey; Jessaca Spybrook; Yanli Xie; Dung Pham; Peilin Qiu; Ning Sui – Grantee Submission, 2024
Multisite trials that randomize individuals (e.g., students) within sites (e.g., schools) or clusters (e.g., teachers/classrooms) within sites (e.g., schools) are commonly used for program evaluation because they provide opportunities to learn about treatment effects as well as their heterogeneity across sites and subgroups (defined by moderating…
Descriptors: Statistical Analysis, Randomized Controlled Trials, Educational Research, Effect Size
Maite Alguacil; Noemí Herranz-Zarzoso; José C. Pernías; Gerardo Sabater-Grande – Journal of Computing in Higher Education, 2024
Cheating in online exams without face-to-face proctoring has been a general concern for academic instructors during the crisis caused by COVID-19. The main goal of this work is to evaluate the cost of these dishonest practices by comparing the academic performance of webcam-proctored students and their unproctored peers in an online gradable test.…
Descriptors: Cheating, Computer Assisted Testing, Randomized Controlled Trials, Supervision
Yuan Tian; Xi Yang; Suhail A. Doi; Luis Furuya-Kanamori; Lifeng Lin; Joey S. W. Kwong; Chang Xu – Research Synthesis Methods, 2024
RobotReviewer is a tool for automatically assessing the risk of bias in randomized controlled trials, but there is limited evidence of its reliability. We evaluated the agreement between RobotReviewer and humans regarding the risk of bias assessment based on 1955 randomized controlled trials. The risk of bias in these trials was assessed via two…
Descriptors: Risk, Randomized Controlled Trials, Classification, Robotics
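As a concrete illustration of the kind of inter-rater agreement evaluated in the entry above, the sketch below computes Cohen's kappa for two raters' risk-of-bias labels. The labels and data are hypothetical; this is only a minimal example of the agreement statistic, not the paper's analysis of RobotReviewer against human reviewers.

    from collections import Counter

    def cohen_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters over the same items."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                       for c in set(rater_a) | set(rater_b))
        return (observed - expected) / (1 - expected)

    # hypothetical risk-of-bias ratings for eight trials
    human = ["low", "high", "unclear", "low", "high", "low", "unclear", "low"]
    robot = ["low", "high", "low", "low", "high", "unclear", "unclear", "low"]
    print(round(cohen_kappa(human, robot), 2))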
Li, Wei; Dong, Nianbo; Maynard, Rebecca; Spybrook, Jessaca; Kelcey, Ben – Journal of Research on Educational Effectiveness, 2023
Cluster randomized trials (CRTs) are commonly used to evaluate educational interventions, particularly their effectiveness. Recently there has been greater emphasis on using these trials to explore cost-effectiveness. However, methods for establishing the power of cluster randomized cost-effectiveness trials (CRCETs) are limited. This study…
Descriptors: Research Design, Statistical Analysis, Randomized Controlled Trials, Cost Effectiveness
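For intuition about power calculations in cluster randomized designs, here is a minimal sketch using the conventional design-effect approximation for a two-arm CRT. The numbers are hypothetical, and the cost-effectiveness extension (CRCETs) that the paper develops is not implemented here.

    from math import sqrt
    from scipy.stats import norm

    def crt_power(effect_size, clusters_per_arm, cluster_size, icc, alpha=0.05):
        """Approximate power for a two-arm cluster randomized trial.

        Deflates the per-arm sample size by the design effect
        DEFF = 1 + (m - 1) * ICC, then applies a two-sample z-test approximation.
        """
        deff = 1 + (cluster_size - 1) * icc
        n_effective = clusters_per_arm * cluster_size / deff
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(effect_size * sqrt(n_effective / 2) - z_crit)

    # e.g. 20 schools per arm, 25 students per school, ICC = 0.15, effect size 0.25
    print(round(crt_power(0.25, 20, 25, 0.15), 3))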
Yanping Pei; Adam Sales; Johann Gagnon-Bartsch – Grantee Submission, 2024
Randomized A/B tests within online learning platforms allow us to construct unbiased causal estimators. However, obtaining precise estimates of treatment effects can be challenging due to minimal participation, resulting in underpowered A/B tests. Recent advancements indicate that leveraging auxiliary information from detailed logs and employing design-based…
Descriptors: Randomized Controlled Trials, Learning Management Systems, Causal Models, Learning Analytics
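The sketch below shows one generic way auxiliary covariates can sharpen an A/B-test estimate: a fully interacted regression adjustment (in the style of Lin, 2013) compared with the raw difference in means on simulated data. The covariate, effect size, and simulation are hypothetical; the specific design-based estimators built from platform log data in the entry above are not reproduced.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 400
    x = rng.normal(size=n)                 # auxiliary covariate, e.g. prior activity
    t = rng.integers(0, 2, size=n)         # random assignment
    y = 0.2 * t + 0.8 * x + rng.normal(size=n)

    # unadjusted difference in means
    naive = y[t == 1].mean() - y[t == 0].mean()

    # regression adjustment with a treatment-by-centered-covariate interaction
    xc = x - x.mean()
    design = sm.add_constant(np.column_stack([t, xc, t * xc]))
    fit = sm.OLS(y, design).fit(cov_type="HC2")
    print(round(naive, 3), round(fit.params[1], 3), round(fit.bse[1], 3))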
Kristin Porter; Luke Miratrix; Kristen Hunter – Society for Research on Educational Effectiveness, 2021
Background: Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs)…
Descriptors: Statistical Analysis, Hypothesis Testing, Computer Software, Randomized Controlled Trials
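As a minimal illustration of what a multiple testing procedure does, the sketch below adjusts a set of hypothetical p-values with three common MTPs via statsmodels; it is not the procedures or software developed in the entry above.

    import numpy as np
    from statsmodels.stats.multitest import multipletests

    # hypothetical p-values from tests of one intervention on five outcomes
    pvals = np.array([0.003, 0.021, 0.047, 0.180, 0.620])

    for method in ("bonferroni", "holm", "fdr_bh"):
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
        print(method, np.round(p_adj, 3), reject)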
McHugh, Douglas; Feinn, Richard; McIlvenna, Jeff; Trevithick, Matt – Education Sciences, 2021
Learner-centered coaching and feedback are relevant to various educational contexts. Spaced retrieval enhances long-term knowledge retention. We examined the efficacy of Blank Slate, a novel spaced retrieval software application, to promote learning and prevent forgetting, while gathering and analyzing data in the background about learners'…
Descriptors: Randomized Controlled Trials, Learning Analytics, Coaching (Performance), Formative Evaluation
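For readers unfamiliar with spaced retrieval, the toy sketch below shows the core scheduling idea (expanding the gap between reviews after successful recall, shrinking it after a miss). It is a hypothetical illustration only, not the algorithm used by Blank Slate.

    from datetime import date, timedelta

    def next_interval(interval_days, recalled):
        """Expand the review gap after a correct recall; reset it after a miss."""
        return interval_days * 2 if recalled else 1

    interval, review_day = 1, date(2021, 1, 1)
    for recalled in (True, True, False, True):
        interval = next_interval(interval, recalled)
        review_day += timedelta(days=interval)
        print(recalled, interval, review_day)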
Isaac M. Opper – Annenberg Institute for School Reform at Brown University, 2021
Researchers often include covariates when they analyze the results of randomized controlled trials (RCTs), valuing the increased precision of the estimates over the potential of inducing small-sample bias when doing so. In this paper, we develop a sufficient condition which ensures that the inclusion of covariates does not cause small-sample bias…
Descriptors: Randomized Controlled Trials, Sample Size, Statistical Bias, Artificial Intelligence