Showing 1 to 15 of 78 results
Peer reviewed
Direct link
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
Peer reviewed
Direct link
Rebeckah K. Fussell; Emily M. Stump; N. G. Holmes – Physical Review Physics Education Research, 2024
Physics education researchers are interested in using the tools of machine learning and natural language processing to make quantitative claims from natural language and text data, such as open-ended responses to survey questions. The aspiration is that this form of machine coding may be more efficient and consistent than human coding, allowing…
Descriptors: Physics, Educational Researchers, Artificial Intelligence, Natural Language Processing
Peer reviewed
Direct link
Fromm, Davida; Katta, Saketh; Paccione, Mason; Hecht, Sophia; Greenhouse, Joel; MacWhinney, Brian; Schnur, Tatiana T. – Journal of Speech, Language, and Hearing Research, 2021
Purpose: Analysis of connected speech in the field of adult neurogenic communication disorders is essential for research and clinical purposes, yet time and expertise are often cited as limiting factors. The purpose of this project was to create and evaluate an automated program to score and compute the measures from the Quantitative Production…
Descriptors: Speech, Automation, Statistical Analysis, Adults
Peer reviewed
Direct link
Bashir, Rabia; Dunn, Adam G.; Surian, Didi – Research Synthesis Methods, 2021
Few data-driven approaches are available to estimate the risk of conclusion change in systematic review updates. We developed a rule-based approach to automatically extract information from reviews and updates to be used as features for modelling conclusion change risk. Rules were developed to extract relevant information from published Cochrane…
Descriptors: Literature Reviews, Data, Automation, Statistical Analysis
Peer reviewed
Direct link
Nuijten, Michèle B.; Polanin, Joshua R. – Research Synthesis Methods, 2020
We present the R package and web app "statcheck" to automatically detect statistical reporting inconsistencies in primary studies and meta-analyses. Previous research has shown a high prevalence of reported p-values that are inconsistent--meaning a re-calculated p-value, based on the reported test statistic and degrees of freedom, does…
Descriptors: Meta Analysis, Statistical Analysis, Reliability, Replication (Evaluation)
Peer reviewed
PDF on ERIC: Download full text
Ryan Schwarz; H. Cigdem Bulut; Charles Anifowose – International Journal of Assessment Tools in Education, 2023
The increasing volume of large-scale assessment data poses a challenge for testing organizations to manage data and conduct psychometric analysis efficiently. Traditional psychometric software presents barriers, such as a lack of functionality for managing data and conducting various standard psychometric analyses efficiently. These challenges…
Descriptors: Educational Assessment, International Assessment, Psychometrics, Statistical Analysis
Peer reviewed
Direct link
El Sherif, Reem; Langlois, Alexis; Pandu, Xiao; Nie, Jian-Yun; Thomas, James; Hong, Quan Nha; Pluye, Pierre – Education for Information, 2020
Mixed studies reviews include empirical studies with diverse designs (qualitative, quantitative and mixed methods). To make the process of identifying relevant empirical studies for such reviews more efficient, we developed a mixed filter that included different keywords and subject headings for quantitative (e.g., cohort study), qualitative…
Descriptors: Automation, Classification, Qualitative Research, Statistical Analysis
Peer reviewed
Direct link
Woodard, Victoria; Lee, Hollylynne – Journal of Statistics and Data Science Education, 2021
As the demand for skilled data scientists has grown, university level statistics and data science courses have become more rigorous in training students to understand and utilize the tools that their future careers will likely require. However, the mechanisms to assess students' use of these tools while they are learning to use them are not well…
Descriptors: College Students, Statistics Education, Statistical Analysis, Computation
Peer reviewed
Direct link
Mousavi, Amin; Schmidt, Matthew; Squires, Vicki; Wilson, Ken – International Journal of Artificial Intelligence in Education, 2021
Greer and Mark's (2016) paper suggested and reviewed different methods for evaluating the effectiveness of intelligent tutoring systems such as Propensity score matching. The current study aimed at assessing the effectiveness of automated personalized feedback intervention implemented via the Student Advice Recommender Agent (SARA) in a first-year…
Descriptors: Automation, Feedback (Response), Intervention, College Freshmen
Peer reviewed
Direct link
Günhan, Burak Kürsad; Friede, Tim; Held, Leonhard – Research Synthesis Methods, 2018
Network meta-analysis (NMA) is gaining popularity for comparing multiple treatments in a single analysis. Generalized linear mixed models provide a unifying framework for NMA, allow us to analyze datasets with dichotomous, continuous or count endpoints, and take into account multiarm trials, potential heterogeneity between trials and network…
Descriptors: Meta Analysis, Regression (Statistics), Statistical Inference, Probability
Peer reviewed
PDF on ERIC: Download full text
Doneva, Rositsa; Gaftandzhieva, Siliva; Totkov, George – Turkish Online Journal of Distance Education, 2018
This paper presents a study on known approaches for quality assurance of educational test and test items. On its basis a comprehensive approach to the quality assurance of online educational testing is proposed to address the needs of all stakeholders (authors of online tests, teachers, students, experts, quality managers, etc.). According to the…
Descriptors: Educational Testing, Automation, Quality Assurance, Computer Assisted Testing
Peer reviewed
PDF on ERIC: Download full text
Hao, Jiangang; Liu, Lei; Kyllonen, Patrick; Flor, Michael; von Davier, Alina A. – ETS Research Report Series, 2019
Collaborative problem solving (CPS) is an important 21st-century skill that is crucial for both career and academic success. However, developing a large-scale and standardized assessment of CPS that can be administered on a regular basis is very challenging. In this report, we introduce a set of psychometric considerations and a general scoring…
Descriptors: Scoring, Psychometrics, Cooperation, Problem Solving
Peer reviewed
Direct link
Bonet, Nicolás; Garcés, Kelly; Casallas, Rubby; Correal, María Elsa; Wei, Ran – Computer Science Education, 2018
Bad smells affect maintainability and performance of model-to-model transformations. There are studies that define a set of transformation bad smells, and some of them propose techniques to recognize and--according to their complexity--fix them in a (semi)automated way. In academia it is necessary to make students aware of this subject and provide…
Descriptors: Foreign Countries, Graduate Students, Masters Programs, Programming
Peer reviewed
Direct link
Kieftenbeld, Vincent; Boyer, Michelle – Applied Measurement in Education, 2017
Automated scoring systems are typically evaluated by comparing the performance of a single automated rater item-by-item to human raters. This presents a challenge when the performance of multiple raters needs to be compared across multiple items. Rankings could depend on specifics of the ranking procedure; observed differences could be due to…
Descriptors: Automation, Scoring, Comparative Analysis, Test Items
Peer reviewed
Direct link
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written as a response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation to the essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing