Showing 1 to 15 of 84 results
Peer reviewed
Donnette Narine; Takashi Yamashita; Runcie C. W. Chidebe; Phyllis A. Cummins; Jenna W. Kramer; Rita Karam – Grantee Submission, 2024
Job automation can undermine economic security for workers in general and for older workers in particular. Consistently updating one's knowledge and skills is therefore essential for remaining competitive in a technology-driven labor market. Older workers with lower adult literacy skills experience difficulties with continuous education and…
Descriptors: Literacy, Automation, Careers, Adults
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation either struggle to propose challenging distractors or fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
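The filtering step this abstract describes lends itself to a small illustration. Below is a minimal sketch of similarity-based distractor filtering, assuming a sentence-transformers embedding model; the model choice, threshold, and helper name are illustrative and not taken from the paper.

```python
# Sketch (not the authors' method): drop candidate distractors that are
# near-synonyms of the correct answer, using embedding cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def filter_distractors(correct, candidates, max_sim=0.8):
    """Keep candidates that are plausible but not too close to the answer."""
    emb_correct = model.encode(correct, convert_to_tensor=True)
    emb_cands = model.encode(candidates, convert_to_tensor=True)
    sims = util.cos_sim(emb_correct, emb_cands)[0]
    return [c for c, s in zip(candidates, sims) if s.item() < max_sim]

# "The capital of France" should be filtered as too close to "Paris".
print(filter_distractors(
    "Paris", ["The capital of France", "Lyon", "Berlin", "Madrid"]))
```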
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics and often fail to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
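To make concrete the kind of shallow-metric baseline this abstract critiques, here is a hypothetical sketch: regressing human difficulty ratings on surface features of the question. The features, ratings, and data are illustrative only.

```python
# Shallow-metric difficulty baseline (illustrative, not from the paper):
# surface features of a question predict a human difficulty rating.
import numpy as np
from sklearn.linear_model import Ridge

def surface_features(question: str) -> list[float]:
    words = question.split()
    return [
        len(words),                                       # question length
        float(np.mean([len(w) for w in words])),          # mean word length
        sum(w.lower() in {"why", "how"} for w in words),  # reasoning cues
    ]

questions = ["What year did the war end?", "Why does the author shift tone?"]
ratings = [0.2, 0.8]  # hypothetical human difficulty judgments in [0, 1]

X = np.array([surface_features(q) for q in questions])
model = Ridge().fit(X, ratings)
print(model.predict([surface_features("How do the two claims conflict?")]))
```

Such features say nothing about the cognitive demands of answering, which is the gap the paper targets.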
Peer reviewed
Zirong Chen; Ziyan An; Jennifer Reynolds; Kristin Mullen; Stephen Martini; Meiyi Ma – Grantee Submission, 2025
Emergency response services are critical to public safety, with 9-1-1 call-takers playing a key role in ensuring timely and effective emergency operations. To ensure call-taking performance consistency, quality assurance is implemented to evaluate and refine call-takers' skillsets. However, traditional human-led evaluations struggle with high call…
Descriptors: Emergency Programs, Automation, Artificial Intelligence, Safety
Peer reviewed
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
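The workflow implied here (hand-code a subsample, machine-code the rest) can be sketched briefly. This is an illustrative pipeline, not the authors' estimator; the documents and codes below are hypothetical.

```python
# Illustrative sketch: train a text classifier on a human-coded subsample,
# then machine-code the remaining documents from the trial.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs_coded = ["the lesson helped me plan", "i was confused the whole time"]
labels = [1, 0]  # hypothetical human codes for a construct of interest
docs_uncoded = ["planning got much easier", "still confused by the steps"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs_coded, labels)
print(clf.predict(docs_uncoded))  # machine codes for the unlabeled documents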
Peer reviewed
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
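In the spirit of the AutoML pipeline the abstract mentions, here is a minimal sketch: an essay-scoring pipeline whose featurization and model settings are searched automatically. The essays, scores, and search grid are illustrative, not the authors' system.

```python
# Minimal AES sketch with automated configuration search (illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

essays = ["a short, unorganized essay", "a longer, better organized essay"] * 4
scores = [1, 4] * 4  # hypothetical rubric scores

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("model", Ridge())])
search = GridSearchCV(
    pipe,
    {"tfidf__ngram_range": [(1, 1), (1, 2)], "model__alpha": [0.1, 1.0]},
    cv=2,
)
search.fit(essays, scores)
print(search.best_params_, search.predict(["a new unseen essay"]))
```

A real AutoML system searches a far larger space of features and models, but the principle is the same: configuration selection replaces hand-crafted feature engineering.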
Jessica Andrews-Todd; Jonathan Steinberg; Michael Flor; Carolyn M. Forsyth – Grantee Submission, 2022
Competency in skills associated with collaborative problem solving (CPS) is critical for many contexts, including school, the workplace, and the military. Innovative approaches for assessing individuals' CPS competency are necessary, as traditional assessment types such as multiple-choice items are not well suited for such a process-oriented…
Descriptors: Automation, Classification, Cooperative Learning, Problem Solving
Philip I. Pavlik; Luke G. Eglington – Grantee Submission, 2023
This paper presents a tool for creating student models in logistic regression. Creating student models has typically been done by expert selection of the appropriate terms, beginning with models as simple as IRT or AFM but more recently with highly complex models like BestLR. While alternative methods exist to select the appropriate predictors for…
Descriptors: Students, Models, Regression (Statistics), Alternative Assessment
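For readers unfamiliar with AFM-style student models, a minimal sketch follows: a logistic regression over student, skill, and per-skill practice-opportunity terms. The log data is hypothetical and the encoding is illustrative, not the authors' tool.

```python
# AFM-style student model (illustrative):
# logit P(correct) = student ability + skill easiness + slope * opportunities
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Hypothetical log rows: (student, skill, prior practice on that skill, correct)
rows = [(0, 0, 0, 0), (0, 0, 1, 1), (1, 0, 0, 1),
        (0, 1, 0, 0), (1, 1, 0, 1), (1, 1, 1, 1)]
stu = OneHotEncoder(sparse_output=False).fit_transform([[r[0]] for r in rows])
skl = OneHotEncoder(sparse_output=False).fit_transform([[r[1]] for r in rows])
opp = np.array([[r[2] if r[1] == k else 0 for k in (0, 1)] for r in rows])
y = [r[3] for r in rows]

model = LogisticRegression().fit(np.hstack([stu, skl, opp]), y)
print(model.coef_)  # fitted ability, easiness, and learning-rate terms
```

Richer models such as BestLR add further terms (e.g., prior success and failure counts per skill) to this same logistic form.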
Robert-Mihai Botarleanu; Mihai Dascalu; Scott Andrew Crossley; Danielle S. McNamara – Grantee Submission, 2022
The ability to express oneself concisely and coherently is a crucial skill for both academic purposes and professional careers. An important aspect to consider in writing is adequate segmentation of ideas, which in turn requires a proper understanding of where to place paragraph breaks. However, these decisions are often performed…
Descriptors: Paragraph Composition, Text Structure, Automation, Identification
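A simple heuristic conveys the intuition behind automated paragraph-break placement: propose a break where topical similarity between adjacent sentences drops. This is an illustrative sketch assuming a sentence-transformers model, not the authors' approach.

```python
# Heuristic sketch: break where adjacent-sentence similarity falls below
# a threshold (model, threshold, and example text are illustrative).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def propose_breaks(sentences, threshold=0.3):
    embs = model.encode(sentences, convert_to_tensor=True)
    return [i + 1 for i in range(len(sentences) - 1)
            if util.cos_sim(embs[i], embs[i + 1]).item() < threshold]

text = ["Cats are independent pets.", "They groom themselves daily.",
        "The stock market fell sharply.", "Investors reacted to rate news."]
print(propose_breaks(text))  # likely suggests a break before sentence 3
```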
Peer reviewed
Aaron Haim; Eamon Worden; Neil T. Heffernan – Grantee Submission, 2024
Since its release, GPT-4 has shown novel abilities in a variety of domains. This paper explores the use of LLM-generated explanations as on-demand assistance for problems within the ASSISTments platform. In particular, we study whether GPT-generated explanations are better than nothing on problems that have no supports and whether…
Descriptors: Artificial Intelligence, Learning Management Systems, Computer Software, Intelligent Tutoring Systems
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, which deters researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored various methodologies for enhancing the effectiveness of feedback to students. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
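As a rough sense of what LLM-assisted scoring of open-ended responses looks like, here is a hedged sketch using the OpenAI Python client; the prompt, rubric, and model choice are illustrative and not taken from the paper.

```python
# Illustrative LLM scoring call (prompt and rubric are hypothetical).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_response(question: str, answer: str) -> str:
    prompt = (
        "Score the student's answer from 1 (incorrect) to 5 (complete and "
        f"clearly explained).\nQuestion: {question}\nAnswer: {answer}\n"
        "Reply with the score and one sentence of feedback."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(score_response("Why is 1/2 larger than 1/3?",
                     "Because halves are bigger pieces than thirds."))
```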
Donnette Narine; Takashi Yamashita; Runcie C. W. Chidebe; Phyllis A. Cummins; Jenna W. Kramer; Rita Karam – Grantee Submission, 2023
Job automation is a topical issue in a technology-driven labor market. However, greater human capital (often measured by education and information-processing skills, including adult literacy) is linked with job security. A knowledgeable and skilled labor force better resists unemployment and/or rebounds from job disruption…
Descriptors: Human Capital, Automation, Job Security, Labor Force Development
Danielle S. McNamara; Panayiota Kendeou – Grantee Submission, 2022
We propose a framework designed to guide the development of automated writing practice and formative evaluation and feedback for young children (K-5th grade) -- the early Automated Writing Evaluation (early-AWE) Framework. Early-AWE is grounded in the fundamental assumption that AWE is needed for young developing readers, but must incorporate…
Descriptors: Writing Evaluation, Automation, Formative Evaluation, Feedback (Response)
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
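A basic version of the bias check this abstract motivates can be sketched directly: compare automated-minus-human score gaps across student groups. The data below is hypothetical and the analysis illustrative, not the authors'.

```python
# Illustrative bias probe: does the automated score systematically over- or
# under-rate one group relative to human scores? (hypothetical data)
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B", "A"],
    "human":     [3.0, 4.0, 3.5, 2.0, 4.0, 2.5],
    "automated": [3.2, 4.1, 2.9, 1.5, 3.4, 2.7],
})
df["gap"] = df["automated"] - df["human"]
print(df.groupby("group")["gap"].agg(["mean", "std"]))
# A systematic nonzero mean gap for one group would flag potential score bias.
```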