Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 12
Since 2006 (last 20 years): 14
Descriptor
Computer Assisted Testing: 14
Natural Language Processing: 14
Scoring: 10
Essays: 5
Feedback (Response): 5
Semantics: 5
Automation: 4
Intelligent Tutoring Systems: 4
Student Evaluation: 4
Writing Evaluation: 4
Algorithms: 3
Source
Grantee Submission: 14
Author
McNamara, Danielle S.: 5
Allen, Laura K.: 2
Crossley, Scott A.: 2
Danielle S. McNamara: 2
Dascalu, Mihai: 2
Allen, Laura: 1
Ashish Gurung: 1
Balyan, Renu: 1
Botarleanu, Robert-Mihai: 1
Cai, Zhiqiang: 1
Chan, Greta: 1
Publication Type
Reports - Research: 13
Speeches/Meeting Papers: 8
Journal Articles: 2
Reports - Descriptive: 1
Education Level
Secondary Education: 3
High Schools: 2
Higher Education: 2
Postsecondary Education: 2
Junior High Schools: 1
Middle Schools: 1
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied to education. Solutions range from handcrafted linguistic features to large Transformer-based models, each implying significant effort in feature extraction or model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
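The feature-based end of the AES spectrum this abstract mentions can be illustrated with a minimal sketch. The features, weights, and `score` function below are hypothetical stand-ins, not the authors' AutoML pipeline (which learns its configuration automatically); a real system would fit the weights on human-scored essays.

```python
import re

def extract_features(essay: str) -> dict:
    """Toy handcrafted linguistic features of the kind AES systems use."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "n_sentences": len(sentences),
    }

# Hypothetical weights; a trained model would learn these from scored essays.
WEIGHTS = {"n_words": 0.01, "avg_word_len": 0.5, "n_sentences": 0.1}

def score(essay: str) -> float:
    feats = extract_features(essay)
    return sum(WEIGHTS[k] * v for k, v in feats.items())

print(extract_features("Automated scoring is useful. It saves grading time."))
```

Transformer-based approaches replace this explicit feature step with learned representations, trading interpretability for accuracy and compute.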
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI) based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored various methodologies for enhancing the effectiveness of feedback to students. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
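A minimal sketch of content- and structure-based features of the kind the abstract describes. The marker list and feature set here are hypothetical; a real system would feed such features into one of the classifiers the paper compares (e.g. Random Forest).

```python
# Illustrative stance markers, not a validated lexicon.
STANCE_MARKERS = {"should", "must", "believe", "think", "argue"}

def claim_features(sentence: str, position: int, total: int) -> dict:
    """Toy content (stance markers) and structure (position) features
    for claim vs. non-claim classification."""
    tokens = sentence.lower().rstrip(".?!").split()
    return {
        "has_stance_marker": any(t in STANCE_MARKERS for t in tokens),
        "is_question": sentence.rstrip().endswith("?"),
        "relative_position": position / max(total - 1, 1),
        "length": len(tokens),
    }

print(claim_features("Schools should ban homework.", position=0, total=5))
```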
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software
Panaite, Marilena; Ruseti, Stefan; Dascalu, Mihai; Balyan, Renu; McNamara, Danielle S.; Trausan-Matu, Stefan – Grantee Submission, 2019
Intelligent Tutoring Systems (ITSs) focus on promoting knowledge acquisition while providing relevant feedback during students' practice. Self-explanation practice is an effective method used to help students understand complex texts by leveraging comprehension. Our aim is to introduce a deep learning neural model for automatically scoring…
Descriptors: Computer Assisted Testing, Scoring, Intelligent Tutoring Systems, Natural Language Processing
Guerrero, Tricia A.; Wiley, Jennifer – Grantee Submission, 2019
Teachers may wish to use open-ended learning activities and tests, but they are burdensome to assess compared to forced-choice instruments. At the same time, forced-choice assessments suffer from issues of guessing (when used as tests) and may not encourage valuable behaviors of construction and generation of understanding (when used as learning…
Descriptors: Computer Assisted Testing, Student Evaluation, Introductory Courses, Psychology
Magliano, Joseph P.; Lampi, Jodi P.; Ray, Melissa; Chan, Greta – Grantee Submission, 2020
Coherent mental models for successful comprehension require inferences that establish semantic "bridges" between discourse constituents and "elaborations" that incorporate relevant background knowledge. While it is established that individual differences in the extent to which postsecondary students engage in these processes…
Descriptors: Reading Comprehension, Reading Strategies, Inferences, Reading Tests
Li, Haiying; Cai, Zhiqiang; Graesser, Arthur – Grantee Submission, 2018
In this study we developed and evaluated a crowdsourcing-based latent semantic analysis (LSA) approach to computerized summary scoring (CSS). LSA is a frequently used mathematical component in CSS, where LSA similarity represents the extent to which the to-be-graded target summary is similar to a model summary or a set of exemplar summaries.…
Descriptors: Computer Assisted Testing, Scoring, Semantics, Evaluation Methods
Lang, David; Stenhaug, Ben; Kizilcec, Rene – Grantee Submission, 2019
This research evaluates the psychometric properties of short-answer response items under a variety of grading rules in the context of a mobile learning platform in Africa. This work has three main findings. First, we introduce the concept of a differential device function (DDF), a type of differential item function that stems from the device a…
Descriptors: Foreign Countries, Psychometrics, Test Items, Test Format
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2014
This study explores correlations between human ratings of essay quality and component scores based on similar natural language processing indices and weighted through a principal component analysis. The results demonstrate that such component scores show small to large effects with human ratings and thus may be suitable for providing both summative…
Descriptors: Essays, Computer Assisted Testing, Writing Evaluation, Scores
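The weighted-component idea can be sketched as follows. The loadings, index values, and ratings below are hypothetical toy numbers; a real analysis derives the loadings from a principal component analysis over the NLP indices and then correlates the resulting component scores with human ratings.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: NLP indices per essay collapsed into one component
# score via (assumed) PCA loadings, compared against human quality ratings.
loadings = [0.6, 0.3, 0.1]
indices = [[3.1, 2.0, 5.5], [4.0, 2.5, 6.0], [2.2, 1.0, 4.0], [5.1, 3.0, 7.2]]
component = [sum(l * v for l, v in zip(loadings, row)) for row in indices]
ratings = [3, 4, 2, 5]
print(pearson(component, ratings))
```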
Roscoe, Rod D.; Varner, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
Various computer tools have been developed to support educators' assessment of student writing, including automated essay scoring and automated writing evaluation systems. Research demonstrates that these systems exhibit relatively high scoring accuracy but uncertain instructional efficacy. Students' writing proficiency does not necessarily…
Descriptors: Writing Instruction, Intelligent Tutoring Systems, Computer Assisted Testing, Writing Evaluation