Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 4
  Since 2006 (last 20 years): 5
Descriptor
  Computer Assisted Testing: 5
  Evaluation Methods: 5
  Natural Language Processing: 3
  Foreign Countries: 2
  Grading: 2
  Scoring Rubrics: 2
  Writing Evaluation: 2
  Accuracy: 1
  Adaptive Testing: 1
  Automation: 1
  College Faculty: 1
Source
  International Journal of Artificial Intelligence in Education: 5
Author
  Al-Emari, Salam: 1
  Buckingham Shum, Simon: 1
  Gite, Gaurav: 1
  Knight, Simon: 1
  Krivokapic, Alisa: 1
  Kurdi, Ghader: 1
  Leo, Jared: 1
  Parsia, Bijan: 1
  Pascual-Nieto, Ismael: 1
  Passonneau, Rebecca J.: 1
  Perez-Marin, Diana: 1
Publication Type
  Journal Articles: 5
  Reports - Research: 3
  Information Analyses: 1
  Reports - Descriptive: 1
Education Level
  Higher Education: 2
Schneider, Johannes; Richner, Robin; Riser, Micha – International Journal of Artificial Intelligence in Education, 2023
Autograding short textual answers has become much more feasible due to the rise of NLP and the increased availability of question-answer pairs brought about by the shift to online education. However, autograding performance is still inferior to human grading. The statistical and black-box nature of state-of-the-art machine learning models makes them…
Descriptors: Grading, Natural Language Processing, Computer Assisted Testing, Ethics
Kurdi, Ghader; Leo, Jared; Parsia, Bijan; Sattler, Uli; Al-Emari, Salam – International Journal of Artificial Intelligence in Education, 2020
While exam-style questions are a fundamental educational tool serving a variety of purposes, manual construction of questions is a complex process that requires training, experience, and resources. This, in turn, hinders and slows down the use of educational activities (e.g. providing practice questions) and new advances (e.g. adaptive testing)…
Descriptors: Computer Assisted Testing, Adaptive Testing, Natural Language Processing, Questioning Techniques
Passonneau, Rebecca J.; Poddar, Ananya; Gite, Gaurav; Krivokapic, Alisa; Yang, Qian; Perin, Dolores – International Journal of Artificial Intelligence in Education, 2018
Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive, and could benefit from an automated approach. We compare a main ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate…
Descriptors: Computer Assisted Testing, Writing Evaluation, Content Analysis, Scoring Rubrics
Knight, Simon; Buckingham Shum, Simon; Ryan, Philippa; Sándor, Ágnes; Wang, Xiaolong – International Journal of Artificial Intelligence in Education, 2018
Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to…
Descriptors: Writing Evaluation, Natural Language Processing, Legal Education (Professions), Undergraduate Students
Perez-Marin, Diana; Pascual-Nieto, Ismael – International Journal of Artificial Intelligence in Education, 2010
A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…
Descriptors: Concept Mapping, Knowledge Representation, Models, Universities