Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 6
Since 2016 (last 10 years): 13
Since 2006 (last 20 years): 21
Descriptor
Automation: 21
Natural Language Processing: 21
Semantics: 21
Artificial Intelligence: 8
Computer Assisted Testing: 8
Scoring: 7
Student Evaluation: 7
Syntax: 6
Test Items: 6
Accuracy: 5
Classification: 5
Author
Iseli, Markus R.: 2
Kerr, Deirdre: 2
McNamara, Danielle S.: 2
Mousavi, Hamid: 2
Abhishek Chugh: 1
Alqady, Mohammed: 1
Andaloussi, Amine Abbab: 1
Anique de Bruin: 1
Aslam, Muhammad: 1
Badia, Toni: 1
Burattin, Andrea: 1
Publication Type
Reports - Research: 14
Journal Articles: 11
Reports - Evaluative: 3
Speeches/Meeting Papers: 3
Collected Works - Proceedings: 2
Dissertations/Theses -…: 1
Reports - Descriptive: 1
Education Level
Higher Education: 4
Postsecondary Education: 4
Middle Schools: 3
Elementary Education: 2
Junior High Schools: 2
Secondary Education: 2
Elementary Secondary Education: 1
Grade 4: 1
Grade 5: 1
Grade 8: 1
High Schools: 1
Location
Brazil: 2
Denmark: 2
Netherlands: 2
Asia: 1
Australia: 1
Connecticut: 1
Egypt: 1
Estonia: 1
Florida: 1
Germany: 1
Greece: 1
Assessments and Surveys
Remote Associates Test: 1
Leveraging Large Language Models to Generate Course-Specific Semantically Annotated Learning Objects
Dominic Lohr; Marc Berges; Abhishek Chugh; Michael Kohlhase; Dennis Müller – Journal of Computer Assisted Learning, 2025
Background: Over the past few decades, the process and methodology of automatic question generation (AQG) have undergone significant transformations. Recent progress in generative natural language models has opened up new potential in the generation of educational content. Objectives: This paper explores the potential of large language models…
Descriptors: Resource Units, Semantics, Automation, Questioning Techniques
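The workflow this entry describes, prompting a large language model to produce course-specific questions annotated with course concepts, can be illustrated roughly as follows. This is a minimal sketch under stated assumptions: the prompt wording, the gpt-4o-mini model name, and the use of an OpenAI-compatible chat client are placeholders for illustration, not the authors' pipeline.

```python
from openai import OpenAI  # assumes openai>=1.0 and an OPENAI_API_KEY in the environment

client = OpenAI()

# Illustrative prompt; a production system would constrain the output format more tightly.
PROMPT = """You are generating practice questions for a university course.
Course excerpt:
{excerpt}

Write {n} short questions about this excerpt. After each question, add a line
starting with 'Concepts:' listing the covered concepts, chosen only from: {concepts}."""

def generate_annotated_questions(excerpt: str, concepts: list[str], n: int = 3) -> str:
    """Ask the model for questions plus concept annotations; parsing the reply
    into structured, semantically annotated learning objects is left out here."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(
            excerpt=excerpt, n=n, concepts=", ".join(concepts))}],
    )
    return response.choices[0].message.content
```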
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
C. H., Dhawaleswar Rao; Saha, Sujan Kumar – IEEE Transactions on Learning Technologies, 2023
Multiple-choice questions (MCQs) play a significant role in educational assessment. Automatic MCQ generation has been an active research area for years, and many systems have been developed for it. Still, we could not find any system that generates accurate MCQs from school-level textbook content that are useful in real examinations.…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Automation, Test Items
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
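The semantic-distance approach mentioned here is commonly computed as one minus the cosine similarity between vector representations of the prompt and the response. A minimal sketch follows; the three-dimensional toy vectors are invented for illustration and merely stand in for real word or sentence embeddings, not the authors' scoring model.

```python
import math

# Toy word vectors standing in for real embeddings (e.g., from a pretrained model);
# the numbers are invented purely for illustration.
TOY_VECTORS = {
    "brick":       [0.9, 0.1, 0.0],
    "house":       [0.8, 0.2, 0.1],
    "doorstop":    [0.3, 0.7, 0.2],
    "paperweight": [0.2, 0.8, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def semantic_distance(prompt_word, idea_word, vectors=TOY_VECTORS):
    """Larger distance = the idea is semantically farther from the prompt,
    which divergent-thinking scoring treats as more original."""
    return 1.0 - cosine(vectors[prompt_word], vectors[idea_word])

if __name__ == "__main__":
    for idea in ("house", "doorstop", "paperweight"):
        print(f"brick -> {idea}: {semantic_distance('brick', idea):.3f}")
```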
Héctor J. Pijeira-Díaz; Shashank Subramanya; Janneke van de Pol; Anique de Bruin – Journal of Computer Assisted Learning, 2024
Background: When learning causal relations, completing causal diagrams enhances students' comprehension judgements to some extent. To potentially boost this effect, advances in natural language processing (NLP) enable real-time formative feedback based on the automated assessment of students' diagrams, which can involve the correctness of both the…
Descriptors: Learning Analytics, Automation, Student Evaluation, Causal Models
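One way to picture the automated diagram assessment described above is a comparison of the student's causal links against a reference diagram. The sketch below is a deliberate simplification: diagrams are reduced to (cause, effect) pairs, the feedback wording is invented, and the NLP step that would map students' free-text labels onto canonical concepts is omitted.

```python
def diagram_feedback(student_edges, reference_edges):
    """Compare two causal diagrams, each given as (cause, effect) pairs,
    and return simple formative feedback messages."""
    student, reference = set(student_edges), set(reference_edges)
    feedback = []
    for cause, effect in sorted(reference - student):
        feedback.append(f"Missing relation: '{cause}' should lead to '{effect}'.")
    for cause, effect in sorted(student - reference):
        feedback.append(f"Check this relation: '{cause}' -> '{effect}' is not in the model answer.")
    return feedback or ["All causal relations match the reference diagram."]

if __name__ == "__main__":
    reference = [("smoking", "tar buildup"), ("tar buildup", "reduced lung capacity")]
    student = [("smoking", "reduced lung capacity")]
    print("\n".join(diagram_feedback(student, reference)))
```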
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Theories of discourse argue that comprehension depends on the coherence of the learner's mental representation. Our aim is to create a reliable automated representation to estimate readers' level of comprehension based on different productions, namely self-explanations and answers to open-ended questions. Previous work relied on Cohesion Network…
Descriptors: Network Analysis, Reading Comprehension, Automation, Artificial Intelligence
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems were published in the last decade, but few connected AES and feedback generation within a unified framework.…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring
Sanchez-Ferreres, Josep; Delicado, Luis; Andaloussi, Amine Abbab; Burattin, Andrea; Calderon-Ruiz, Guillermo; Weber, Barbara; Carmona, Josep; Padro, Lluis – IEEE Transactions on Learning Technologies, 2020
The creation of a process model is primarily a formalization task that faces the challenge of constructing a syntactically correct entity that accurately reflects the semantics of reality and is understandable to the model reader. This article proposes a framework called "Model Judge," focused on the two main actors in the process…
Descriptors: Models, Automation, Validity, Natural Language Processing
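The syntactic half of a validation framework of this kind can be imagined as structural checks on the process graph. The sketch below is only an illustrative reduction (single start activity, reachable activities); the published tool goes further and also aligns the model semantically with a textual process description via NLP.

```python
from collections import deque

def validate_process_model(nodes, edges):
    """Tiny structural check: exactly one start activity (no incoming edges),
    at least one end activity (no outgoing edges), and every activity
    reachable from the start."""
    incoming = {n: 0 for n in nodes}
    adjacency = {n: [] for n in nodes}
    for src, dst in edges:
        incoming[dst] += 1
        adjacency[src].append(dst)

    issues = []
    starts = [n for n in nodes if incoming[n] == 0]
    ends = [n for n in nodes if not adjacency[n]]
    if len(starts) != 1:
        issues.append(f"Expected exactly one start activity, found {len(starts)}.")
    if not ends:
        issues.append("No end activity found.")

    if starts:  # breadth-first reachability from the first start candidate
        seen, queue = {starts[0]}, deque([starts[0]])
        while queue:
            for nxt in adjacency[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        issues += [f"Activity '{n}' is unreachable from the start." for n in nodes if n not in seen]
    return issues or ["No structural issues found."]

if __name__ == "__main__":
    nodes = ["receive order", "check stock", "ship order", "send invoice"]
    edges = [("receive order", "check stock"), ("check stock", "ship order")]
    print("\n".join(validate_process_model(nodes, edges)))
```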
Danielle S. McNamara; Laura K. Allen; Scott A. Crossley; Mihai Dascalu; Cecile A. Perret – Grantee Submission, 2017
Language is of central importance to the field of education because it is a conduit for communicating and understanding information. Therefore, researchers in the field of learning analytics can benefit from methods developed to analyze language both accurately and efficiently. Natural language processing (NLP) techniques can provide such an…
Descriptors: Natural Language Processing, Learning Analytics, Educational Technology, Automation
Papasalouros, Andreas; Chatzigiannakou, Maria – International Association for Development of the Information Society, 2018
Automating the production of questions for assessment and self-assessment has recently become an active field of study. The use of Semantic Web technologies has certain advantages over other methods for question generation and is thus one of the most important lines of research for this problem. The aim of this paper is to provide an overview of…
Descriptors: Computer Assisted Testing, Web 2.0 Technologies, Test Format, Multiple Choice Tests
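Semantic-Web-driven question generation of the kind surveyed here typically turns knowledge-base triples into question stems and pulls distractors from objects that share the same predicate. The sketch below is a stand-in: plain (subject, predicate, object) tuples and a fixed question template replace an RDF graph and ontology-driven verbalisation.

```python
import random

# Tiny illustrative knowledge base; a real system would query an RDF graph or SPARQL endpoint.
TRIPLES = [
    ("Paris", "isCapitalOf", "France"),
    ("Berlin", "isCapitalOf", "Germany"),
    ("Madrid", "isCapitalOf", "Spain"),
    ("Rome", "isCapitalOf", "Italy"),
]

def triple_to_mcq(triple, triples, n_distractors=3, rng=random):
    """Build one multiple-choice item: the object is the key, and objects of
    other triples with the same predicate serve as distractors."""
    subject, predicate, answer = triple
    distractors = [o for s, p, o in triples if p == predicate and o != answer]
    options = rng.sample(distractors, min(n_distractors, len(distractors))) + [answer]
    rng.shuffle(options)
    stem = f"Of which country is {subject} the capital?"  # template keyed to the predicate
    return stem, options, answer

if __name__ == "__main__":
    stem, options, answer = triple_to_mcq(TRIPLES[0], TRIPLES)
    print(stem)
    for i, option in enumerate(options, 1):
        print(f"  {i}. {option}")
    print("Answer:", answer)
```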
Klein, Ariel; Badia, Toni – Journal of Creative Behavior, 2015
In this study we show how complex creative relations can arise from fairly frequent semantic relations observed in everyday language. By doing this, we reflect on some key cognitive aspects of linguistic and general creativity. In our experimentation, we automated the process of solving a battery of Remote Associates Test tasks. By applying…
Descriptors: Language Usage, Semantics, Natural Language Processing, Test Items
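The automation described in this entry rests on association strengths harvested from everyday language. A toy version, with invented association weights standing in for corpus-derived statistics, might look like this: the solver picks the candidate word most strongly associated with all three cue words.

```python
# Invented association strengths; a real solver would derive these from corpus co-occurrence data.
ASSOCIATIONS = {
    "cottage": {"cheese": 0.8, "house": 0.5, "cake": 0.1},
    "swiss":   {"cheese": 0.9, "army": 0.6, "cake": 0.1},
    "cake":    {"cheese": 0.7, "birthday": 0.8, "house": 0.2},
}

def solve_rat(cues, candidates):
    """Return the candidate most strongly associated with all cue words,
    mimicking the basic idea behind automated Remote Associates Test solving."""
    def score(word):
        return sum(ASSOCIATIONS.get(cue, {}).get(word, 0.0) for cue in cues)
    return max(candidates, key=score)

if __name__ == "__main__":
    # Classic RAT item: cottage / swiss / cake -> cheese
    print(solve_rat(["cottage", "swiss", "cake"], ["cheese", "house", "birthday"]))
```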
Liu, Ming; Rus, Vasile; Liu, Li – IEEE Transactions on Learning Technologies, 2017
Question generation is an emerging research area of artificial intelligence in education. Question authoring tools are important in educational technologies, e.g., intelligent tutoring systems, as well as in dialogue systems. Approaches to generate factual questions, i.e., questions that have concrete answers, mainly make use of the syntactical…
Descriptors: Chinese, Questioning Techniques, Automation, Natural Language Processing
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
The Common Core assessments emphasize short essay constructed-response items over multiple-choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way to score them automatically can be found. Current automatic essay-scoring techniques…
Descriptors: Scoring, Automation, Essay Tests, Natural Language Processing
Kerr, Deirdre; Mousavi, Hamid; Iseli, Markus R. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2013
The Common Core assessments emphasize short essay constructed response items over multiple choice items because they are more precise measures of understanding. However, such items are too costly and time consuming to be used in national assessments unless a way is found to score them automatically. Current automatic essay scoring techniques are…
Descriptors: Automation, Scoring, Essay Tests, Natural Language Processing
Malik, Kaleem Razzaq; Mir, Rizwan Riaz; Farhan, Muhammad; Rafiq, Tariq; Aslam, Muhammad – EURASIA Journal of Mathematics, Science & Technology Education, 2017
Research in the era of data representation aims to contribute to and improve key data policies involving the assessment of learning, training, and English language competency. Students are required to communicate in English with high-level impact, using language and influence. Electronic technology works to assess students' questions, positively enabling…
Descriptors: Knowledge Management, Computer Assisted Testing, Student Evaluation, Search Strategies