Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 4 |
| Since 2017 (last 10 years) | 5 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 5 |
| Natural Language Processing | 5 |
| Test Validity | 5 |
| Automation | 3 |
| Scoring | 3 |
| Writing Evaluation | 3 |
| Efficiency | 2 |
| Multiple Choice Tests | 2 |
| Artificial Intelligence | 1 |
| College Entrance Examinations | 1 |
| Correlation | 1 |
Source
| Source | Count |
| --- | --- |
| ETS Research Report Series | 1 |
| Education and Information… | 1 |
| Grantee Submission | 1 |
| IEEE Transactions on Learning… | 1 |
| Journal of Educational… | 1 |
Author
| Author | Count |
| --- | --- |
| Aldabe, Itziar | 1 |
| Andreea Dutulescu | 1 |
| Arruarte, Ana | 1 |
| Aryadoust, Vahid | 1 |
| Bejar, Isaac I. | 1 |
| Chen, Jing | 1 |
| Chen, Wenzhi | 1 |
| Danielle S. McNamara | 1 |
| Denis Iorga | 1 |
| Elorriaga, Jon A. | 1 |
| Huawei, Shi | 1 |
Publication Type
| Publication type | Count |
| --- | --- |
| Reports - Research | 5 |
| Journal Articles | 4 |
| Information Analyses | 1 |
| Speeches/Meeting Papers | 1 |
| Tests/Questionnaires | 1 |
Education Level
| Education level | Count |
| --- | --- |
| Higher Education | 2 |
| Postsecondary Education | 1 |
| Secondary Education | 1 |
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Larranaga, Mikel; Aldabe, Itziar; Arruarte, Ana; Elorriaga, Jon A.; Maritxalar, Montse – IEEE Transactions on Learning Technologies, 2022
In a concept learning scenario, any technology-supported learning system must provide students with mechanisms that help them with the acquisition of the concepts to be learned. For the technology-supported learning systems to be successful in this task, the development of didactic material is crucial--a hard task that could be alleviated by means…
Descriptors: Computer Assisted Testing, Science Tests, Multiple Choice Tests, Textbooks
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
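The abstract above describes a layered scoring scheme: NLP extracts microfeatures from the essay text, microfeatures are aggregated into macrofeatures, and the score is computed as a function of the macrofeatures. The sketch below illustrates that general structure only; every feature name, aggregation rule, and weight here is hypothetical and is not drawn from e-rater or ETS documentation.

```python
# Illustrative sketch of micro -> macro -> score layering, as the
# abstract describes. All features and weights are hypothetical.

def extract_microfeatures(essay: str) -> dict:
    """Toy text-level microfeatures (stand-ins for NLP-derived ones)."""
    words = essay.split()
    sentences = [s for s in essay.split(".") if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

def compute_macrofeatures(micro: dict) -> dict:
    """Aggregate microfeatures into coarser macrofeatures."""
    return {
        "fluency": micro["word_count"] / 100.0,
        "complexity": (micro["avg_word_length"]
                       + micro["avg_sentence_length"]) / 2.0,
    }

def score_essay(essay: str, weights: dict) -> float:
    """Essay score as a weighted sum of macrofeatures (linear sketch)."""
    macro = compute_macrofeatures(extract_microfeatures(essay))
    return sum(weights[name] * value for name, value in macro.items())

weights = {"fluency": 0.6, "complexity": 0.4}  # hypothetical weights
print(score_essay("This is a short essay. It has two sentences.", weights))
```

In a real engine the microfeatures would come from NLP components (syntax, usage, discourse) and the weights from training against human scores; the point here is only the two-level feature aggregation the abstract names.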