Publication Date | Count
In 2025 | 2
Since 2024 | 5
Since 2021 (last 5 years) | 20
Since 2016 (last 10 years) | 42
Since 2006 (last 20 years) | 66
Location | Count
China | 6
Australia | 3
Japan | 3
Singapore | 3
Connecticut | 2
Indonesia | 2
New Hampshire | 2
New York | 2
New York (New York) | 2
Rhode Island | 2
Taiwan | 2
Laws, Policies, & Programs | Count
Every Student Succeeds Act… | 2
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT and human teachers perform when grading university exams. Aspects investigated include consistency, large discrepancies, and length of answer. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
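A minimal sketch of the kind of consistency and discrepancy check such a comparison involves, assuming several repeated AI gradings per answer alongside a single teacher score (all scores and the 3-point discrepancy threshold are illustrative, not data from the study):

```python
import numpy as np

# Illustrative scores (0-20 scale): 5 repeated AI gradings of 4 exam answers,
# plus the teacher's score for each answer. All values are made up.
ai_scores = np.array([
    [14, 15, 14, 13, 15],
    [ 8,  9, 12,  8,  9],
    [18, 18, 17, 18, 19],
    [ 5, 10,  6,  5,  6],
])
teacher_scores = np.array([15, 10, 18, 4])

# Consistency: spread of repeated AI gradings per answer.
per_answer_sd = ai_scores.std(axis=1, ddof=1)

# Discrepancy: mean AI score vs. the teacher's score.
discrepancy = ai_scores.mean(axis=1) - teacher_scores
large = np.abs(discrepancy) >= 3          # threshold is an arbitrary choice here

print("SD of repeated AI gradings:", per_answer_sd.round(2))
print("AI minus teacher:", discrepancy.round(2))
print("Large discrepancies (>=3 points):", int(large.sum()))
```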
Rebecka Weegar; Peter Idestam-Almquist – International Journal of Artificial Intelligence in Education, 2024
Machine learning methods can be used to reduce the manual workload in exam grading, making it possible for teachers to spend more time on other tasks. However, when it comes to grading exams, fully eliminating manual work is not yet possible even with very accurate automated grading, as any grading mistakes could have significant consequences for…
Descriptors: Grading, Computer Assisted Testing, Introductory Courses, Computer Science Education
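A common way to keep humans in the loop when automated grading cannot be fully trusted is to auto-accept only high-confidence predictions and route the rest to a teacher. The sketch below illustrates that general idea; the TF-IDF features, logistic regression model, toy answers, and 0.8 threshold are assumptions, not the authors' method:

```python
# Route low-confidence automated grades to manual review; everything here
# (tiny training set, model choice, 0.8 threshold) is illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_answers = [
    "a stack is last in first out",
    "lifo structure, push and pop at the top",
    "a queue is first in first out",
    "elements leave in the order they arrived",
]
train_labels = ["correct", "wrong", "wrong", "wrong"]   # grading a short-answer question

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_answers)
clf = LogisticRegression().fit(X, train_labels)

new_answers = ["last in first out, like a pile of plates", "not sure, maybe fifo?"]
proba = clf.predict_proba(vectorizer.transform(new_answers))

THRESHOLD = 0.8   # below this confidence, a teacher grades the answer
for answer, p in zip(new_answers, proba):
    confidence = p.max()
    label = clf.classes_[p.argmax()]
    if confidence >= THRESHOLD:
        print(f"auto-graded as {label} ({confidence:.2f}): {answer}")
    else:
        print(f"sent to manual review ({confidence:.2f}): {answer}")
```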
Pearson, Christopher; Penna, Nigel – Assessment & Evaluation in Higher Education, 2023
E-assessments are becoming increasingly common and progressively more complex. Consequently, how these longer, more complex questions are designed and marked is critically important. This article uses the NUMBAS e-assessment tool to investigate best practice for creating longer questions and their mark schemes on surveying modules taken by engineering…
Descriptors: Automation, Scoring, Engineering Education, Foreign Countries
Dhini, Bachriah Fatwa; Girsang, Abba Suganda; Sufandi, Unggul Utan; Kurniawati, Heny – Asian Association of Open Universities Journal, 2023
Purpose: The authors constructed an automatic essay scoring (AES) model in a discussion forum, where the results were compared with scores given by human evaluators. This research proposes an essay scoring approach based on two parameters, semantic and keyword similarity, using a pre-trained SentenceTransformers model that can construct the…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Essays
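A minimal sketch of scoring an essay against a reference answer by combining semantic similarity from a pre-trained SentenceTransformers model with keyword overlap, as the abstract describes; the specific model name, keyword set, and 70/30 weighting are assumptions for illustration:

```python
# Sketch of scoring an essay against a reference answer with a combination of
# semantic similarity (SentenceTransformers) and keyword overlap. The model
# name, 70/30 weighting, and keyword list are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
essay = "Plants use sunlight to make glucose, storing the light energy chemically."
keywords = {"photosynthesis", "light", "energy", "glucose"}

# Semantic similarity: cosine similarity of the two sentence embeddings.
emb_ref, emb_essay = model.encode([reference, essay], convert_to_tensor=True)
semantic = float(util.cos_sim(emb_ref, emb_essay))

# Keyword similarity: fraction of expected keywords found in the essay.
essay_tokens = {w.strip(".,").lower() for w in essay.split()}
keyword = len(keywords & essay_tokens) / len(keywords)

score = 0.7 * semantic + 0.3 * keyword      # weighted combination (assumed)
print(f"semantic={semantic:.2f} keyword={keyword:.2f} score={score:.2f}")
```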
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Binglin Chen – ProQuest LLC, 2022
Assessment is a key component of education. Routine grading of students' work, however, is time consuming. Automating the grading process allows instructors to spend more of their time helping their students learn and engaging their students with more open-ended, creative activities. One way to automate grading is through computer-based…
Descriptors: College Students, STEM Education, Student Evaluation, Grading
Parker, Mark A. J.; Hedgeland, Holly; Jordan, Sally E.; Braithwaite, Nicholas St. J. – European Journal of Science and Mathematics Education, 2023
The study covers the development and testing of the alternative mechanics survey (AMS), a modified force concept inventory (FCI), which used automatically marked free-response questions. Data were collected over a period of three academic years from 611 participants who were taking physics classes at high school and university level. A total of…
Descriptors: Test Construction, Scientific Concepts, Physics, Test Reliability
Corcoran, Stephanie – Contemporary School Psychology, 2022
With iPad-mediated cognitive assessment gaining popularity among school districts, and with the need for alternative modes of training and instruction during the COVID-19 pandemic, school psychology training programs will need to adapt to effectively train their students to be competent in administering, scoring, and interpreting cognitive…
Descriptors: School Psychologists, Professional Education, Job Skills, Cognitive Tests
Tsai, Cheng-Ting; Wu, Ja-Ling; Lin, Yu-Tzu; Yeh, Martin K.-C. – Educational Technology & Society, 2022
With the rapid increase of online learning and online degree programs, the need for secure and fair scoring mechanisms in online learning becomes urgent. In this research, a secure scoring mechanism was designed and developed based on blockchain technology to build transparent and fair interactions among students and teachers. The proposed…
Descriptors: Electronic Learning, Online Courses, Computer Security, Scoring
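The tamper-evidence a blockchain brings to score records can be illustrated with a toy hash-chained ledger. The sketch below is a generic illustration of that idea, not the mechanism proposed in the paper (no consensus protocol, networking, or smart contracts):

```python
# Toy hash-chained score ledger: any later change to a recorded score breaks
# the chain of hashes and is detected by verification.
import hashlib
import json
import time

def make_block(prev_hash: str, record: dict) -> dict:
    body = {"prev_hash": prev_hash, "timestamp": time.time(), "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list[dict]) -> bool:
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("prev_hash", "timestamp", "record")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"student": "s001", "exam": "midterm", "score": 87})]
chain.append(make_block(chain[-1]["hash"], {"student": "s002", "exam": "midterm", "score": 92}))

print(verify(chain))                 # True
chain[0]["record"]["score"] = 100    # tampering with a stored score
print(verify(chain))                 # False
```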
Klein, Michael – ProQuest LLC, 2019
The purpose of the current study was to examine differences in the number and types of administration and scoring errors made across administration methods (digital/Q-Interactive vs. paper-and-pencil) on the Wechsler Intelligence Scale for Children (WISC-V). WISC-V administration and scoring checklists were developed in order to provide an…
Descriptors: Intelligence Tests, Children, Test Format, Computer Assisted Testing
Çinar, Ayse; Ince, Elif; Gezer, Murat; Yilmaz, Özgür – Education and Information Technologies, 2020
Worldwide, open-ended questions that require short answers have been used in many science exams, such as the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). However, multiple-choice questions are used in many exams at the national level in Turkey, especially high school…
Descriptors: Foreign Countries, Computer Assisted Testing, Artificial Intelligence, Grading
Swapna Haresh Teckwani; Amanda Huee-Ping Wong; Nathasha Vihangi Luke; Ivan Cherh Chiet Low – Advances in Physiology Education, 2024
The advent of artificial intelligence (AI), particularly large language models (LLMs) like ChatGPT and Gemini, has significantly impacted the educational landscape, offering unique opportunities for learning and assessment. In the realm of written assessment grading, traditionally viewed as a laborious and subjective process, this study sought to…
Descriptors: Accuracy, Reliability, Computational Linguistics, Standards
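Studies of this kind typically quantify agreement between LLM-assigned and human-assigned grades. A small sketch using quadratic-weighted Cohen's kappa on grade bands and Pearson correlation on raw marks; the marks and banding scheme are illustrative, not data from the study:

```python
# Illustrative agreement check between LLM-assigned and human-assigned grades:
# weighted kappa for categorical bands, Pearson r for numeric marks.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

human_marks = [72, 65, 88, 90, 55, 78, 61, 84]
llm_marks   = [70, 68, 85, 92, 60, 75, 58, 86]

def band(mark: int) -> str:
    # Arbitrary banding scheme for the example.
    return "A" if mark >= 80 else "B" if mark >= 65 else "C"

kappa = cohen_kappa_score([band(m) for m in human_marks],
                          [band(m) for m in llm_marks],
                          weights="quadratic")
r, _ = pearsonr(human_marks, llm_marks)
print(f"quadratic-weighted kappa={kappa:.2f}, Pearson r={r:.2f}")
```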
Yuko Hayashi; Yusuke Kondo; Yutaka Ishii – Innovation in Language Learning and Teaching, 2024
Purpose: This study builds a new system for automatically assessing learners' speech elicited from an oral discourse completion task (DCT), and evaluates the prediction capability of the system with a view to better understanding factors deemed influential in predicting speaking proficiency scores and the pedagogical implications of the system.…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Japanese
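Predicting speaking-proficiency scores from elicited speech usually reduces to regressing rater scores on features extracted from the recordings. The sketch below illustrates that setup with hypothetical fluency features and a ridge regression model; none of this is the authors' system or data:

```python
# Sketch: predict a speaking-proficiency score from simple fluency features.
# Features, model choice, and all numbers are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Each row: [words per second, pause ratio, mean utterance length in words]
X = np.array([
    [2.1, 0.15,  9.0],
    [1.4, 0.35,  5.0],
    [2.5, 0.10, 11.0],
    [1.1, 0.45,  4.0],
    [1.9, 0.20,  8.0],
    [2.3, 0.12, 10.0],
])
y = np.array([4.0, 2.5, 4.5, 2.0, 3.5, 4.0])   # rater-assigned proficiency scores

model = Ridge(alpha=1.0)
cv_mae = -cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error")
print("MAE across folds:", cv_mae.round(2))

model.fit(X, y)
print("predicted score:", model.predict([[1.7, 0.25, 7.0]]).round(2))
```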
Guerrero, Tricia A.; Wiley, Jennifer – Grantee Submission, 2019
Teachers may wish to use open-ended learning activities and tests, but they are burdensome to assess compared to forced-choice instruments. At the same time, forced-choice assessments suffer from issues of guessing (when used as tests) and may not encourage valuable behaviors of construction and generation of understanding (when used as learning…
Descriptors: Computer Assisted Testing, Student Evaluation, Introductory Courses, Psychology