Elizabeth L. Wetzler; Kenneth S. Cassidy; Margaret J. Jones; Chelsea R. Frazier; Nickalous A. Korbut; Chelsea M. Sims; Shari S. Bowen; Michael Wood – Teaching of Psychology, 2025
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores. Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human…
Descriptors: Essays, Writing Evaluation, Scores, Evaluators
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality; the quality of AI-generated MCIs is comparable to that of items written by human experts. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks