Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 4
Since 2016 (last 10 years): 11
Since 2006 (last 20 years): 20
Descriptor
Automation: 26
Scoring: 26
Computer Assisted Testing: 10
Essays: 10
Artificial Intelligence: 6
Scores: 5
Testing: 5
Validity: 5
Writing Evaluation: 5
Writing Tests: 5
Cheating: 4
Publication Type
Reports - Evaluative: 26
Journal Articles: 18
Information Analyses: 3
Numerical/Quantitative Data: 3
Education Level
Higher Education: 5
Postsecondary Education: 5
Elementary Secondary Education: 3
Middle Schools: 3
Early Childhood Education: 2
Elementary Education: 2
Grade 10: 2
Grade 11: 2
Grade 3: 2
Grade 4: 2
Grade 5: 2
Assessments and Surveys
Graduate Record Examinations: 2
National Assessment of Educational Progress: 1
Test of English as a Foreign Language: 1
Ferrara, Steve; Qunbar, Saed – Journal of Educational Measurement, 2022
In this article, we argue that automated scoring engines should be transparent and construct-relevant, at least to the extent currently feasible. Many current automated scoring engines cannot achieve high degrees of scoring accuracy without admitting some features that may not be easily explained and understood and may not be obviously and…
Descriptors: Artificial Intelligence, Scoring, Essays, Automation
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities expanding provision and enrolling more students. Automated essay scoring (AES) thus holds strong appeal for universities seeking to manage growing learner numbers and to reduce the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Gardner, John; O'Leary, Michael; Yuan, Li – Journal of Computer Assisted Learning, 2021
Artificial Intelligence is at the heart of modern society, with computers now capable of making process decisions in many spheres of human activity. In education, there has been intensive growth in systems that make formal and informal learning an anytime, anywhere activity for billions of people through online open educational resources and…
Descriptors: Artificial Intelligence, Educational Assessment, Formative Evaluation, Summative Evaluation
Richardson, Mary; Clesham, Rose – London Review of Education, 2021
Our world has been transformed by technologies incorporating artificial intelligence (AI) within mass communication, employment, entertainment and many other aspects of our daily lives. However, within the domain of education, it seems that our ways of working and, particularly, assessing have hardly changed at all. We continue to prize…
Descriptors: Artificial Intelligence, High Stakes Tests, Computer Assisted Testing, Educational Change
Moncaleano, Sebastian; Russell, Michael – Journal of Applied Testing Technology, 2018
The year 2017 marked a century since the development and administration of the first large-scale, group-administered standardized test. Since that time, both the importance of testing and the technology of testing have advanced significantly. This paper traces the technological advances that have led to the large-scale administration of educational tests in…
Descriptors: Technological Advancement, Standardized Tests, Computer Assisted Testing, Automation
Shermis, Mark D. – Applied Measurement in Education, 2018
This article employs the Common European Framework of Reference for Languages (CEFR) as a basis for evaluating writing in the context of machine scoring. The CEFR was designed as a framework for evaluating speaking proficiency levels for the 49 languages comprising the European Union. The intent was to impact language instruction so…
Descriptors: Scoring, Automation, Essays, Language Proficiency
Higgins, Derrick; Heilman, Michael – Educational Measurement: Issues and Practice, 2014
As methods for automated scoring of constructed-response items become more widely adopted in state assessments, and are used in more consequential operational configurations, it is critical that their susceptibility to gaming behavior be investigated and managed. This article provides a review of research relevant to how construct-irrelevant…
Descriptors: Automation, Scoring, Responses, Test Wiseness
Beigman Klebanov, Beata; Burstein, Jill; Harackiewicz, Judith M.; Priniski, Stacy J.; Mulholland, Matthew – International Journal of Artificial Intelligence in Education, 2017
The integration of subject matter learning with reading and writing skills takes place in multiple ways. Students learn to read, interpret, and write texts in discipline-relevant genres. However, writing can be used not only for practice in professional communication but also as an opportunity to reflect on the learned…
Descriptors: STEM Education, Content Area Writing, Writing Instruction, Intervention
Behizadeh, Nadia; Lynch, Tom Liam – Berkeley Review of Education, 2017
For the last century, the quality of large-scale assessment in the United States has been undermined by narrow educational theory and hindered by limitations in technology. As a result, poor assessment practices have encouraged low-level instructional practices that disparately affect students from the most disadvantaged communities and schools.…
Descriptors: Equal Education, Measurement, Educational Theories, Evaluation Methods
Reilly, Erin Dawna; Stafford, Rose Eleanore; Williams, Kyle Marie; Corliss, Stephanie Brooks – International Review of Research in Open and Distance Learning, 2014
The use of massive open online courses (MOOCs) to expand students' access to higher education has raised questions regarding the extent to which this course model can provide and assess authentic, higher-level student learning. In response to this need, MOOC platforms have begun using automated essay scoring (AES) systems that allow…
Descriptors: Online Courses, Essays, Scoring, Automation
Balfour, Stephen P. – Research & Practice in Assessment, 2013
Two of the largest Massive Open Online Course (MOOC) organizations have chosen different methods for scoring and providing feedback on the essays students submit. EdX, the non-profit MOOC federation of MIT and Harvard, recently announced that it will use a machine-based Automated Essay Scoring (AES) application to assess written work in…
Descriptors: Online Courses, Writing Evaluation, Automation, Scoring
Park, Kwanghyun – Language Assessment Quarterly, 2014
This article outlines the current state of and recent developments in the use of corpora for language assessment and considers future directions, with a special focus on computational methodology. Since corpora began to make inroads into language assessment in the 1990s, test developers have increasingly used them as a reference resource to…
Descriptors: Language Tests, Computational Linguistics, Natural Language Processing, Scoring
Gorin, Joanna S.; O'Reilly, Tenaha; Sabatini, John; Song, Yi; Deane, Paul – Grantee Submission, 2014
Recent advances in cognitive science and psychometrics have expanded the possibilities for the next generation of literacy assessment as an integrated domain (Bennett, 2011a; Deane, Sabatini, & O'Reilly, 2011; Leighton & Gierl, 2011; Sabatini, Albro, & O'Reilly, 2012). In this paper, we discuss four key areas supporting innovations in…
Descriptors: Literacy Education, Evaluation Methods, Measurement Techniques, Student Evaluation
Attali, Yigal; Lewis, Will; Steier, Michael – Language Testing, 2013
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This…
Descriptors: Scoring, Essay Tests, Reliability, High Stakes Tests
Partnership for Assessment of Readiness for College and Careers, 2019
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: College Readiness, Career Readiness, Common Core State Standards, Language Arts