Baldwin, Peter; Yaneva, Victoria; North, Kai; Ha, Le An; Zhou, Yiyun; Mechaber, Alex J.; Clauser, Brian E. – Journal of Educational Measurement, 2025
Recent developments in the use of large-language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Corcoran, Stephanie – Contemporary School Psychology, 2022
With iPad-mediated cognitive assessment gaining popularity with school districts and the need for alternative modes of training and instruction during the COVID-19 pandemic, school psychology training programs will need to adapt to effectively train their students to be competent in administering, scoring, and interpreting cognitive…
Descriptors: School Psychologists, Professional Education, Job Skills, Cognitive Tests
Gui, Yi – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Klein, Michael – ProQuest LLC, 2019
The purpose of the current study was to examine the differences between number and types of administration and scoring errors made by administration method (digital/Q-Interactive vs. paper-and-pencil) on the Wechsler Intelligence Scales for Children (WISC-V). WISC-V administration and scoring checklists were developed in order to provide an…
Descriptors: Intelligence Tests, Children, Test Format, Computer Assisted Testing
Reinertsen, Nathanael – English in Australia, 2018
The difference in how humans read and how Automated Essay Scoring (AES) systems process written language leads to a situation where a portion of student responses will be comprehensible to human markers, but unable to be parsed by AES systems. This paper examines a number of pieces of student writing that were marked by trained human markers, but…
Descriptors: Qualitative Research, Writing Evaluation, Essay Tests, Computer Assisted Testing
Allalouf, Avi; Gutentag, Tony; Baumer, Michal – Educational Measurement: Issues and Practice, 2017
Quality control (QC) in testing is paramount. QC procedures for tests can be divided into two types. The first type, one that has been well researched, is QC for tests administered to large population groups on few administration dates using a small set of test forms (e.g., large-scale assessment). The second type is QC for tests, usually…
Descriptors: Quality Control, Scoring, Computer Assisted Testing, Error Patterns
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
Jiao, Yishan; LaCross, Amy; Berisha, Visar; Liss, Julie – Journal of Speech, Language, and Hearing Research, 2019
Purpose: Subjective speech intelligibility assessment is often preferred over more objective approaches that rely on transcript scoring. This is, in part, because of the intensive manual labor associated with extracting objective metrics from transcribed speech. In this study, we propose an automated approach for scoring transcripts that provides…
Descriptors: Suprasegmentals, Phonemes, Error Patterns, Scoring
Nese, Joseph F. T.; Alonzo, Julie; Kamata, Akihito – Grantee Submission, 2016
The purpose of this study was to compare traditional oral reading fluency (ORF) measures to a computerized oral reading evaluation (CORE) system that uses speech recognition software. We applied a mixed model approach with two within-subject variables to test the mean WCPM score differences and the error rates between: passage length (25, 50, 85,…
Descriptors: Text Structure, Oral Reading, Reading Fluency, Reading Tests
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
Ha, Minsu; Nehm, Ross H. – Journal of Science Education and Technology, 2016
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a…
Descriptors: Spelling, Case Studies, Computer Uses in Education, Test Scoring Machines
Feng, Mingyu, Ed.; Käser, Tanja, Ed.; Talukdar, Partha, Ed. – International Educational Data Mining Society, 2023
The Indian Institute of Science is proud to host the fully in-person sixteenth iteration of the International Conference on Educational Data Mining (EDM) during July 11-14, 2023. EDM is the annual flagship conference of the International Educational Data Mining Society. The theme of this year's conference is "Educational data mining for…
Descriptors: Information Retrieval, Data Analysis, Computer Assisted Testing, Cheating
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents work on the development of a new corpus of non-native English writing. It will be useful for the task of native language identification, as well as grammatical error detection and correction, and automatic essay scoring. In this report, the corpus is described in detail.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
Yoon, Su-Youn – ProQuest LLC, 2009
This dissertation provides an automated scoring method of speech fluency for second language learners of English (L2 learners) based on speech recognition technology. Non-standard pronunciation, frequent disfluencies, faulty grammar, and inappropriate lexical choices are crucial characteristics of L2 learners' speech. Due to the ease of…
Descriptors: Phonemes, Second Language Learning, Scoring, Correlation