Showing all 7 results
Peer reviewed
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Peer reviewed
Henbest, Victoria S.; Apel, Kenn – Language, Speech, and Hearing Services in Schools, 2021
Purpose: As an initial step in determining whether a spelling error analysis might be useful in measuring children's linguistic knowledge, the relation between the frequency of types of scores from a spelling error analysis and children's performance on measures of phonological and orthographic pattern awareness was examined. Method: The spellings…
Descriptors: Elementary School Students, Grade 1, Spelling, Orthographic Symbols
Peer reviewed
Amini, Mojtaba – Language Testing in Asia, 2018
Background: Translation quality assessment (TQA) suffers from subjectivity in both neighboring disciplines, 'TEFL' and 'Translation Studies', and more empirical studies are required to move closer to objectivity in this domain. The present study evaluated the quality of the written translation of TEFL students through three different approaches to…
Descriptors: Second Language Instruction, English (Second Language), Student Evaluation, Translation
Peer reviewed
Jiao, Yishan; LaCross, Amy; Berisha, Visar; Liss, Julie – Journal of Speech, Language, and Hearing Research, 2019
Purpose: Subjective speech intelligibility assessment is often preferred over more objective approaches that rely on transcript scoring. This is, in part, because of the intensive manual labor associated with extracting objective metrics from transcribed speech. In this study, we propose an automated approach for scoring transcripts that provides…
Descriptors: Suprasegmentals, Phonemes, Error Patterns, Scoring
Peer reviewed
Ha, Minsu; Nehm, Ross H. – Journal of Science Education and Technology, 2016
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a…
Descriptors: Spelling, Case Studies, Computer Uses in Education, Test Scoring Machines
Peer reviewed
Full text PDF available on ERIC
Bao, Xiaoli – English Language Teaching, 2015
The relative clause is one of the most important language points in the College English Examination. Teachers have attached great importance to the teaching of relative clauses, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…
Descriptors: Foreign Countries, Second Language Learning, English (Second Language), Error Patterns
Peer reviewed
Unsworth, Nash; Engle, Randall W. – Journal of Memory and Language, 2006
Complex working memory span tasks have been shown to predict performance on a number of measures of higher-order cognition including fluid abilities. However, exactly why performance on these tasks is related to higher-order cognition is still not known. The present study examined the patterns of errors made on two common complex span tasks. The…
Descriptors: Scoring, Memory, Cues, Error Analysis (Language)