Showing 1 to 15 of 64 results
Peer reviewed
Mo Zhang; Paul Deane; Andrew Hoang; Hongwen Guo; Chen Li – Educational Measurement: Issues and Practice, 2025
In this paper, we describe two empirical studies that demonstrate the application and modeling of keystroke logs in writing assessments. We illustrate two different approaches to modeling differences in writing processes: analysis of mean differences in handcrafted, theory-driven features and use of large language models to identify stable personal…
Descriptors: Writing Tests, Computer Assisted Testing, Keyboarding (Data Entry), Writing Processes
Peer reviewed
Matthew D. Coss – Language Learning & Technology, 2025
The extent to which writing modality (i.e., hand-writing vs. keyboarding) impacts second-language (L2) writing assessment scores remains unclear. For alphabetic languages like English, research shows mixed results, documenting both equivalent and divergent scores between typed and handwritten tests (e.g., Barkaoui & Knouzi, 2018). However, for…
Descriptors: Computer Assisted Testing, Paper and Pencil Tests, Second Language Learning, Chinese
Peer reviewed
Jessie S. Barrot – Education and Information Technologies, 2024
This bibliometric analysis attempts to map out the scientific literature on automated writing evaluation (AWE) systems for teaching, learning, and assessment. A total of 170 documents published between 2002 and 2021 in Social Sciences Citation Index journals were reviewed along four dimensions, namely size (productivity and citations), time…
Descriptors: Educational Trends, Automation, Computer Assisted Testing, Writing Tests
Peer reviewed
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer science, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Peer reviewed
Choi, Yun Deok – Language Testing in Asia, 2022
A much-debated question in the L2 assessment field is whether computer familiarity should be considered a potential source of construct-irrelevant variance in computer-based writing (CBW) tests. This study aims to make a partial validity argument for an online source-based writing test (OSWT) designed for English placement testing (EPT), focusing on…
Descriptors: Test Validity, Scores, Computer Assisted Testing, English (Second Language)
Peer reviewed
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Peer reviewed
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Peer reviewed
Dan Song; Alexander F. Tang – Language Learning & Technology, 2025
While many studies have addressed the benefits of technology-assisted L2 writing, limited research has delved into how generative artificial intelligence (GAI) supports students in completing their writing tasks in Mandarin Chinese. In this study, 26 university-level Mandarin Chinese foreign language students completed two writing tasks on two…
Descriptors: Artificial Intelligence, Second Language Learning, Standardized Tests, Writing Tests
Peer reviewed
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Peer reviewed
PDF on ERIC
Mirjam de Vreeze-Westgeest; Sara Mata; Francisca Serrano; Wilma Resing; Bart Vogelaar – European Journal of Psychology and Educational Research, 2023
The current study aimed to investigate the effectiveness of an online dynamic test in reading and writing, differentiating between typically developing children (n = 47) and children diagnosed with dyslexia (n = 30) aged between nine and twelve years. In doing so, it was analysed whether visual working memory, auditory working memory, inhibition,…
Descriptors: Computer Assisted Testing, Reading Tests, Writing Tests, Executive Function
Peer reviewed
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Peer reviewed
Anneen Church – Perspectives in Education, 2023
Restrictions brought on by the COVID-19 pandemic challenged higher education institutions to innovate in order to keep reaching teaching and learning goals. In South Africa, existing social inequalities were exacerbated by the pandemic restrictions, and many students faced severe challenges in terms of access and support to aid in their…
Descriptors: Foreign Countries, Writing Tests, Student Evaluation, COVID-19
Peer reviewed
Neha Biju; Nasser Said Gomaa Abdelrasheed; Khilola Bakiyeva; K. D. V. Prasad; Biruk Jember – Language Testing in Asia, 2024
In recent years, language practitioners have paid increasing attention to artificial intelligence (AI)'s role in language programs. This study investigated the impact of AI-assisted language assessment on L2 learners' foreign language anxiety (FLA), attitudes, motivation, and writing skills. The study adopted a sequential exploratory mixed-methods…
Descriptors: Artificial Intelligence, Computer Software, Computer Assisted Testing, Second Language Instruction
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation