Publication Date
In 2025 | 6
Since 2024 | 10
Since 2021 (last 5 years) | 33
Since 2016 (last 10 years) | 65
Since 2006 (last 20 years) | 117
Descriptor
Computer Assisted Testing | 134
Writing Tests | 134
Writing Evaluation | 58
Scores | 50
English (Second Language) | 48
Second Language Learning | 47
Scoring | 42
Language Tests | 39
Essays | 34
Foreign Countries | 32
Correlation | 31
Author
Attali, Yigal | 6
Lee, Yong-Won | 6
Breland, Hunter | 3
Deane, Paul | 3
Lazarus, Sheryl S. | 3
Mercer, Sterett H. | 3
Sinharay, Sandip | 3
Thurlow, Martha L. | 3
Zhang, Mo | 3
Allen, Nancy | 2
Aydin, Selami | 2
Audience
Teachers | 2
Administrators | 1
Practitioners | 1
Mo Zhang; Paul Deane; Andrew Hoang; Hongwen Guo; Chen Li – Educational Measurement: Issues and Practice, 2025
In this paper, we describe two empirical studies that demonstrate the application and modeling of keystroke logs in writing assessments. We illustrate two different approaches to modeling differences in writing processes: analysis of mean differences in handcrafted, theory-driven features and the use of large language models to identify stable personal…
Descriptors: Writing Tests, Computer Assisted Testing, Keyboarding (Data Entry), Writing Processes
Matthew D. Coss – Language Learning & Technology, 2025
The extent to which writing modality (i.e., hand-writing vs. keyboarding) impacts second-language (L2) writing assessment scores remains unclear. For alphabetic languages like English, research shows mixed results, documenting both equivalent and divergent scores between typed and handwritten tests (e.g., Barkaoui & Knouzi, 2018). However, for…
Descriptors: Computer Assisted Testing, Paper and Pencil Tests, Second Language Learning, Chinese
Jessie S. Barrot – Education and Information Technologies, 2024
This bibliometric analysis attempts to map out the scientific literature on automated writing evaluation (AWE) systems for teaching, learning, and assessment. A total of 170 documents published between 2002 and 2021 in Social Sciences Citation Index journals were reviewed along four dimensions, namely size (productivity and citations), time…
Descriptors: Educational Trends, Automation, Computer Assisted Testing, Writing Tests
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Rogers, Christopher M.; Ressa, Virginia A.; Thurlow, Martha L.; Lazarus, Sheryl S. – National Center on Educational Outcomes, 2022
This report provides an update on the state of the research on testing accommodations. Previous reports by the National Center on Educational Outcomes (NCEO) have covered research published since 1999. In this report, we summarize the research published in 2020. During 2020, 11 research studies addressed testing accommodations in the U.S. K-12…
Descriptors: Elementary Secondary Education, Testing Accommodations, Students with Disabilities, Computer Assisted Testing
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for evaluating university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
Albus, Deb A.; Lazarus, Sheryl S.; Thurlow, Martha L.; Larson, Erik D.; Liu, Kristin K. – National Center on Educational Outcomes, 2020
Text-to-speech (TTS) refers to technology that reads aloud digital text (Understood, 2019). For years, TTS and its human counterpart--read aloud--have generated controversy about when and for whom these supports should be allowed on state assessments (Thurlow, Christensen, & Rogers, n.d.; Thurlow & Weiner, n.d.; Thurlow, Laitusis, Dillon,…
Descriptors: Testing Accommodations, Assistive Technology, State Policy, Reading Tests
Choi, Yun Deok – Language Testing in Asia, 2022
A much-debated question in the L2 assessment field is whether computer familiarity should be considered a potential source of construct-irrelevant variance in computer-based writing (CBW) tests. This study aims to make a partial validity argument for an online source-based writing test (OSWT) designed for English placement testing (EPT), focusing on…
Descriptors: Test Validity, Scores, Computer Assisted Testing, English (Second Language)
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Dan Song; Alexander F. Tang – Language Learning & Technology, 2025
While many studies have addressed the benefits of technology-assisted L2 writing, limited research has delved into how generative artificial intelligence (GAI) supports students in completing their writing tasks in Mandarin Chinese. In this study, 26 university-level Mandarin Chinese foreign language students completed two writing tasks on two…
Descriptors: Artificial Intelligence, Second Language Learning, Standardized Tests, Writing Tests
Alemi, Minoo; Miri, Mola; Mozafarnezhad, Alemeh – International Journal of Language Testing, 2019
Although group dynamic assessment (GDA) has gained attention over the recent decade, its applicability in online contexts has remained rather underexplored. Hence, the current study examined the effects of GDA on developing EFL learners' written grammatical accuracy in the online context of 'Telegram'. To this aim, 60 Iranian EFL students…
Descriptors: Alternative Assessment, Group Testing, English (Second Language), Second Language Learning
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Mirjam de Vreeze-Westgeest; Sara Mata; Francisca Serrano; Wilma Resing; Bart Vogelaar – European Journal of Psychology and Educational Research, 2023
The current study aimed to investigate the effectiveness of an online dynamic test in reading and writing, differentiating between typically developing children (n = 47) and children diagnosed with dyslexia (n = 30) aged between nine and twelve years. In doing so, it was analysed whether visual working memory, auditory working memory, inhibition,…
Descriptors: Computer Assisted Testing, Reading Tests, Writing Tests, Executive Function
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction