Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 9
Since 2016 (last 10 years): 23
Since 2006 (last 20 years): 45
Descriptor
Computer Assisted Testing: 52
Scores: 52
Essays: 33
Writing Evaluation: 26
Scoring: 25
Essay Tests: 23
Writing Tests: 19
Correlation: 17
English (Second Language): 14
Foreign Countries: 13
Automation: 11
Author
Lee, Yong-Won: 4
Attali, Yigal: 2
Breland, Hunter: 2
Deane, Paul: 2
Muraki, Eiji: 2
Sinharay, Sandip: 2
Uto, Masaki: 2
Wilson, Joshua: 2
Wolfe, Edward W.: 2
Zhang, Mo: 2
Abbasian, Gholam-Reza: 1
Location
United Kingdom: 3
China: 2
Australia: 1
Canada: 1
Finland: 1
France: 1
Germany: 1
Hong Kong: 1
India: 1
Indonesia: 1
Iran: 1
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
What Works Clearinghouse Rating
Does not meet standards: 1
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
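The logistic transformation mentioned in the abstract above is only partially described in the truncated text; a minimal sketch of one common variant, converting raw scores expressed as a proportion of the maximum into logits, might look like the following (the continuity correction and scaling choices are assumptions for illustration, not details from Chan, Bond, and Yan):

```python
import math

def raw_to_logit(raw_score, max_score, eps=0.5):
    """Convert a raw essay score into a logit via a logistic (log-odds) transformation.

    The score is first expressed as a proportion of the maximum; `eps` is a small
    continuity correction that keeps zero and perfect scores finite. These choices
    are illustrative assumptions, not the procedure used in the study above.
    """
    p = (raw_score + eps) / (max_score + 2 * eps)  # proportion of maximum, kept in (0, 1)
    return math.log(p / (1 - p))                   # log-odds (logit)

# Example: a raw score of 4 out of 6 maps to a positive logit.
print(raw_to_logit(4, 6))
```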
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
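The abstract above motivates combining several AES models rather than relying on any single one; a minimal ensemble sketch that simply averages the predictions of multiple scorers (the scorer interface and toy scorers here are hypothetical stand-ins, not the authors' framework) could look like this:

```python
from statistics import mean

def ensemble_score(essay, scorers):
    """Average the predictions of several AES models (a simple unweighted ensemble)."""
    return mean(scorer(essay) for scorer in scorers)

# Usage with two toy scorers standing in for trained AES models.
length_scorer = lambda text: min(len(text.split()) / 50, 6.0)  # crude length-based score
constant_scorer = lambda text: 3.0                             # fixed midpoint score
print(ensemble_score("This is a short sample essay.", [length_scorer, constant_scorer]))
```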
Shin, Jinnie; Gierl, Mark J. – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) technologies provide innovative solutions for scoring written essays in a much shorter time and at a fraction of the current cost. Traditionally, AES emphasized the importance of capturing the "coherence" of writing because abundant evidence indicated the connection between coherence and the overall…
Descriptors: Computer Assisted Testing, Scoring, Essays, Automation
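One simple way to operationalize the "coherence" signal the abstract refers to is the similarity between adjacent sentences; the sketch below uses TF-IDF cosine similarity as an illustrative proxy (the feature choice is an assumption, not Shin and Gierl's method):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def adjacent_sentence_coherence(sentences):
    """Average cosine similarity between consecutive sentences as a crude coherence proxy."""
    if len(sentences) < 2:
        return 0.0
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(tfidf[i], tfidf[i + 1])[0, 0] for i in range(len(sentences) - 1)]
    return sum(sims) / len(sims)

essay_sentences = [
    "Automated scoring saves time.",
    "Saving time lets teachers give more feedback.",
    "Feedback helps students revise their essays.",
]
print(adjacent_sentence_coherence(essay_sentences))
```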
Santi Lestari – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain paper-based. Insufficient and unequal digital provision across schools is often identified as a major barrier to full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Dhini, Bachriah Fatwa; Girsang, Abba Suganda; Sufandi, Unggul Utan; Kurniawati, Heny – Asian Association of Open Universities Journal, 2023
Purpose: The authors constructed an automatic essay scoring (AES) model for a discussion forum, where the results were compared with scores given by human evaluators. This research proposes an essay-scoring approach based on two parameters, semantic and keyword similarities, using a SentenceTransformers pre-trained model that can construct the…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Essays
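The abstract above names the SentenceTransformers library for computing semantic similarity between a student answer and a reference; a minimal sketch along those lines (the specific model name and the reference-answer setup are assumptions, not details from the paper) might be:

```python
from sentence_transformers import SentenceTransformer, util

# A small general-purpose model; the paper does not specify which pre-trained
# model was used, so this choice is an assumption for illustration.
model = SentenceTransformer("all-MiniLM-L6-v2")

reference_answer = "Photosynthesis converts light energy into chemical energy stored in glucose."
student_answer = "Plants turn sunlight into chemical energy that they store as sugar."

# Encode both texts and take the cosine similarity as a semantic-similarity score.
embeddings = model.encode([reference_answer, student_answer], convert_to_tensor=True)
semantic_similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Semantic similarity: {semantic_similarity:.2f}")
```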
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
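The sentence-level randomization described above can be reproduced with NLTK's sentence tokenizer plus a shuffle; a minimal sketch (not the authors' actual script) might be:

```python
import random
import nltk

# Tokenizer data required by sent_tokenize; newer NLTK releases may also need "punkt_tab".
nltk.download("punkt", quiet=True)

def randomize_sentences(essay, seed=None):
    """Return the essay with its sentence order shuffled, leaving each sentence intact."""
    sentences = nltk.sent_tokenize(essay)
    rng = random.Random(seed)
    rng.shuffle(sentences)
    return " ".join(sentences)

essay = "I think school should start later. Students need more sleep. Sleep improves learning."
# Generate several randomized variants, echoing the study's repeated-randomization design.
variants = [randomize_sentences(essay, seed=i) for i in range(3)]
print(variants[0])
```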
Uzun, Kutay – Contemporary Educational Technology, 2018
Classroom assessment in crowded classes is a difficult task because of the amount of time that must be devoted to providing feedback on student work. In this respect, the present study aimed to develop an automated essay scoring environment as a potential means of overcoming this problem. Secondarily, the study aimed to test…
Descriptors: Computer Assisted Testing, Essays, Scoring, English Literature
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
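The "training" step the abstract refers to, fitting a model to human ratings, can be illustrated with a deliberately simple feature-based regressor; the surface features and library choice (scikit-learn) are assumptions for illustration, not the engine studied by Wind et al.:

```python
from sklearn.linear_model import Ridge

def simple_features(essay):
    """Toy surface features: word count, average word length, rough sentence count."""
    words = essay.split()
    return [
        len(words),
        sum(len(w) for w in words) / max(len(words), 1),
        essay.count(".") + essay.count("!") + essay.count("?"),
    ]

# Hypothetical training set: essays paired with human ratings on a 1-6 scale.
essays = [
    "Dogs are good. They help people.",
    "Dogs make excellent companions because they are loyal, trainable, and attentive to human emotion.",
]
human_ratings = [2, 5]

# Fit the regressor to the human ratings, then score a new essay.
model = Ridge().fit([simple_features(e) for e in essays], human_ratings)
print(model.predict([simple_features("Cats are independent yet affectionate pets.")]))
```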
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight into best practices for teaching this form of writing is the lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Zhang, Mo; Bennett, Randy E.; Deane, Paul; van Rijn, Peter W. – Educational Measurement: Issues and Practice, 2019
This study compared gender groups on the processes used in writing essays in an online assessment. Middle-school students from four grades responded to essay prompts in two persuasive subgenres, argumentation and policy recommendation. Writing processes were inferred from four indicators extracted from students' keystroke logs. In comparison to males, on…
Descriptors: Gender Differences, Essays, Computer Assisted Testing, Persuasive Discourse
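The four keystroke-log indicators are not named in the truncated abstract above; purely as an illustration of how such indicators can be derived from a timestamped log, a sketch with hypothetical features (not the ones used by Zhang et al.) might look like:

```python
# Each log entry is (timestamp_in_seconds, key); this minimal structure and the
# features below are hypothetical illustrations, not the study's indicators.
def keystroke_indicators(log, pause_threshold=2.0):
    """Derive simple process features from a timestamped keystroke log."""
    if len(log) < 2:
        return {"total_time": 0.0, "keystrokes": len(log), "long_pauses": 0, "deletions": 0}
    times = [t for t, _ in log]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return {
        "total_time": times[-1] - times[0],                           # seconds from first to last keypress
        "keystrokes": len(log),                                       # overall production volume
        "long_pauses": sum(1 for g in gaps if g >= pause_threshold),  # pauses suggesting planning
        "deletions": sum(1 for _, k in log if k == "Backspace"),      # editing behavior
    }

sample_log = [(0.0, "T"), (0.4, "h"), (0.8, "e"), (3.5, " "), (3.9, "Backspace")]
print(keystroke_indicators(sample_log))
```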
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, and among samples of struggling and nonstruggling writers. The present study also extended nascent research on the reliability and the practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in studies of the Zone of Proximal Development (ZPD), its operation in individualized and group-wide forms has been controversial. To cast some empirical light on the issue, this study was designed to examine the applicability of the two scenarios of ZPD-based instruction to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan – Journal of Statistics Education, 2018
Over two semesters, short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
Descriptors: Undergraduate Students, Introductory Courses, Statistics, Computer Assisted Testing
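The keyword-occurrence comparison the abstract describes can be illustrated with a simple overlap measure between a student's text and an expert's keyword list; the keyword set and scoring below are assumptions for illustration, not the GIKS implementation:

```python
import re

def keyword_occurrence(text, keywords):
    """Return the set of target keywords that occur in the text (case-insensitive, whole words)."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {kw for kw in keywords if kw in tokens}

# Hypothetical expert keyword list for an introductory statistics prompt.
expert_keywords = {"mean", "median", "skew", "outlier", "distribution"}
student_text = "The distribution is skewed, so the median describes the center better than the mean."

found = keyword_occurrence(student_text, expert_keywords)
coverage = len(found) / len(expert_keywords)  # share of expert keywords the student used
print(found, f"coverage = {coverage:.2f}")
```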
Feifei Han; Zehua Wang – OTESSA Conference Proceedings, 2021
This study compared the effects of teacher feedback (TF) and online automated feedback (AF) on the quality of revision of English writing. It also examined the strengths and weaknesses of the two types of feedback as perceived by English language learners (ELLs) studying English as a foreign language (FL). Sixty-eight Chinese students from two English classes…
Descriptors: Comparative Analysis, Feedback (Response), English (Second Language), Second Language Instruction