Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 8 |
Since 2006 (last 20 years) | 21 |
Descriptor
Computer Assisted Testing | 25 |
Statistical Analysis | 25 |
Essays | 19 |
Correlation | 13 |
Scores | 11 |
Scoring | 11 |
Writing Evaluation | 11 |
Essay Tests | 9 |
Feedback (Response) | 9 |
Writing Tests | 9 |
Foreign Countries | 8 |
Author
Attali, Yigal | 2 |
Ramineni, Chaitanya | 2 |
Williamson, David M. | 2 |
Alexander, Melody W. | 1 |
Bai, Lifang | 1 |
Belur, Vinetha | 1 |
Ben-Simon, Anat | 1 |
Bonett, John | 1 |
Breyer, F. Jay | 1 |
Bridgeman, Brent | 1 |
Brown, Michelle Stallone | 1 |
Publication Type
Journal Articles | 23 |
Reports - Research | 20 |
Reports - Evaluative | 3 |
Tests/Questionnaires | 2 |
Books | 1 |
Collected Works - Proceedings | 1 |
Guides - Classroom - Teacher | 1 |
Audience
Practitioners | 1 |
Teachers | 1 |
Assessments and Surveys
Test of English as a Foreign… | 3 |
Graduate Record Examinations | 2 |
Praxis Series | 1 |
Program for International… | 1 |
SAT (College Admission Test) | 1 |
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written in response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation of each essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing
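As a rough illustration of the general setup described in the Cohen, Levi, and Ben-Simon abstract above (not the authors' actual AES system), the following minimal scikit-learn sketch trains a simple TF-IDF plus ridge-regression scorer and produces cross-validated machine scores to compare against averaged human ratings. The essays and human_scores values are hypothetical placeholders.

```python
# Hypothetical sketch of cross-validated automated essay scoring (AES).
# Not the system used in the cited study; features and model are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from scipy.stats import pearsonr

essays = ["First sample essay ...", "Second sample essay ...", "Third sample essay ..."]  # placeholders
human_scores = [3.2, 4.5, 2.8]  # placeholder: mean rating over the rater group

# TF-IDF features feeding a ridge regression scoring model.
model = make_pipeline(TfidfVectorizer(min_df=1), Ridge(alpha=1.0))

# Score every essay with a model that never saw it during training.
cv = KFold(n_splits=3, shuffle=True, random_state=0)
machine_scores = cross_val_predict(model, essays, human_scores, cv=cv)

# Agreement between machine scores and the human "true score" approximation.
r, _ = pearsonr(human_scores, machine_scores)
print(f"Pearson r between human and machine scores: {r:.2f}")
```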
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan – Journal of Statistics Education, 2018
Over two semesters short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
Descriptors: Undergraduate Students, Introductory Courses, Statistics, Computer Assisted Testing
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
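The Seifried, Lenhard, and Spinath entry above tests Latent Semantic Analysis (LSA) for identifying poor essays. Below is a minimal sketch of that general idea, assuming a single reference text and a similarity threshold chosen by the instructor; it is not the cited study's procedure, and all texts and the threshold are invented placeholders.

```python
# Hypothetical sketch: using LSA similarity to flag weak essays for closer review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference = "A model answer covering the key concepts of the assignment ..."  # placeholder
student_essays = [
    "An essay that discusses most of the key concepts ...",
    "An off-topic essay that misses the assignment ...",
    "A short essay covering some concepts ...",
]  # placeholders

# Build the LSA space from the reference plus all student essays.
docs = [reference] + student_essays
tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Cosine similarity of each essay to the reference in the reduced LSA space.
sims = cosine_similarity(lsa[1:], lsa[:1]).ravel()

# Essays far from the reference are candidates for individual feedback.
THRESHOLD = 0.3  # illustrative cut-off
for text, sim in zip(student_essays, sims):
    flag = "REVIEW" if sim < THRESHOLD else "ok"
    print(f"{flag}: similarity={sim:.2f}  {text[:40]}")
```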
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"® Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael – Applied Measurement in Education, 2016
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
Descriptors: Essays, Learning Disabilities, Attention Deficit Hyperactivity Disorder, Scoring
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
Bai, Lifang; Hu, Guangwei – Educational Psychology, 2017
Automated writing evaluation (AWE) systems can provide immediate computer-generated quantitative assessments and qualitative diagnostic feedback on an enormous number of submitted essays. However, limited research attention has been paid to locally designed AWE systems used in English as a foreign language (EFL) classroom contexts. This study…
Descriptors: Computer Assisted Testing, Writing Evaluation, Automation, Essay Tests
Ma, Hong; Slater, Tammy – CALICO Journal, 2016
This study utilized a theory proposed by Mohan, Slater, Luo, and Jaipal (2002) regarding the Developmental Path of Cause to investigate AWE score use in classroom contexts. This "path" has the potential to support validity arguments because it suggests how causal linguistic features can be organized in hierarchical order. Utilization of…
Descriptors: Scores, Automation, Writing Evaluation, Computer Assisted Testing
Hoang, Giang Thi Linh; Kunnan, Antony John – Language Assessment Quarterly, 2016
Computer technology made its way into writing instruction and assessment with spelling and grammar checkers decades ago, but more recently it has done so with automated essay evaluation (AEE) and diagnostic feedback. And although many programs and tools have been developed in the last decade, not enough research has been conducted to support or…
Descriptors: Case Studies, Essays, Writing Evaluation, English (Second Language)
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
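The Ramineni et al. abstract above names the kinds of human/machine agreement statistics commonly reported for scoring engines (weighted kappas, Pearson correlations, standardized differences in mean scores). The following is a minimal sketch of how such statistics could be computed; the score arrays are invented placeholders, not data from the cited report.

```python
# Hypothetical sketch of human/machine agreement statistics for essay scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 2, 5, 3, 4, 1, 3])    # human ratings on a 1-5 scale (placeholder)
machine = np.array([3, 4, 3, 4, 3, 5, 2, 3])  # automated-engine scores, same scale (placeholder)

# Quadratically weighted kappa: chance-corrected agreement for ordinal scores.
qwk = cohen_kappa_score(human, machine, weights="quadratic")

# Pearson correlation between the two score sets.
r, _ = pearsonr(human, machine)

# Standardized difference in mean scores (pooled-SD denominator).
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + machine.std(ddof=1) ** 2) / 2)
std_diff = (machine.mean() - human.mean()) / pooled_sd

print(f"weighted kappa={qwk:.2f}  Pearson r={r:.2f}  standardized diff={std_diff:.2f}")
```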
Ockey, Gary J. – Modern Language Journal, 2009
Computer-based testing (CBT) to assess second language ability has undergone remarkable development since Garrett (1991) described its purpose as "the computerized administration of conventional tests" in "The Modern Language Journal." For instance, CBT has made possible the delivery of more authentic tests than traditional paper-and-pencil tests.…
Descriptors: Second Language Learning, Adaptive Testing, Computer Assisted Testing, Language Aptitude
Davies, Phil – Assessment & Evaluation in Higher Education, 2009
This article details the implementation and use of a "Review Stage" within the CAP (computerised assessment by peers) tool as part of the assessment process for a post-graduate module in e-learning. It reports upon the effect of providing the students with a "second chance" in marking and commenting on their peers' essays, having been able to view the…
Descriptors: Feedback (Response), Student Evaluation, Computer Assisted Testing, Peer Evaluation
Lipnevich, Anastasiya A.; Smith, Jeffrey K. – ETS Research Report Series, 2008
This experiment involved college students (N = 464) working on an authentic learning task (writing an essay) under 3 conditions: no feedback, detailed feedback (perceived by participants to be provided by the course instructor), and detailed feedback (perceived by participants to be computer generated). Additionally, conditions were crossed with 2…
Descriptors: Feedback (Response), Information Sources, College Students, Essays
Coniam, David – Educational Research and Evaluation, 2009
This paper describes a study comparing paper-based marking (PBM) and onscreen marking (OSM) in Hong Kong utilising English language essay scripts drawn from the live 2007 Hong Kong Certificate of Education Examination (HKCEE) Year 11 English Language Writing Paper. In the study, 30 raters from the 2007 HKCEE Writing Paper marked on paper 100…
Descriptors: Student Attitudes, Foreign Countries, Essays, Comparative Analysis