Showing 1 to 15 of 29 results
Peer reviewed
PDF on ERIC
Andersen, Øistein E.; Yuan, Zheng; Watson, Rebecca; Cheung, Kevin Yet Fong – International Educational Data Mining Society, 2021
Automated essay scoring (AES), where natural language processing is applied to score written text, can underpin educational resources in blended and distance learning. AES performance has typically been reported in terms of correlation coefficients or agreement statistics calculated between a system and an expert human examiner. We describe the…
Descriptors: Evaluation Methods, Scoring, Essays, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Doewes, Afrizal; Saxena, Akrati; Pei, Yulong; Pechenizkiy, Mykola – International Educational Data Mining Society, 2022
In Automated Essay Scoring (AES) systems, many previous works have studied group fairness using the demographic features of essay writers. However, individual fairness also plays an important role in fair evaluation and has not yet been explored. Introduced by Dwork et al., the fundamental concept of individual fairness is "similar people…
Descriptors: Scoring, Essays, Writing Evaluation, Comparative Analysis
Peer reviewed
PDF on ERIC
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems were published in the last decade, but few connected AES and feedback generation within a unified framework.…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring
Peer reviewed
Direct link
Jiang, Yaoyi – British Journal of Educational Technology, 2015
The Report of Chinese Students' English Writing Ability (2014) focuses on Chinese students' English writing in the automated essay-evaluation context. The data and samples are primarily from a nationwide writing project involving 300,814 English as a Foreign Language participants from 452 schools in China during a period of April 10 to May…
Descriptors: Foreign Countries, Writing Ability, Essays, English (Second Language)
Peer reviewed
Direct link
Johnson, Martin; Nadas, Rita; Bell, John F. – British Journal of Educational Technology, 2010
There is a growing body of research literature that considers how the mode of assessment, either computer-based or paper-based, might affect candidates' performances. Despite this, there is a fairly narrow literature that shifts the focus of attention to those making assessment judgements and which considers issues of assessor consistency when…
Descriptors: English Literature, Examiners, Evaluation Research, Evaluators
Peer reviewed
PDF on ERIC
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Darling-Hammond, Linda – Council of Chief State School Officers, 2017
The Every Student Succeeds Act (ESSA) opened up new possibilities for how student and school success are defined and supported in American public education. States have greater responsibility for designing and building their assessment and accountability systems. These new opportunities to develop performance assessments are critically important…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Peer reviewed
PDF on ERIC
Deane, Paul – ETS Research Report Series, 2014
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Descriptors: Writing Processes, Writing Evaluation, Student Evaluation, Writing Skills
Peer reviewed
PDF on ERIC
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents work on the development of a new corpus of non-native English writing. The corpus will be useful for the task of native language identification, as well as for grammatical error detection and correction and automatic essay scoring. In this report, the corpus is described in detail.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
Westhuizen, Duan vd – Commonwealth of Learning, 2016
This work starts with a brief overview of education in developing countries, to contextualise the use of the guidelines. Although this document is intended to be a practical tool, it is necessary to include some theoretical analysis of the concept of online assessment. This is given in Sections 3 and 4, together with the identification and…
Descriptors: Guidelines, Student Evaluation, Computer Assisted Testing, Evaluation Methods
Peer reviewed
PDF on ERIC
Wolf, Kenneth; Dunlap, Joanna; Stevens, Ellen – Journal of Effective Teaching, 2012
This article describes ten key assessment practices for advancing student learning that all professors should be familiar with and strategically incorporate into their classrooms and programs. Each practice or concept is explained with examples and guidance for putting it into practice. The ten are: learning outcomes, performance assessments,…
Descriptors: Educational Assessment, Student Evaluation, Educational Practices, Outcomes of Education
Peer reviewed
Direct link
Chao, K.-J.; Hung, I.-C.; Chen, N.-S. – Journal of Computer Assisted Learning, 2012
Online learning has developed rapidly in the last decade. However, there is very little literature available about the actual adoption of online synchronous assessment approaches, or any guidelines for effective assessment design and implementation. This paper aims at designing and evaluating the possibility of applying online synchronous…
Descriptors: Electronic Learning, Student Evaluation, Online Courses, Computer Software
Peer reviewed
Direct link
Davies, Phil – Assessment & Evaluation in Higher Education, 2009
This article details the implementation and use of a "Review Stage" within the CAP (computerised assessment by peers) tool as part of the assessment process for a post-graduate module in e-learning. It reports on the effect of providing students with a "second chance" in marking and commenting on their peers' essays after having been able to view the…
Descriptors: Feedback (Response), Student Evaluation, Computer Assisted Testing, Peer Evaluation
Peer reviewed
Direct link
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay: that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills
Peer reviewed
Direct link
Shaw, Stuart – E-Learning, 2008
Computer-assisted assessment offers many benefits over traditional paper methods. However, in transferring from one medium to another, it is crucial to ascertain the extent to which the new medium may alter the nature of traditional assessment practice or affect marking reliability. Whilst there is a substantial body of research comparing marking…
Descriptors: Construct Validity, Writing Instruction, Computer Assisted Testing, Student Evaluation