Volodina, Elena; Pilán, Ildikó; Alfter, David – Research-publishing.net, 2016
The paper describes initial efforts to create a system for the automatic assessment of Swedish second language (L2) learner essays from two points of view: holistic evaluation of the level reached according to the Common European Framework of Reference (CEFR), and lexical analysis of texts for receptive and productive vocabulary per CEFR…
Descriptors: Swedish, Second Language Learning, Classification, Essays
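As a rough illustration of the lexical-analysis side of such a system, the sketch below profiles an essay's vocabulary by CEFR level using a toy word-to-level lookup; the lexicon entries and the whitespace tokenization are hypothetical stand-ins, not the resources the paper actually uses.

```python
from collections import Counter

# Hypothetical word-to-CEFR-level lookup; a real system would use a
# full graded word list for Swedish rather than this toy sample.
LEXICON = {
    "hus": "A1", "springa": "A1", "eftersom": "A2",
    "utveckling": "B1", "anstränga": "B2", "förutsättning": "B2",
}

def cefr_profile(essay: str) -> Counter:
    """Count how many known tokens in the essay fall at each CEFR level."""
    tokens = essay.lower().split()
    return Counter(LEXICON[t] for t in tokens if t in LEXICON)

print(cefr_profile("Eftersom utveckling kräver en förutsättning"))
# Counter({'A2': 1, 'B1': 1, 'B2': 1})
```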
Allen, Laura K.; Jacovina, Matthew E.; McNamara, Danielle S. – Grantee Submission, 2016
The development of strong writing skills is a critical (and somewhat obvious) goal within the classroom. Individuals across the world are now expected to reach a high level of writing proficiency to achieve success in both academic settings and the workplace (Geiser & Studley, 2001; Powell, 2009; Sharp, 2007). Unfortunately, strong writing…
Descriptors: Writing Skills, Writing Instruction, Writing Strategies, Teaching Methods
Hadi-Tabassum, Samina – Phi Delta Kappan, 2014
Schools are scrambling to prepare students for the writing assessments aligned to the Common Core State Standards. In some states, writing has not been assessed for over a decade. Yet, with the use of computerized grading of students' writing, many teachers are wondering how best to prepare students for the writing assessments that will…
Descriptors: Computer Assisted Testing, Writing Tests, Standardized Tests, Core Curriculum
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Darling-Hammond, Linda – Council of Chief State School Officers, 2017
The Every Student Succeeds Act (ESSA) opened up new possibilities for how student and school success are defined and supported in American public education. States have greater responsibility for designing and building their assessment and accountability systems. These new opportunities to develop performance assessments are critically important…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for using e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternative criteria. In the first approach, the predicted variable is the expected rater score of the examinee's two essays. In the second approach, the predicted variable is the expected rater score of two essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Enright, Mary K.; Quinlan, Thomas – Language Testing, 2010
E-rater® is an automated essay scoring system that uses natural language processing techniques to extract features from essays and to statistically model human holistic ratings. Educational Testing Service has investigated the use of e-rater, in conjunction with human ratings, to score one of the two writing tasks on the TOEFL iBT® writing…
Descriptors: Second Language Learning, Scoring, Essays, Language Processing
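The two entries above describe the same general pipeline: extract numeric features from an essay, then fit a statistical model that predicts the expected human rater score. The sketch below illustrates that pipeline with three generic surface features and ordinary linear regression; the features, toy training data, and scores are illustrative assumptions, not e-rater's actual design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def extract_features(essay: str) -> list:
    """Three generic surface features; e-rater's real feature set differs."""
    tokens = essay.split()
    n = max(len(tokens), 1)
    return [
        float(n),                           # length in tokens
        sum(len(t) for t in tokens) / n,    # mean word length
        sum(essay.count(c) for c in ".!?"), # rough sentence count
    ]

# Hypothetical essays paired with hypothetical human holistic scores.
essays = [
    "Dogs run fast.",
    "The essay states a claim and gives one supporting example.",
    "A fuller response develops its argument across several sentences. "
    "It connects each claim to evidence. It closes with a summary.",
]
human_scores = np.array([1.0, 3.0, 5.0])

X = np.array([extract_features(e) for e in essays])
model = LinearRegression().fit(X, human_scores)
print(model.predict(X))  # machine scores approximating the human ratings
```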
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and Di Vesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement
McCollum, Kelly – Chronicle of Higher Education, 1998
Developers of the Intelligent Essay Assessor claim that it saves time in evaluating college students' essays and improves the assessment. This program and a growing number of other automated grading programs use the same technologies that make computer-based tutoring possible. Many academics remain skeptical of grading technologies, citing the…
Descriptors: Computer Assisted Testing, Computer Software, Computer Uses in Education, Essays

Lemaire, Benoit; Dessus, Philippe – Journal of Educational Computing Research, 2001
Describes Apex (Assistant for Preparing Exams), a tool for evaluating student essays based on their content. By comparing an essay and the text of a given course on a semantic basis, the system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, outline, and coherence of the essay…
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computer Uses in Education, Educational Technology
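A minimal sketch of this kind of content comparison: vectorize the course text and the essay, then reduce the match to a single similarity number. Apex itself works from a richer semantic representation of the course; plain TF-IDF cosine similarity stands in for it here as a simplifying assumption, and both texts are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_text = ("Photosynthesis converts light energy into chemical "
               "energy that the plant stores as glucose.")
essay = ("The student writes that plants turn light into chemical "
         "energy and store it as glucose.")

# Build one vocabulary over both texts, then compare the two vectors.
vectors = TfidfVectorizer().fit_transform([course_text, essay])
match = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"essay-course content match: {match:.2f}")  # higher = closer match
```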
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that differs from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
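One way a small, fixed feature set can be turned into a score is to standardize each feature value against reference statistics, weight it, and sum, as sketched below. The feature names, reference statistics, and weights here are all hypothetical illustrations, not e-rater V.2's published values.

```python
# All names, reference statistics, and weights below are hypothetical.
REFERENCE = {  # per-feature (mean, std), e.g. estimated from past essays
    "errors_per_100_words": (2.0, 1.0),
    "word_variety":         (0.45, 0.10),
    "log_essay_length":     (5.5, 0.6),
}
WEIGHTS = {
    "errors_per_100_words": -0.4,  # more errors pushes the score down
    "word_variety":          0.3,
    "log_essay_length":      0.3,
}

def combined_score(features: dict) -> float:
    """Weighted sum of standardized feature values (unscaled)."""
    total = 0.0
    for name, value in features.items():
        mean, std = REFERENCE[name]
        total += WEIGHTS[name] * (value - mean) / std
    return total  # a real system would map this onto the reporting scale

print(combined_score({"errors_per_100_words": 1.0,
                      "word_variety": 0.55,
                      "log_essay_length": 6.1}))  # -> 1.0
```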

O'Neill, Paula N. – Journal of Dental Education, 1998
Examines various methods for assessing dental students' learning in a problem-based curriculum, including objective structured clinical examination; clinical proficiency testing; triple jump evaluation (identifying facts, developing hypotheses, establishing learning needs to further evaluate the problem, meeting those learning needs, presenting…
Descriptors: Allied Health Occupations Education, Clinical Teaching (Health Professions), Computer Assisted Testing, Curriculum Design