Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 10
Author
Alfter, David: 1
Attali, Yigal: 1
Barron, Colin: 1
Burstein, Jill: 1
Dessus, Philippe: 1
Dikli, Semire: 1
Enright, Mary K.: 1
Hadi-Tabassum, Samina: 1
Jobst, Jack: 1
Krug, Clara: 1
Lemaire, Benoit: 1
Publication Type
Reports - Descriptive: 18
Journal Articles: 10
Speeches/Meeting Papers: 4
Education Level
Higher Education: 4
Postsecondary Education: 4
Elementary Secondary Education: 3
Grade 12: 2
Grade 8: 2
Grade 4: 1
Secondary Education: 1
Audience
Researchers: 1
Location
China: 1
Europe: 1
Sweden: 1
United States: 1
Assessments and Surveys
National Assessment of…: 2
Rebecca Hallman Martini – Writing Center Journal, 2023
Despite their history of marginalization, writing centers need to be spaces where consultants, writers, and administrators act with agency. This requires knowing when and how to act as well as deciding when to yield. In challenging policies of seeming neutrality, I argue in this manuscript that writing center practitioners can center the…
Descriptors: Writing Instruction, Writing (Composition), Laboratories, Writing Teachers
Volodina, Elena; Pilán, Ildikó; Alfter, David – Research-publishing.net, 2016
The paper describes initial efforts to create a system for the automatic assessment of Swedish second language (L2) learner essays from two points of view: holistic evaluation of the level reached according to the Common European Framework of Reference (CEFR), and the lexical analysis of texts for receptive and productive vocabulary per CEFR…
Descriptors: Swedish, Second Language Learning, Classification, Essays
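As background for the lexical-analysis strand described in the entry above, the sketch below illustrates one very simple way to profile an essay's vocabulary against CEFR levels in Python. The toy lexicon, the tokenization, and the 90% coverage rule are illustrative assumptions only, not the resources or method used by Volodina, Pilán, and Alfter.

    import re
    from collections import Counter

    # Toy word-to-CEFR-level lexicon (illustrative; a real system would rely on a
    # large graded vocabulary resource for Swedish).
    CEFR_LEXICON = {
        "hus": "A1", "bok": "A1", "springa": "A2",
        "miljö": "B1", "utveckling": "B2", "förutsättning": "C1",
    }
    LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

    def lexical_profile(text):
        """Count how many tokens fall at each CEFR level."""
        tokens = re.findall(r"\w+", text.lower())
        return Counter(CEFR_LEXICON.get(tok, "unknown") for tok in tokens), len(tokens)

    def estimate_level(profile, total, coverage=0.90):
        """Return the lowest level whose cumulative token share reaches the
        coverage threshold -- a crude stand-in for holistic CEFR placement."""
        running = 0
        for level in LEVELS:
            running += profile[level]
            if total and running / total >= coverage:
                return level
        return "C2"

    profile, total = lexical_profile("hus bok springa miljö utveckling")
    print(profile, estimate_level(profile, total))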
Institute of Education Sciences, 2017
In September, 2016, the National Center for Special Education Research (NCSER) and the National Center for Education Research (NCER) of the Institute of Education Sciences (IES) convened a group of experts to discuss and provide input on research needs in the area of middle and high school writing for students, including English learners (ELs) and…
Descriptors: Writing Research, Educational Research, Research Needs, Secondary Education
Hadi-Tabassum, Samina – Phi Delta Kappan, 2014
Schools are scrambling to prepare students for the writing assessments aligned to the Common Core State Standards. In some states, writing has not been assessed for over a decade. Yet, with the use of computerized grading of the student's writing, many teachers are wondering how to best prepare students for the writing assessments that will…
Descriptors: Computer Assisted Testing, Writing Tests, Standardized Tests, Core Curriculum
National Assessment of Educational Progress (NAEP), 2010
In today's society, writing with paper and pencil has largely been replaced by writing using a computer. Students are expected to compose on a computer as they move through school and into the workforce. Reflecting the changes in technology, eighth- and twelfth-grade students taking the National Assessment of Educational Progress (NAEP) writing…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, National Competency Tests
Enright, Mary K.; Quinlan, Thomas – Language Testing, 2010
E-rater® is an automated essay scoring system that uses natural language processing techniques to extract features from essays and to statistically model human holistic ratings. Educational Testing Service has investigated the use of e-rater, in conjunction with human ratings, to score one of the two writing tasks on the TOEFL-iBT® writing…
Descriptors: Second Language Learning, Scoring, Essays, Language Processing
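To make the general approach concrete, the sketch below shows the basic shape of feature-based automated essay scoring: extract a few surface features from each essay and fit a statistical model to human holistic scores. The features, the tiny made-up training set, and the ordinary-least-squares fit are illustrative assumptions, not e-rater's actual feature set or model.

    import re
    import numpy as np

    def surface_features(essay):
        """Toy features: log word count, mean word length, type-token ratio."""
        words = re.findall(r"[a-zA-Z']+", essay.lower())
        n = len(words) or 1
        return [np.log(n),
                sum(len(w) for w in words) / n,
                len(set(words)) / n]

    # Made-up (essay, human holistic score) pairs standing in for real data.
    training = [
        ("Short answer with few words.", 2.0),
        ("A somewhat longer response that develops one idea with some detail.", 3.0),
        ("A well developed response that elaborates several distinct ideas, "
         "supports them with concrete examples, and varies its vocabulary.", 5.0),
    ]

    X = np.array([surface_features(e) + [1.0] for e, _ in training])  # add intercept term
    y = np.array([score for _, score in training])
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit to human scores

    new_essay = "A moderately detailed answer that offers two supporting examples."
    print(round(float(np.array(surface_features(new_essay) + [1.0]) @ weights), 2))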
National Assessment Governing Board, 2010
The purpose of the 2011 NAEP (National Assessment of Educational Progress) Writing Framework is to describe how the new NAEP Writing Assessment is designed to measure students' writing at grades 4, 8, and 12. As the ongoing national indicator of the academic achievement of students in the United States, NAEP regularly collects information on…
Descriptors: Writing Achievement, Writing Skills, Writing Evaluation, National Competency Tests

Lewiecki-Wilson, Cynthia; Sommers, Jeff; Tassoni, John Paul – Assessing Writing, 2000
Describes reasons for resisting computer editing tests and suggests possible problems with using only directed student self-placement in open access institutions. Presents a sample student profile to illustrate the interaction and negotiation among writing teachers as they read profiles and reach an agreement about their placement recommendation.…
Descriptors: Case Studies, Computer Assisted Testing, Higher Education, Rhetoric

Jobst, Jack – Journal of Teaching Writing, 1984
Describes how microcomputers can assist in grading the work of beginning writers, particularly in flagging repetitive errors, and how students respond to this practice. (FL)
Descriptors: Computer Assisted Testing, Content Area Writing, Elementary Secondary Education, Grading
McCollum, Kelly – Chronicle of Higher Education, 1998
Developers of the Intelligent Essay Assessor claim that it saves time in evaluating college students' essays and improves the assessment. This and the growing number of other automated grading programs use the same technologies that make computer-based tutoring possible. Many academics remain skeptical of grading technologies, citing the…
Descriptors: Computer Assisted Testing, Computer Software, Computer Uses in Education, Essays
Merrill, Beverly; Peterson, Sarah – 1986
When the Mesa, Arizona Public Schools initiated an ambitious writing instruction program in 1978, two assessments based on student writing samples were developed. The first is based on a ninth grade proficiency test. If the student does not pass the test, high school remediation is provided. After 1987, students must pass this test in order to…
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Graduation Requirements, Holistic Evaluation

Lemaire, Benoit; Dessus, Philippe – Journal of Educational Computing Research, 2001
Describes Apex (Assistant for Preparing Exams), a tool for evaluating student essays based on their content. By comparing an essay and the text of a given course on a semantic basis, the system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, outline, and coherence of the essay.…
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computer Uses in Education, Educational Technology
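The content comparison described in the Apex entry above can be illustrated, very roughly, as measuring how similar an essay is to the course text. The sketch below uses plain bag-of-words cosine similarity as a stand-in; it is far cruder than the semantic analysis Apex actually performs, and the function names and example texts are assumptions.

    import math
    import re
    from collections import Counter

    def bag_of_words(text):
        """Rough term-frequency vector over lowercased word tokens."""
        return Counter(re.findall(r"\w+", text.lower()))

    def cosine_similarity(a, b):
        """Cosine of the angle between two sparse term-frequency vectors."""
        dot = sum(count * b[term] for term, count in a.items())
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    course_text = "Photosynthesis converts light energy into chemical energy in plants."
    essay = "The essay explains how plants turn light into chemical energy."
    score = cosine_similarity(bag_of_words(course_text), bag_of_words(essay))
    print(round(score, 3))  # higher values suggest a closer topical match to the course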

Marshall, Stewart; Barron, Colin – System, 1987
MARC (Methodical Assessment of Reports by Computer) is a report-marking program which enables teachers to provide individualized feedback on reports written by engineering students. The MARC system is objective in its consistent application of the same programed criteria, but also allows individual markers to supply their own comments as required.…
Descriptors: Computer Assisted Testing, Computer Software, Engineering, Evaluation Criteria
Krug, Clara – 1981
Based on the premise that teaching basic writing involves first understanding what tends to go wrong when students write, a computer assisted system of error prediction and analysis was designed to improve college students' writing skills in both English and French. Students were to complete a sequenced series of writing assignments first in…
Descriptors: College Students, Computer Assisted Testing, English, Error Analysis (Language)
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests