Showing 1 to 15 of 20 results
Peer reviewed
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
Peer reviewed
Behizadeh, Nadia; Lynch, Tom Liam – Berkeley Review of Education, 2017
For the last century, the quality of large-scale assessment in the United States has been undermined by narrow educational theory and hindered by limitations in technology. As a result, poor assessment practices have encouraged low-level instructional practices that disparately affect students from the most disadvantaged communities and schools.…
Descriptors: Equal Education, Measurement, Educational Theories, Evaluation Methods
Peer reviewed
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater[R] and the "Criterion"[R] Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
Peer reviewed
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"[R] essay feature variables in the context of the TOEFL[R] computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Peer reviewed
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
Alderson, J. Charles – Language Testing, 2009
In this article, the author reviews the TOEFL iBT which is the latest version of the TOEFL, whose history stretches back to 1961. The TOEFL iBT was introduced in the USA, Canada, France, Germany and Italy in late 2005. Currently the TOEFL test is offered in two testing formats: (1) Internet-based testing (iBT); and (2) paper-based testing (PBT).…
Descriptors: Oral Language, Writing Tests, Listening Comprehension Tests, Test Reviews
Peer reviewed
National Center for Education Statistics, 2012
This report presents results of the 2011 National Assessment of Educational Progress (NAEP) in writing at grades 8 and 12. In this new national writing assessment sample, 24,100 eighth-graders and 28,100 twelfth-graders engaged with writing tasks and composed their responses on computer. The assessment tasks reflected writing situations common to…
Descriptors: National Competency Tests, Writing Tests, Grade 8, Grade 12
Peer reviewed
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or for fitting poorly with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
Peer reviewed
Breland, Hunter; Lee, Yong-Won – Applied Measurement in Education, 2007
The objective of the present investigation was to examine the comparability of writing prompts for different gender groups in the context of the computer-based Test of English as a Foreign Language[TM] (TOEFL[R]-CBT). A total of 87 prompts administered from July 1998 through March 2000 were analyzed. An extended version of logistic regression for…
Descriptors: Learning Theories, Writing Evaluation, Writing Tests, Second Language Learning
Peer reviewed
Davey, Tim; And Others – Journal of Educational Measurement, 1997
The development and scoring of a recently introduced computer-based writing skills test is described. The test asks the examinee to edit a writing passage presented on a computer screen. Scoring difficulties are addressed through the combined use of option weighting and the sequential probability ratio test. (SLD)
Descriptors: Computer Assisted Testing, Educational Innovation, Probability, Scoring
Chung, Gregory K. W. K.; O'Neil, Harold F., Jr. – 1997
This report examines the feasibility of scoring essays using computer-based techniques. Essays have been incorporated into many standardized testing programs, and issues of validity and reliability must be addressed before automated scoring approaches can be fully deployed. Two approaches that have been used to classify documents, surface- and word-based…
Descriptors: Automation, Computer Assisted Testing, Essays, Scoring
Peer reviewed
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Peer reviewed
Thomas, P. L. – English Journal, 2005
The advantages and limitations of using computers in writing assessment and instruction are discussed. English teachers hold that although computers and computer programs offer substantial benefits for the teaching of writing, they cannot substitute for human judgment in the final evaluation of a student's composition.
Descriptors: Writing Evaluation, Writing Tests, High Stakes Tests, English Teachers