Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 1
Since 2006 (last 20 years) | 3
Author
Attali, Yigal | 3
Abbasian, Gholam-Reza | 1
Breland, Hunter | 1
Bridgeman, Brent | 1
Burstein, Jill | 1
Golub-Smith, Marna | 1
Lee, Yong-Won | 1
Mohseni, Ahmad | 1
Muraki, Eiji | 1
Nazerian, Samaneh | 1
Trapani, Catherine | 1
Publication Type
Reports - Research | 5
Journal Articles | 4
Numerical/Quantitative Data | 1
Reports - Evaluative | 1
Speeches/Meeting Papers | 1
Education Level
Higher Education | 2
Postsecondary Education | 2
Elementary Education | 1
Grade 10 | 1
Grade 11 | 1
Grade 12 | 1
Grade 6 | 1
Grade 7 | 1
Grade 8 | 1
Grade 9 | 1
High Schools | 1
Location
Iran | 1
Assessments and Surveys
Test of English as a Foreign Language | 6
Graduate Management Admission… | 1
Graduate Record Examinations | 1
Raven Progressive Matrices | 1
Test of Written English | 1
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in studies of the Zone of Proximal Development (ZPD), its implementation in individualized versus group-wide forms has been controversial. To cast some empirical light on the issue, this study was designed to examine the applicability of the two ZPD-based instruction scenarios to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Bridgeman, Brent; Trapani, Catherine; Attali, Yigal – Applied Measurement in Education, 2012
Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
Descriptors: Scoring, Essay Tests, College Entrance Examinations, High Stakes Tests
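The comparability claim in this abstract rests on two simple statistics: similar score distributions (means and standard deviations) and a machine-human correlation about as high as the human-human correlation. The following Python sketch illustrates that comparison only; the scores are entirely made up and are not data from the Bridgeman, Trapani, and Attali (2012) study.

import numpy as np

# Hypothetical holistic essay scores (1-6 scale) for the same 200 essays,
# rated by two humans and one automated engine. Purely illustrative.
rng = np.random.default_rng(0)
true_quality = rng.normal(4.0, 1.0, size=200)   # latent essay quality
human_1 = np.clip(np.round(true_quality + rng.normal(0, 0.6, 200)), 1, 6)
human_2 = np.clip(np.round(true_quality + rng.normal(0, 0.6, 200)), 1, 6)
machine = np.clip(np.round(true_quality + rng.normal(0, 0.6, 200)), 1, 6)

def describe(scores, label):
    # Compare score distributions across raters.
    print(f"{label}: mean={scores.mean():.2f}, sd={scores.std(ddof=1):.2f}")

describe(human_1, "Human 1")
describe(human_2, "Human 2")
describe(machine, "Machine")

# The comparability check: is the machine-human correlation about as high
# as the human-human correlation?
r_hh = np.corrcoef(human_1, human_2)[0, 1]
r_hm = np.corrcoef(human_1, machine)[0, 1]
print(f"human-human r = {r_hh:.2f}, human-machine r = {r_hm:.2f}")

If the human-machine correlation fell well below the human-human correlation, the two score sources could not be treated as interchangeable, which is the kind of comparison the abstract describes.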
Golub-Smith, Marna; And Others – 1993
The Test of Written English (TWE), administered with certain designated examinations of the Test of English as a Foreign Language (TOEFL), consists of a single essay prompt to which examinees have 30 minutes to respond. Questions have been raised about the comparability of different TWE prompts. This study was designed to elicit essays for prompts…
Descriptors: Charts, Comparative Analysis, English (Second Language), Essay Tests
Lee, Yong-Won; Breland, Hunter; Muraki, Eiji – 2002
Since the writing section of the Test of English as a Foreign Language (TOEFL) computer-based test (CBT) is a single-prompt essay test, it is very important to ensure that each prompt is as fair as possible to all subgroups of examinees, such as those with different native language backgrounds. A particular topic of interest in this study is the…
Descriptors: Comparative Analysis, Computer Assisted Testing, English (Second Language), Essay Tests
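The fairness concern in this abstract amounts to checking whether a given prompt yields systematically different scores for examinees from different native-language subgroups. The study's actual differential-functioning methodology is not reproduced here; the following is only a rough Python sketch of a subgroup comparison on one prompt, with entirely hypothetical scores and an assumed two-group design.

import numpy as np

# Hypothetical essay scores on one prompt for two native-language subgroups.
rng = np.random.default_rng(1)
group_a = rng.normal(4.1, 0.9, size=150)   # subgroup A scores (illustrative)
group_b = rng.normal(3.9, 0.9, size=150)   # subgroup B scores (illustrative)

# Standardized mean difference (Cohen's d) as a crude comparability index;
# a large |d| would flag the prompt for closer fairness review.
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
d = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"standardized mean difference between subgroups: d = {d:.2f}")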
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis