Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 28 |
Source
ETS Research Report Series | 29 |
Author
Attali, Yigal | 6 |
Breyer, F. Jay | 3 |
Sinharay, Sandip | 3 |
Bridgeman, Brent | 2 |
Deane, Paul | 2 |
Ramineni, Chaitanya | 2 |
Sawaki, Yasuyo | 2 |
Williamson, David M. | 2 |
Xi, Xiaoming | 2 |
Zhang, Mo | 2 |
Adler, Rachel M. | 1 |
Publication Type
Journal Articles | 29 |
Reports - Research | 28 |
Tests/Questionnaires | 7 |
Reports - Descriptive | 1 |
Education Level
Higher Education | 12 |
Postsecondary Education | 10 |
Secondary Education | 3 |
Junior High Schools | 2 |
Middle Schools | 2 |
Elementary Education | 1 |
Grade 8 | 1 |
High Schools | 1 |
Location
China | 3 |
United States | 2 |
Arizona | 1 |
California (Los Angeles) | 1 |
Canada | 1 |
Florida | 1 |
Georgia | 1 |
Indiana | 1 |
Nevada | 1 |
North Carolina (Charlotte) | 1 |
Pennsylvania (Philadelphia) | 1 |
Assessments and Surveys
Test of English as a Foreign… | 17 |
Graduate Record Examinations | 5 |
Praxis Series | 1 |
Wang, Wei; Dorans, Neil J. – ETS Research Report Series, 2021
Agreement statistics and measures of prediction accuracy are often used to assess the quality of two measures of a construct. Agreement statistics are appropriate for measures that are supposed to be interchangeable, whereas prediction accuracy statistics are appropriate for situations where one variable is the target and the other variables are…
Descriptors: Classification, Scaling, Prediction, Accuracy
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
Papageorgiou, Spiros; Wu, Sha; Hsieh, Ching-Ni; Tannenbaum, Richard J.; Cheng, Mengmeng – ETS Research Report Series, 2019
The past decade has seen an emerging interest in mapping (aligning or linking) test scores to language proficiency levels of external performance scales or frameworks, such as the Common European Framework of Reference (CEFR), as well as locally developed frameworks, such as China's Standards of English Language Ability (CSE). Such alignment is…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
O'Dwyer, John; Kantarcioglu, Elif; Thomas, Carole – ETS Research Report Series, 2018
This study reports on an investigation of the predictive validity of the TOEFL iBT® test in an English-medium institution (EMI) in a non-target-language context, namely, Turkey. The relationship between TOEFL iBT scores and academic performance was explored in a cohort of 286 undergraduate students, as was the TOEFL iBT's relationship with an…
Descriptors: Predictive Validity, Computer Assisted Testing, Grade Point Average, Language of Instruction
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score. The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Yu, Guoxing; He, Lianzhen; Rea-Dickins, Pauline; Kiely, Richard; Lu, Yanbin; Zhang, Jing; Zhang, Yan; Xu, Shasha; Fang, Lin – ETS Research Report Series, 2017
Language test preparation has often been studied within the consequential validity framework in relation to ethics, equity, fairness, and washback of assessment. The use of independent and integrated speaking tasks in the "TOEFL iBT"® test represents a significant development and innovation in assessing speaking ability in academic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Oral Language
Almond, Russell; Deane, Paul; Quinlan, Thomas; Wagner, Michael; Sydorenko, Tetyana – ETS Research Report Series, 2012
The Fall 2007 and Spring 2008 pilot tests for the "CBAL"™ Writing assessment included experimental keystroke logging capabilities. This report documents the approaches used to capture the keystroke logs and the algorithms used to process the outputs. It also includes some preliminary findings based on the pilot data. In particular, it…
Descriptors: Timed Tests, Writing Tests, Computer Assisted Testing, Keyboarding (Data Entry)
Naemi, Bobby; Seybert, Jacob; Robbins, Steven; Kyllonen, Patrick – ETS Research Report Series, 2014
This report introduces the "WorkFORCE"™ Assessment for Job Fit, a personality assessment utilizing the "FACETS"™ core capability, which is based on innovations in forced-choice assessment and computer adaptive testing. The instrument is derived from the five-factor model (FFM) of personality and encompasses a broad spectrum of…
Descriptors: Personality Assessment, Personality Traits, Personality Measures, Test Validity
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the "TOEFL Junior"® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Barkaoui, Khaled – ETS Research Report Series, 2015
This study aimed to describe the writing activities that test takers engage in when responding to the writing tasks in the "TOEFL iBT"® test and to examine the effects of task type and test-taker English language proficiency (ELP) and keyboarding skills on the frequency and distribution of these activities. Each of 22 test…
Descriptors: Second Language Learning, Language Tests, English (Second Language), Writing Instruction
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. This study explored the value added of reporting four trait scores for each of these two tasks, beyond the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Deane, Paul – ETS Research Report Series, 2014
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and other features of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Descriptors: Writing Processes, Writing Evaluation, Student Evaluation, Writing Skills