Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 3 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 10 |
Descriptor
College Entrance Examinations | 10 |
Natural Language Processing | 10 |
Reading Comprehension | 4 |
Automation | 3 |
Computational Linguistics | 3 |
Computer Assisted Testing | 3 |
English (Second Language) | 3 |
Foreign Countries | 3 |
Graduate Study | 3 |
Language Tests | 3 |
Scoring | 3 |
Source
ETS Research Report Series | 3 |
CEA Forum | 1 |
Computer Assisted Language Learning | 1 |
Grantee Submission | 1 |
Innovations in Education and Teaching International | 1 |
International Journal of Language Testing | 1 |
International Journal of Testing | 1 |
Language Testing | 1 |
Author
Bejar, Isaac I. | 2 |
Crossley, Scott A. | 2 |
McNamara, Danielle S. | 2 |
Allen, Laura K. | 1 |
Brown, Kevin | 1 |
Chen, Jing | 1 |
Futagi, Yoko | 1 |
Gierl, Mark J. | 1 |
Goh, Tiong-Thye | 1 |
Hemat, Ramin | 1 |
Kostin, Irene | 1 |
Publication Type
Journal Articles | 9 |
Reports - Research | 9 |
Numerical/Quantitative Data | 1 |
Reports - Evaluative | 1 |
Speeches/Meeting Papers | 1 |
Education Level
Higher Education | 10 |
Postsecondary Education | 9 |
Grade 10 | 1 |
High Schools | 1 |
Secondary Education | 1 |
Location
China | 2 |
South Korea | 1 |
Assessments and Surveys
Graduate Record Examinations | 3 |
Dale Chall Readability Formula | 1 |
Flesch Kincaid Grade Level Formula | 1 |
Flesch Reading Ease Formula | 1 |
Praxis Series | 1 |
SAT (College Admission Test) | 1 |
Test of English as a Foreign Language | 1 |
Teymoor Khosravi; Zainab M. Al Sudani; Morteza Oladnabi – Innovations in Education and Teaching International, 2024
OpenAI's ChatGPT is a conversational chatbot that uses the Generative Pre-trained Transformer (GPT) language model to mimic human-like responses. Here we evaluated its performance in providing responses to genetics questions across five different tasks, including solid genetic basics, identifying inheritance patterns based on described pedigrees,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Natural Language Processing, Genetics
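The abstract describes posing genetics questions to ChatGPT and judging its answers. As a rough, hypothetical illustration of how such an evaluation loop could be scripted (not the authors' actual protocol), here is a minimal Python sketch using the OpenAI SDK; the model name, sample questions, and downstream scoring step are assumptions.

```python
# Minimal sketch of a chatbot evaluation loop, assuming the OpenAI Python SDK (v1).
# The model name, questions, and manual rating step are illustrative assumptions,
# not the protocol used in the study above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

genetics_questions = [
    "What is the difference between a dominant and a recessive allele?",
    "A pedigree shows affected males in every generation; which inheritance pattern fits?",
]

for q in genetics_questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",          # assumed model; the study evaluated ChatGPT
        messages=[{"role": "user", "content": q}],
    )
    answer = response.choices[0].message.content
    print(f"Q: {q}\nA: {answer}\n")      # responses would then be rated by experts
```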
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the automated item generation (AIG) methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts generally, and reading comprehension more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Yu, Xiaoli – International Journal of Language Testing, 2021
This study examined the development of text complexity over the past 25 years in the reading comprehension passages of the National Matriculation English Test (NMET) in China. The text complexity of 206 reading passages at the lexical, syntactic, and discourse levels was measured longitudinally and compared across the years. The natural language…
Descriptors: Reading Comprehension, Reading Tests, Difficulty Level, Natural Language Processing
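As a loose illustration of the kind of longitudinal text-complexity tracking the abstract describes (not the study's actual NLP toolchain), the following Python sketch computes a few simple lexical and syntactic indices per passage per year; the passages and the choice of indices are placeholders.

```python
# Illustrative sketch only: track simple text-complexity indices for reading passages
# across years. The passage texts and indices here are assumptions, not NMET data.
import re
from statistics import mean

def complexity_indices(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "mean_sentence_length": mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences),
        "mean_word_length": mean(len(w) for w in words),     # crude lexical index
        "type_token_ratio": len(set(words)) / len(words),    # lexical diversity
    }

passages_by_year = {1995: "Sample passage text for an early year.",
                    2020: "Another, somewhat longer sample passage for a recent year."}
for year, passage in sorted(passages_by_year.items()):
    print(year, complexity_indices(passage))
```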
Goh, Tiong-Thye; Sun, Hui; Yang, Bing – Computer Assisted Language Learning, 2020
This study investigates the extent to which microfeatures -- such as basic text features, readability, cohesion, and lexical diversity based on specific word lists -- affect Chinese EFL writing quality. Data analysis was conducted using natural language processing, correlation analysis and stepwise multiple regression analysis on a corpus of 268…
Descriptors: Essays, Writing Tests, English (Second Language), Second Language Learning
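A minimal sketch of the general analysis pattern named in the abstract, under assumed features and toy data: score each essay on a few microfeatures and regress holistic writing scores on them. The stepwise selection step is omitted, and nothing here reproduces the authors' corpus or feature set.

```python
# Rough sketch, not the authors' pipeline: compute a few microfeatures per essay and
# regress (hypothetical) holistic writing scores on them with ordinary least squares.
import re
import numpy as np
from sklearn.linear_model import LinearRegression

def microfeatures(essay: str) -> list[float]:
    words = re.findall(r"[A-Za-z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return [
        len(words),                              # basic text feature: length
        len(set(words)) / max(len(words), 1),    # lexical diversity (type-token ratio)
        len(words) / max(len(sentences), 1),     # mean sentence length (readability proxy)
    ]

essays = [
    "First sample essay text. It has two sentences.",
    "A second, longer essay follows here. It rambles a little more. Three sentences now.",
    "The third essay is the longest of all. It keeps going for a while. It repeats some words.",
]
scores = np.array([3.0, 3.5, 4.0])               # hypothetical holistic writing scores

X = np.array([microfeatures(e) for e in essays])
model = LinearRegression().fit(X, scores)
print(model.coef_, model.intercept_)             # feature weights relating to quality
```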
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the e-rater® automated scoring engine, developed at Educational Testing Service (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
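To make the macrofeature/microfeature distinction concrete, here is a toy sketch of the general AES idea: microfeatures are aggregated into macrofeatures, and the essay score is a weighted combination of those macrofeatures. The feature names and weights are invented for illustration; this is not e-rater.

```python
# Toy sketch of the general AES idea described above; not e-rater.
# Microfeatures extracted by NLP for one essay (values invented):
micro = {
    "grammar_errors_per_100_words": 1.2,
    "spelling_errors_per_100_words": 0.4,
    "mean_sentence_length": 18.3,
    "type_token_ratio": 0.52,
}

# Each macrofeature aggregates related microfeatures (error counts lower the score).
macro = {
    "conventions": -(micro["grammar_errors_per_100_words"]
                     + micro["spelling_errors_per_100_words"]),
    "fluency": micro["mean_sentence_length"] / 20,
    "vocabulary": micro["type_token_ratio"],
}

weights = {"conventions": 0.4, "fluency": 0.3, "vocabulary": 0.3}   # hypothetical weights
score = sum(weights[name] * value for name, value in macro.items())
print(round(score, 2))   # final essay score as a weighted combination of macrofeatures
```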
Allen, Laura K.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2015
We investigated linguistic factors that relate to misalignment between students' and teachers' ratings of essay quality. Students (n = 126) wrote essays and rated the quality of their work. Teachers then provided their own ratings of the essays. Results revealed that students who were less accurate in their self-assessments produced essays that…
Descriptors: Essays, Scores, Natural Language Processing, Interrater Reliability
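One way the rating-misalignment analysis could be set up, sketched with invented numbers rather than the study's data: compute the absolute student-teacher score gap for each essay and correlate it with a linguistic feature of the essays.

```python
# Toy sketch (not the authors' analysis): quantify student-teacher rating misalignment
# and relate it to one essay feature. All values are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

student_ratings = np.array([5, 4, 6, 3, 5])
teacher_ratings = np.array([3, 4, 4, 3, 2])
essay_word_count = np.array([320, 410, 290, 380, 250])   # hypothetical linguistic feature

misalignment = np.abs(student_ratings - teacher_ratings)
r, p = pearsonr(misalignment, essay_word_count)
print(f"r = {r:.2f}, p = {p:.3f}")   # how strongly misalignment tracks the feature
```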
Kyle, Kristopher; Crossley, Scott A.; McNamara, Danielle S. – Language Testing, 2016
This study explores the construct validity of speaking tasks included in the TOEFL iBT (e.g., integrated and independent speaking tasks). Specifically, advanced natural language processing (NLP) tools, MANOVA difference statistics, and discriminant function analyses (DFA) are used to assess the degree to which and in what ways responses to these…
Descriptors: Construct Validity, Natural Language Processing, Speech Skills, Speech Acts
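As an illustration of the discriminant-function step mentioned in the abstract (not the study's TOEFL iBT analysis), the sketch below fits a linear discriminant model that separates integrated from independent speaking responses using two hypothetical NLP indices and toy data.

```python
# Illustrative sketch only: linear discriminant analysis on invented data, showing how
# NLP-derived indices could separate integrated from independent speaking responses.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows: responses; columns: hypothetical indices (e.g., lexical sophistication, cohesion).
X = np.array([[0.42, 0.10], [0.45, 0.12], [0.60, 0.30],
              [0.63, 0.28], [0.40, 0.09], [0.65, 0.31]])
y = np.array(["independent", "independent", "integrated",
              "integrated", "independent", "integrated"])

dfa = LinearDiscriminantAnalysis().fit(X, y)
print(dfa.score(X, y))   # classification accuracy on the toy data
print(dfa.coef_)         # which indices drive the separation between task types
```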
Bejar, Isaac I.; VanWinkle, Waverely; Madnani, Nitin; Lewis, William; Steier, Michael – ETS Research Report Series, 2013
The paper applies a natural language computational tool to study a potential construct-irrelevant response strategy, namely the use of "shell language." Although the study is motivated by the impending increase in the volume of scoring of student responses from assessments to be developed in response to the Race to the Top initiative,…
Descriptors: Responses, Language Usage, Natural Language Processing, Computational Linguistics
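A toy sketch of how "shell language" detection might look in principle, with an invented phrase list and metric rather than the computational tool the report actually used:

```python
# Toy sketch: flag generic, prompt-independent filler ("shell language") in a response.
# The phrase list and ratio metric are invented, not the report's actual tool.
shell_phrases = [
    "in this essay i will",
    "there are many reasons why",
    "in conclusion, as stated above",
]

def shell_language_ratio(response: str) -> float:
    text = response.lower()
    shell_words = sum(len(p.split()) for p in shell_phrases if p in text)
    total_words = max(len(text.split()), 1)
    return shell_words / total_words

example = "In this essay I will discuss the topic. There are many reasons why this matters."
print(f"{shell_language_ratio(example):.2f}")   # higher values suggest heavier shell use
```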
Brown, Kevin – CEA Forum, 2015
In this article, the author describes his project to take every standardized exam that English majors take. During the summer and fall semesters of 2012, the author signed up for and took the GRE General Test, the Praxis Content Area Exam (English Language, Literature, and Composition: Content Knowledge), the Senior Major Field Tests in…
Descriptors: College Faculty, College English, Test Preparation, Standardized Tests
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko; Hemat, Ramin; Zuckerman, Daniel – ETS Research Report Series, 2006
This paper describes the development, implementation, and evaluation of an automated system for predicting the acceptability status of candidate reading-comprehension stimuli extracted from a database of journal and magazine articles. The system uses a combination of classification and regression techniques to predict the probability that a given…
Descriptors: Automation, Prediction, Reading Comprehension, Classification
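The abstract describes combining classification and regression to estimate the probability that a candidate passage is acceptable as a reading-comprehension stimulus. The sketch below shows that general idea with logistic regression on hypothetical features and labels; it is not the ETS system itself.

```python
# Sketch of the general idea (not the ETS system): estimate the probability that a
# candidate passage is acceptable from a few text features, via logistic regression
# on hypothetical labeled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: hypothetical features, passage length (hundreds of words) and mean word length.
X = np.array([[2.5, 4.2], [8.0, 5.1], [3.0, 4.5], [9.0, 5.4], [2.8, 4.1], [8.5, 5.0]])
y = np.array([1, 0, 1, 0, 1, 0])            # 1 = accepted by test developers, 0 = rejected

clf = LogisticRegression().fit(X, y)
candidate = np.array([[3.2, 4.4]])
print(clf.predict_proba(candidate)[0, 1])   # estimated probability of acceptability
```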