Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 5
Since 2006 (last 20 years): 10
Author
Crossley, Scott A.: 10
McNamara, Danielle S.: 8
Kyle, Kristopher: 3
Skalicky, Stephen: 2
Allen, David B.: 1
Allen, Laura K.: 1
Berger, Cynthia M.: 1
Dascalu, Mihai: 1
Greenfield, Jerry: 1
Guo, Liang: 1
Jarvis, Scott: 1
Publication Type
Reports - Research: 10
Journal Articles: 9
Tests/Questionnaires: 2
Speeches/Meeting Papers: 1
Education Level
Higher Education: 2
Adult Education: 1
High Schools: 1
Postsecondary Education: 1
Location
Georgia (Atlanta): 1
Hong Kong: 1
Laws, Policies, & Programs
Assessments and Surveys
Test of English as a Foreign…: 4
Dale Chall Readability Formula: 1
Flesch Kincaid Grade Level…: 1
Flesch Reading Ease Formula: 1
Kyle, Kristopher; Crossley, Scott A. – Modern Language Journal, 2018
Syntactic complexity is an important measure of second language (L2) writing proficiency (Larsen-Freeman, 1978; Lu, 2011). Large-grained indices such as the mean length of T-unit (MLTU) have been used with the most consistency in L2 writing studies (Ortega, 2003). Recently, indices such as MLTU have been criticized, both for the difficulty in…
Descriptors: Syntax, English (Second Language), Language Tests, Second Language Learning
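For reference, the MLTU measure this abstract discusses reduces to a simple ratio of words to T-units. A minimal sketch in Python, assuming the text has already been segmented into T-units (segmentation, the hard part, is done by a parser or manual coding and is not shown):

```python
# Minimal sketch: mean length of T-unit (MLTU) for a pre-segmented text.
# T-unit segmentation itself is assumed to have been done elsewhere.

def mean_length_of_t_unit(t_units):
    """Average number of words per T-unit."""
    if not t_units:
        return 0.0
    return sum(len(unit.split()) for unit in t_units) / len(t_units)

# Toy example: T-units of 6 and 10 words give an MLTU of 8.0.
sample = [
    "The study examined second language writing",
    "and the raters scored each essay on a holistic scale",
]
print(mean_length_of_t_unit(sample))
```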
Kopp, Kristopher J.; Johnson, Amy M.; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2017
An NLP algorithm was developed to assess question quality to inform feedback on questions generated by students within iSTART (an intelligent tutoring system that teaches reading strategies). A corpus of 4575 questions was coded using a four-level taxonomy. NLP indices were calculated for each question and machine learning was used to predict…
Descriptors: Reading Comprehension, Reading Instruction, Intelligent Tutoring Systems, Reading Strategies
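The pipeline the abstract describes, NLP indices as features and supervised learning to predict a four-level quality code, can be illustrated with a generic scikit-learn setup. This is a sketch only, not the iSTART algorithm; the feature matrix, labels, and model choice below are placeholders.

```python
# Illustrative sketch (not the authors' algorithm): predict a four-level
# question-quality code from precomputed NLP indices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are questions, columns are hypothetical NLP indices
# (e.g., word count, question-word type, overlap with the source text).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 200 questions x 5 simulated indices
y = rng.integers(0, 4, size=200)     # four-level quality taxonomy (levels 0-3)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print("Cross-validated accuracy:", scores.mean())
```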
Skalicky, Stephen; Berger, Cynthia M.; Crossley, Scott A.; McNamara, Danielle S. – Advances in Language and Literary Studies, 2016
A corpus of 313 freshman college essays was analyzed in order to better understand the forms and functions of humor in academic writing. Human ratings of humor and wordplay were statistically aggregated using Factor Analysis to provide an overall "Humor" component score for each essay in the corpus. In addition, the essays were also…
Descriptors: Discourse Analysis, Academic Discourse, Humor, Writing (Composition)
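Statistically aggregating several human rating scales into a single component score, as described above, is an ordinary factor-analysis step. A minimal sketch with scikit-learn, using simulated ratings rather than the study's data:

```python
# Minimal sketch: collapse several human rating scales into one factor score.
# The ratings below are simulated; the study's actual scales are not shown.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(313, 1))                        # one underlying trait
ratings = latent @ rng.normal(size=(1, 3)) + 0.5 * rng.normal(size=(313, 3))

fa = FactorAnalysis(n_components=1, random_state=1)
component_scores = fa.fit_transform(ratings)              # one score per essay
print(component_scores.shape)                             # (313, 1)
```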
Crossley, Scott A.; Skalicky, Stephen; Dascalu, Mihai; McNamara, Danielle S.; Kyle, Kristopher – Discourse Processes: A Multidisciplinary Journal, 2017
Research has identified a number of linguistic features that influence the reading comprehension of young readers; yet, less is known about whether and how these findings extend to adult readers. This study examines text comprehension, processing, and familiarity judgment provided by adult readers using a number of different approaches (i.e.,…
Descriptors: Reading Processes, Reading Comprehension, Readability, Adults
Crossley, Scott A.; Kim, YouJin – Language Assessment Quarterly, 2019
The current study examined the effects of text-based relational (i.e., cohesion), propositional-specific (i.e., lexical), and syntactic features in a source text on subsequent integration of the source text in spoken responses. It further investigated the effects of word integration on human ratings of speaking performance while taking into…
Descriptors: Individual Differences, Syntax, Oral Language, Speech Communication
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
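Using automated text-analysis indices to predict human essay ratings, as in this study and several others listed here, typically amounts to a regression over precomputed features. A generic sketch, with the Coh-Metrix and Writing Assessment Tool indices replaced by simulated placeholder columns:

```python
# Generic sketch: regress human essay ratings on automated text indices.
# Feature values and weights are simulated stand-ins, not study data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(480, 6))          # essays x indices (length, cohesion, ...)
true_weights = np.array([0.6, 0.3, 0.0, 0.2, -0.1, 0.4])
y = X @ true_weights + rng.normal(scale=0.5, size=480)   # simulated ratings

model = LinearRegression()
r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
print("Cross-validated R^2:", r2.mean())
```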
Crossley, Scott A.; McNamara, Danielle S. – Journal of Research in Reading, 2012
This study addresses research gaps in predicting second language (L2) writing proficiency using linguistic features. Key to this analysis is the inclusion of linguistic measures at the surface, textbase and situation model level that assess text cohesion and linguistic sophistication. The results of this study demonstrate that five variables…
Descriptors: Writing Instruction, Familiarity, Second Language Learning, Word Frequency
Crossley, Scott A.; Allen, David B.; McNamara, Danielle S. – Reading in a Foreign Language, 2011
Texts are routinely simplified for language learners with authors relying on a variety of approaches and materials to assist them in making the texts more comprehensible. Readability measures are one such tool that authors can use when evaluating text comprehensibility. This study compares the Coh-Metrix Second Language (L2) Reading Index, a…
Descriptors: Readability, Readability Formulas, Word Processing, Psycholinguistics
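The traditional readability measures these comparisons use as a baseline are closed-form functions of word, sentence, and syllable counts. For reference, the standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas; the syllable counter below is a rough vowel-group heuristic, not the canonical procedure:

```python
# Standard Flesch formulas; syllables are approximated by counting vowel groups.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences            # words per sentence
    spw = syllables / len(words)            # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

print(flesch_scores("Texts are routinely simplified for language learners. "
                    "Readability measures help authors evaluate them."))
```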
Crossley, Scott A.; Salsbury, Tom; McNamara, Danielle S.; Jarvis, Scott – Language Testing, 2011
The authors present a model of lexical proficiency based on lexical indices related to vocabulary size, depth of lexical knowledge, and accessibility to core lexical items. The lexical indices used in this study come from the computational tool Coh-Metrix and include word length scores, lexical diversity values, word frequency counts, hypernymy…
Descriptors: Semantics, Familiarity, Second Language Learning, Word Frequency
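Two of the simpler lexical indices named in this abstract, word length and lexical diversity, can be computed directly from a tokenized text; the frequency and hypernymy indices require external resources (corpus counts, WordNet) and are omitted. A minimal sketch:

```python
# Minimal sketch of two lexical indices: mean word length and type-token
# ratio (a basic lexical diversity measure). Frequency and hypernymy
# indices are not shown because they depend on external resources.
import re

def lexical_indices(text):
    tokens = [t.lower() for t in re.findall(r"[A-Za-z']+", text)]
    return {
        "mean_word_length": sum(len(t) for t in tokens) / len(tokens),
        "type_token_ratio": len(set(tokens)) / len(tokens),
    }

print(lexical_indices("The authors present a model of lexical proficiency "
                      "based on indices related to vocabulary size."))
```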
Crossley, Scott A.; Greenfield, Jerry; McNamara, Danielle S. – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2008
Many programs designed to compute the readability of texts are narrowly based on surface-level linguistic features and take too little account of the processes which a reader brings to the text. This study is an exploratory examination of the use of Coh-Metrix, a computational tool that measures cohesion and text difficulty at various levels of…
Descriptors: Reading Comprehension, Readability, Psycholinguistics, Construct Validity