Showing 1 to 15 of 39 results
Peer reviewed
Autman, Hamlet; Kelly, Stephanie – Business and Professional Communication Quarterly, 2017
This article contains two measurement development studies on writing apprehension. Study 1 reexamines the validity of the writing apprehension measure based on the finding from prior research that a second false factor was embedded. The findings from Study 1 support the validity of a reduced measure with 6 items versus the original 20-item…
Descriptors: Writing Apprehension, Writing Tests, Test Validity, Test Reliability
Soohye Yeom – ProQuest LLC, 2023
With the wide introduction of English-medium instruction (EMI) to higher education institutions throughout East Asian countries, many East Asian universities are using English proficiency tests that were not originally designed for this context to make admissions and placement decisions. To support the use of these tests in this new EMI context,…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Writing Tests
Peer reviewed
Youmi Suk; Peter M. Steiner; Jee-Seon Kim; Hyunseung Kang – Society for Research on Educational Effectiveness, 2021
Background/Context: Regression discontinuity (RD) designs are used for policy and program evaluation where subjects' eligibility into a program or policy is determined by whether an assignment variable (i.e., running variable) exceeds a pre-defined cutoff. Under a standard RD design with a continuous assignment variable, the average treatment…
Descriptors: Educational Policy, Eligibility, Cutting Scores, Testing Accommodations
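The abstract above rests on the basic regression-discontinuity logic: treatment status is determined by whether the assignment (running) variable exceeds a pre-defined cutoff, and the effect is estimated as the jump in the outcome at that cutoff. The Python sketch below illustrates that logic with simulated data and a simple local-linear fit on each side of the cutoff; the cutoff, bandwidth, and data-generating values are assumptions for illustration only, not the estimator used in the paper.

# Minimal regression-discontinuity sketch (illustration only):
# subjects are "treated" when the assignment variable exceeds a cutoff,
# and the effect is estimated as the jump in the outcome at the cutoff.
import numpy as np

rng = np.random.default_rng(0)
cutoff, bandwidth, true_effect = 0.0, 0.5, 2.0       # assumed values

x = rng.uniform(-2, 2, 2000)                          # assignment (running) variable
treated = (x >= cutoff).astype(float)                 # eligibility rule
y = 1.0 + 0.8 * x + true_effect * treated + rng.normal(0, 1, x.size)

def value_at_cutoff(mask):
    # Local linear fit on one side; the intercept is the fitted value at the cutoff.
    slope, intercept = np.polyfit(x[mask] - cutoff, y[mask], 1)
    return intercept

near = np.abs(x - cutoff) <= bandwidth
effect = value_at_cutoff(near & (x >= cutoff)) - value_at_cutoff(near & (x < cutoff))
print(f"estimated treatment effect at the cutoff: {effect:.2f}")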
Peer reviewed
Plakans, Lia; Gebril, Atta; Bilki, Zeynep – Language Testing, 2019
The present study investigates integrated writing assessment performances with regard to the linguistic features of complexity, accuracy, and fluency (CAF). Given the increasing presence of integrated tasks in large-scale and classroom assessments, validity evidence is needed for the claim that their scores reflect targeted language abilities.…
Descriptors: Accuracy, Language Tests, Scores, Writing Evaluation
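Complexity, accuracy, and fluency (CAF) are usually operationalized as concrete text measures. As a rough illustration of the idea only, the Python snippet below computes two crude proxies (fluency as total words produced, syntactic complexity as mean sentence length); accuracy is omitted because it normally requires error annotation, and none of these proxies are the measures used in the study above.

# Crude CAF-style proxies for an essay (illustrative assumptions only).
# Accuracy is left out: it typically needs manual or parser-based error counts.
import re

def caf_proxies(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "fluency_word_count": len(words),
        "complexity_mean_sentence_length": len(words) / max(len(sentences), 1),
    }

essay = "Integrated tasks require reading and writing. Scores should reflect language ability."
print(caf_proxies(essay))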
Peer reviewed
Barkaoui, Khaled – Language Testing, 2019
This study aimed to examine the sources of variability in the second-language (L2) writing scores of test-takers who repeated an English language proficiency test, the Pearson Test of English (PTE) Academic, multiple times. Examining repeaters' test scores can provide important information concerning factors contributing to "changes" in…
Descriptors: Second Language Learning, Writing Tests, Scores, English (Second Language)
Peer reviewed
Hanif, Maria; Khan, Tamim Ahmed; Masroor, Uzma; Amjad, Amira – Cogent Education, 2017
An achievement test is a mechanism for measuring a student's knowledge and abilities. Numerous categories of achievement tests have been developed by different scholars and psychologists. Because these tests do not directly consider the curriculum that students followed during their course of study, they do not truly reflect students' achievements. We propose an…
Descriptors: Achievement Tests, Computer Assisted Testing, Curriculum Based Assessment, Reading Achievement
Peer reviewed
Llosa, Lorena; Malone, Margaret E. – Language Testing, 2019
Investigating the comparability of students' performance on TOEFL writing tasks and actual academic writing tasks is essential to provide backing for the extrapolation inference in the TOEFL validity argument (Chapelle, Enright, & Jamieson, 2008). This study compared 103 international non-native-English-speaking undergraduate students'…
Descriptors: Computer Assisted Testing, Language Tests, English (Second Language), Second Language Learning
Peer reviewed
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M. – ETS Research Report Series, 2015
Automated scoring models were trained and evaluated for the essay task in the "Praxis I"® writing test. Prompt-specific and generic "e-rater"® scoring models were built, and evaluation statistics, such as quadratic weighted kappa, Pearson correlation, and standardized differences in mean scores, were examined to evaluate the…
Descriptors: Writing Tests, Licensing Examinations (Professions), Teacher Competency Testing, Scoring
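The evaluation statistics named above (quadratic weighted kappa, Pearson correlation, and standardized differences in mean scores) can all be computed directly from paired human and machine scores. A minimal Python sketch with toy data follows; the pooled-standard-deviation scaling of the mean difference is an assumption about how the difference is standardized, not necessarily the report's convention.

# Agreement statistics for paired human vs. automated essay scores (toy data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human   = np.array([3, 4, 2, 5, 4, 3, 4, 2, 5, 3])
machine = np.array([3, 4, 3, 5, 4, 3, 5, 2, 4, 3])

qwk = cohen_kappa_score(human, machine, weights="quadratic")
r, _ = pearsonr(human, machine)
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + machine.std(ddof=1) ** 2) / 2)
smd = (machine.mean() - human.mean()) / pooled_sd    # assumed pooled-SD scaling

print(f"quadratic weighted kappa: {qwk:.3f}")
print(f"Pearson correlation:      {r:.3f}")
print(f"standardized mean diff:   {smd:.3f}")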
Peer reviewed
Staples, Shelley; Biber, Douglas; Reppen, Randi – Modern Language Journal, 2018
One of the central considerations in the validity argument for the TOEFL iBT is the relationship between the language on the exam and the language required for university courses. Corpus linguistics has recently been shown to be an effective way to explore this relationship, which can also be considered as an aspect of authenticity. Applying…
Descriptors: Computational Linguistics, Computer Assisted Testing, English (Second Language), Language Tests
Peer reviewed
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Peer reviewed
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
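A check score, as described above, means the engine's score is used to flag a single human rating rather than to replace it: a second human rating is requested only when the human and engine scores disagree beyond some threshold. The Python sketch below illustrates that general adjudication logic; the threshold and the resolution rule are assumptions for illustration, not the operational GRE scoring rules.

# Generic check-score adjudication sketch (assumed threshold and resolution rule).
DISCREPANCY_THRESHOLD = 1.0   # assumed maximum tolerated human/engine gap

def reported_score(human, engine, second_human=None):
    # The engine serves only as a check on the single human rating.
    if abs(human - engine) <= DISCREPANCY_THRESHOLD:
        return human                        # scores agree: report the human score
    if second_human is None:
        raise ValueError("discrepancy: a second human rating is required")
    return (human + second_human) / 2       # adjudicate with the extra human rating

print(reported_score(4.0, 4.5))             # within threshold -> 4.0
print(reported_score(4.0, 6.0, 5.0))        # discrepant -> adjudicated 4.5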
Peer reviewed
Martínez-Fernández, J. R.; Corcelles, M.; Bañales, G.; Castelló, M.; Gutiérrez-Braojos, C. – Electronic Journal of Research in Educational Psychology, 2016
Introduction: In this study, the conceptions of learning and writing of a group of undergraduates enrolled in a teacher education programme were identified. The relationships between them were analysed, and a set of patterns of beliefs about learning and writing was defined. Finally, the relation between these patterns and the quality of a text…
Descriptors: Foreign Countries, Undergraduate Students, Preservice Teachers, Teacher Education Programs
Peer reviewed
Zaidin, M. Arifin – Journal of Education and Practice, 2015
The purpose of this study is to assess the correlation between tutor-related factors and students' basic writing outcomes in the Elementary School Teacher Education programme at the Distance Learning Program Unit, Open University of Palu. It is an ex post facto correlational study drawing a total sample of 100 people from a population of 387. This…
Descriptors: Tutors, Writing Ability, Correlation, Distance Education
Peer reviewed
Zou, Xiao-Ling; Chen, Yan-Min – Technology, Pedagogy and Education, 2016
The effects of computer-based and paper-based test media on the writing scores and cognitive writing processes of EFL test-takers with differing computer familiarity have been comprehensively explored from the learners' perspective as well as in light of related theory and practice. The results indicate significant differences in test scores among the…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Test Format
Peer reviewed
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
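One common way to probe the "added value" of a trait score is to ask how much extra variance in some external criterion it explains once the total score is accounted for. The Python snippet below shows that incremental R-squared idea on simulated data; it is a generic illustration under assumed data, not the psychometric analysis used in the report above.

# Incremental R^2 of a trait score beyond the total score (simulated data).
import numpy as np

rng = np.random.default_rng(1)
n = 500
total = rng.normal(0, 1, n)                    # total automated score
trait = 0.7 * total + rng.normal(0, 1, n)      # e.g., a word-choice trait score
criterion = 0.6 * total + 0.3 * trait + rng.normal(0, 1, n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])  # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

base = r_squared(total.reshape(-1, 1), criterion)
full = r_squared(np.column_stack([total, trait]), criterion)
print(f"incremental R^2 from the trait score: {full - base:.3f}")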