Showing 1 to 15 of 93 results
Peer reviewed
Hakyung Sung; Sooyeon Cho; Kristopher Kyle – Language Assessment Quarterly, 2024
Lexical diversity (LD) is an important indicator of second language lexical development. Much research has investigated LD indices, with a focus on learners of English. However, further research is needed in languages that are typologically distinct from English, such as Korean. In this study, we evaluated the reliability and validity of LD…
Descriptors: Second Language Learning, Korean, Persuasive Discourse, Language Tests
Peer reviewed
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities enlarging provision and more students getting enrolled. The effectiveness of automated essay scoring (AES) thus holds strong appeal for universities seeking to manage increasing learning interest and reduce the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Peer reviewed
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Peer reviewed
Stuart, N. J.; Barnett, A. L. – British Journal of Special Education, 2023
Students in higher education (HE) are required to complete a variety of writing tasks for coursework and examinations. However, for some students writing presents a major challenge. In the UK, the availability of tools for specialist assessors to help identify difficulties with the quality of written composition is limited. The aim of this study…
Descriptors: Writing Skills, Writing Difficulties, Writing (Composition), Test Construction
Peer reviewed
Yu-Tzu Chang; Ann Tai Choe; Daniel Holden; Daniel R. Isbell – Language Testing, 2024
In this Brief Report, we describe an evaluation of and revisions to a rubric adapted from Jacobs et al.'s (1981) ESL COMPOSITION PROFILE, with four rubric categories and 20-point rating scales, in the context of an intensive English program writing placement test. Analysis of 4 years of rating data (2016-2021, including 434 essays) using…
Descriptors: Language Tests, Rating Scales, Second Language Learning, English (Second Language)
Peer reviewed
PDF on ERIC
Abdullah Alshakhi – Educational Process: International Journal, 2025
Background/purpose: Writing is an essential skill for EFL learners, and the development of flawless writing proficiency in English is the intended learner outcome of every writing course in an EFL program. Development of flawless writing skills involves perfection in orthography (spelling, punctuation, capitalization), grammaticality and syntax,…
Descriptors: Foreign Countries, Language Teachers, English (Second Language), Second Language Learning
Peer reviewed
PDF on ERIC
Deniz, Kaan Zulfikar; Ilican, Emel – International Journal of Assessment Tools in Education, 2021
This study aims to compare the G and Phi coefficients estimated by D studies for a measurement tool with the G and Phi coefficients obtained from real cases in which items of differing difficulty levels were added, and to determine the conditions under which the D studies estimated reliability coefficients closer to reality. The study group…
Descriptors: Generalizability Theory, Test Items, Difficulty Level, Test Reliability
Phuong Thi Tuyet Nguyen – ProQuest LLC, 2021
Source-based writing, in which writers read or listen to academic content before writing, has been considered to better assess academic writing skills than independent writing tasks (Read, 1990; Weigle, 2004). Because scores resulting from ratings of test takers' source-based writing task responses are treated as indicators of their academic…
Descriptors: Test Reliability, Test Validity, Writing Tests, Academic Language
Peer reviewed
Lestari, Santi B.; Brunfaut, Tineke – Language Testing, 2023
Assessing integrated reading-into-writing task performances is known to be challenging, and analytic rating scales have been found to better facilitate the scoring of these performances than other common types of rating scales. However, little is known about how specific operationalizations of the reading-into-writing construct in analytic rating…
Descriptors: Reading Writing Relationship, Writing Tests, Rating Scales, Writing Processes
Peer reviewed
PDF on ERIC
Huebner, Alan; Skar, Gustaf B. – Practical Assessment, Research & Evaluation, 2021
Writing assessments often consist of students responding to multiple prompts, which are judged by more than one rater. To establish the reliability of these assessments, there exist different methods to disentangle variation due to prompts and raters, including classical test theory, Many Facet Rasch Measurement (MFRM), and Generalizability Theory…
Descriptors: Error of Measurement, Test Theory, Generalizability Theory, Item Response Theory
Peer reviewed
PDF on ERIC
Merchant, Stefan; Rich, Jessica; Klinger, Don A. – Canadian Journal of Educational Administration and Policy, 2022
Both school and district administrators use the results of standardized, large-scale tests to inform decisions about the need for, or success of, educational programs and interventions. However, test results at the school level are subject to random fluctuations due to changes in cohort, test items, and other factors outside of the school's…
Descriptors: Standardized Tests, Foreign Countries, Generalizability Theory, Scores
Smith, R. Alex; Lembke, Erica S. – Assessment for Effective Intervention, 2021
This study examined the technical adequacy of Picture Word, a type of Writing Curriculum-Based Measurement, with 73 English learners (ELs) with beginning to intermediate English language proficiency in Grades 1, 2, and 3. The ELs in this study attended schools in one midwestern U.S. school district employing an English-only model of instruction…
Descriptors: Writing Tests, English Language Learners, Elementary School Students, Grade 1
Peer reviewed
Wind, Stefanie A. – Journal of Educational Measurement, 2019
Numerous researchers have proposed methods for evaluating the quality of rater-mediated assessments using nonparametric methods (e.g., kappa coefficients) and parametric methods (e.g., the many-facet Rasch model). Generally speaking, popular nonparametric methods for evaluating rating quality are not based on a particular measurement theory. On…
Descriptors: Nonparametric Statistics, Test Validity, Test Reliability, Item Response Theory
Gary A. Troia; Frank R. Lawrence; Julie S. Brehmer; Kaitlin Glause; Heather L. Reichmuth – Grantee Submission, 2023
Much of the research that has examined the writing knowledge of school-age students has relied on interviews to ascertain this information, which is problematic because interviews may underestimate breadth and depth of writing knowledge, require lengthy interactions with participants, and do not permit a direct evaluation of a prescribed array of…
Descriptors: Writing Tests, Writing Evaluation, Knowledge Level, Elementary School Students
Peer reviewed
Allen, Abigail A.; Jung, Pyung-Gang; Poch, Apryl L.; Brandes, Dana; Shin, Jaehyun; Lembke, Erica S.; McMaster, Kristen L. – Reading & Writing Quarterly, 2020
The purpose of this study was to investigate evidence of reliability, criterion validity, and grade-level differences of curriculum-based measures of writing (CBM-W) with 612 students in grades 1-3. Four scoring procedures (words written, words spelled correctly, correct word sequences, and correct minus incorrect word sequences) were used with…
Descriptors: Curriculum Based Assessment, Writing Tests, Test Reliability, Test Validity