Erling, Elizabeth J.; Richardson, John T. E. – Assessing Writing, 2010
Measuring the Academic Skills of University Students is a procedure developed in the 1990s at the University of Sydney's Language Centre to identify students in need of academic writing development by assessing examples of their written work against five criteria. This paper reviews the literature relating to the development of the procedure with…
Descriptors: Foreign Countries, Writing Evaluation, Assignments, Psychometrics
Gebril, Atta – Assessing Writing, 2010
Integrated tasks are currently employed in a number of L2 exams since they are perceived as an addition to the writing-only task type. Given this trend, the current study investigates composite score generalizability of both reading-to-write and writing-only tasks. For this purpose, a multivariate generalizability analysis is used to investigate…
Descriptors: Scoring, Scores, Second Language Instruction, Writing Evaluation
Burgin, John; Hughes, Gail D. – Assessing Writing, 2009
The authors explored the credibility of using informal reading inventories and writing samples for 138 students (K-4) to evaluate the effectiveness of a summer literacy program. Running Records (a measure of a child's reading level) and teacher experience during daily reading instruction were used to estimate the reliability of the more formal…
Descriptors: Informal Reading Inventories, Multiple Choice Tests, Program Effectiveness, Scoring
Sudweeks, Richard R.; Reeve, Suzanne; Bradshaw, William S. – Assessing Writing, 2004
A pilot study was conducted to evaluate and improve the rating procedure proposed for use in a research effort designed to assess the essay writing ability of college sophomores. Generalizability theory and the Many-Facet Rasch Model were each used to (a) estimate potential sources of error in the rating, (b) obtain reliability estimates, and…
Descriptors: Generalizability Theory, College Students, Writing Ability, Writing Evaluation
Slomp, David H.; Fuite, Jim – Assessing Writing, 2004
Specialists in the field of large-scale, high-stakes writing assessment have, over the last forty years, alternately discussed the issue of maximizing either reliability or validity in test design. Factors complicating the debate--such as Messick's (1989) expanded definition of validity and the ethical implications of testing--are explored. An…
Descriptors: Information Theory, Writing Evaluation, Writing Tests, Test Validity
Brown, Gavin T. L.; Glasswell, Kath; Harland, Don – Assessing Writing, 2004
Accuracy in the scoring of writing is critical if standardized tasks are to be used in a national assessment scheme. Three approaches to establishing accuracy (i.e., consensus, consistency, and measurement) exist, and large-scale assessment programs of primary school writing commonly demonstrate adjacent-agreement consensus rates of between 80% and…
Descriptors: Writing Evaluation, Student Evaluation, Educational Assessment, Writing Tests