Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 30 |
| Since 2022 (last 5 years) | 187 |
| Since 2017 (last 10 years) | 536 |
| Since 2007 (last 20 years) | 1330 |
Author
| Author | Records |
| --- | --- |
| Deane, Paul | 12 |
| Engelhard, George, Jr. | 11 |
| Graham, Steve | 11 |
| Lee, Yong-Won | 11 |
| Attali, Yigal | 9 |
| Bridgeman, Brent | 9 |
| Powers, Donald E. | 9 |
| Kantor, Robert | 8 |
| McMaster, Kristen L. | 8 |
| Thurlow, Martha L. | 8 |
| Wind, Stefanie A. | 8 |
Audience
| Audience | Records |
| --- | --- |
| Practitioners | 130 |
| Teachers | 96 |
| Policymakers | 49 |
| Administrators | 22 |
| Students | 13 |
| Researchers | 12 |
| Parents | 4 |
| Counselors | 2 |
Location
| Location | Records |
| --- | --- |
| Canada | 53 |
| Iran | 52 |
| China | 38 |
| California | 34 |
| Texas | 31 |
| Florida | 26 |
| Australia | 25 |
| Georgia | 25 |
| Indonesia | 25 |
| Saudi Arabia | 25 |
| Turkey | 22 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 2 |
| Does Not Meet WWC Standards | 3 |
Breland, Hunter M.; Kubota, Melvin Y.; Bonner, Marilyn W. – College Entrance Examination Board, 1999
This study examined the SAT® II: Writing Subject Test as a predictor of writing performance in college English courses. Writing performance was based on eight writing samples submitted as part of regular course work by students in eight colleges. The samples consisted of drafts and final papers submitted in response to four take-home writing…
Descriptors: College Entrance Examinations, Writing Tests, College English, Predictive Validity
Camara, Wayne J. – College Entrance Examination Board, 2003
Previous research on differences in the reliability, validity, and difficulty of essay tests given under different timing conditions has indicated that giving examinees more time to complete an essay may raise their scores to a certain extent, but does not change the meaning of those scores, or the rank ordering of students. There is no evidence…
Descriptors: Essays, Comparative Analysis, Writing Tests, Timed Tests
Parker, Richard; And Others – Exceptionality: A Research Journal, 1991
Five countable indices of writing quality were examined for their suitability in making special education screening-eligibility decisions, based on writing samples from 2,160 students. Even the best index was only moderately efficient. "Percentage of words spelled correctly" was best overall, and "percentage of correct word sequences" was best for…
Descriptors: Decision Making, Elementary Secondary Education, Eligibility, Evaluation Methods
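The two best-performing indices named in the Parker et al. abstract are countable directly from a writing sample. The sketch below is a simplified illustration, not the study's scoring procedure: the small `KNOWN_WORDS` set stands in for a full lexicon, and the "correct word sequences" proxy checks only spelling of adjacent word pairs, whereas the actual index also weighs grammatical acceptability.

```python
# Two countable writing-quality indices: "percentage of words spelled
# correctly" and a simplified "percentage of correct word sequences"
# (adjacent word pairs in which both words are acceptable).
# KNOWN_WORDS is a toy stand-in for a real spelling lexicon.

KNOWN_WORDS = {"the", "dog", "ran", "fast", "and", "barked"}

def pct_words_spelled_correctly(words):
    """Share of words found in the lexicon, as a percentage."""
    if not words:
        return 0.0
    correct = sum(1 for w in words if w.lower() in KNOWN_WORDS)
    return 100.0 * correct / len(words)

def pct_correct_word_sequences(words):
    """Share of adjacent word pairs in which both words are correctly
    spelled -- a rough proxy; the real index also checks grammar."""
    if len(words) < 2:
        return 0.0
    pairs = list(zip(words, words[1:]))
    ok = sum(1 for a, b in pairs
             if a.lower() in KNOWN_WORDS and b.lower() in KNOWN_WORDS)
    return 100.0 * ok / len(pairs)

sample = "the dog ran fsat and barked".split()
print(pct_words_spelled_correctly(sample))   # 5 of 6 words known
print(pct_correct_word_sequences(sample))    # 3 of 5 pairs fully known
```

Even this toy version shows why such indices are only "moderately efficient" screeners: a single misspelling penalizes the sequence index twice, once for each pair it touches.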
Popham, W. James – School Administrator, 2000
Certain that students' average standardized test scores did not reflect their faculty's teaching effectiveness, Department of Defense dependent schools in Wuerzburg, Germany, mounted a districtwide experiment involving performance tests and rubrics in writing and science. Teachers documented gains in students' high-level writing and science skill…
Descriptors: Achievement Gains, Dependents Schools, Elementary Secondary Education, Foreign Countries
Puma, Michael; Tarkow, Allison; Puma, Anna – Grantee Submission, 2007
Background: This study evaluated the impact on student's writing ability of a structured writing program, called "Writing Wings," for 3rd, 4th, and 5th graders developed by the Success For All Foundation (SFAF). Writing is a critical skill for success in school. Purpose: The study was intended to answer one confirmatory question,…
Descriptors: Control Groups, Research Design, Childrens Writing, Writing Attitudes
Norris, Dwayne; Oppler, Scott; Kuang, Daniel; Day, Rachel; Adams, Kimberly – College Board, 2006
This study assessed the predictive and incremental validity of a prototype version of the forthcoming SAT® writing section that was administered to a sample of incoming students at 13 colleges and universities. For these participants, SAT scores, high school GPA, and first-year grades also were obtained. Using these data, analyses were conducted…
Descriptors: College Entrance Examinations, Writing Tests, Predictive Validity, Test Validity
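Incremental validity of the kind the Norris et al. study assessed is conventionally measured as the gain in variance explained when the new predictor joins the existing ones. The sketch below uses entirely synthetic data (the study's sample is not public) and ordinary least squares; it only illustrates the comparison of nested models.

```python
# Hedged sketch of an incremental-validity check: does adding a writing
# score to high school GPA and an existing SAT composite raise the
# variance explained (R^2) in first-year college GPA?
# All data below are synthetic, generated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 500
hs_gpa = rng.normal(3.2, 0.4, n)
sat_vm = rng.normal(1100, 150, n)          # verbal + math composite
writing = 0.5 * sat_vm / 150 + rng.normal(0, 1, n)
fygpa = (0.8 * hs_gpa + 0.002 * sat_vm + 0.08 * writing
         + rng.normal(0, 0.3, n))

def r_squared(X, y):
    """R^2 of an OLS fit, with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(np.column_stack([hs_gpa, sat_vm]), fygpa)
r2_full = r_squared(np.column_stack([hs_gpa, sat_vm, writing]), fygpa)
print(f"baseline R^2 = {r2_base:.3f}, with writing = {r2_full:.3f}, "
      f"increment = {r2_full - r2_base:.3f}")
```

In-sample R² can never decrease when a predictor is added, so the substantive question is whether the increment is large enough to matter, which is what validity studies like this one evaluate.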
Mendez, Gilbert – Multicultural Perspectives, 2006
This article discusses an approach to teaching used at Calexico Unified School District, a California-Mexican border high school, by a group of teachers working to make teaching and learning more relevant to Chicano and Mexican students' lives and to improve their academic achievement in writing. An off-shoot of a training program for English…
Descriptors: Writing Evaluation, Language Arts, Writing Tests, Academic Achievement
Armstrong, William B. – 1994
As part of an effort to statistically validate the placement tests used in California's San Diego Community College District (SDCCD) a study was undertaken to review the criteria- and content-related validity of the Assessment and Placement Services (APS) reading and writing tests. Evidence of criteria and content validity was gathered from…
Descriptors: Academic Achievement, College English, Community Colleges, Correlation
Maryland State Dept. of Education. Baltimore. Div. of Planning, Results and Information Management. – 1995
This score interpretation guide is designed to help administrators and classroom teachers understand the scores and scales of the 1994 and Beyond Maryland School Performance Assessment Program (MSPAP). MSPAP scale scores indicate a school's level of performance in the content areas of reading, writing, language usage, mathematics, science, and…
Descriptors: Achievement Tests, Elementary Secondary Education, Language Usage, Mathematics Tests
Jett, Daniel L.; Schafer, William D. – 1993
A study examined Maryland high school teachers' attitudes toward specific individual characteristics of the Maryland Writing Test as well as their overall attitudes toward the test. Subjects, 538 English/language arts, mathematics, science, and social studies teachers in Maryland's 176 public high schools, responded to a survey designed to elicit…
Descriptors: English Teachers, High Schools, High Stakes Tests, Performance Based Assessment
Austin Independent School District, TX. Office of Program Evaluation. – 1998
The Texas Assessment of Academic Skills (TAAS) is a state-mandated, criterion-referenced or mastery test that has been administered since the 1990-91 school year. The TAAS measures student mastery of the statewide curriculum in reading and mathematics at grades 3 through 8 and at the exit level, and in writing at grades 4 and 8 and at the exit…
Descriptors: Academic Achievement, Accountability, Achievement Gains, Achievement Tests
Educational Testing Service, Princeton, NJ. Test Collection. – 1992
The 161 tests cited in this bibliography are for all age and grade levels. Many of the instruments are test batteries in which writing skills are measured in a subtest. The tests may contain objective, multiple-choice or essay-type questions. The tests included here are in English and for native-speakers. This document is one in a series of…
Descriptors: Achievement Tests, Annotated Bibliographies, Basic Skills, College Entrance Examinations
Haberman, Shelby J. – ETS Research Report Series, 2004
Statistical and measurement properties are examined for features used in essay assessment to determine the generalizability of the features across populations, prompts, and individuals. Data are employed from TOEFL® and GMAT® examinations and from writing for Criterion®.
Descriptors: Language Tests, English (Second Language), Second Language Learning, Business Administration Education
Djiwandono, M. Soenardi – 1990
In Indonesia, Bahasa Indonesia (BI) is the designated national and official language. However, deficiencies in Indonesian proficiency are found in a wide range of individuals. A test battery to measure proficiency level was developed, consisting of a writing test, a grammar test, and a cloze test. The writing test was an essay, in which five…
Descriptors: Cloze Procedure, College Faculty, Comparative Analysis, Foreign Countries
Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg – Online Submission, 2005
Year-to-year rater variation may result in constructed response (CR) parameter changes, making CR items inappropriate to use in anchor sets for linking or equating. This study demonstrates how rater severity affected the writing and reading scores. Rater adjustments were made to statewide results using an item response theory (IRT) methodology…
Descriptors: Test Items, Writing Tests, Reading Tests, Measures (Individuals)
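The Yen et al. abstract describes adjusting statewide scores for year-to-year rater severity with an IRT (many-facet Rasch) methodology. The sketch below is a much looser stand-in: under the strong assumption that essays are randomly assigned to raters, each rater's severity is approximated by the deviation of that rater's mean score from the overall mean, and adjusted scores subtract it out. The data and rater IDs are invented.

```python
# Loose sketch of a rater-severity adjustment. The study used an IRT
# (many-facet Rasch) model; here severity is approximated, under random
# assignment of essays to raters, as each rater's mean deviation from
# the overall mean score. All ratings below are synthetic.
from collections import defaultdict

ratings = [            # (rater_id, raw_score) pairs
    ("r1", 4), ("r1", 3), ("r1", 4),
    ("r2", 2), ("r2", 3), ("r2", 2),   # r2 looks harsher
]

overall = sum(s for _, s in ratings) / len(ratings)

by_rater = defaultdict(list)
for rater, score in ratings:
    by_rater[rater].append(score)

# Positive severity = lenient rater; negative = harsh rater.
severity = {r: sum(ss) / len(ss) - overall for r, ss in by_rater.items()}

# Adjusted score = raw score minus the rater's estimated severity,
# so harsh raters' examinees get their scores raised and vice versa.
adjusted = [(r, s - severity[r]) for r, s in ratings]
print(severity)
print(adjusted)
```

A real equating study estimates severity jointly with examinee ability and task difficulty, which is exactly why drifting rater behavior makes constructed-response items risky as anchor items, as the abstract notes.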
