Gehsmann, Kristin; Spichtig, Alexandra; Tousley, Elias – Literacy Research: Theory, Method, and Practice, 2017
Assessments of developmental spelling, also called spelling inventories, are commonly used to understand students' orthographic knowledge (i.e., knowledge of how written words work) and to determine their stages of spelling and reading development. The information generated by these assessments is used to inform teachers' grouping practices and…
Descriptors: Spelling, Computer Assisted Testing, Grouping (Instructional Purposes), Teaching Methods
Hoang, Giang Thi Linh; Kunnan, Antony John – Language Assessment Quarterly, 2016
Computer technology made its way into writing instruction and assessment decades ago with spelling and grammar checkers, and more recently with automated essay evaluation (AEE) and diagnostic feedback. Although many programs and tools have been developed in the last decade, not enough research has been conducted to support or…
Descriptors: Case Studies, Essays, Writing Evaluation, English (Second Language)
Carr, Nathan T.; Xi, Xiaoming – Language Assessment Quarterly, 2010
This article examines how the use of automated scoring procedures for short-answer reading tasks can affect the constructs being assessed. In particular, it highlights ways in which developing scoring algorithms intended to apply the criteria used by human raters can lead test developers to reexamine and even refine the constructs they…
Descriptors: Scoring, Automation, Reading Tests, Test Format