Publication Date
In 2025: 1
Since 2024: 2
Since 2021 (last 5 years): 7
Since 2016 (last 10 years): 13
Since 2006 (last 20 years): 24
Descriptor
Test Validity: 35
Writing Tests: 35
Test Reliability: 17
Computer Assisted Testing: 14
English (Second Language): 13
Writing Evaluation: 12
Language Tests: 11
Foreign Countries: 9
Reading Tests: 8
Scores: 8
Correlation: 7
Audience
Researchers: 2
Teachers: 2
Policymakers: 1
Practitioners: 1
Laws, Policies, & Programs
Assessments and Surveys
What Works Clearinghouse Rating
New York State Education Department, 2024
The New York State Education Department (NYSED) has a partnership with NWEA for the development of the 2024 Grades 3-8 English Language Arts Tests. Teachers from across the State work with NYSED in a variety of activities to ensure the validity and reliability of the New York State Testing Program (NYSTP). The 2024 Grades 6 and 7 English Language…
Descriptors: Language Tests, Test Format, Language Arts, English Instruction
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Choi, Yun Deok – Language Testing in Asia, 2022
A much-debated question in the L2 assessment field is if computer familiarity should be considered a potential source of construct-irrelevant variance in computer-based writing (CBW) tests. This study aims to make a partial validity argument for an online source-based writing test (OSWT) designed for English placement testing (EPT), focusing on…
Descriptors: Test Validity, Scores, Computer Assisted Testing, English (Second Language)
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Autman, Hamlet; Kelly, Stephanie – Business and Professional Communication Quarterly, 2017
This article contains two measurement development studies on writing apprehension. Study 1 reexamines the validity of the writing apprehension measure based on the finding from prior research that a second false factor was embedded. The findings from Study 1 support the validity of a reduced measure with 6 items versus the original 20-item…
Descriptors: Writing Apprehension, Writing Tests, Test Validity, Test Reliability
Rogers, Christopher M.; Thurlow, Martha L.; Lazarus, Sheryl S.; Liu, Kristin K. – National Center on Educational Outcomes, 2019
The purpose of this report is to present a synthesis of the research on test accommodations published in 2015 and 2016. We summarize the research to review current research trends and enhance understanding of the implications of accommodations use in the development of future policy directions, to highlight implementation of current and new…
Descriptors: Testing Accommodations, Students with Disabilities, Elementary Secondary Education, Postsecondary Education
College Board, 2023
Over the past several years, content experts, psychometricians, and researchers have been hard at work developing, refining, and studying the digital SAT. The work is grounded in foundational best practices and advances in measurement and assessment design, with fairness for students informing all of the work done. This paper shares learnings from…
Descriptors: College Entrance Examinations, Psychometrics, Computer Assisted Testing, Best Practices
Bastianello, Tamara; Brondino, Margherita; Persici, Valentina; Majorano, Marinella – Journal of Research in Childhood Education, 2023
The present contribution aims at presenting an assessment tool (i.e., the TALK-assessment) built to evaluate the language development and school readiness of Italian preschoolers before they enter primary school, and its predictive validity for the children's reading and writing skills at the end of the first year of primary school. The early…
Descriptors: Literacy, Computer Assisted Testing, Italian, Language Acquisition
Kyle, Kristopher; Choe, Ann Tai; Eguchi, Masaki; LaFlair, Geoff; Ziegler, Nicole – ETS Research Report Series, 2021
A key piece of a validity argument for a language assessment tool is clear overlap between assessment tasks and the target language use (TLU) domain (i.e., the domain description inference). The TOEFL 2000 Spoken and Written Academic Language (T2K-SWAL) corpus, which represents a variety of academic registers and disciplines in traditional…
Descriptors: Comparative Analysis, Second Language Learning, English (Second Language), Language Tests
Plakans, Lia; Gebril, Atta; Bilki, Zeynep – Language Testing, 2019
The present study investigates integrated writing assessment performances with regard to the linguistic features of complexity, accuracy, and fluency (CAF). Given the increasing presence of integrated tasks in large-scale and classroom assessments, validity evidence is needed for the claim that their scores reflect targeted language abilities.…
Descriptors: Accuracy, Language Tests, Scores, Writing Evaluation
Barkaoui, Khaled – Language Testing, 2019
This study aimed to examine the sources of variability in the second-language (L2) writing scores of test-takers who repeated an English language proficiency test, the Pearson Test of English (PTE) Academic, multiple times. Examining repeaters' test scores can provide important information concerning factors contributing to "changes" in…
Descriptors: Second Language Learning, Writing Tests, Scores, English (Second Language)
Ling, Guangming – Language Assessment Quarterly, 2017
To investigate whether the type of keyboard used in exams introduces any construct-irrelevant variance to the TOEFL iBT Writing scores, we surveyed 17,040 TOEFL iBT examinees from 24 countries on their keyboard-related perceptions and preferences and analyzed the survey responses together with their test scores. Results suggest that controlling…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Writing Tests
Staples, Shelley; Biber, Douglas; Reppen, Randi – Modern Language Journal, 2018
One of the central considerations in the validity argument for the TOEFL iBT is the relationship between the language on the exam and the language required for university courses. Corpus linguistics has recently been shown to be an effective way to explore this relationship, which can also be considered as an aspect of authenticity. Applying…
Descriptors: Computational Linguistics, Computer Assisted Testing, English (Second Language), Language Tests
Heldsinger, Sandra; Humphry, Stephen – Australian Educational Researcher, 2010
Demands for accountability have seen the implementation of large scale testing programs in Australia and internationally. There is, however, a growing body of evidence to show that externally imposed testing programs do not have a sustained impact on student achievement. It has been argued that teacher assessment is more effective in raising…
Descriptors: Testing Programs, Testing, Academic Achievement, Measures (Individuals)
Olinghouse, Natalie G.; Zheng, Jinjie; Morlock, Larissa – Reading & Writing Quarterly, 2012
This study evaluated large-scale state writing assessments for the inclusion of motivational characteristics in the writing task and written prompt. We identified 6 motivational variables from the authentic activity literature: time allocation, audience specification, audience intimacy, definition of task, allowance for multiple perspectives, and…
Descriptors: Writing Evaluation, Writing Tests, Writing Achievement, Audiences