Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 18
  Since 2016 (last 10 years): 41
  Since 2006 (last 20 years): 58
Descriptor
  Computer Assisted Testing: 80
  Test Validity: 80
  Scoring: 67
  Test Reliability: 37
  Test Construction: 28
  Language Tests: 17
  Correlation: 16
  Test Items: 16
  English (Second Language): 14
  Second Language Learning: 14
  Foreign Countries: 13
Author
  Bejar, Isaac I.: 3
  Anna-Maria Fall: 2
  Beula M. Magimairaj: 2
  Greg Roberts: 2
  Philip Capin: 2
  Rock, Donald A.: 2
  Ronald B. Gillam: 2
  Sandra L. Gillam: 2
  Sharon Vaughn: 2
  Stricker, Lawrence J.: 2
  Weiss, David J.: 2
Education Level
  Elementary Education: 14
  Secondary Education: 12
  Higher Education: 11
  Junior High Schools: 9
  Middle Schools: 9
  Postsecondary Education: 7
  Grade 5: 4
  Grade 8: 4
  Elementary Secondary Education: 3
  Grade 4: 3
  Grade 6: 3
Audience
  Administrators: 6
  Researchers: 2
  Practitioners: 1
  Teachers: 1
Location
  New York: 6
  Nebraska: 3
  California: 2
  Iran: 2
  Israel: 2
  United Kingdom: 2
  Australia: 1
  Philippines: 1
  Turkey: 1
  United States: 1
  Vietnam: 1
Laws, Policies, & Programs
  Family Educational Rights and…: 1
  Health Insurance Portability…: 1
  Individuals with Disabilities…: 1
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer sciences, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Rafner, Janet; Biskjaer, Michael Mose; Zana, Blanka; Langsford, Steven; Bergenholtz, Carsten; Rahimi, Seyedahmad; Carugati, Andrea; Noy, Lior; Sherson, Jacob – Creativity Research Journal, 2022
Creativity assessments should be valid, reliable, and scalable to support various stakeholders (e.g., policy-makers, educators, corporations, and the general public) in their decision-making processes. Established initiatives toward scalable creativity assessments have relied on well-studied standardized tests. Although robust in many ways, most…
Descriptors: Creativity, Evaluation Methods, Video Games, Computer Assisted Testing
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Lenz, A. Stephen; Ault, Haley; Balkin, Richard S.; Barrio Minton, Casey; Erford, Bradley T.; Hays, Danica G.; Kim, Bryan S. K.; Li, Chi – Measurement and Evaluation in Counseling and Development, 2022
In April 2021, The Association for Assessment and Research in Counseling Executive Council commissioned a time-referenced task group to revise the Responsibilities of Users of Standardized Tests (RUST) Statement (3rd edition) published by the Association for Assessment in Counseling (AAC) in 2003. The task group developed a work plan to implement…
Descriptors: Responsibility, Standardized Tests, Counselor Training, Ethics
Cohen, Yoav; Levi, Effi; Ben-Simon, Anat – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written as a response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation to the essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing
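The design described in the Cohen, Levi, and Ben-Simon entry — averaging many human raters to approximate an essay's true score, then cross-validating a machine-learned scoring model against it — can be sketched roughly as follows. This is a minimal NumPy illustration with synthetic features and simulated rater noise, not the study's actual AES system; the feature count, rater count, and linear model are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a pool of 250 essays, each described by
# 5 numeric text features (e.g. length, vocabulary richness).
n_essays, n_features, n_raters = 250, 5, 14
X = rng.normal(size=(n_essays, n_features))
true_quality = X @ rng.normal(size=n_features)

# Each of 14 simulated raters scores with independent noise;
# the rater average serves as a proxy for the essay's true score.
rater_scores = true_quality[:, None] + rng.normal(size=(n_essays, n_raters))
y = rater_scores.mean(axis=1)

def cross_val_predict(X, y, k=5):
    """Out-of-fold predictions from k-fold ordinary least squares."""
    preds = np.empty_like(y)
    all_idx = np.arange(len(y))
    for test_idx in np.array_split(all_idx, k):
        train_idx = np.setdiff1d(all_idx, test_idx)
        coef, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
        preds[test_idx] = X[test_idx] @ coef
    return preds

preds = cross_val_predict(X, y)
r = np.corrcoef(preds, y)[0, 1]
print(f"machine-human correlation: {r:.2f}")
```

Because every prediction comes from a fold the model never trained on, the correlation estimates how the scorer would generalize to unseen essays rather than how well it memorized the training pool.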
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
MacMahon, Alyssa Leandra – ProQuest LLC, 2023
Computer-based technological tools can be an efficient and effective way to enhance mathematics classroom activities like formative assessment. Whilst there are a number of theoretical frameworks and standards to help pre- and in-service teachers understand the individual complexities of mathematics pedagogy, technology integration, and formative…
Descriptors: Mathematics Education, Technology Uses in Education, Evaluation Methods, Formative Evaluation
Lottridge, Sue; Burkhardt, Amy; Boyer, Michelle – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Sue Lottridge, Amy Burkhardt, and Dr. Michelle Boyer provide an overview of automated scoring. Automated scoring is the use of computer algorithms to score unconstrained open-ended test items by mimicking human scoring. The use of automated scoring is increasing in educational assessment programs because it allows…
Descriptors: Computer Assisted Testing, Scoring, Automation, Educational Assessment
Selcuk Acar; Denis Dumas; Peter Organisciak; Kelly Berthiaume – Grantee Submission, 2024
Creativity is highly valued in both education and the workforce, but assessing and developing creativity can be difficult without psychometrically robust and affordable tools. The open-ended nature of creativity assessments has made them difficult to score, expensive, often imprecise, and therefore impractical for school- or district-wide use. To…
Descriptors: Thinking Skills, Elementary School Students, Artificial Intelligence, Measurement Techniques
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines
New York State Education Department, 2023
The instructions in this manual explain the responsibilities of school administrators for the New York State Testing Program (NYSTP) Grades 3-8 English Language Arts and Mathematics Tests. School administrators must be thoroughly familiar with the contents of the manual, and the policies and procedures must be followed as written so that testing…
Descriptors: Testing Programs, Mathematics Tests, Test Format, Computer Assisted Testing
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Beula M. Magimairaj; Philip Capin; Sandra L. Gillam; Sharon Vaughn; Greg Roberts; Anna-Maria Fall; Ronald B. Gillam – Grantee Submission, 2022
Purpose: Our aim was to evaluate the psychometric properties of the online administered format of the Test of Narrative Language--Second Edition (TNL-2; Gillam & Pearson, 2017), given the importance of assessing children's narrative ability and considerable absence of psychometric studies of spoken language assessments administered online.…
Descriptors: Computer Assisted Testing, Language Tests, Story Telling, Language Impairments
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
Doris Zahner; Jeffrey T. Steedle; James Soland; Catherine Welch; Qi Qin; Kathryn Thompson; Richard Phelps – Online Submission, 2023
The "Standards for Educational and Psychological Testing" have served as a cornerstone for best practices in assessment. As the field evolves, so must these standards, with regular revisions ensuring they reflect current knowledge and practice. The National Council on Measurement in Education (NCME) conducted a survey to gather feedback…
Descriptors: Standards, Educational Assessment, Psychological Testing, Best Practices