Publication Date
In 2025: 1
Since 2024: 1
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 26
Descriptor
Computer Assisted Testing: 30
Validity: 30
Scoring: 10
Reliability: 9
Evaluation Methods: 8
Essays: 6
Measures (Individuals): 6
Student Evaluation: 6
Educational Technology: 5
Higher Education: 5
Psychometrics: 5
Author
Alonzo, Julie: 2
Faurer, Judson C.: 2
Lai, Cheng-Fei: 2
Nese, Joseph F. T.: 2
Tindal, Gerald: 2
Weigle, Sara Cushing: 2
Al-Bahlani, Sara: 1
Alegre, Olga M.: 1
Allehaiby, Wid Hasen: 1
Alloway, Tracy: 1
Anderson, Daniel: 1
Publication Type
Reports - Evaluative: 30
Journal Articles: 27
Numerical/Quantitative Data: 2
Opinion Papers: 1
Education Level
Higher Education: 9
Postsecondary Education: 8
Elementary Secondary Education: 6
Elementary Education: 5
Early Childhood Education: 2
Grade 1: 2
Grade 2: 2
Kindergarten: 2
Adult Education: 1
Grade 10: 1
Grade 3: 1
Location
Oregon: 1
South Africa: 1
Spain: 1
Taiwan: 1
Texas: 1
Assessments and Surveys
Stanford Achievement Tests: 2
Behavior Assessment System…: 1
Conners Rating Scales: 1
Continuous Performance Test: 1
Kaufman Brief Intelligence…: 1
Test of English as a Foreign…: 1
Texas Essential Knowledge and…: 1
Goldhammer, Frank; Hahnel, Carolin; Kroehne, Ulf; Zehner, Fabian – Large-scale Assessments in Education, 2021
International large-scale assessments such as PISA or PIAAC have started to provide public or scientific use files for log data; that is, events, event-related attributes and timestamps of test-takers' interactions with the assessment system. Log data and the process indicators derived from it can be used for many purposes. However, the intended…
Descriptors: International Assessment, Data, Computer Assisted Testing, Validity
Dorsey, David W.; Michaels, Hillary R. – Journal of Educational Measurement, 2022
We have dramatically advanced our ability to create rich, complex, and effective assessments across a range of uses through technology advancement. Artificial Intelligence (AI) enabled assessments represent one such area of advancement--one that has captured our collective interest and imagination. Scientists and practitioners within the domains…
Descriptors: Validity, Ethics, Artificial Intelligence, Evaluation Methods
Daniel R. Isbell; Benjamin Kremmel; Jieun Kim – Language Assessment Quarterly, 2023
In the wake of the COVID-19 boom in remote administration of language tests, it appears likely that remote administration will be a permanent fixture in the language testing landscape. Accordingly, language test providers, stakeholders, and researchers must grapple with the implications of remote proctoring on valid, fair, and just uses of tests.…
Descriptors: Distance Education, Supervision, Language Tests, Culture Fair Tests
Kershree Padayachee; M. Matimolane – Teaching in Higher Education, 2025
In the shift to Emergency Remote Teaching and Learning (ERT&L) during the COVID-19 pandemic, remote assessment and feedback became a major source of discontent and challenge for students and staff. This paper is a reflection and analysis of assessment practices during ERT&L, and our theorisation of the possibilities for shifts towards…
Descriptors: Educational Quality, Social Justice, Distance Education, Feedback (Response)
Moncaleano, Sebastian; Russell, Michael – Journal of Applied Testing Technology, 2018
2017 marked a century since the development and administration of the first large-scale group administered standardized test. Since that time, both the importance of testing and the technology of testing have advanced significantly. This paper traces the technological advances that have led to the large-scale administration of educational tests in…
Descriptors: Technological Advancement, Standardized Tests, Computer Assisted Testing, Automation
Allehaiby, Wid Hasen; Al-Bahlani, Sara – Arab World English Journal, 2021
One of the main challenges higher educational institutions encounter amid the recent COVID-19 crisis is transferring assessment approaches from the traditional face-to-face form to the online Emergency Remote Teaching approach. A set of language assessment principles, practicality, reliability, validity, authenticity, and washback, which can be…
Descriptors: Barriers, Distance Education, Evaluation Methods, Teaching Methods
Massey, Chris L.; Gambrell, Linda B. – Literacy Research and Instruction, 2014
Literacy educators and researchers have long recognized the importance of increasing students' writing proficiency across age and grade levels. With the release of the Common Core State Standards (CCSS), a new and greater emphasis is being placed on writing in the K-12 curriculum. Educators, as well as the authors of the CCSS, agree that…
Descriptors: Writing Evaluation, State Standards, Instructional Effectiveness, Writing Ability
Irwin, Brian; Hepplestone, Stuart – Assessment & Evaluation in Higher Education, 2012
There have been calls in the literature for changes to assessment practices in higher education, to increase flexibility and give learners more control over the assessment process. This article explores the possibilities of allowing student choice in the format used to present their work, as a starting point for changing assessment, based on…
Descriptors: Student Evaluation, College Students, Selection, Computer Assisted Testing
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the Criterion® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Faurer, Judson C. – Contemporary Issues in Education Research, 2013
Are prospective employers getting "quality" educated, degreed applicants and are academic institutions that offer online degree programs ensuring the quality control of the courses/programs offered? The issue specifically addressed in this paper is not with all institutions offering degrees through online programs or even with all online…
Descriptors: Online Courses, Validity, Grades (Scholastic), Quality Control
Brown, Gavin T. L. – Higher Education Quarterly, 2010
The use of timed, essay examinations is a well-established means of evaluating student learning in higher education. The reliability of essay scoring is highly problematic and it appears that essay examination grades are highly dependent on language and organisational components of writing. Computer-assisted scoring of essays makes use of language…
Descriptors: Higher Education, Essay Tests, Validity, Scoring
Weigle, Sara Cushing – Language Testing, 2010
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study approaches validity by comparing human and automated scores on responses to…
Descriptors: Correlation, Validity, Writing Ability, English (Second Language)
Riley, Barth B.; Dennis, Michael L.; Conrad, Kendon J. – Applied Psychological Measurement, 2010
This simulation study sought to compare four different computerized adaptive testing (CAT) content-balancing procedures designed for use in a multidimensional assessment with respect to measurement precision, symptom severity classification, validity of clinical diagnostic recommendations, and sensitivity to atypical responding. The four…
Descriptors: Simulation, Computer Assisted Testing, Adaptive Testing, Comparative Analysis