| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 17 |
| Since 2022 (last 5 years) | 115 |
| Since 2017 (last 10 years) | 257 |
| Since 2007 (last 20 years) | 426 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 635 |
| Scoring | 514 |
| Test Construction | 120 |
| Test Items | 120 |
| Foreign Countries | 115 |
| Evaluation Methods | 106 |
| Automation | 100 |
| Scoring Rubrics | 97 |
| Essays | 90 |
| Student Evaluation | 90 |
| Scores | 89 |
| Location | Count |
| --- | --- |
| Australia | 13 |
| China | 12 |
| New York | 9 |
| Japan | 8 |
| Canada | 7 |
| Netherlands | 7 |
| Germany | 6 |
| Iran | 6 |
| Taiwan | 6 |
| United Kingdom | 6 |
| Spain | 5 |
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Elliott, Victoria – Changing English: Studies in Culture and Education, 2014
Automated essay scoring programs are becoming more common and more technically advanced. They provoke strong reactions from both their advocates and their detractors. Arguments tend to fall into two categories: technical and principled. This paper argues that since technical difficulties will be overcome with time, the debate ought to be held in…
Descriptors: English, English Instruction, Grading, Computer Assisted Testing
Ling, Guangming; Mollaun, Pamela; Xi, Xiaoming – Language Testing, 2014
The scoring of constructed responses may introduce construct-irrelevant factors to a test score and affect its validity and fairness. Fatigue is one of the factors that could negatively affect human performance in general, yet little is known about its effects on a human rater's scoring quality on constructed responses. In this study, we compared…
Descriptors: Evaluators, Fatigue (Biology), Scoring, Performance
Liu, Sha; Kunnan, Antony John – CALICO Journal, 2016
This study investigated the application of "WriteToLearn" to Chinese undergraduate English majors' essays in terms of its scoring ability and the accuracy of its error feedback. Participants were 163 second-year English majors from a university located in Sichuan province who wrote 326 essays from two writing prompts. Each paper was…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning
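Evaluating the accuracy of automated error feedback, as in the study above, is commonly framed as precision and recall against human error annotations. The sketch below assumes that approach; the error spans and essay IDs are hypothetical illustration data, not the study's materials.

```python
# Minimal sketch: accuracy of automated error feedback measured against
# human annotations. Error spans are hypothetical (essay_id, start, end).

def precision_recall(system_flags, human_flags):
    """Compare two sets of flagged error spans."""
    system_flags, human_flags = set(system_flags), set(human_flags)
    true_positives = len(system_flags & human_flags)
    precision = true_positives / len(system_flags) if system_flags else 0.0
    recall = true_positives / len(human_flags) if human_flags else 0.0
    return precision, recall

system = [(1, 10, 14), (1, 40, 45), (2, 5, 9)]   # spans flagged by the software
human = [(1, 10, 14), (2, 5, 9), (2, 30, 36)]    # spans marked by human raters

p, r = precision_recall(system, human)
print(f"precision={p:.2f} recall={r:.2f}")
```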
Makransky, Guido; Mortensen, Erik Lykke; Glas, Cees A. W. – Assessment, 2013
Narrowly defined personality facet scores are commonly reported and used for making decisions in clinical and organizational settings. Although these facets are typically related, scoring is usually carried out for a single facet at a time. This method can be ineffective and time consuming when personality tests contain many highly correlated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Personality Measures, Accuracy
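The study above concerns adaptive testing for correlated personality facets. As context, a single adaptive-testing step under a plain unidimensional 2PL model is sketched below: choose the unadministered item with maximum Fisher information at the current ability estimate. This is a generic CAT illustration, not the multidimensional facet model examined by the authors; the item parameters are hypothetical.

```python
import numpy as np

def p_correct(theta, a, b):
    # 2PL item response function
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item at ability theta
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1 - p)

# Hypothetical item bank: (discrimination a, difficulty b)
bank = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)]
administered = {0}      # items already given
theta_hat = 0.3         # current provisional ability estimate

candidates = [i for i in range(len(bank)) if i not in administered]
next_item = max(candidates, key=lambda i: item_information(theta_hat, *bank[i]))
print("next item index:", next_item)
```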
Hixson, Nate; Rhudy, Vaughn – West Virginia Department of Education, 2012
To provide an opportunity for teachers to better understand the automated scoring process used by the state of West Virginia on our annual West Virginia Educational Standards Test 2 (WESTEST 2) Online Writing Assessment, the West Virginia Department of Education (WVDE) Office of Assessment and Accountability and the Office of Research conduct an…
Descriptors: Writing Tests, Computer Assisted Testing, Automation, Scoring
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
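A "check score" setup generally means the machine score is used to flag human ratings for adjudication rather than contributing directly to the reported score. The sketch below shows one generic workflow under that assumption; the threshold and averaging rule are hypothetical, not ETS's operational GRE procedures.

```python
# Generic check-score workflow sketch (not the operational GRE rules):
# report the first human rating unless it disagrees with the machine check
# score by more than a tolerance, in which case obtain a second human
# rating and average the two human ratings.

ADJUDICATION_THRESHOLD = 1.0   # hypothetical tolerance on a 0-6 scale

def reported_score(human_1, machine, get_second_human):
    if abs(human_1 - machine) <= ADJUDICATION_THRESHOLD:
        return human_1
    human_2 = get_second_human()
    return (human_1 + human_2) / 2.0

print(reported_score(4.0, 3.5, get_second_human=lambda: 4.0))  # -> 4.0
print(reported_score(5.0, 2.5, get_second_human=lambda: 4.0))  # -> 4.5
```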
Wang, Xinrui – ProQuest LLC, 2013
Computer-adaptive multistage testing (ca-MST) has been developed as an alternative to computerized adaptive testing (CAT) and has been increasingly adopted in large-scale assessments. Current research and practice focus only on ca-MST panels for credentialing purposes. The ca-MST test mode, therefore, is designed to gauge a single scale. The…
Descriptors: Computer Assisted Testing, Adaptive Testing, Diagnostic Tests, Comparative Analysis
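The core mechanic of a multistage test is module-level routing: every examinee takes a routing module and is then sent to an easier or harder second-stage module based on performance. The sketch below illustrates that idea only; the module labels and cut scores are hypothetical and do not come from the dissertation above.

```python
# Two-stage multistage-test routing sketch: route to an easy, medium, or
# hard second-stage module from number-correct on a 10-item routing module.

ROUTING_CUTS = (4, 8)   # hypothetical number-correct cut scores

def route(number_correct):
    """Pick the second-stage module from routing-module performance."""
    if number_correct < ROUTING_CUTS[0]:
        return "easy module"
    if number_correct < ROUTING_CUTS[1]:
        return "medium module"
    return "hard module"

for score in (2, 6, 9):
    print(score, "->", route(score))
```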
Litherland, Kate; Carmichael, Patrick; Martínez-García, Agustina – Accounting Education, 2013
This summary reports on a pilot of a novel, ontology-based e-assessment system in accounting. The system, OeLe, uses emerging semantic technologies to offer an online assessment environment capable of marking students' free text answers to questions of a conceptual nature. It does this by matching their response with a "concept map" or…
Descriptors: Accounting, Pilot Projects, Student Evaluation, Evaluation Methods
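The abstract above describes marking free-text answers by matching them against a concept map. As a rough illustration, the sketch below scores an answer by the fraction of expected concept pairs it mentions. This keyword-based stand-in is only a plausible simplification, not the actual OeLe semantic matching; the triples and example answer are hypothetical.

```python
# Minimal concept-map coverage sketch: the map is a set of
# (concept, relation, concept) triples expected in a full-credit answer;
# the score is the fraction of triples whose two concepts both appear
# in the student's response.

EXPECTED_TRIPLES = {
    ("depreciation", "reduces", "book value"),
    ("accrual", "differs from", "cash accounting"),
}

def concept_coverage(answer: str) -> float:
    text = answer.lower()
    hits = sum(1 for c1, _, c2 in EXPECTED_TRIPLES if c1 in text and c2 in text)
    return hits / len(EXPECTED_TRIPLES)

print(concept_coverage(
    "Depreciation spreads cost over time and reduces the asset's book value."
))  # -> 0.5
```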
Liao, Chen-Huei; Kuo, Bor-Chen; Pai, Kai-Chih – Turkish Online Journal of Educational Technology - TOJET, 2012
Automated scoring by means of Latent Semantic Analysis (LSA) has recently been introduced to improve on traditional human scoring. The purposes of the present study were to develop an LSA-based assessment system to evaluate children's Chinese sentence construction skills and to examine the effectiveness of the LSA-based automated scoring function…
Descriptors: Foreign Countries, Program Effectiveness, Scoring, Personality
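LSA-based scoring, as referenced above, typically builds a term-document matrix from reference responses, reduces it with a truncated SVD, and scores a new response by its similarity to references in the latent space. The sketch below shows that pipeline on toy English data; it is not the authors' Chinese sentence-construction system, and the corpus and dimensionality are hypothetical.

```python
import numpy as np

# Toy LSA similarity sketch: term-document matrix -> truncated SVD ->
# cosine similarity between a new response and a reference in latent space.

references = [
    "the cat sat on the mat",
    "a dog ran in the park",
    "the cat chased the dog",
]
new_response = "the cat sat quietly on the mat"

vocab = sorted({w for doc in references for w in doc.split()})
index = {w: i for i, w in enumerate(vocab)}

def bow(doc):
    # Bag-of-words vector over the reference vocabulary
    v = np.zeros(len(vocab))
    for w in doc.split():
        if w in index:
            v[index[w]] += 1
    return v

X = np.stack([bow(d) for d in references])      # documents x terms
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                           # latent dimensions (hypothetical)

def project(v):
    return v @ Vt[:k].T                         # map a term vector into LSA space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

print(cosine(project(bow(new_response)), project(bow(references[0]))))
```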
Hadi-Tabassum, Samina – Phi Delta Kappan, 2014
Schools are scrambling to prepare students for the writing assessments aligned to the Common Core State Standards. In some states, writing has not been assessed for over a decade. Yet, with the use of computerized grading of the student's writing, many teachers are wondering how to best prepare students for the writing assessments that will…
Descriptors: Computer Assisted Testing, Writing Tests, Standardized Tests, Core Curriculum
Irwin, Brian; Hepplestone, Stuart – Assessment & Evaluation in Higher Education, 2012
There have been calls in the literature for changes to assessment practices in higher education, to increase flexibility and give learners more control over the assessment process. This article explores the possibilities of allowing student choice in the format used to present their work, as a starting point for changing assessment, based on…
Descriptors: Student Evaluation, College Students, Selection, Computer Assisted Testing
Harik, Polina; Baldwin, Peter; Clauser, Brian – Applied Psychological Measurement, 2013
Growing reliance on complex constructed response items has generated considerable interest in automated scoring solutions. Many of these solutions are described in the literature; however, relatively few studies have been published that "compare" automated scoring strategies. Here, comparisons are made among five strategies for…
Descriptors: Computer Assisted Testing, Automation, Scoring, Comparative Analysis
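Comparisons between automated and human scores are usually summarized with an agreement statistic such as quadratically weighted kappa. The sketch below implements that metric only; the five scoring strategies compared in the paper are not reproduced, and the score vectors are hypothetical.

```python
import numpy as np

# Quadratically weighted kappa between two integer score vectors.

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    n = max_score - min_score + 1
    observed = np.zeros((n, n))
    for x, y in zip(rater_a, rater_b):
        observed[x - min_score, y - min_score] += 1
    weights = np.array([[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)]
                        for i in range(n)])
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

human =   [3, 4, 2, 5, 3, 4, 1, 2]   # hypothetical human scores
machine = [3, 4, 3, 4, 3, 4, 2, 2]   # hypothetical machine scores
print(round(quadratic_weighted_kappa(human, machine, 1, 5), 3))
```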
Madnani, Nitin; Burstein, Jill; Sabatini, John; O'Reilly, Tenaha – Grantee Submission, 2013
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary-writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Reading Comprehension
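Deriving scoring features from a holistic summary rubric, as described above, often starts from surface signals such as content coverage of the source and verbatim copying. The sketch below computes a few such features under that assumption; the feature set and texts are illustrative, not the authors' actual NLP features.

```python
# Illustrative surface features for a written summary, loosely inspired by
# rubric dimensions such as content coverage and excessive copying.

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def summary_features(source: str, summary: str) -> dict:
    src, summ = source.lower().split(), summary.lower().split()
    content_overlap = len(set(summ) & set(src)) / max(len(set(summ)), 1)
    copied_trigrams = len(ngrams(summ, 3) & ngrams(src, 3)) / max(len(ngrams(summ, 3)), 1)
    return {
        "content_overlap": content_overlap,   # coverage of source vocabulary
        "copied_trigrams": copied_trigrams,   # proxy for verbatim copying
        "length_ratio": len(summ) / max(len(src), 1),
    }

source_text = "Photosynthesis converts light energy into chemical energy in plants."
summary_text = "Plants turn light energy into chemical energy through photosynthesis."
print(summary_features(source_text, summary_text))
```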
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
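"Added value beyond the total score" is often examined as incremental prediction: does a model with the total score plus trait scores explain more variance in a criterion than the total score alone? The sketch below runs that comparison on simulated data as a generic illustration, not the authors' psychometric analysis.

```python
import numpy as np

# Incremental-value sketch on simulated data: compare R-squared for
# (total score only) versus (total score + four trait scores).

rng = np.random.default_rng(0)
n = 200
traits = rng.normal(size=(n, 4))                     # hypothetical trait scores
total = traits.sum(axis=1) + rng.normal(scale=0.5, size=n)
human = traits @ np.array([0.5, 0.3, 0.1, 0.4]) + rng.normal(scale=0.7, size=n)

def r_squared(X, y):
    # Ordinary least squares with an intercept
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("total only    :", round(r_squared(total.reshape(-1, 1), human), 3))
print("total + traits:", round(r_squared(np.column_stack([total, traits]), human), 3))
```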
