Publication Date
| Date Range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 9 |
| Since 2022 (last 5 years) | 93 |
| Since 2017 (last 10 years) | 214 |
| Since 2007 (last 20 years) | 347 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 510 |
| Scoring | 510 |
| Test Items | 111 |
| Test Construction | 102 |
| Automation | 92 |
| Essays | 82 |
| Foreign Countries | 80 |
| Scores | 79 |
| Adaptive Testing | 78 |
| Evaluation Methods | 77 |
| Computer Software | 75 |
Author
| Author | Records |
| --- | --- |
| Bennett, Randy Elliot | 11 |
| Attali, Yigal | 9 |
| Anderson, Paul S. | 7 |
| Williamson, David M. | 6 |
| Bejar, Isaac I. | 5 |
| Ramineni, Chaitanya | 5 |
| Stocking, Martha L. | 5 |
| Xi, Xiaoming | 5 |
| Zechner, Klaus | 5 |
| Bridgeman, Brent | 4 |
| Davey, Tim | 4 |
Location
| Location | Records |
| --- | --- |
| Australia | 10 |
| China | 10 |
| New York | 9 |
| Japan | 7 |
| Netherlands | 6 |
| Canada | 5 |
| Germany | 5 |
| Iran | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| United Kingdom (England) | 4 |
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David M. – ETS Research Report Series, 2008
This report presents the results of a research and development effort for SpeechRater℠ Version 1.0 (v1.0), an automated scoring system for the spontaneous speech of English language learners used operationally in the Test of English as a Foreign Language™ (TOEFL®) Practice Online assessment (TPO). The report includes a summary of the validity…
Descriptors: Speech, Scoring, Scoring Rubrics, Scoring Formulas
Hung, Pi-Hsia; Lin, Yu-Fen; Hwang, Gwo-Jen – Educational Technology & Society, 2010
Ubiquitous computing and mobile technologies provide a new perspective for designing innovative outdoor learning experiences. The purpose of this study is to propose a formative assessment design for integrating PDAs into ecology observations. Three learning activities were conducted in this study. An action research approach was applied to…
Descriptors: Foreign Countries, Feedback (Response), Action Research, Observation
Kump, Ann – 1992
Directions are given for scoring typing tests taken on a typewriter or on a computer using special software. The speed score (gross words per minute) is obtained by determining the total number of strokes typed, and dividing by 25. The accuracy score is obtained by comparing the examinee's test paper to the appropriate scoring key and counting the…
Descriptors: Computer Assisted Testing, Employment Qualifications, Guidelines, Job Applicants
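The scoring arithmetic described above is simple enough to sketch directly. In the snippet below, the "divide by 25" is taken to reflect the usual convention of 5 strokes per word on a 5-minute test (5 × 5 = 25); parameterizing both factors, and the positional error count, are assumptions that go beyond the abstract.

```python
# A minimal sketch of the typing-test scoring arithmetic, assuming
# 5 strokes per word and a 5-minute test (so strokes / 25 = gross wpm).
def gross_wpm(total_strokes: int, minutes: float = 5.0,
              strokes_per_word: int = 5) -> float:
    """Speed score: gross words per minute."""
    return total_strokes / (strokes_per_word * minutes)

def error_count(typed: str, key: str) -> int:
    """Accuracy score: a simplistic positional comparison of the typed
    text against the scoring key (the published directions are more nuanced)."""
    errors = sum(1 for t, k in zip(typed, key) if t != k)
    return errors + abs(len(typed) - len(key))  # missing or extra strokes

print(gross_wpm(1250))                        # 50.0 gross words per minute
print(error_count("teh quick", "the quick"))  # 2 errors
```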
Chung, Gregory K. W. K.; O'Neil, Harold F., Jr. – 1997
This report examines the feasibility of scoring essays using computer-based techniques. Essays have been incorporated into many standardized testing programs, and issues of validity and reliability must be addressed before automated scoring approaches can be fully deployed. Two approaches that have been used to classify documents, surface- and word-based…
Descriptors: Automation, Computer Assisted Testing, Essays, Scoring
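As one concrete illustration of the word-based family of classifiers the report surveys, the sketch below scores an essay by nearest-neighbor cosine similarity over word counts. This is a generic stand-in, not the report's procedure, and the graded exemplar essays are invented.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def predict_score(essay: str, graded_exemplars: list[tuple[str, int]]) -> int:
    """Assign the score of the most lexically similar graded exemplar."""
    vec = Counter(essay.lower().split())
    sims = [(cosine(vec, Counter(text.lower().split())), score)
            for text, score in graded_exemplars]
    return max(sims)[1]

exemplars = [("the experiment shows clear evidence for the claim", 4),
             ("no real argument or evidence is given", 1)]
print(predict_score("this essay shows experiment evidence for the claim",
                    exemplars))  # -> 4
```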
Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996 (peer reviewed)
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. With the necessary adjustment for intentional differences in adaptive test difficulty, the number-correct score is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
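One common way to make a number-correct score comparable across adaptively assembled forms is to invert the test characteristic curve: find the proficiency at which the expected number correct on that particular form equals the observed score. The Rasch-based bisection below is a sketch of that general idea, not Stocking's specific method.

```python
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch model: probability of answering an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(b - theta))

def theta_for_number_correct(num_correct: int, difficulties: list[float]) -> float:
    """Bisect for the theta whose expected number correct on this form
    equals the observed score."""
    lo, hi = -6.0, 6.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if sum(p_correct(mid, b) for b in difficulties) < num_correct:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2.0, 3)

# 7 of 10 correct is worth more on a hard form than on an easy one:
print(theta_for_number_correct(7, [-1.0] * 10))  # ~ -0.153
print(theta_for_number_correct(7, [1.0] * 10))   # ~  1.847
```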
Williamson, David M.; Bejar, Isaac I.; Hone, Anne S. – Journal of Educational Measurement, 1999 (peer reviewed)
Contrasts "mental models" used by automated scoring for the simulation division of the computerized Architect Registration Examination with those used by experienced human graders for 3,613 candidate solutions. Discusses differences in the models used and the potential of automated scoring to enhance the validity evidence of scores. (SLD)
Descriptors: Architects, Comparative Analysis, Computer Assisted Testing, Judges
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis – Applied Psychological Measurement, 2000 (peer reviewed)
Describes three open-ended response types that could broaden the conception of mathematical problem solving used in computerized admissions tests: (1) mathematical expression (ME); (2) generating examples (GE); and (3) graphical modeling (GM). Illustrates how combining ME, GE, and GM can form extended constructed-response problems. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Constructed Response, Mathematics Tests
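Scoring the mathematical-expression (ME) type requires crediting any response algebraically equivalent to the key, not just an exact string match. The sketch below does this with SymPy; the article, which predates that library, describes the item type rather than any particular implementation.

```python
import sympy

def me_correct(response: str, key: str) -> bool:
    """Credit a response that simplifies to the same expression as the key."""
    try:
        diff = sympy.simplify(sympy.sympify(response) - sympy.sympify(key))
    except (sympy.SympifyError, TypeError):
        return False  # unparseable responses earn no credit
    return diff == 0

print(me_correct("(x + 1)**2", "x**2 + 2*x + 1"))  # True: same expression
print(me_correct("x**2 + 1",   "x**2 + 2*x + 1"))  # False
```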
McHenry, Bill; Griffith, Leonard; McHenry, Jim – T.H.E. Journal, 2004 (peer reviewed)
Imagine administering an online standardized test to an entire class of 11th-grade students when, halfway through the exam, the server holding the test hits a snag and throws everyone offline. Imagine another scenario in which an elementary school has very few computers so teachers must bus their students to the local high school for a timed test.…
Descriptors: Computer Assisted Testing, Risk, Evaluation Methods, Federal Legislation
Hu, Xiangen, Ed.; Barnes, Tiffany, Ed.; Hershkovitz, Arnon, Ed.; Paquette, Luc, Ed. – International Educational Data Mining Society, 2017
The 10th International Conference on Educational Data Mining (EDM 2017) is held under the auspices of the International Educational Data Mining Society at the Optics Valley Kingdom Plaza Hotel, Wuhan, Hubei Province, in China. This year's conference features two invited talks by: Dr. Jie Tang, Associate Professor with the Department of Computer…
Descriptors: Data Analysis, Data Collection, Graphs, Data Use
Johnson, Martin; Nadas, Rita – Learning, Media and Technology, 2009
Within large-scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the…
Descriptors: Reading Comprehension, Educational Assessment, Internet, Essay Tests
Kaplan, Randy M.; Bennett, Randy Elliot – 1994
This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were…
Descriptors: Automation, Computer Assisted Testing, Correlation, Higher Education
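The item score here is just a count of creditable explanations. The toy sketch below mimics that logic with a hypothetical keyword key; the study's actual computer-based judging was more sophisticated.

```python
# Hypothetical key: an explanation is creditable if it contains all the
# keywords of at least one creditable concept.
CREDITABLE_CONCEPTS = [
    {"funding", "cut"},
    {"enrollment", "declined"},
]

def is_creditable(explanation: str) -> bool:
    words = set(explanation.lower().split())
    return any(concept <= words for concept in CREDITABLE_CONCEPTS)

def fh_item_score(explanations: list[str]) -> int:
    """Sum the creditable explanations, as the abstract describes."""
    return sum(is_creditable(e) for e in explanations)

print(fh_item_score(["the funding was cut",
                     "aliens did it",
                     "enrollment declined sharply"]))  # 2
```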
Ho, James K. – Collegiate Microcomputer, 1987
Explains how spreadsheet software can be used in the design and grading of academic tests and in assigning grades. Macro programs and menu-driven software are highlighted and an example using IBM PCs and Lotus 1-2-3 software is given. (Author/LRW)
Descriptors: Computer Assisted Testing, Data Processing, Grading, Menu Driven Software
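The article implements this with Lotus 1-2-3 macros; the same grading arithmetic in plain Python looks like the sketch below, where the component weights and letter-grade cutoffs are invented for illustration.

```python
WEIGHTS = {"quizzes": 0.2, "midterm": 0.3, "final": 0.5}      # hypothetical
CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def course_grade(scores: dict[str, float]) -> str:
    """Weighted total, then lookup against descending cutoffs."""
    total = sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)
    return next(letter for cutoff, letter in CUTOFFS if total >= cutoff)

print(course_grade({"quizzes": 85, "midterm": 78, "final": 92}))  # 86.4 -> "B"
```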
Shoemaker, Judith S.; St. John, Elizabeth A. – Technological Horizons in Education, 1985
The University of California (Irvine) has automated its scoring of placement tests to incoming freshmen and women with specialized administrative software that interfaces a microcomputer with an optical card reader. The university's placement testing program, the computerized system, and advantages of the system are explained. (JN)
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computer Software, Educational Administration
Green, Bert F. – 2002
Maximum likelihood and Bayesian estimates of proficiency, typically used in adaptive testing, weight items according to the very test-taker proficiency they are trying to estimate. In this study, several methods were explored through computer simulation using fixed item weights, which depend mainly on the items' difficulty. The simpler scores…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
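The contrast in the abstract is between estimates whose effective item weights change with the examinee (maximum likelihood under an IRT model) and simpler scores whose weights are fixed in advance. Below is a small simulation sketch of the two score types under an assumed 2PL model, not a reconstruction of the study's design.

```python
import math, random

def p2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_theta(resp, items):
    """Maximum-likelihood proficiency by grid search: the effective weight
    of each item varies with the theta being tried."""
    def loglik(t):
        return sum(math.log(p2pl(t, a, b) if r else 1.0 - p2pl(t, a, b))
                   for r, (a, b) in zip(resp, items))
    return max((g / 20.0 for g in range(-80, 81)), key=loglik)

def fixed_weight_score(resp, items):
    """Simpler alternative: weights set in advance from difficulty alone."""
    return sum(b for r, (a, b) in zip(resp, items) if r) / len(items)

random.seed(7)
items = [(random.uniform(0.8, 1.5), random.uniform(-2.0, 2.0)) for _ in range(40)]
resp = [random.random() < p2pl(0.5, a, b) for a, b in items]  # true theta = 0.5
print(ml_theta(resp, items), fixed_weight_score(resp, items))
```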
Thissen, David; And Others – Journal of Educational Measurement, 1989 (peer reviewed)
An approach to scoring reading comprehension based on the concept of the testlet is described, using models developed for items in multiple categories. The model is illustrated using data from 3,866 examinees. Application of testlet scoring to multiple category models developed for individual items is discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Mathematical Models
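A testlet bundles the items hanging off one reading passage and treats the bundle as a single polytomously scored unit. The sketch below shows only that first bookkeeping step, collapsing item-level 0/1 responses into per-testlet category scores for a multiple-category model to consume; the model fitting itself is beyond this sketch.

```python
from itertools import groupby

def testlet_scores(responses: list[tuple[str, int]]) -> dict[str, int]:
    """Collapse (passage_id, 0/1) item responses into one number-correct
    category score per testlet."""
    ordered = sorted(responses, key=lambda pair: pair[0])
    return {pid: sum(score for _, score in group)
            for pid, group in groupby(ordered, key=lambda pair: pair[0])}

resp = [("passageA", 1), ("passageA", 0), ("passageA", 1),
        ("passageB", 1), ("passageB", 1)]
print(testlet_scores(resp))  # {'passageA': 2, 'passageB': 2}
```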

