| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 14 |
| Since 2022 (last 5 years) | 112 |
| Since 2017 (last 10 years) | 254 |
| Since 2007 (last 20 years) | 423 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 632 |
| Scoring | 511 |
| Test Construction | 120 |
| Test Items | 120 |
| Foreign Countries | 115 |
| Evaluation Methods | 106 |
| Automation | 97 |
| Scoring Rubrics | 96 |
| Essays | 90 |
| Student Evaluation | 90 |
| Scores | 89 |
| Location | Count |
| --- | --- |
| Australia | 13 |
| China | 12 |
| New York | 9 |
| Japan | 8 |
| Canada | 7 |
| Netherlands | 7 |
| Germany | 6 |
| Iran | 6 |
| Taiwan | 6 |
| United Kingdom | 6 |
| Spain | 5 |
Peer reviewed: Thissen, David; And Others – Journal of Educational Measurement, 1989
An approach to scoring reading comprehension based on the concept of the testlet is described, using models developed for items in multiple categories. The model is illustrated using data from 3,866 examinees. Application of testlet scoring to multiple category models developed for individual items is discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Mathematical Models
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Stricker, Lawrence J.; Rock, Donald A. – ETS Research Report Series, 2008
This study assessed the invariance in the factor structure of the "Test of English as a Foreign Language"™ Internet-based test (TOEFL® iBT) across subgroups of test takers who differed in native language and exposure to the English language. The subgroups were defined by (a) Indo-European and Non-Indo-European language family, (b)…
Descriptors: Factor Structure, English (Second Language), Language Tests, Computer Assisted Testing
Lunz, Mary E. – 1997
This paper explains the multifacet technology for analyzing performance examinations and the fair average method of setting criterion standards. The multidimensional nature of performance examinations requires that multiple, and often different, facet elements of a candidate's examination form be accounted for in the analysis. After this is…
Descriptors: Ability, Computer Assisted Testing, Criteria, Educational Technology
Anderson, Paul S. – 1987
A recent innovation in the area of educational measurement is MDT (multi-digit testing), a machine-scored near-equivalent to "fill-in-the-blank" testing. The MDT method is based on long lists (or "Answer Banks") that contain up to 1,000 discrete answers, each with a three-digit label. Students taking an MDT test mark…
Descriptors: College Students, Computer Assisted Testing, Higher Education, Scoring
Harnisch, Delwyn L.; And Others – 1987
The capabilities and hardware requirements of four microcomputer software packages produced by the Office of Educational Testing, Research and Service at the University of Illinois at Urbana-Champaign are described. These programs are: (1) the Scan-Tron Forms Analysis Package Version 2.0, an interface between an IBM-compatible and a Scan-Tron…
Descriptors: Authoring Aids (Programing), Computer Assisted Testing, Computer Software, Item Banks
Hamovitch, Marc; Van Matre, Nick – 1981
The third in a series on Navy Computer Managed Instruction (CMI), this report describes how the automated scoring of teletypewriting tests affects training in a system for automated performance testing (APT) which was implemented in the teletypewriter (TTY) portion of the Radioman "A" School in San Diego. The system includes a computer-generated…
Descriptors: Computer Assisted Testing, Computer Managed Instruction, Data Processing, Performance Tests
Noonan, John V.; Sarvela, Paul D. – Performance and Instruction, 1988
Identifies a number of practical decisions that must be made when designing and developing criterion referenced tests as part of a larger system of computer assisted instruction. The topics discussed include test construction, test security, item presentation, and response capturing and scoring. (19 references) (CLB)
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Criterion Referenced Tests, Item Banks
Peer reviewed: Luecht, Richard M. – Educational and Psychological Measurement, 1987
Test Pac, a test scoring and analysis computer program for moderate-sized sample designs using dichotomous response items, performs comprehensive item analyses and multiple reliability estimates. It also performs single-facet generalizability analysis of variance, single-parameter item response theory analyses, test score reporting, and computer…
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Reviews, Item Analysis
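The "single-parameter item response theory analyses" that Test Pac performs refer to the one-parameter logistic (Rasch) model, under which the probability of a correct response depends only on the gap between examinee ability θ and item difficulty b. A minimal illustration of that probability function (not Test Pac's actual code, which is not described in the abstract):

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """P(correct) under the one-parameter (Rasch) IRT model:
    exp(theta - b) / (1 + exp(theta - b)), written in the
    numerically equivalent logistic form."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An examinee whose ability equals the item difficulty has a 50% chance:
p = rasch_prob(0.0, 0.0)  # 0.5
```

Higher ability relative to difficulty monotonically raises the probability, which is what lets a single-parameter analysis order items and examinees on one common scale.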
Anderson, Jonathan – Journal of Educational Data Processing, 1973
Describes some recent developments in computer programing taking place in Australia in the field of educational testing. Topics discussed include item banking and computer-assembled tests, test scoring for multiple choice and open-ended tests, item and test analysis, and test reporting. (Author/DN)
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computers, Educational Testing
Atkinson, George F.; Doadt, Edward – Assessment in Higher Education, 1980
Some perceived difficulties with conventional multiple choice tests are mentioned, and a modified form of examination is proposed. It uses a computer program to award partial marks for partially correct answers, full marks for correct answers, and check for widespread misunderstanding of an item or subject. (MSE)
Descriptors: Achievement Tests, Computer Assisted Testing, Higher Education, Multiple Choice Tests
Peer reviewed: Harper, R. – Journal of Computer Assisted Learning, 2003
Discusses multiple choice questions and presents a statistical approach to post-test correction for guessing that can be used in spreadsheets to automate the correction and generate a grade. Topics include the relationship between the learning objectives and multiple-choice assessments; and guessing correction by negative marking. (LRW)
Descriptors: Behavioral Objectives, Computer Assisted Testing, Grades (Scholastic), Guessing (Tests)
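The post-test guessing correction by negative marking that Harper automates in spreadsheets is conventionally the classic formula score, which subtracts a fraction of the wrong answers so that blind guessing on k-option items has zero expected gain. A minimal sketch under that assumption (the article's exact spreadsheet formulation may differ):

```python
def corrected_score(num_right: int, num_wrong: int, options_per_item: int) -> float:
    """Formula score R - W/(k-1): each wrong answer costs 1/(k-1) marks,
    cancelling the expected payoff of random guessing on k-option items.
    Omitted items are neither rewarded nor penalized."""
    return num_right - num_wrong / (options_per_item - 1)

# 10 right and 4 wrong on 5-option items: 10 - 4/4 = 9 marks
score = corrected_score(10, 4, 5)  # 9.0
```

Because a pure guesser expects one right answer per k − 1 wrong ones, the corrected score has expectation zero under random responding, which is the rationale for negative marking.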
Peer reviewed: Reise, Steven P. – Applied Psychological Measurement, 2001
This book contains a series of research articles about computerized adaptive testing (CAT) written for advanced psychometricians. The book is divided into sections on: (1) item selection and examinee scoring in CAT; (2) examples of CAT applications; (3) item banks; (4) determining model fit; and (5) using testlets in CAT. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Banks
Xi, Xiaoming; Mollaun, Pam – ETS Research Report Series, 2006
This study explores the utility of analytic scoring for the TOEFL® Academic Speaking Test (TAST) in providing useful and reliable diagnostic information in three aspects of candidates' performance: delivery, language use, and topic development. G studies were used to investigate the dependability of the analytic scores, the distinctness of the…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Oral Language
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and DiVesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement