Publication Date
| Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 14 |
| Since 2022 (last 5 years) | 112 |
| Since 2017 (last 10 years) | 254 |
| Since 2007 (last 20 years) | 423 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 632 |
| Scoring | 511 |
| Test Construction | 120 |
| Test Items | 120 |
| Foreign Countries | 115 |
| Evaluation Methods | 106 |
| Automation | 97 |
| Scoring Rubrics | 96 |
| Essays | 90 |
| Student Evaluation | 90 |
| Scores | 89 |
Location
| Location | Count |
| --- | --- |
| Australia | 13 |
| China | 12 |
| New York | 9 |
| Japan | 8 |
| Canada | 7 |
| Netherlands | 7 |
| Germany | 6 |
| Iran | 6 |
| Taiwan | 6 |
| United Kingdom | 6 |
| Spain | 5 |
Ebmeier, Howard; Ng, Jennifer – Journal of Personnel Evaluation in Education, 2005
Employment interviews are widely used in the selection of quality teachers, and indeed research confirms administrators' belief in the validity of the procedure. However, many key recommendations for improving the general reliability of interviews, including selecting questions that are job-related and research grounded, including well designed…
Descriptors: Field Tests, Teacher Selection, Urban Schools, Employment Interviews
Bennett, Randy Elliot; Persky, Hilary; Weiss, Andrew R.; Jenkins, Frank – National Center for Education Statistics, 2007
The Problem Solving in Technology-Rich Environments (TRE) study was designed to demonstrate and explore innovative use of computers for developing, administering, scoring, and analyzing the results of National Assessment of Educational Progress (NAEP) assessments. Two scenarios (Search and Simulation) were created for measuring problem solving…
Descriptors: Computer Assisted Testing, National Competency Tests, Problem Solving, Simulation
Liu, Xiufeng – 1994
Problems of validity and reliability of concept mapping are addressed by using item-response theory (IRT) models for scoring. In this study, the overall structure of students' concept maps are defined by the number of links, the number of hierarchies, the number of cross-links, and the number of examples. The study was conducted with 92 students…
Descriptors: Alternative Assessment, Computer Assisted Testing, Concept Mapping, Correlation
Peer reviewed: Hambleton, Ronald K.; Rogers, H. Jane – Evaluation and the Health Professions, 1986
Technical advances of the last 15 years in measurement theory and practice are described, notably criterion-referenced testing, item response theory, and computers and testing. Several remaining problems concerning the development and validation of credentialing examinations are also considered. (Author/LMO)
Descriptors: Certification, Computer Assisted Testing, Credentials, Criterion Referenced Tests
Peer reviewed: Kolstad, Rosemarie K.; And Others – Education, 1984
Provides guidelines for teachers writing machine-scored examinations. Explains the use of item analysis (discrimination index) to single out test items that should be improved or eliminated. Discusses validity and reliability of classroom achievement tests in contrast to norm-referenced examinations. (JHZ)
Descriptors: Achievement Tests, Computer Assisted Testing, Criterion Referenced Tests, Item Analysis
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Hicks, Marilyn M. – 1989
Methods of computerized adaptive testing using conventional scoring methods in order to develop a computerized placement test for the Test of English as a Foreign Language (TOEFL) were studied. As a consequence of simulation studies during the first phase of the study, the multilevel testing paradigm was adopted to produce three test levels…
Descriptors: Adaptive Testing, Adults, Algorithms, Computer Assisted Testing
Choren, Ricardo; Blois, Marcelo; Fuks, Hugo – 1998
In 1997, the Software Engineering Laboratory at Pontifical Catholic University of Rio de Janeiro (Brazil) implemented the first version of AulaNet(TM), a World Wide Web-based educational environment. Some of the teaching staff will use this environment in 1998 to offer regular term disciplines through the Web. This paper introduces Quest, a tool…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Computer Interfaces, Computer Software Development
Marsh, Larry P.; Anderson, Paul S. – 1989
This manual is intended to provide Illinois school districts with guidance in developing learning assessment plans and using testing materials as part of the school improvement program mandated by the 1985 educational reform legislation for Illinois. Designed primarily to be a series of "how to" booklets, the manual is divided into six…
Descriptors: Computer Assisted Testing, Computer Software, Computer System Design, Educational Improvement
Cason, Gerald J.; And Others – 1987
The Objective Test Scoring and Performance Rating (OTS-PR) system is a fully integrated set of 70 modular FORTRAN programs run on a VAX-8530 computer. Even with no knowledge of computers, the user can implement OTS-PR to score multiple-choice tests, include scores from external sources such as hand-scored essays or scores from nationally…
Descriptors: Clinical Experience, Computer Assisted Testing, Educational Assessment, Essay Tests
Tatsuoka, Kikumi; Tatsuoka, Maurice – 1979
The differences in types of information-processing skills developed by different instructional backgrounds affect, negatively or positively, the learning of further advanced instructional materials. If prior and subsequent instructional methods are different, a proactive inhibition effect produces low achievement scores on a post test. This poses…
Descriptors: Achievement Tests, Cognitive Processes, Computer Assisted Testing, Diagnostic Tests
Haksar, Lucy – Programmed Learning and Educational Technology, 1983
Describes design of an item bank for use with lower ability mathematics students in Scottish secondary schools. Aspects of bank usage discussed include raw score translation to measures on validated scales, test design for various purposes, and ways of recording and analyzing results. Test examples with corresponding outputs are given. (Author/MBR)
Descriptors: Computer Assisted Testing, Criterion Referenced Tests, Difficulty Level, Foreign Countries
Peer reviewed: Sireci, Stephen G.; Hambleton, Ronald K. – International Journal of Educational Research, 1997
Achievement testing in the next century is going to be very different. Computer technology is going to play a major role in test construction, test administration, scoring, and score reporting. New formats will become possible that incorporate visual and audio components and that permit adaption of tests to individual ability levels. (SLD)
Descriptors: Achievement Tests, Adaptive Testing, Computer Assisted Testing, Criterion Referenced Tests
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests