Publication Date

| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 9 |
| Since 2022 (last 5 years) | 93 |
| Since 2017 (last 10 years) | 214 |
| Since 2007 (last 20 years) | 347 |
Descriptor

| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 510 |
| Scoring | 510 |
| Test Items | 111 |
| Test Construction | 102 |
| Automation | 92 |
| Essays | 82 |
| Foreign Countries | 80 |
| Scores | 79 |
| Adaptive Testing | 78 |
| Evaluation Methods | 77 |
| Computer Software | 75 |
Author

| Author | Count |
| --- | --- |
| Bennett, Randy Elliot | 11 |
| Attali, Yigal | 9 |
| Anderson, Paul S. | 7 |
| Williamson, David M. | 6 |
| Bejar, Isaac I. | 5 |
| Ramineni, Chaitanya | 5 |
| Stocking, Martha L. | 5 |
| Xi, Xiaoming | 5 |
| Zechner, Klaus | 5 |
| Bridgeman, Brent | 4 |
| Davey, Tim | 4 |
Location

| Location | Count |
| --- | --- |
| Australia | 10 |
| China | 10 |
| New York | 9 |
| Japan | 7 |
| Netherlands | 6 |
| Canada | 5 |
| Germany | 5 |
| Iran | 4 |
| Taiwan | 4 |
| United Kingdom | 4 |
| United Kingdom (England) | 4 |
Van Moere, Alistair; Suzuki, Masanori; Downey, Ryan; Cheng, Jian – Australian Review of Applied Linguistics, 2009
This paper discusses the development of an assessment to satisfy the International Civil Aviation Organization (ICAO) Language Proficiency Requirements. The Versant Aviation English Test utilizes speech recognition technology and a computerized testing platform, such that test administration and scoring are fully automated. Developed in…
Descriptors: Scoring, Test Construction, Language Proficiency, Standards
Georgiadou, Elissavet; Triantafillou, Evangelos; Economides, Anastasios A. – Journal of Technology, Learning, and Assessment, 2007
Since researchers acknowledged the several advantages of computerized adaptive testing (CAT) over traditional linear test administration, the issue of item exposure control has received increased attention. Due to CAT's underlying philosophy, particular items in the item pool may be presented too often and become overexposed, while other items are…
Descriptors: Adaptive Testing, Computer Assisted Testing, Scoring, Test Items
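The overexposure problem described in this abstract is usually handled by capping how often an informative item may actually be administered. The sketch below illustrates one widely cited family of methods (Sympson-Hetter style probabilistic exposure control); the paper surveys several approaches and does not necessarily use this one, and all item ids and parameters here are invented.

```python
import random

def administer_with_exposure_control(candidates, exposure_params, rng=random):
    """Sympson-Hetter style exposure control (illustrative sketch).

    candidates: item ids ordered from most to least informative at the
    current ability estimate.
    exposure_params: item id -> k_i, the probability that the item is
    actually administered once selected.
    """
    for item in candidates:
        # The best remaining item is administered only with probability k_i;
        # otherwise it is passed over and the next-best item is considered.
        if rng.random() <= exposure_params.get(item, 1.0):
            return item
    # If every candidate was passed over, fall back to the last one considered.
    return candidates[-1]

# Hypothetical three-item pool: item "A" is most informative but capped at k = 0.3.
pool = ["A", "B", "C"]
k = {"A": 0.3, "B": 0.8, "C": 1.0}
print(administer_with_exposure_control(pool, k))
```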
Wang, Hui-Yu; Chen, Shyi-Ming – Educational Technology & Society, 2007
In this paper, we present two new methods for evaluating students' answerscripts based on the similarity measure between vague sets. The vague marks awarded to the answers in the students' answerscripts are represented by vague sets, where each element u[subscript i] in the universe of discourse U belonging to a vague set is represented by a…
Descriptors: Artificial Intelligence, Student Evaluation, Evaluation Methods, Educational Technology
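As a rough illustration of the kind of comparison involved, the sketch below computes a simple interval-based similarity between two vague values, each written as the interval [t, 1 - f] of true- and false-membership degrees. This is a generic measure for illustration only, not the specific measure defined in the paper, and the marks are hypothetical.

```python
def vague_similarity(x, y):
    """Similarity between two vague values given as (t, f) pairs.

    Uses 1 - (|t_x - t_y| + |f_x - f_y|) / 2, a simple illustrative choice;
    the paper's own measure may differ.
    """
    tx, fx = x
    ty, fy = y
    return 1.0 - (abs(tx - ty) + abs(fx - fy)) / 2.0

# Hypothetical marks: the standard answer vs. two students' answers.
standard = (0.9, 0.05)
print(vague_similarity(standard, (0.85, 0.10)))  # close to the standard
print(vague_similarity(standard, (0.40, 0.50)))  # much further away
```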
Russell, G. K. G.; And Others – Journal of Clinical Psychology, 1986 (peer reviewed)
A computerized version of the Minnesota Multiphasic Personality Inventory was developed that incorporated both administration and scoring. This method was compared with the original manual form. The results indicated that the test-retest reliability was high regardless of the method of administration and that similar results were obtained on the…
Descriptors: Computer Assisted Testing, Reliability, Scoring, Test Scoring Machines
Wang, Jinhao; Brown, Michelle Stallone – Contemporary Issues in Technology and Teacher Education (CITE Journal), 2008
The purpose of the current study was to analyze the relationship between automated essay scoring (AES) and human scoring in order to determine the validity and usefulness of AES for large-scale placement tests. Specifically, a correlational research design was used to examine the correlations between AES performance and human raters' performance.…
Descriptors: Scoring, Essays, Computer Assisted Testing, Sentence Structure
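At its simplest, a correlational design of this kind reduces to computing the agreement between machine and human scores. The sketch below shows a plain Pearson correlation on invented placement-essay scores; it does not reproduce the study's data or analysis.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between machine scores x and human scores y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for six placement essays on a 1-6 scale.
aes_scores   = [3, 4, 2, 5, 4, 6]
human_scores = [3, 4, 3, 5, 5, 6]
print(round(pearson_r(aes_scores, human_scores), 3))
```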
Luecht, Richard M. – 2001
The Microsoft Certification Program (MCP) includes many new computer-based item types, based on complex cases involving the Windows 2000® operating system. This Innovative Item Technology (IIT) has presented challenges beyond traditional psychometric considerations such as capturing and storing the relevant response data from…
Descriptors: Certification, Coding, Computer Assisted Testing, Data Collection
Papanastasiou, Elena C. – 2002
Due to the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT, from an examinee's point of view, is that in many…
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, Review (Reexamination)
Patelis, Thanos – College Entrance Examination Board, 2000
Because different types of computerized tests exist and continue to emerge, the term "computer-based testing" does not encompass all of the various models that may exist. As a result, test delivery model (TDM) is used to describe the variety of methods that exist in delivering tests to examinees. The criterion that is used to distinguish…
Descriptors: Computer Assisted Testing, Adaptive Testing, Models, Delivery Systems
Huba, G. J. – Educational and Psychological Measurement, 1986 (peer reviewed)
The runs test for random sequences of responding is proposed for application in long inventories with dichotomous items as an index of stereotyped responding. This index is useful for detecting whether the client shifts between response alternatives more or less frequently than would be expected by chance. (LMO)
Descriptors: Computer Assisted Testing, Personality Measures, Response Style (Tests), Scoring
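The index described here is the standard runs test on a dichotomous response vector; a minimal sketch of the Wald-Wolfowitz version follows, which is presumably the statistic meant. The alternating response pattern in the example is invented.

```python
import math

def runs_test(responses):
    """Wald-Wolfowitz runs test on a dichotomous (0/1) response vector.

    Returns (n_runs, z); a large |z| suggests the respondent switches between
    response alternatives more or less often than chance would predict.
    """
    n1 = sum(responses)            # count of 1s
    n2 = len(responses) - n1       # count of 0s
    if n1 == 0 or n2 == 0:
        raise ValueError("Both response alternatives must occur at least once.")

    # A run is a maximal block of identical consecutive responses.
    runs = 1 + sum(1 for a, b in zip(responses, responses[1:]) if a != b)

    n = n1 + n2
    expected = 1 + 2 * n1 * n2 / n
    variance = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    z = (runs - expected) / math.sqrt(variance)
    return runs, z

# Example: a 40-item inventory answered in strict alternation produces far more
# runs than chance, giving a large positive z.
pattern = [i % 2 for i in range(40)]
print(runs_test(pattern))
```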
Wise, Steven L. – 1999
Outside of large-scale testing programs, the computerized adaptive test (CAT) has thus far had only limited impact on measurement practice. In smaller-scale testing contexts, limited data are often available, which precludes the establishment of calibrated item pools for use by traditional (i.e., item response theory (IRT) based) CATs. This paper…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Scores
Anderson, Richard Ivan – Journal of Computer-Based Instruction, 1982
Describes confidence testing methods (confidence weighting, probabilistic marking, multiple alternative selection) as alternatives to conventional computer-based multiple-choice tests and explains potential benefits (increased reliability, improved examinee evaluation of alternatives, extended diagnostic information and remediation prescriptions, happier…
Descriptors: Computer Assisted Testing, Confidence Testing, Multiple Choice Tests, Probability
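For the probabilistic-marking variant, scoring is typically based on a proper scoring rule so that honest confidence reporting is the examinee's best strategy. The sketch below uses a logarithmic rule as one common choice; the article itself may discuss other rules, and the option labels, key, and probabilities are hypothetical.

```python
import math

def log_score(probabilities, correct_option):
    """Logarithmic scoring rule for probabilistic marking (one common choice).

    probabilities maps option label -> the examinee's stated probability that
    the option is correct (values should sum to 1). The score is the log of the
    probability placed on the keyed answer, so honest reporting maximises the
    expected score and confident wrong answers are penalised heavily.
    """
    p = max(probabilities.get(correct_option, 0.0), 1e-9)  # avoid log(0)
    return math.log(p)

# Hypothetical item with key "B": a calibrated, confident examinee scores higher
# than one who spreads probability evenly across the four options.
print(log_score({"A": 0.05, "B": 0.85, "C": 0.05, "D": 0.05}, "B"))
print(log_score({"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}, "B"))
```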
McMinn, Mark R.; Ellens, Brent M.; Soref, Erez – Assessment, 1999 (peer reviewed)
Surveyed 364 members of the Society for Personality Assessment to determine how they use computer-based test interpretation software (CBTI) in their work, and their perspectives on the ethics of using CBTI. Psychologists commonly use CBTI for test scoring, but not to formulate a case or as an alternative to a written report. (SLD)
Descriptors: Behavior Patterns, Computer Assisted Testing, Computer Software, Ethics
Wang, LihShing; Li, Chun-Shan – Journal of Applied Measurement, 2001 (peer reviewed)
Used Monte Carlo simulation to compare the relative measurement efficiency of polytomous modeling and dichotomous modeling under different scoring schemes and termination criteria. Results suggest that polytomous computerized adaptive testing (CAT) yields marginal gains over dichotomous CAT when termination criteria are more stringent. Discusses…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Monte Carlo Methods
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – ETS Research Report Series, 2008
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multitrait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (CBT) writing assessment. Data analyzed in the study were analytic and holistic…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scoring
Davey, Tim; And Others – Journal of Educational Measurement, 1997 (peer reviewed)
The development and scoring of a recently introduced computer-based writing skills test is described. The test asks the examinee to edit a writing passage presented on a computer screen. Scoring difficulties are addressed through the combined use of option weighting and the sequential probability ratio test. (SLD)
Descriptors: Computer Assisted Testing, Educational Innovation, Probability, Scoring
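The sequential probability ratio test mentioned here can be sketched for a simple pass/fail decision. The example below assumes a Rasch item model and dichotomous scoring rather than the option-weighted scores the authors combine it with; the thresholds, item difficulties, and responses are invented.

```python
import math

def sprt_decision(responses, item_difficulties, theta0, theta1,
                  alpha=0.05, beta=0.05):
    """Sequential probability ratio test for a pass/fail decision (sketch).

    Assumes a Rasch model, P(correct | theta, b) = 1 / (1 + exp(b - theta)),
    and tests theta1 (above the cut) against theta0 (below the cut).
    Returns "pass", "fail", or "continue testing".
    """
    def p(theta, b):
        return 1.0 / (1.0 + math.exp(b - theta))

    log_lr = 0.0
    for x, b in zip(responses, item_difficulties):
        p1, p0 = p(theta1, b), p(theta0, b)
        log_lr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))

    upper = math.log((1 - beta) / alpha)   # accept H1: examinee is above the cut
    lower = math.log(beta / (1 - alpha))   # accept H0: examinee is below the cut
    if log_lr >= upper:
        return "pass"
    if log_lr <= lower:
        return "fail"
    return "continue testing"

# Hypothetical 5-item record near the cut region (theta0 = -0.2, theta1 = 0.2).
print(sprt_decision([1, 1, 0, 1, 1], [0.0, -0.5, 0.3, 0.1, -0.2], -0.2, 0.2))
```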
